Racial bias in artificial intelligence restricts vital access to health care and financial services, says data scientist


Artificial intelligence is a pervasive part of modern-day life and is used by vital institutions from banks to police forces. But a growing mountain of evidence suggests that the AI used by these organizations can entrench systemic racism. 

This can negatively impact Black and ethnic minority people when they apply for a mortgage or seek health care, according to an industry expert.

Confronting biases

Calvin D. Lawrence is a Distinguished Engineer at IBM. He has gathered evidence showing that technology used by policing and judicial systems contains in-built biases stemming from human and systemic or institutional preferences. But, he says, there are steps AI developers and technologists can take to redress the balance.

Lawrence said, "AI is an inescapable mechanism of modern society, and it affects everyone, yet its internal biases are rarely confronted—and I think it is time we address that." 

In his new book, "Hidden in White Sight," published today, Lawrence explores the breadth of AI use in the United States and Europe, including health care services, policing, advertising, banking, education, and loan applications.

"Hidden in White Sight" reveals the sobering reality that AI outcomes can restrict those most in need of these services.

Lawrence added, "Artificial Intelligence was meant to be the great social equalizer that helps promote fairness by removing human bias, but in fact I have found in my research and in my own life that this is far from the case." 

A tool for society

Lawrence has been designing and developing software for the last thirty years, working on many AI-based systems for the U.S. Army, NASA, Sun Microsystems, and IBM.

With his expertise and experience, Lawrence advises readers on what they can do to fight against algorithmic bias, and how developers and technologists can build fairer systems.

These recommendations include rigorous quality testing of AI systems, full transparency of datasets, viable opt-outs and in-built "right to be forgotten." Lawrence also suggests that people should be able to easily check what data is held against their names and be given clear access to recourse if the data is inaccurate. 
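The book does not prescribe code, but as a rough illustration of the kind of quality testing Lawrence advocates, a developer might compare a model's approval rates across demographic groups before deployment. The sketch below is hypothetical: the group labels, toy decisions, and the 0.1 tolerance are assumptions for illustration, not anything drawn from the book.

```python
# Hypothetical pre-deployment fairness check: compare a loan-approval
# model's positive-outcome rates across demographic groups.
# Group names, data, and the 0.1 tolerance are illustrative assumptions.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (largest gap in approval rate between groups, per-group rates).

    predictions: iterable of 0/1 model decisions (1 = approved)
    groups:      iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for decision, group in zip(predictions, groups):
        totals[group] += 1
        approvals[group] += decision
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: decisions from some model, alongside applicants' group labels.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
labels    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, labels)
print(f"approval rates by group: {rates}")
if gap > 0.1:  # illustrative tolerance, not a regulatory standard
    print(f"warning: approval-rate gap of {gap:.2f} exceeds tolerance")
```

A check like this only surfaces one narrow symptom of bias; the transparency, opt-out, and recourse measures Lawrence describes address the surrounding process rather than the model alone.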

Lawrence added, "This is not a problem that just affects one group of people, this is a societal issue. It is about who we want to be as a society and whether we want to be in control of technology, or whether we want it to control us. 

"I would urge anyone who has a seat at the table, whether you're a CEO or tech developer or somebody who uses AI in your , to be intentional with how you use this powerful tool." 

More information: Calvin D. Lawrence, Corporate Choice, Hidden in White Sight (2023). DOI: 10.1201/9781003368755-13

Provided by Taylor & Francis
