Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism

By Crystal Grant, ACLU

Unclear regulation and a lack of transparency increase the risk that AI and algorithmic tools that exacerbate racial biases will be used in medical settings.

Racism in AI can prevent people of color from getting medical care

Artificial intelligence (AI) and algorithmic decision-making systems — algorithms that analyze massive amounts of data and make predictions about the future — are increasingly affecting Americans’ daily lives. People are compelled to include buzzwords in their resumes to get past AI-driven hiring software. Algorithms are deciding who will get housing or financial loan opportunities. And biased testing software is forcing students of color and students with disabilities to grapple with increased anxiety that they may be locked out of their exams or flagged for cheating. But there’s another frontier of AI and algorithms that should worry us greatly: the use of these systems in medical care and treatment.

The use of AI and algorithmic decision-making systems in medicine is increasing even though current regulation may be insufficient to detect harmful racial biases in these tools. Details about the tools’ development are largely unknown to clinicians and the public — a lack of transparency that threatens to automate and worsen racism in the health care system. Last week, the FDA issued guidance significantly broadening the scope of the tools it plans to regulate. The broadened scope underscores that more must be done to combat bias and promote equity amid the growing number and increasing use of AI and algorithmic tools.

Bias in Medical and Public Health Tools

In 2019, a bombshell study found that a clinical algorithm many hospitals were using to decide which patients needed care exhibited racial bias: Black patients had to be deemed much sicker than white patients to be recommended for the same care. This happened because the algorithm had been trained on past data on health care spending, which reflects a history in which Black patients had less to spend on their health care than white patients, owing to longstanding wealth and income disparities. While this algorithm’s bias was eventually detected and corrected, the incident raises the question of how many more clinical and medical tools may be similarly discriminatory.
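To make the mechanism concrete, here is a minimal sketch, in Python with invented numbers, of how training on spending as a proxy for health need reproduces this bias. Everything here — the size of the access gap, the feature set, the referral cutoff — is an assumption for illustration, not the study’s actual data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
is_black = rng.integers(0, 2, n)                 # 0 = white, 1 = Black (illustrative)
illness = rng.normal(50, 10, n)                  # true health need (unobserved by the model)

# Assumed disparity: equally sick Black patients use and spend less on care
# because of access and wealth gaps, so cost understates their need.
access = np.where(is_black == 1, 0.7, 1.0)
visits = illness * access + rng.normal(0, 2, n)  # utilization tracks access
spending = visits * 100 + rng.normal(0, 50, n)   # cost follows utilization

X = visits.reshape(-1, 1)                        # model sees utilization features only
model = LinearRegression().fit(X, spending)      # the label is COST, not health need
risk = model.predict(X)

# Refer the top decile of predicted "risk" (really: predicted cost) to extra care.
cutoff = np.percentile(risk, 90)
print("Mean true illness, referred white patients:",
      round(illness[(risk > cutoff) & (is_black == 0)].mean(), 1))
print("Mean true illness, referred Black patients:",
      round(illness[(risk > cutoff) & (is_black == 1)].mean(), 1))
# Black patients must be substantially sicker to clear the same cost-based cutoff.
```

Note that race never appears as a model input: the bias enters entirely through the choice of label, which is why it could sit undetected in a tool that looks race-neutral on its face.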

Another algorithm, created to determine how many hours of aid Arkansas residents with disabilities would receive each week, was criticized after making extreme cuts to in-home care. Some residents attributed severe disruptions to their lives, and even hospitalizations, to the sudden cuts. A resulting lawsuit found that several errors in the algorithm — errors in how it characterized the medical needs of people with certain disabilities — were directly to blame for the inappropriate cuts. Despite this outcry, the group that developed the flawed algorithm still creates tools used in health care settings in nearly half of U.S. states, as well as internationally.
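The litigation described coding errors rather than publishing the algorithm itself, so the following is only a hypothetical sketch of the general failure mode: an hours-allocation rule keyed on coded need categories that silently drops any category its lookup table does not recognize. All category names, weights, and hours below are invented.

```python
# Hypothetical hours-allocation table; real systems key on assessment codes.
BASE_HOURS = {"mobility": 10, "self_care": 12, "cognition": 8}

def weekly_hours(assessed_needs: dict) -> int:
    """Sum care hours for each recognized need category and severity level."""
    total = 0
    for category, level in assessed_needs.items():
        # Failure mode: a category missing from the table (say, a condition
        # filed under a renamed or unmapped code) contributes zero hours
        # instead of raising an error, so the cut happens silently.
        total += BASE_HOURS.get(category, 0) * level
    return total

# Same underlying needs, but one assessment uses a code the table lacks:
print(weekly_hours({"mobility": 2, "self_care": 2}))     # 44 hours
print(weekly_hours({"mobility": 2, "self_care_v2": 2}))  # 20 hours: care silently cut
```

A defensive implementation would reject unrecognized categories outright, forcing the error to surface before it reaches a patient’s care plan.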

One recent study found that an AI tool trained on medical images, like x-rays and CT scans, had unexpectedly learned to discern patients’ self-reported race. It learned to do this even though it was trained only to help clinicians diagnose patient images. This technology’s ability to tell patients’ race — even when their doctor cannot — could be abused in the future, or could unintentionally direct worse care to communities of color without detection or intervention.
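One common way to check for this kind of leakage — a simplified audit sketch, not the cited study’s exact protocol — is to train a linear “probe” to predict self-reported race from a diagnostic model’s internal features; accuracy well above chance means the model has encoded race even though it was never asked to. The embeddings and labels below are random placeholders standing in for real image features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def audit_race_leakage(embeddings: np.ndarray, race_labels: np.ndarray) -> float:
    """Cross-validated accuracy of a linear probe predicting
    self-reported race from a model's image embeddings."""
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, embeddings, race_labels, cv=5).mean()

# Usage with placeholder data standing in for chest-x-ray model features:
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))   # features from a diagnosis model
race_labels = rng.integers(0, 2, 500)     # self-reported race, binary-encoded
print(f"Probe accuracy: {audit_race_leakage(embeddings, race_labels):.2f}")
# ~0.50 here because the placeholder data is random; on real embeddings,
# accuracy well above chance would flag race leakage worth investigating.
```

Audits like this only detect the encoding; deciding what to do about it — and whether the tool should be deployed at all — remains a regulatory and clinical judgment.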

Read more about how AI can exacerbate racism.

OpenAI’s DALL·E 2 offers a specific example of how AI can perpetuate bias.

Our breaking news includes more stories about the convergence of racism and tech.
