As AI tools get smarter, they’re growing more covertly racist, experts find

Share

Explore Our Galleries

A man stands in front of the Djingareyber mosque on February 4, 2016 in Timbuktu, central Mali. 
Mali's fabled city of Timbuktu on February 4 celebrated the recovery of its historic mausoleums, destroyed during an Islamist takeover of northern Mali in 2012 and rebuilt thanks to UN cultural agency UNESCO.
TO GO WITH AFP STORY BY SEBASTIEN RIEUSSEC / AFP / SÉBASTIEN RIEUSSEC
African Peoples Before Captivity
Shackles from Slave Ship Henrietta Marie
Kidnapped: The Middle Passage
Image of the first black members of Congress
Reconstruction: A Brief Glimpse of Freedom
The Lynching of Laura Nelson_May_1911 200x200
One Hundred Years of Jim Crow
Civil Rights protest in Alabama
I Am Somebody! The Struggle for Justice
Black Lives Matter movement
NOW: Free At Last?
#15-Beitler photo best TF reduced size
Memorial to the Victims of Lynching
hands raised black background
The Freedom-Lovers’ Roll Call Wall
Frozen custard in Milwaukee's Bronzeville
Special Exhibits

Breaking News!

Today's news and culture by Black and other reporters in the Black and mainstream media.

Ways to Support ABHM?

By Ava Sasani, The Guardian

Job Interview
A new report reveals that AI discriminates against Black job applicants (Tima Miroshnichenko/Pexels)

Popular artificial intelligence tools are becoming more covertly racist as they advance, says an alarming new report.

A team of technology and linguistics researchers revealed this week that large language models like OpenAI’s ChatGPT and Google’s Gemini hold racist stereotypes about speakers of African American Vernacular English, or AAVE, an English dialect created and spoken by Black Americans.

“We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the recent paper, published this week in arXiv, an open-access research archive from Cornell University.

Hoffman explained that previously researchers “only really looked at what overt racial biases these technologies might hold” and never “examined how these AI systems react to less overt markers of race, like dialect differences”.

[…]

Hoffman and his colleagues asked the AI models to assess the intelligence and employability of people who speak using AAVE compared to people who speak using what they dub “standard American English”.

[…]

The models were significantly more likely to describe AAVE speakers as “stupid” and “lazy”, assigning them to lower-paying jobs.

Keep reading.

AI increasingly proves capable of replicating racial bias.

More breaking Black news.

Comments Are Welcome

Note: We moderate submissions in order to create a space for meaningful dialogue, a space where museum visitors – adults and youth –– can exchange informed, thoughtful, and relevant comments that add value to our exhibits.

Racial slurs, personal attacks, obscenity, profanity, and SHOUTING do not meet the above standard. Such comments are posted in the exhibit Hateful Speech. Commercial promotions, impersonations, and incoherent comments likewise fail to meet our goals, so will not be posted. Submissions longer than 120 words will be shortened.

See our full Comments Policy here.

Leave a Comment