New York Times: Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. “Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, A.I. technologies seem to ignore or distort artists’ text prompts, affecting how Black people are depicted in images, and in others, they seem to stereotype or censor Black history and culture.”

Chronicle of Philanthropy: Seeking to Curb Racial Bias in Medicine, Doris Duke Fund Awards $10 Million to Health Groups. “The Doris Duke Charitable Foundation is awarding more than $10 million to five health organizations to reconsider the use of race in medical algorithms, which research shows can lead to potentially dangerous results for patients of color.”

New York Times: Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s. “Google’s and Apple’s tools were clearly the most sophisticated when it came to image analysis. Yet Google, whose Android software underpins most of the world’s smartphones, has made the decision to turn off the ability to visually search for primates for fear of making an offensive mistake and labeling a person as an animal. And Apple, with technology that performed similarly to Google’s in our test, appeared to disable the ability to look for monkeys and apes as well.”

Nature: Why AI’s diversity crisis matters, and how to tackle it. “Artificial intelligence (AI) is facing a diversity crisis. If it isn’t addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting ‘intelligence’ will be flawed, lacking varied social-emotional and cultural knowledge.”

The Guardian: Google launches new AI PaLM 2 in attempt to regain leadership of the pack. “In its preliminary research, the company warned that systems built on PaLM 2 ‘continue to produce toxic language harms’, with some languages issuing ‘toxic’ responses to queries about black people in almost a fifth of all tests, part of the reason the Bard chatbot is only available in three languages at launch.”

PC World: Microsoft’s new AI Bing taught my son ethnic slurs, and I’m horrified. “Yes, it prefaced the response by noting that some ethnic nicknames were neutral or positive, and others were racist and harmful. But I expected one of two outcomes: Either Bing would provide socially acceptable characterizations of ethnic groups (Black, Latino) or simply decline to respond. Instead, it started listing pretty much every ethnic description it knew, both good and very, very bad.”

WIRED: Algorithms Allegedly Penalized Black Renters. The US Government Is Watching. “SafeRent had argued that algorithms used to screen tenants aren’t subject to the Fair Housing Act, because its scores only advise landlords and don’t make decisions. The DOJ’s brief, filed jointly with the Department of Housing and Urban Development, dismisses that claim, saying the act and associated case law leave no ambiguity.”

Mashable: Virtual rapper FN Meka underscores how AI perpetuates racial stereotyping. “On Aug. 12, AI-powered rapper FN Meka signed a record deal with Capitol Records, becoming the first digital artist to sign with a major label. Eleven days later, the deal was terminated amidst calls that the character promoted ‘gross stereotypes’ of Black culture, as reported by the New York Times.” There are so many great rappers out there who don’t get enough recognition. Why make a fake one?

WIRED: How to Stop Robots From Becoming Racist. “The doll test was invented to better understand the evil consequences of separate and unequal treatment on the self-esteem of Black children in the United States. Lawyers from the NAACP used the results to successfully argue in favor of the desegregation of US schools. Now AI researchers say robots may need to undergo similar tests to ensure they treat all people fairly.”

Mashable: It took just one weekend for Meta’s new AI Chatbot to become racist. “The company’s new BlenderBot 3 AI chatbot — which was released in the U.S. just days ago on Friday, August 5 — is already making a host of false statements based on interactions it had with real humans online. Some of the more egregious among those include claims Donald Trump won the 2020 U.S. presidential election and is currently president, anti-Semitic conspiracy theories, as well as comments calling out Facebook for all of its ‘fake news.’”

SlashGear: Study Shows Robots Using Internet-Based AI Exhibit Racist And Sexist Tendencies. “A new study claims robots exhibit racist and sexist stereotyping when the artificial intelligence (AI) that powers them is modeled on data from the internet. The study, which researchers say is the first to prove the concept, was led by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, and published by the Association for Computing Machinery (ACM).”

The Verge: Google is using a new way to measure skin tones to make search results more inclusive. “The tech giant is working with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST Scale is designed to replace outdated skin tone scales that are biased towards lighter skin. When these older scales are used by tech companies to categorize skin color, it can lead to products that perform worse for people with darker coloring, says Monk.”

Motherboard: Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias. “Facebook and its parent company, Meta, recently released a new tool that can be used to quickly develop state-of-the-art AI. But according to the company’s researchers, the system has the same problem as its predecessors: It’s extremely bad at avoiding results that reinforce racist and sexist stereotypes.”