The Next Web: Autonomous cars worse at detecting children and dark-skinned pedestrians, study finds

The Next Web: Autonomous cars worse at detecting children and dark-skinned pedestrians, study finds. “Researchers from King’s College London (KCL) tested the software on over 8,000 images of pedestrians. They found that the average detection accuracy was almost 20% higher for adults than it was for children. The systems were also 7.5% more accurate for light-skinned pedestrians than they were for darker-skinned ones.” And for night driving conditions, as you might expect, it’s even worse.

University of Michigan: Building reliable AI models requires understanding the people behind the datasets, UMSI researchers say

University of Michigan: Building reliable AI models requires understanding the people behind the datasets, UMSI researchers say. “Social media companies are increasingly relying on complex algorithms and artificial intelligence to detect offensive behavior online. These algorithms and AI systems all rely on data to learn what is offensive. But who’s behind the data, and how do their backgrounds influence their decisions?”

New York Times: Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History

New York Times: Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. “Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, A.I. technologies seem to ignore or distort artists’ text prompts, affecting how Black people are depicted in images, and in others, they seem to stereotype or censor Black history and culture.”

CFPB: Algorithms, artificial intelligence, and fairness in home appraisals

CFPB: Algorithms, artificial intelligence, and fairness in home appraisals. “Today, the CFPB is taking another step toward accountability for automated systems and models, sometimes marketed as artificial intelligence (AI). The CFPB is proposing a rule to make home appraisals computed by algorithms fairer and more accurate. This initiative is one of many steps we are taking to ensure that algorithms and AI are complying with existing law.”

Nature: Why AI’s diversity crisis matters, and how to tackle it

Nature: Why AI’s diversity crisis matters, and how to tackle it. “Artificial intelligence (AI) is facing a diversity crisis. If it isn’t addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting ‘intelligence’ will be flawed, lacking varied social-emotional and cultural knowledge.”

The Globe and Mail: The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases

The Globe and Mail: The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases. “When Ms. Gebru – who’s 39 and holds a bachelor’s degree in electrical engineering and a PhD in computer vision from Stanford University – started her career, she stood out. She’s Black, a woman and works in an industry famously lacking in diversity. She moved to the U.S. as a teenager to escape the 1998-2000 Eritrean-Ethiopian War. The discrimination she faced after moving and throughout her career has left a lasting mark.”

The Guardian: ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies

The Guardian: ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies. “AI tools rate photos of women as more sexually suggestive than those of men, especially if nipples, pregnant bellies or exercise is involved.” Considering Facebook’s longstanding history of incorrectly moderating anything vaguely resembling a breast, I can’t say I’m shocked.

FedScoop: NTIA launches probe into discriminatory data practices and civil rights

FedScoop: NTIA launches probe into discriminatory data practices and civil rights. “[The National Telecommunications and Information Administration] will focus its inquiry on discriminatory data practices related to: online job discrimination based on demographic characteristics; apps that collect and sell location data about user movement, particularly dating and religious apps; and the heightened cost of data breaches on low-income communities.”

University of Alberta: AI researchers improve method for removing gender bias in natural language processing

University of Alberta: AI researchers improve method for removing gender bias in natural language processing. “Researchers have found a better way to reduce gender bias in natural language processing models while preserving vital information about the meanings of words, according to a recent study that could be a key step toward addressing the issue of human biases creeping into artificial intelligence.”

WIRED: How to Stop Robots From Becoming Racist

WIRED: How to Stop Robots From Becoming Racist. “The doll test was invented to better understand the evil consequences of separate and unequal treatment on the self-esteem of Black children in the United States. Lawyers from the NAACP used the results to successfully argue in favor of the desegregation of US schools. Now AI researchers say robots may need to undergo similar tests to ensure they treat all people fairly.”