WIRED: How to Stop Robots From Becoming Racist

WIRED: How to Stop Robots From Becoming Racist. “The doll test was invented to better understand the evil consequences of separate and unequal treatment on the self-esteem of Black children in the United States. Lawyers from the NAACP used the results to successfully argue in favor of the desegregation of US schools. Now AI researchers say robots may need to undergo similar tests to ensure they treat all people fairly.”

Axios: Exclusive: Meta to study race of Instagram users

Axios: Exclusive: Meta to study race of Instagram users. “The company says it wants to make sure that its products and AI systems operate fairly across racial lines, but feels it can’t do that without better knowing its customers. By working with a third party it aims to both protect privacy and ensure customers are more comfortable sharing their information.”

UWM Report: Automated hiring systems could be making the worker shortage worse

UWM Report: Automated hiring systems could be making the worker shortage worse. “There’s a worker shortage in the United States. As the country recovers from the pandemic, companies are trying to bring their employees back into the workplace but are finding that many of those employees are quitting – a so-called ‘Great Resignation.’ There are many factors behind this worker shortage, but Noelle Chesley thinks there might be one going overlooked: the use of automated hiring systems to fill those open positions.”

The Miami Student: Facebook algorithm may favor the Republican party, study co-authored by Miami University professors finds

The Miami Student: Facebook algorithm may favor the Republican party, study co-authored by Miami University professors finds. “New research from Miami University has shown that a change in the Facebook algorithm may have increased the visibility of posts from local Republican parties. Professors from Miami and Wright State University (WSU) found that, despite posting more, Democratic parties received significantly less interaction on their posts.”

Rest of World: DALL·E mini has a mysterious obsession with women in saris

Rest of World: DALL·E mini has a mysterious obsession with women in saris. “[Fernando] Marés, a veteran hacktivist, began using DALL·E mini in early June. But instead of inputting text for a specific request, he tried something different: he left the field blank. Fascinated by the seemingly random results, Marés ran the blank search over and over. That’s when Marés noticed something odd: almost every time he ran a blank request, DALL·E mini generated portraits of brown-skinned women wearing saris, a type of attire common in South Asia.”

Engadget: Oregon is shutting down its controversial child welfare AI in June

Engadget: Oregon is shutting down its controversial child welfare AI in June. “A number of states across the country have already implemented, or are considering, similar algorithms within their child welfare agencies. But as with Northpointe’s COMPAS before them, their implementation has raised concerns about the transparency and reliability of the process as well as their clear tendency towards racial bias. However, the Allegheny developers did note that their tool was just that and was never intended to operate on its own without direct human oversight.”

New York Times: Accused of Cheating by an Algorithm, and a Professor She Had Never Met

New York Times: Accused of Cheating by an Algorithm, and a Professor She Had Never Met. “A Florida teenager taking a biology class at a community college got an upsetting note this year. A start-up called Honorlock had flagged her as acting suspiciously during an exam in February. She was, she said in an email to The New York Times, a Black woman who had been ‘wrongfully accused of academic dishonesty by an algorithm.’ What happened, however, was more complicated than a simple algorithmic mistake. It involved several humans, academic bureaucracy and an automated facial detection tool from Amazon called Rekognition.”
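
A story like this turns on automated face detection applied to webcam frames: if the detector fails to find a face, the proctoring layer can treat the session as suspicious. Below is a minimal sketch of that pattern using Amazon Rekognition’s face-detection call, with a hypothetical frame file; it illustrates how such a flag can arise and is not Honorlock’s actual pipeline.

```python
# Hedged sketch: flag an exam-webcam frame when automated face detection
# finds no face. Illustration only; not Honorlock's actual logic.
# Assumes AWS credentials are configured and "frame.jpg" (hypothetical) exists.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("frame.jpg", "rb") as f:
    frame_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": frame_bytes},
    Attributes=["DEFAULT"],  # bounding box, pose, and detection confidence
)

faces = response["FaceDetails"]
if not faces:
    # A "no face detected" result is the kind of signal that can become a
    # false cheating flag when the detector underperforms on darker-skinned
    # faces or in poor lighting.
    print("FLAG: no face detected in this frame")
else:
    print(f"{len(faces)} face(s) detected, confidence {faces[0]['Confidence']:.1f}%")
```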

WIRED: Feds Warn Employers Against Discriminatory Hiring Algorithms

WIRED: Feds Warn Employers Against Discriminatory Hiring Algorithms. “Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard. All of this has gone widely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act.”

The Verge: Google is using a new way to measure skin tones to make search results more inclusive

The Verge: Google is using a new way to measure skin tones to make search results more inclusive. “The tech giant is working with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST Scale is designed to replace outdated skin tone scales that are biased towards lighter skin. When these older scales are used by tech companies to categorize skin color, it can lead to products that perform worse for people with darker coloring, says Monk.”
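
In practice, a tone scale like this gets used by snapping a measured skin color to its nearest reference swatch. The sketch below shows that nearest-swatch idea in the abstract; the ten RGB values are placeholders rather than the published Monk Skin Tone swatches, and this is not Google’s implementation.

```python
# Hedged sketch of nearest-swatch classification against a 10-point tone scale.
# The swatch values are PLACEHOLDERS, not the actual Monk Skin Tone Scale
# colors, and this is not how Google's systems are implemented.
from math import dist

# Hypothetical 10-point scale, ordered light to dark (placeholder RGB values).
PLACEHOLDER_SCALE = [
    (246, 237, 228), (243, 231, 219), (247, 234, 208), (234, 218, 186),
    (215, 189, 150), (160, 126, 86), (130, 92, 67), (96, 65, 52),
    (58, 49, 42), (41, 36, 32),
]

def nearest_tone(rgb):
    """Return the 1-based index of the closest swatch by Euclidean distance."""
    distances = [dist(rgb, swatch) for swatch in PLACEHOLDER_SCALE]
    return distances.index(min(distances)) + 1

# Example: a sampled pixel from a face crop (made-up value).
print(nearest_tone((150, 110, 80)))  # -> an index between 1 and 10
```

The finer and more evenly spread the scale, the less it collapses distinct darker tones into a single bucket, which is the shortcoming Monk attributes to the older scales.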

University of Maryland: Researchers Work to Make Artificial Intelligence Genuinely Fair

University of Maryland: Researchers Work to Make Artificial Intelligence Genuinely Fair. “Artificial intelligence (AI) algorithms help make online shopping seamless, calculate credit scores, navigate vehicles and even offer judges criminal sentencing guidelines. But as the use of AI increases exponentially, so does the concern that biased data can result in flawed decisions or prejudiced outcomes. Now, backed by a combined $1.6 million in funding from the National Science Foundation (NSF) and Amazon, two teams of University of Maryland researchers are working to eliminate those biases by developing new algorithms and protocols that can improve the efficiency, reliability and trustworthiness of AI systems.”

Cornell Chronicle: Words used in text-mining research carry bias, study finds

Cornell Chronicle: Words used in text-mining research carry bias, study finds. “The word lists packaged and shared amongst researchers to measure for bias in online texts often carry words, or ‘seeds,’ with baked-in biases and stereotypes, which could skew their findings, new Cornell research finds. For instance, the presence of the seed term ‘mom’ in a text analysis exploring gender in domestic work would skew results female.”
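
The mechanism the study describes is easy to see in a toy setting: if a supposedly neutral seed list for “domestic work” already contains a gendered word like “mom,” the measurement tilts female before it ever touches the data. Here is a minimal co-occurrence sketch with made-up sentences; it illustrates the effect and is not the Cornell authors’ method or code.

```python
# Hedged sketch: how one gendered seed term skews a simple co-occurrence
# bias score. Toy corpus and toy scoring; not the study's actual method.

FEMALE_TERMS = {"she", "her", "woman", "mother"}
MALE_TERMS = {"he", "his", "man", "father"}

NEUTRAL_SEEDS = {"cleaning", "cooking", "laundry", "childcare"}
BIASED_SEEDS = NEUTRAL_SEEDS | {"mom"}  # "mom" smuggles in a gendered cue

CORPUS = [
    "she handles the cooking while he does the laundry",
    "he took over childcare and cleaning this week",
    "a mom juggles her schedule around the kids",
    "every mom she knows talks about her busy days",
]

def gender_skew(seeds):
    """(female hits - male hits) over sentences containing any seed term.
    Positive values skew female."""
    female = male = 0
    for sentence in CORPUS:
        tokens = set(sentence.split())
        if tokens & seeds:
            female += len(tokens & FEMALE_TERMS)
            male += len(tokens & MALE_TERMS)
    return female - male

print("neutral seeds skew:", gender_skew(NEUTRAL_SEEDS))  # slightly male here
print("biased seeds skew: ", gender_skew(BIASED_SEEDS))   # pulled female by "mom"
```

The corpus never changes, only the seed list does, yet the score flips direction; that is the kind of baked-in skew the researchers warn about.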

Mashable: Twitter study says its algorithm favors right-wing parties and news outlets

Mashable: Twitter study says its algorithm favors right-wing parties and news outlets. “A Twitter study and accompanying blog post, published Thursday, show that the company’s algorithm tends to favor right-leaning news outlets and right-wing political parties. In other words, long-disputed claims of anti-conservative bias on social media couldn’t be further from the truth.”