Maryland Today: How AI Could Help Writers Spot Stereotypes

Maryland Today: How AI Could Help Writers Spot Stereotypes. “Studious Asians, sassy yet helpless women and greedy shopkeepers: These tired stereotypes of literature and film not only often offend the people they caricature, but can drag down what might otherwise have been a compelling narrative. Researchers at the University of Maryland’s Human-Computer Interaction Lab are working to combat these clichés with the creation of DramatVis Personae (DVP), a web-based visual analytics system powered by artificial intelligence that helps writers identify stereotypes they might be unwittingly giving fictional form among their cast of characters (or dramatis personae).”

University of Alberta: AI researchers improve method for removing gender bias in natural language processing

University of Alberta: AI researchers improve method for removing gender bias in natural language processing. “Researchers have found a better way to reduce gender bias in natural language processing models while preserving vital information about the meanings of words, according to a recent study that could be a key step toward addressing the issue of human biases creeping into artificial intelligence.”
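
The article doesn't spell out the improved technique, so as background only, here is a minimal sketch of the classic "hard debiasing" baseline that such work typically builds on: estimate a gender direction from a definitional word pair and project it out of other word vectors. The vectors and words below are toy values, not data from the study, and this is not the Alberta team's method.

```python
import numpy as np

# Toy 4-dimensional word vectors with made-up values (illustrative only).
vectors = {
    "he":     np.array([0.8, 0.1, 0.3, 0.2]),
    "she":    np.array([-0.7, 0.2, 0.3, 0.2]),
    "nurse":  np.array([-0.4, 0.5, 0.6, 0.1]),
    "doctor": np.array([0.3, 0.5, 0.6, 0.1]),
}

# Estimate a "gender direction" from a definitional pair and normalize it.
gender_dir = vectors["he"] - vectors["she"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of vec that lies along the bias direction."""
    return vec - np.dot(vec, direction) * direction

for word in ("nurse", "doctor"):
    before = float(np.dot(vectors[word], gender_dir))
    after = float(np.dot(neutralize(vectors[word], gender_dir), gender_dir))
    print(f"{word}: projection onto gender direction {before:.3f} -> {after:.3f}")
```

The known weakness of this baseline, and the motivation for "preserving vital information about the meanings of words," is that projecting out the bias direction can also erase legitimate semantic content.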

Motherboard: Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias

Motherboard: Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias. “Facebook and its parent company, Meta, recently released a new tool that can be used to quickly develop state-of-the-art AI. But according to the company’s researchers, the system has the same problem as its predecessors: It’s extremely bad at avoiding results that reinforce racist and sexist stereotypes.”

New York University: Researchers Outline Bias in Epidemic Research—And Offer New Simulation Tool to Guide Future Work

New York University: Researchers Outline Bias in Epidemic Research—And Offer New Simulation Tool to Guide Future Work. “A team of researchers unpacks a series of biases in epidemic research, ranging from clinical trials to data collection, and offers a game-theory approach to address them, in a new analysis. The work sheds new light on the pitfalls associated with technology development and deployment in combating global crises like COVID-19, with a look toward future pandemic scenarios.”

Penn Today: Bridging Wikipedia’s gender gap, one article at a time

Penn Today: Bridging Wikipedia’s gender gap, one article at a time. “A new study co-authored by Isabelle Langrock, a Ph.D. candidate at the Annenberg School for Communication, and Annenberg associate professor Sandra González-Bailón evaluates the work of two prominent feminist movements, finding that while these movements have been effective in adding a large volume of biographical content about women to Wikipedia, such content remains more difficult to find due to structural biases.”

University of Washington: Google’s ‘CEO’ image search gender bias hasn’t really been fixed

University of Washington: Google’s ‘CEO’ image search gender bias hasn’t really been fixed. “The researchers showed that for four major search engines from around the world, including Google, this bias is only partially fixed, according to a paper presented in February at the AAAI Conference on Artificial Intelligence. A search for an occupation, such as ‘CEO,’ yielded results with a ratio of cis-male and cis-female presenting people that matches the current statistics. But when the team added another search term — for example, ‘CEO + United States’ — the image search returned fewer photos of cis-female presenting people. In the paper, the researchers propose three potential solutions to this issue.”
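
As a rough illustration of the kind of measurement described above, one can compare the share of cis-female presenting people in labeled image results against an occupational baseline. All counts and the baseline figure below are hypothetical placeholders, not the study's data or its methodology.

```python
# Hypothetical label counts for image-search results (not the study's data).
results = {
    "CEO":                 {"cis_female_presenting": 28, "cis_male_presenting": 72},
    "CEO + United States": {"cis_female_presenting": 15, "cis_male_presenting": 85},
}

# Assumed share of women among U.S. CEOs, used only as an illustrative baseline.
baseline_female_share = 0.28

for query, counts in results.items():
    total = sum(counts.values())
    female_share = counts["cis_female_presenting"] / total
    gap = female_share - baseline_female_share
    print(f"{query}: {female_share:.0%} cis-female presenting ({gap:+.0%} vs. baseline)")
```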

The Conversation: Artificial intelligence can discriminate on the basis of race and gender, and also age

The Conversation: Artificial intelligence can discriminate on the basis of race and gender, and also age. “AI is often assumed to be more objective than humans. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary. Current research on bias in AI focuses mainly on gender and race. But what about age-related bias — can AI be ageist?”

Wired: Crime Prediction Keeps Society Stuck in the Past

Wired: Crime Prediction Keeps Society Stuck in the Past. “In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation ‘restrict the future to the past.’ In other words, these systems prevent the future in order to ‘predict’ it—they ensure that the future will be just the same as the past was.”

Washington Post: Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show

Washington Post: Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show. “The Black audience on Facebook is in decline, according to data from a study Facebook conducted earlier this year that was revealed in documents obtained by whistleblower Frances Haugen. According to the February report, the number of Black monthly users fell 2.7 percent in one month to 17.3 million adults…. Civil rights groups have long claimed that Facebook’s algorithms and policies had a disproportionately negative impact on minorities, and particularly Black users. The ‘worst of the worst’ documents show that those allegations were largely true in the case of which hate speech remained online.”

Gizmodo: Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them

Gizmodo: Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. “Between 2018 and 2021, more than one in 33 U.S. residents were potentially subject to police patrol decisions directed by crime-prediction software called PredPol. The company that makes it sent more than 5.9 million of these crime predictions to law enforcement agencies across the country—from California to Florida, Texas to New Jersey—and we found those reports on an unsecured server. Gizmodo and The Markup analyzed them and found persistent patterns.”

Michigan Tech: Hello, Bias: The Third Party in Every Vaccination Conversation

Michigan Tech: Hello, Bias: The Third Party in Every Vaccination Conversation. “This process of forming and solidifying biases happens every day. None of us are immune. Much of the time we’re unaware because most biases are subconscious. It takes an event or conversation to bring them to light. However, once confronted with information that contradicts our own interpretation of the world, it’s our responsibility to investigate — because new information and different perspectives are how we solve big challenges, like keeping us all healthy.”

NewsWise: Clinician peer networks remove race and gender bias

NewsWise: Clinician peer networks remove race and gender bias. “Using an experimental design, researchers showed that clinicians who initially exhibited significant race and gender bias in their treatment of a clinical case could be influenced to change their clinical recommendations to exhibit no bias. ‘We found that by changing the structure of information-sharing networks among clinicians, we could change doctors’ biased perceptions of their patients’ clinical information,’ says [Professor Damon] Centola, who also directs the Network Dynamics Group at the Annenberg School and is a Senior Fellow of Health Economics at the Leonard Davis Institute. ‘Put simply, doctors tend to think differently in networks than they do when they are alone.’”

Cornell Chronicle: Words used in text-mining research carry bias, study finds

Cornell Chronicle: Words used in text-mining research carry bias, study finds. “The word lists packaged and shared amongst researchers to measure for bias in online texts often carry words, or ‘seeds,’ with baked-in biases and stereotypes, which could skew their findings, new Cornell research finds. For instance, the presence of the seed term ‘mom’ in a text analysis exploring gender in domestic work would skew results female.”
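
To make the ‘mom’ example concrete, here is a toy sketch of how the choice of seed words can tilt a simple co-occurrence count toward one gender. The documents, seed lists, and scoring function are invented for illustration and are not the Cornell researchers' method or data.

```python
# Two toy documents describing the same domestic work, differing only in who does it.
documents = [
    "mom handles the laundry and cooking at home",
    "dad handles the laundry and cooking at home",
]

domestic_terms = {"laundry", "cooking", "home"}

def association(seed_words, docs, topic_terms):
    """Count topic-term occurrences in documents that mention any seed word."""
    hits = 0
    for doc in docs:
        tokens = set(doc.split())
        if tokens & set(seed_words):
            hits += len(tokens & topic_terms)
    return hits

# A 'female' seed list containing "mom" matches the first document and links
# domestic work to women; the same list without "mom" matches nothing here.
# The skew comes from the seed choice, not from the documents themselves.
print(association({"she", "her", "mom"}, documents, domestic_terms))  # 3
print(association({"she", "her"}, documents, domestic_terms))         # 0
```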