University of Texas at San Antonio: UTSA experts find bias in disease-tracking algorithms that analyze social media. “Social media has become the latest method to monitor the spread of diseases such as influenza or coronavirus. However, machine learning algorithms used to train and classify tweets have an inherent bias because they do not account for how minority groups potentially communicate health information.”

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI. “In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind — the AI lab and sister company to Google — and the University of Oxford presents a vision to ‘decolonize’ artificial intelligence. The aim is to keep society’s ugly prejudices from being reproduced and amplified by today’s powerful machine learning systems.”

VentureBeat: Artie releases tool to measure bias in speech recognition models. “Artie, a startup developing a platform for mobile games on social media that feature AI, today released a data set and tool for detecting demographic bias in voice apps. The Artie Bias Corpus (ABC), which consists of audio files along with their transcriptions, aims to diagnose and mitigate the impact of factors like age, gender, and accent in voice recognition systems.”

TechCrunch: We need a new field of AI to combat racial bias. “Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to ‘put in place stronger regulations to govern the ethical use of facial recognition technology.’ But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and accept the embrace of the entire community.”

New York Times: Wrongfully Accused by an Algorithm. “On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.”

MIT Technology Review: AI researchers say scientific publishers help perpetuate racist algorithms. “An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it originally planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release.”

Stanford News: Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers. “The technology that powers the nation’s leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.”
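
A quick, purely illustrative sketch of the kind of per-group comparison behind a finding like this: word error rate (WER) computed separately for each speaker group, then compared across groups. The group labels and transcripts below are hypothetical placeholders, not data or code from the Stanford study.

```python
# Hypothetical sketch: per-group word error rate (WER), the usual metric
# behind claims like "twice as many errors for one group of speakers."
from collections import defaultdict

def word_errors(reference: str, hypothesis: str):
    """Return (word-level edit distance, number of reference words)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words, one row at a time.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, row[0] = row[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(row[j] + 1,        # deletion
                      row[j - 1] + 1,    # insertion
                      prev + (r != h))   # substitution (or match)
            prev, row[j] = row[j], cur
    return row[len(hyp)], len(ref)

# Made-up evaluation set: (speaker group, reference transcript, ASR output).
samples = [
    ("group_a", "the quick brown fox", "the quick brown fox"),
    ("group_a", "call me back tomorrow", "call me back tomorrow"),
    ("group_b", "the quick brown fox", "a quick brown fax"),
    ("group_b", "call me back tomorrow", "call me bag to borrow"),
]

totals = defaultdict(lambda: [0, 0])  # group -> [total errors, total reference words]
for group, reference, hypothesis in samples:
    errors, length = word_errors(reference, hypothesis)
    totals[group][0] += errors
    totals[group][1] += length

for group, (errors, length) in sorted(totals.items()):
    print(f"{group}: WER = {errors / length:.0%}")
```

A large gap between the per-group numbers (for example, one group's WER being double another's on the same sentences) is exactly the kind of disparity the study reports.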

Search Engine Journal: Data suggests there’s still no corporate or brand bias in Google results. “You may have an opinion that yes, Google is clearly biased toward big brands, or no, Google is just trying to give the users what they’re looking for and no one’s looking for someone’s dumb blog. But we don’t need opinions here because this is a claim about what sites show up in search, and we have a lot of data on that from SEMRush and other sites that rank the web according to how much organic traffic they likely get.”

The Verge: ICE rigged its algorithms to keep immigrants in jail, claims lawsuit. “A new lawsuit claims Immigration and Customs Enforcement (ICE) rigged software to create a ‘secret no-release policy’ for people suspected of breaking immigration laws. ICE’s New York office uses a risk assessment algorithm to recommend that an arrestee be released or detained until a hearing. But the New York Civil Liberties Union and Bronx Defenders say the algorithm was changed in 2015 and again in 2017, removing the ability to recommend release, even for arrestees who posed no threat.”

Slate: How Algorithmic Bias Hurts People With Disabilities. “A hiring tool analyzes facial movements and tone of voice to assess job candidates’ video interviews. A study reports that Facebook’s algorithm automatically shows users job ads based on inferences about their gender and race. Facial recognition tools work less accurately on people with darker skin tones. As more instances of algorithmic bias hit the headlines, policymakers are starting to respond. But in this important conversation, a critical area is being overlooked: the impact on people with disabilities.”

The Next Web: Court orders moratorium on black box AI that detects welfare fraud amid human rights concerns. “The Hague District Court in The Netherlands yesterday ordered the Dutch government to halt its use of a black box AI system designed to predict welfare fraud. The ruling was issued over privacy concerns and is being heralded as a civil rights victory by activists and privacy advocacy groups.”

Brookings: Assessing employer intent when AI hiring tools are biased. “In this paper, I discuss how hiring is a multi-layered and opaque process and how it will become more difficult to assess employer intent as recruitment processes move online. Because intent is a critical aspect of employment discrimination law, I ultimately suggest four ways upon which to include it in the discussion surrounding algorithmic bias.”

Stanford News: Search results not biased along party lines, Stanford scholars find. “According to newly published research by Stanford scholars, there appears to be no political favoritism for or against either major political party in the algorithm of a popular search engine.”

IFL Science: This Is Why Women Are Setting Their Gender To Male On Instagram. “The Instagram community guidelines state that nudity and inappropriate content is not allowed on the platform. ‘This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed.’ However back in April, the Instagram algorithm changed to demote certain posts, even if they don’t technically break the rules set by the platform itself, HuffPost reports.”

The Sociable: NIST research effort to measure bias in results we get from search engines: ‘Fair Ranking’. “As part of its long-running Text Retrieval Conference (TREC), which is taking place this week at NIST’s Gaithersburg, Maryland, campus, NIST has launched the Fair Ranking track this year, which is an incubator for a new area of study that aims to bring fairness in research. The track has been proposed and organized by researchers from Microsoft, Boise State University and NIST, who hope to find strategies for removing bias, by finding apt ways to measure the amount of bias in data and search techniques.”