Mother Jones: Facebook Manipulated the News You See to Appease Republicans, Insiders Say

Mother Jones: Facebook Manipulated the News You See to Appease Republicans, Insiders Say. “To be perfectly clear: Facebook used its monopolistic power to boost and suppress specific publishers’ content—the essence of every Big Brother fear about the platforms, and something Facebook and other companies have been strenuously denying for years. It’s also, ironically, what conservatives have consistently accused Facebook of doing to them, with the perverse but entirely intended effect of causing it to bend over backward for them instead.”

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”
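
The announcement describes what the Princeton tool does rather than how it works, but the general shape of dataset-bias flagging is easy to sketch: tally how an annotated attribute is distributed across the training images and warn on heavy skews. The sketch below is only in that spirit; the “region” attribute and the 10% threshold are hypothetical choices for illustration, not the tool’s actual rules.

```python
# Minimal sketch of dataset-bias flagging (hypothetical attribute and
# threshold): tally an annotated attribute across a training set and
# flag values that are badly underrepresented.
from collections import Counter

def flag_skews(annotations, attribute, min_share=0.10):
    """Return (value, share) pairs for underrepresented attribute values."""
    counts = Counter(img[attribute] for img in annotations)
    total = sum(counts.values())
    return [(value, count / total)
            for value, count in counts.items()
            if count / total < min_share]

# Hypothetical per-image annotations for a 100-image training set.
annotations = ([{"region": "North America"}] * 70
               + [{"region": "Europe"}] * 25
               + [{"region": "Africa"}] * 5)

print(flag_skews(annotations, "region"))  # [('Africa', 0.05)]
```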

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally. “The battle over algorithms in healthcare has come into full view since last fall. The debate only intensified in the wake of the coronavirus pandemic, which has disproportionately devastated Black and Latino communities. In October, Science published a study that found one hospital unintentionally directed more white patients than Black patients to a high-risk care management program because it used an algorithm to predict the patients’ future healthcare costs as a key indicator of personal health. Optum, the company that sells the software product, told Mashable that the hospital used the tool incorrectly.”
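
The core finding of the Science study is a proxy problem: the algorithm predicted future healthcare costs, but cost tracks access to care as well as illness, so a group that spends less at the same level of sickness gets ranked as lower risk. A minimal sketch makes the mechanism visible; every number here (spending rates, noise, cohort sizes) is invented for illustration.

```python
# Illustrative sketch (all numbers invented): ranking patients by predicted
# cost under-serves a group that spends less on care at the same level of
# illness -- the proxy problem the Science study identified.
import random

random.seed(0)

def make_patient(group):
    conditions = random.randint(0, 8)          # stand-in for true health need
    spend_rate = 900 if group == "A" else 600  # unequal access -> unequal spend
    cost = conditions * spend_rate + random.gauss(0, 300)
    return {"group": group, "conditions": conditions, "cost": cost}

patients = [make_patient(g) for g in ["A"] * 500 + ["B"] * 500]
random.shuffle(patients)  # avoid tie-breaking artifacts when sorting

def group_b_share(ranked, k=200):
    """Share of group B among the k patients ranked highest for the program."""
    return sum(p["group"] == "B" for p in ranked[:k]) / k

by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
by_need = sorted(patients, key=lambda p: p["conditions"], reverse=True)

print("top 200 by predicted cost, group B share:", group_b_share(by_cost))
print("top 200 by health need, group B share:", group_b_share(by_need))
```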

Mashable: Twitter to investigate apparent racial bias in photo previews

Mashable: Twitter to investigate apparent racial bias in photo previews. “The first look a Twitter user gets at a tweet might be an unintentionally racially biased one. Twitter said Sunday that it would investigate whether the neural network that selects which part of an image to show in a photo preview favors showing the faces of white people over Black people.”
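
Preview cropping of this kind is generally driven by a saliency model: score regions of the image, then keep the window with the highest total score. The toy sketch below (invented saliency grid, simplified window search) shows why a model that systematically scores some faces lower would systematically crop them out; it is not Twitter’s actual network.

```python
# Toy sketch of saliency-based cropping: keep the crop window with the
# highest total saliency. If the model scores some faces lower, those
# faces lose the crop. Grid values below are invented.

def best_crop(saliency, crop_w):
    """Return the column offset of the crop_w-wide window with max saliency."""
    n_cols = len(saliency[0])
    best_offset, best_score = 0, float("-inf")
    for offset in range(n_cols - crop_w + 1):
        score = sum(row[c] for row in saliency
                    for c in range(offset, offset + crop_w))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Hypothetical saliency grid: a face on the left the model scored 0.9,
# one on the right it scored 0.6. The lower-scored face never appears.
saliency = [
    [0.9, 0.1, 0.0, 0.1, 0.6],
    [0.9, 0.1, 0.0, 0.1, 0.6],
]
print(best_crop(saliency, crop_w=2))  # 0 -> window covering the left face
```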

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough. “Instagram says it noticed that people were turning to the platform to raise awareness and promote the causes they were invested in, especially in the midst of the pandemic, racial tensions, and the 2020 election. So it created a new Instagram Equity team ‘that will focus on better understanding and addressing bias in our product development and people’s experiences on Instagram’—including fairness in algorithms.”

The Conversation: Not just A-levels: unfair algorithms are being used to make all sorts of government decisions

The Conversation: Not just A-levels: unfair algorithms are being used to make all sorts of government decisions. “Algorithmic systems tend to be promoted for several reasons, including claims that they produce smarter, faster, more consistent and more objective decisions, and make more efficient use of government resources. The A-level fiasco has shown that this is not necessarily the case in practice. Even where an algorithm provides a benefit (fast, complex decision-making for a large amount of data), it may bring new problems (socio-economic discrimination).”

EurekAlert: New tool improves fairness of online search rankings

EurekAlert: New tool improves fairness of online search rankings. “When you search for something on the internet, do you scroll through page after page of suggestions – or pick from the first few choices? Because most people choose from the tops of these lists, they rarely see the vast majority of the options, creating a potential for bias in everything from hiring to media exposure to e-commerce. In a new paper, Cornell University researchers introduce a tool they’ve developed to improve the fairness of online rankings without sacrificing their usefulness or relevance.”
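
The underlying problem is position bias: exposure falls off steeply with rank, so two items of near-equal relevance can receive wildly different attention. A common way to model this is to discount position i by 1/log2(i + 1), the same discount used in DCG; the sketch below uses that discount (an assumption, not necessarily the Cornell paper’s exact model) to show how a relevance-only ranking concentrates exposure in one group.

```python
# Sketch of position bias in rankings. The 1/log2(rank + 1) discount is a
# standard position-bias model (as in DCG), assumed here for illustration.
import math

def exposure(rank):  # rank is 1-based
    return 1.0 / math.log2(rank + 1)

# Hypothetical ranking: two groups of items with near-identical relevance,
# with group A's items happening to sort first.
ranking = [("a1", "A"), ("a2", "A"), ("a3", "A"),
           ("b1", "B"), ("b2", "B"), ("b3", "B")]

totals = {"A": 0.0, "B": 0.0}
for rank, (item, group) in enumerate(ranking, start=1):
    totals[group] += exposure(rank)

print(totals)  # group A collects far more exposure despite similar relevance
```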

University of Texas at San Antonio: UTSA experts find bias in disease-tracking algorithms that analyze social media

University of Texas at San Antonio: UTSA experts find bias in disease-tracking algorithms that analyze social media. “Social media has become the latest method to monitor the spread of diseases such as influenza or coronavirus. However, machine learning algorithms used to train and classify tweets have an inherent bias because they do not account for how minority groups potentially communicate health information.”
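
The failure mode described here is concrete: a classifier trained mostly on one group’s phrasing will recall health-related posts from other groups at a lower rate. The toy example below makes the gap visible with an invented keyword matcher and invented tweets; real systems are learned models, but the per-group recall comparison works the same way.

```python
# Toy illustration (invented keywords and tweets): a health classifier
# tuned to one group's phrasing shows a per-group recall gap.

FLU_KEYWORDS = {"flu", "influenza", "fever"}

def flags_flu(tweet):
    return any(word in tweet.lower().split() for word in FLU_KEYWORDS)

# Hypothetical labeled tweets: (text, is_really_about_flu, speaker_group).
tweets = [
    ("down with the flu again", True, "group_a"),
    ("running a fever all week", True, "group_a"),
    ("been feeling real sick with that bug going around", True, "group_b"),
    ("caught that thing everybody got rn", True, "group_b"),
]

recall = {}
for text, is_flu, group in tweets:
    hits, total = recall.get(group, (0, 0))
    recall[group] = (hits + (flags_flu(text) and is_flu), total + is_flu)

for group, (hits, total) in sorted(recall.items()):
    print(group, hits / total)  # group_b's phrasing is missed entirely
```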

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI. “In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind — the AI lab and sister company to Google — and the University of Oxford presents a vision to ‘decolonize’ artificial intelligence. The aim is to keep society’s ugly prejudices from being reproduced and amplified by today’s powerful machine learning systems.”

VentureBeat: Artie releases tool to measure bias in speech recognition models

VentureBeat: Artie releases tool to measure bias in speech recognition models. “Artie, a startup developing a platform for mobile games on social media that feature AI, today released a data set and tool for detecting demographic bias in voice apps. The Artie Bias Corpus (ABC), which consists of audio files along with their transcriptions, aims to diagnose and mitigate the impact of factors like age, gender, and accent in voice recognition systems.”
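
A corpus of audio paired with reference transcriptions supports a simple bias measurement: run each clip through the recognizer, compute word error rate (WER) against the reference, and compare averages across demographic groups. The sketch below implements WER from scratch and aggregates invented results by group; it illustrates the kind of comparison such a corpus enables, not Artie’s actual tooling.

```python
# Sketch: demographic bias in speech recognition measured as per-group
# word error rate (WER). Sample data below is invented for illustration.

def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical (reference, ASR hypothesis, speaker group) triples.
samples = [
    ("turn the lights on", "turn the lights on", "group_a"),
    ("call my sister", "call my sister", "group_a"),
    ("turn the lights on", "turn the light song", "group_b"),
    ("call my sister", "call mister", "group_b"),
]

by_group = {}
for ref, hyp, group in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in sorted(by_group.items()):
    print(group, sum(rates) / len(rates))  # average WER per group
```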

TechCrunch: We need a new field of AI to combat racial bias

TechCrunch: We need a new field of AI to combat racial bias. “Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to ‘put in place stronger regulations to govern the ethical use of facial recognition technology.’ But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and accept the embrace of the entire community.”

New York Times: Wrongfully Accused by an Algorithm

New York Times: Wrongfully Accused by an Algorithm. “On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.”

MIT Technology Review: AI researchers say scientific publishers help perpetuate racist algorithms

MIT Technology Review: AI researchers say scientific publishers help perpetuate racist algorithms. “An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it originally planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release.”

Stanford News: Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers

Stanford News: Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers. “The technology that powers the nation’s leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.”

Search Engine Journal: Data suggests there’s still no corporate or brand bias in Google results

Search Engine Journal: Data suggests there’s still no corporate or brand bias in Google results. “You may have an opinion that yes, Google is clearly biased toward big brands, or no, Google is just trying to give the users what they’re looking for and no one’s looking for someone’s dumb blog. But we don’t need opinions here because this is a claim about what sites show up in search, and we have a lot of data on that from SEMRush and other sites that rank the web according to how much organic traffic they likely get.”