MIT Technology Review: AI researchers say scientific publishers help perpetuate racist algorithms. “An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it originally planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release.”

Arizona State University: ‘To Be Welcoming’ curriculum offers tools to counteract bias. “Two years ago, Starbucks asked Arizona State University to develop an online curriculum for all Starbucks employees that is intended to drive reflection and conversation on the topic of bias. Now Starbucks is making those courses available to the public at no cost. The curriculum, a set of 15 modules, is called ‘To Be Welcoming’ and was rolled out in September 2019. The interactive courses were created by ASU faculty experts to share research and information that can help people to think about how they view the world and to consider how other people experience it.”

CNET: Instagram to review how its policies, algorithm impact black users. “Instagram plans to reevaluate its policies in an effort to ensure black voices are heard on the app. In a blog post on Tuesday, Instagram CEO Adam Mosseri promised to address inequities in the social media company’s approach to harassment, account verification, content distribution and algorithmic bias.”

NBC News: Current and ex-employees allege Google drastically rolled back diversity and inclusion programs. “Since 2018, internal diversity and inclusion training programs have been scaled back or cut entirely, four Google employees and two people who recently left the company told NBC News in interviews. In addition, they said, the team responsible for those programs has been reduced in size, and positions previously held by full-time employees have been outsourced or not refilled after members of the diversity teams left the company.”

Wired: Protests Renew Scrutiny of Tech’s Ties to Law Enforcement. “The collective outrage over the murder of George Floyd has led to nationwide protests, renewed calls for police reform, and uncharacteristically swift support for racial equity from Silicon Valley leaders. The backlash has been swift as well. Critics are calling out many companies now pledging support for Black Lives Matter, accusing them of failing to stop racist language on their platforms and, in some cases, enabling the over-policing and surveillance that protesters now march against.”

The Verge: Social media bias lawsuits keep failing in court. “An appeals court in Washington, DC just rejected a complaint by Laura Loomer, the conservative activist who was banned from Twitter for anti-Muslim tweets and later chained herself to the company’s headquarters in protest. Loomer argued that Facebook, Google, Twitter, and Apple had all colluded to suppress conservative content, violating Loomer’s First Amendment rights in the process. The court disagreed and threw out the suit.”

Stanford News: Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers. “The technology that powers the nation’s leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.”

Search Engine Journal: Data suggests there’s still no corporate or brand bias in Google results. “You may have an opinion that yes, Google is clearly biased toward big brands, or no, Google is just trying to give the users what they’re looking for and no one’s looking for someone’s dumb blog. But we don’t need opinions here because this is a claim about what sites show up in search, and we have a lot of data on that from SEMRush and other sites that rank the web according to how much organic traffic they likely get.”

Phys.org: ‘Data feminism’ examines problems of bias and power that beset modern information. “Suppose you would like to know mortality rates for women during childbirth, by country, around the world. Where would you look? One option is the WomanStats Project, the website of an academic research effort investigating the links between the security and activities of nation-states, and the security of the women who live in them.”

ScienceDaily: Researchers devise approach to reduce biases in computer vision data sets. “Addressing problems of bias in artificial intelligence, computer scientists from Princeton and Stanford University have developed methods to obtain fairer data sets containing images of people. The researchers propose improvements to ImageNet, a database of more than 14 million images that has played a key role in advancing computer vision over the past decade.”

IdeaStream: Ohio’s Judges Considering Statewide Sentencing Database. “Members of Ohio’s judicial system are calling for more uniformity in sentencing practices across courtrooms. The state’s criminal sentencing commission argues an online database of previous sentences could aid in that effort.”

NiemanLab: Americans of all political stripes expect 2020’s fake news to be biased against their side. “Fake news, misinformation, and disinformation will be major concerns in the 2020 presidential election. According to previous research by the Pew Research Center, half of American adults describe misinformation as a ‘very big problem’ — more than who say the same about climate change, racism, and terrorism (though fewer than who say healthcare affordability, the wealth gap, and drug addiction).”

Governing: University Offers Free Class on Artificial Intelligence Ethics. “The course — developed by [Nathan] Colaner, law professor Mark Chinen and adjunct business and law professor Tracy Ann Kosa — explores the meaning of ethics in AI by looking at guiding principles proposed by some nonprofits and technology companies. A case study on facial recognition in the course encourages students to evaluate different uses of facial-recognition technology, such as surveillance or identification, and to determine how the technology should be regulated.” The course is being offered by Seattle University.

NiemanLab: The fact-checker’s dilemma: Humans are hardwired to dismiss facts that don’t fit their worldview. “Motivated reasoning is what social scientists call the process of deciding what evidence to accept based on the conclusion one prefers. As I explain in my book The Truth About Denial: Bias and Self-Deception in Science, Politics, and Religion, this very human tendency applies to all kinds of facts about the physical world, economic history and current events.”