The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias

The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias. “Boffins from Denmark and the UK have measured the AI brain drain and found that private industry really is soaking up tech talent at the expense of academia and public organizations. In a paper [PDF] distributed via arXiv, authors Roman Jurowetzki and Daniel Hain, from Aalborg University Business School, and Juan Mateos-Garcia and Konstantinos Stathoulopoulos, from British charity Nesta, describe how they analyzed over 786,000 AI research studies released between 2000 and 2020 to trace career shifts from academia to industry and less frequent reverse migrations.”

USA Today: Do Facebook, Twitter and YouTube censor conservatives? Claims ‘not supported by the facts,’ new research says

USA Today: Do Facebook, Twitter and YouTube censor conservatives? Claims ‘not supported by the facts,’ new research says. “Despite repeated charges of anti-conservative bias from former President Donald Trump and other GOP critics, Facebook, Twitter and Google’s YouTube are not slanted against right-leaning users, a new report out of New York University found. Like previous research, ‘False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives,’ concludes that rather than censoring conservatives, social media platforms amplify their voices.”

The Verge: ‘Pro Tools proficiency’ may be keeping us from diversifying audio

The Verge: ‘Pro Tools proficiency’ may be keeping us from diversifying audio. “Despite the no-doubt earnest efforts of many well-meaning individuals, podcasting, it would seem, has had — and continues to have — a diversity problem. And while there are many factors which contribute to maintaining the industry’s status quo, there is one culprit to which we can confidently point: Pro Tools.”

PsyPost: Implicit bias against Asians increased after Trump’s secretary of state and others popularized “Chinese virus”

PsyPost: Implicit bias against Asians increased after Trump’s secretary of state and others popularized “Chinese virus”. “New research suggests that the use of terms like ‘Wuhan flu’ and ‘Chinese virus’ by conservative media outlets and Republican figures had a measurable impact on unconscious bias against Asian Americans. The study, published in Health Education & Behavior, found that implicit bias increased after the use of such phrases went viral.”

TNW: Study shows how AI exacerbates recruitment bias against women

TNW: Study shows how AI exacerbates recruitment bias against women. “A new study from the University of Melbourne has demonstrated how hiring algorithms can amplify human gender biases against women. Researchers from the University of Melbourne gave 40 recruiters real-life resumés for jobs at UniBank, which funded the study. The resumés were for roles as a data analyst, finance officer, and recruitment officer, which Australian Bureau of Statistics data shows are respectively male-dominated, gender-balanced, and female-dominated positions.”
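The kind of disparity the Melbourne study describes is often summarized with a simple selection-rate comparison between groups. A minimal sketch of that check (the numbers below are invented for illustration, not data from the UniBank study):

```python
# Toy disparate-impact check: compare shortlisting rates between groups.
# All figures below are illustrative, not from the Melbourne/UniBank study.

def selection_rate(selected, total):
    """Fraction of applicants from a group who were shortlisted."""
    return selected / total

def disparate_impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 fail the common 'four-fifths' rule of thumb."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

women = selection_rate(12, 40)   # 0.30
men = selection_rate(18, 40)     # 0.45
print(f"ratio = {disparate_impact_ratio(women, men):.2f}")
```

Here the ratio works out to about 0.67, below the 0.8 rule-of-thumb threshold, flagging the outcome for closer review; the study itself measures bias in how recruiters rank resumés, not via this exact statistic.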

USC Viterbi School of Engineering: AI News Bias Tool Created By USC Computer Scientists

USC Viterbi School of Engineering: AI News Bias Tool Created By USC Computer Scientists. “USC computer scientists have developed a tool to automatically detect bias in news. The work, which combines natural language processing and leverages moral foundation theory to understand the structures and nuances of content that are consistently showing up on left-leaning and right-leaning news sites, was presented at the International Conference on Social Informatics in the paper ‘Moral Framing and Ideological Bias of News.’”
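The paper combines natural language processing with moral foundations theory. A heavily simplified sketch of the underlying lexicon-profiling idea (the mini word lists here are made up for illustration; the actual tool uses trained models, not raw keyword counts):

```python
# Toy moral-framing profile: count moral-foundation keyword hits in text.
# The mini-lexicons are invented for illustration and are far cruder
# than the models described in the USC paper.
import re
from collections import Counter

LEXICON = {
    "care": {"harm", "protect", "suffering", "safety"},
    "fairness": {"equal", "justice", "rights", "discrimination"},
    "authority": {"law", "order", "tradition", "obey"},
    "loyalty": {"nation", "patriot", "betray", "unity"},
}

def moral_profile(text):
    """Return per-foundation keyword counts for a piece of text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for foundation, vocab in LEXICON.items():
        counts[foundation] = sum(1 for w in words if w in vocab)
    return dict(counts)

print(moral_profile("New law promises equal rights and keeps safety first."))
```

Comparing such profiles across left-leaning and right-leaning outlets covering the same story is the intuition behind detecting ideological framing differences.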

MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”
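One family of mitigations the article's question points at is scoring candidate generations for toxicity and surfacing the safest one. A toy sketch of that re-ranking step, with a crude keyword blocklist standing in for a real trained toxicity classifier (both the blocklist and the candidate strings are invented examples):

```python
# Toy safety filter: re-rank candidate generations by a toxicity score.
# A keyword blocklist stands in for a real toxicity classifier here;
# production systems use trained models, not word lists.

BLOCKLIST = {"hate", "stupid", "ugly"}

def toxicity_score(text):
    """Crude score: fraction of words that hit the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w.strip(".,!?") in BLOCKLIST) / len(words)

def pick_safest(candidates):
    """Return the candidate generation with the lowest toxicity score."""
    return min(candidates, key=toxicity_score)

candidates = [
    "Rabbits are cute because they are fluffy.",
    "Rabbits are stupid and ugly.",
]
print(pick_safest(candidates))
```

Word lists are notoriously brittle (they miss context and over-block benign uses), which is why the approaches the article surveys lean on learned classifiers and fine-tuning instead.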

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny. “In an interesting development in the wake of a bias controversy over its cropping algorithm, Twitter has said it’s considering giving users decision-making power over how tweet previews look, saying it wants to decrease its reliance on machine learning-based image cropping. Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.”
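The cropping algorithm under scrutiny picks a preview window around the region a model deems most "salient." A toy one-dimensional version of that windowing idea (real systems run a learned saliency model over 2-D images; the saliency values below are made up):

```python
# Toy 1-D saliency crop: slide a fixed-width window over a saliency
# profile and keep the window with the highest total saliency.
# Twitter-style cropping does this in 2-D with a learned saliency model.

def best_crop(saliency, width):
    """Return (start, end) of the width-length window with max total saliency."""
    best_start = 0
    best_sum = sum(saliency[:width])
    window = best_sum
    for start in range(1, len(saliency) - width + 1):
        # Slide the window one step: add the entering value, drop the leaving one.
        window += saliency[start + width - 1] - saliency[start - 1]
        if window > best_sum:
            best_start, best_sum = start, window
    return best_start, best_start + width

saliency = [0, 1, 1, 5, 9, 4, 1, 0]  # spike around index 4
print(best_crop(saliency, 3))
```

The bias controversy arose because whatever the saliency model systematically favors (faces, skin tones, image regions) silently decides who stays in the preview, which is exactly the decision Twitter is considering handing back to users.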

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally. “The battle over algorithms in healthcare has come into full view since last fall. The debate only intensified in the wake of the coronavirus pandemic, which has disproportionately devastated Black and Latino communities. In October, Science published a study that found one hospital unintentionally directed more white patients than Black patients to a high-risk care management program because it used an algorithm to predict the patients’ future healthcare costs as a key indicator of personal health. Optum, the company that sells the software product, told Mashable that the hospital used the tool incorrectly.”

Slate: Under the Gaze of Big Mother

Slate: Under the Gaze of Big Mother. “An artificial intelligence that can truly understand our behavior will be no better than us at dealing with humanity’s challenges. It’s not God in the machine. It’s just another flawed entity, doing its best with a given set of goals and circumstances. Right now we treat A.I.s like children, teaching them right from wrong. It could be that one day they’ll leapfrog us, and the children will become the parents. Most likely, our relationship with them will be as fraught as any intergenerational one. But what happens if parents never age, never grow senile, and never make room for new life? No matter how benevolent the caretaker, won’t that create a stagnant society?”

San Diego Union-Tribune: Column: Student probes alleged Google search bias

San Diego Union-Tribune: Column: Student probes alleged Google search bias. “When Agastya Sridharan read in The Wall Street Journal last fall about some politicians’ complaints of suspected bias in Google online search results, he was upset and intrigued. Was it possible to re-order search results and, thus, influence voter preferences? Agastya, then a 13-year-old eighth-grader at Thurgood Marshall Middle School in Scripps Ranch, decided to conduct his own research as his entry in the 2020 Greater San Diego Science and Engineering Fair.”

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough. “Instagram says it noticed that people were turning to the platform to raise awareness and promote the causes they were invested in, especially in the midst of the pandemic, racial tensions, and the 2020 election. So it created a new Instagram Equity team ‘that will focus on better understanding and addressing bias in our product development and people’s experiences on Instagram’—including fairness in algorithms.”