Digiday: Recruitment tool TikTok Resumes risks magnifying unconscious biases, execs warn. “TikTok Resumes could become a recruitment tool that inadvertently encourages discrimination, especially in the wake of companies like Target and Chipotle signing on to the new initiative, senior executives in technology, HR and social responsibility roles told Digiday.”

GCN: Outside reviews can limit bias in forensic algorithms, GAO says. “While technology can curtail subjective decisions and reduce the time it takes analysts to reach conclusions, it comes with its own set of challenges. In a follow-up to a May 2020 report on how forensic algorithms work, the Government Accountability Office outlined the key challenges affecting the use of these algorithms and the associated social and ethical implications.”

University of Washington News: Large computer language models carry environmental, social risks. “Computer engineers at the world’s largest companies and universities are using machines to scan through tomes of written material. The goal? Teach these machines the gift of language. Do that, some even claim, and computers will be able to mimic the human brain. But this impressive compute capability comes with real costs, including perpetuating racism and causing significant environmental damage, according to a new paper, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜’”

MIT Technology Review: Predictive policing is still racist—whatever data it uses. “It’s no secret that predictive policing tools are racially biased. A number of studies have shown that racist feedback loops can arise if algorithms are trained on police data, such as arrests. But new research shows that training predictive tools in a way meant to lessen bias has little effect.”

TNW: Study shows how AI exacerbates recruitment bias against women. “A new study from the University of Melbourne has demonstrated how hiring algorithms can amplify human gender biases against women. Researchers from the University of Melbourne gave 40 recruiters real-life resumés for jobs at UniBank, which funded the study. The resumés were for roles as a data analyst, finance officer, and recruitment officer, which Australian Bureau of Statistics data shows are respectively male-dominated, gender-balanced, and female-dominated positions.”

Online Journalism Blog: “There are still many questions that are not answered” – Nicolas Kayser-Bril on investigating algorithmic discrimination on Facebook. “In a special guest post for OJB, Vanessa Fillis speaks to AlgorithmWatch’s Nicolas Kayser-Bril about his work on how online platforms optimise ad delivery, including his recent story on how Facebook draws on gender stereotypes.”

ScienceBlog: When Algorithms Compete, Who Wins? “James Zou, Stanford assistant professor of biomedical data science and an affiliated faculty member of the Stanford Institute for Human-Centered Artificial Intelligence, says that as algorithms compete for clicks and the associated user data, they become more specialized for subpopulations that gravitate to their sites. And that, he finds in a new paper with graduate student Antonio Ginart and undergraduate Eva Zhang, can have serious implications for both companies and consumers.”

Mother Jones: Facebook Manipulated the News You See to Appease Republicans, Insiders Say. “To be perfectly clear: Facebook used its monopolistic power to boost and suppress specific publishers’ content—the essence of every Big Brother fear about the platforms, and something Facebook and other companies have been strenuously denying for years. It’s also, ironically, what conservatives have consistently accused Facebook of doing to them, with the perverse but entirely intended effect of causing it to bend over backward for them instead.”

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally. “The battle over algorithms in healthcare has come into full view since last fall. The debate only intensified in the wake of the coronavirus pandemic, which has disproportionately devastated Black and Latino communities. In October, Science published a study that found one hospital unintentionally directed more white patients than Black patients to a high-risk care management program because it used an algorithm to predict the patients’ future healthcare costs as a key indicator of personal health. Optum, the company that sells the software product, told Mashable that the hospital used the tool incorrectly.”

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough. “Instagram says it noticed that people were turning to the platform to raise awareness and promote the causes they were invested in, especially in the midst of the pandemic, racial tensions, and the 2020 election. So it created a new Instagram Equity team ‘that will focus on better understanding and addressing bias in our product development and people’s experiences on Instagram’—including fairness in algorithms.”

The Conversation: Not just A-levels: unfair algorithms are being used to make all sorts of government decisions. “Algorithmic systems tend to be promoted for several reasons, including claims that they produce smarter, faster, more consistent and more objective decisions, and make more efficient use of government resources. The A-level fiasco has shown that this is not necessarily the case in practice. Even where an algorithm provides a benefit (fast, complex decision-making for a large amount of data), it may bring new problems (socio-economic discrimination).”

EurekAlert: New tool improves fairness of online search rankings. “When you search for something on the internet, do you scroll through page after page of suggestions – or pick from the first few choices? Because most people choose from the tops of these lists, they rarely see the vast majority of the options, creating a potential for bias in everything from hiring to media exposure to e-commerce. In a new paper, Cornell University researchers introduce a tool they’ve developed to improve the fairness of online rankings without sacrificing their usefulness or relevance.”