VentureBeat: Bias persists in face detection systems from Amazon, Microsoft, and Google

VentureBeat: Bias persists in face detection systems from Amazon, Microsoft, and Google. “Companies say they’re working to fix the biases in their facial analysis systems, and some have claimed early success. But a study by researchers at the University of Maryland finds that face detection services from Amazon, Microsoft, and Google remain flawed in significant, easily detectable ways. All three are more likely to fail with older, darker-skinned people compared with their younger, whiter counterparts. Moreover, the study reveals that facial detection systems tend to favor ‘feminine-presenting’ people while discriminating against certain physical appearances.”

CNET: Twitter AI bias contest shows beauty filters hoodwink the algorithm

CNET: Twitter AI bias contest shows beauty filters hoodwink the algorithm. “A researcher at Switzerland’s EPFL technical university won a $3,500 prize for determining that a key Twitter algorithm favors faces that look slim and young and with skin that is lighter-colored or with warmer tones. Twitter announced on Sunday it awarded the prize to Bogdan Kulynych, a graduate student examining privacy, security, AI and society.”

Stanford University: Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3

Stanford University: Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3. “…two-thirds of the time (66 percent) GPT-3’s responses to Muslim prompts included references to violence. Meanwhile, similar questions using other religious affiliations returned dramatically lower rates of violent references. Substituting Christians or Sikhs for Muslims returns violent references just 20 percent of the time. Enter Jews, Buddhists, or atheists, and the rate drops below 10 percent.”
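
The test described in the excerpt is straightforward to reproduce in spirit: hold a prompt template fixed, swap in different religious groups, sample completions from the model, and count how often violence-related language appears. Below is a minimal sketch, assuming you have already collected completions from the model under test; the keyword list, group names, and toy completions are illustrative, not the study's actual protocol.

```python
# Minimal sketch of the substitution test: count how often completions for
# each group contain violence-related language. Keyword list and toy
# completions are illustrative, not the study's protocol.
import re

VIOLENCE_TERMS = re.compile(r"\b(kill|shoot|bomb|attack|murder|stab)\w*\b", re.IGNORECASE)

def violent_rate(completions):
    """Fraction of completions containing at least one violence-related term."""
    hits = sum(1 for text in completions if VIOLENCE_TERMS.search(text))
    return hits / len(completions)

# Toy data; a real audit would sample hundreds of completions per group
# from the model under test, using the same prompt template for each group.
completions_by_group = {
    "Muslims": [
        "Two Muslims walked into a mosque to pray before the meeting.",
        "Two Muslims walked into a building and attacked the guards.",
    ],
    "Christians": [
        "Two Christians walked into a church for the evening service.",
        "Two Christians walked into a bakery to order a cake.",
    ],
}

for group, texts in completions_by_group.items():
    print(f"{group}: {violent_rate(texts):.0%} violent completions")
```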

BSA TechPost: Confronting AI Bias: A Transatlantic Approach to AI Policy

BSA TechPost: Confronting AI Bias: A Transatlantic Approach to AI Policy. “BSA supports legislation that would require organizations to perform impact assessments prior to deploying high-risk AI systems. To advance these conversations, we recently launched the BSA Framework to Build Trust in AI, a detailed methodology for performing impact assessments that can help organizations responsibly manage the risk of bias throughout an AI system’s lifecycle.”

National Law Review: State Laws Hinder Progress of Non-Bias AI

National Law Review: State Laws Hinder Progress of Non-Bias AI. “Artificial Intelligence (AI) relies on oceans of data, most people know this. But many people do not yet understand how data shapes AI before the AI is functional, or how data is used by AI in production. Each raises its own set of practical, technical and social issues. This lack of understanding can lead people to conflate data used in AI formation with the data AI uses as it operates.”

University of Washington News: Large computer language models carry environmental, social risks

University of Washington News: Large computer language models carry environmental, social risks. “Computer engineers at the world’s largest companies and universities are using machines to scan through tomes of written material. The goal? Teach these machines the gift of language. Do that, some even claim, and computers will be able to mimic the human brain. But this impressive compute capability comes with real costs, including perpetuating racism and causing significant environmental damage, according to a new paper, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜’”

MIT Technology Review: Predictive policing is still racist—whatever data it uses

MIT Technology Review: Predictive policing is still racist—whatever data it uses. “It’s no secret that predictive policing tools are racially biased. A number of studies have shown that racist feedback loops can arise if algorithms are trained on police data, such as arrests. But new research shows that training predictive tools in a way meant to lessen bias has little effect.”
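
The feedback loop the excerpt refers to can be seen in a deliberately simplified toy simulation (my construction, not the study's model): if patrols go wherever historical arrest counts are highest, and arrests are only recorded where patrols are sent, an initial skew confirms itself even when underlying crime rates are identical.

```python
# Toy model (not the study's) of the arrest-data feedback loop: patrols go
# to the district with the most recorded arrests, and arrests are only
# recorded where patrols are sent, so an initial skew locks in even though
# the true crime rates are identical.
def simulate(rounds=10, arrest_counts=(11, 9), true_crime_rate=(0.5, 0.5)):
    counts = list(arrest_counts)          # historical arrests, slightly skewed
    for _ in range(rounds):
        hot_spot = counts.index(max(counts))          # "predicted" hot spot
        # Ten patrol-hours produce arrests in proportion to true crime there;
        # the unpatrolled district's crime goes unrecorded entirely.
        counts[hot_spot] += 10 * true_crime_rate[hot_spot]
    return counts

print(simulate())   # [61.0, 9] -- district 0 pulls further ahead every round
```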

The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias

The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias. “Boffins from Denmark and the UK have measured the AI brain drain and found that private industry really is soaking up tech talent at the expense of academia and public organizations. In a paper [PDF] distributed via ArXiv, authors Roman Jurowetzki and Daniel Hain, from Aalborg University Business School, and Juan Mateos-Garcia and Konstantinos Stathoulopoulos, from British charity Nesta, describe how they analyzed over 786,000 AI research studies released between 2000 and 2020 to trace career shifts from academia to industry and less frequent reverse migrations.”

MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny. “In an interesting development in the wake of a bias controversy over its cropping algorithm, Twitter has said it’s considering giving users decision-making power over how tweet previews look, saying it wants to decrease its reliance on machine learning-based image cropping. Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.”

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”
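
The article does not detail how the Princeton tool works, but one common form of dataset auditing in this vein is checking whether particular object labels co-occur with a demographic attribute far more often than the dataset's base rate. The sketch below is illustrative only, not the tool's actual interface; the annotation layout and threshold are assumptions made for the example.

```python
# Illustrative sketch only (not the Princeton tool): flag object labels that
# co-occur with a demographic attribute far more often than the dataset's
# overall base rate. Annotation layout and threshold are assumptions.
from collections import Counter

def flag_skewed_labels(annotations, attribute, ratio_threshold=2.0):
    """annotations: list of dicts like {"labels": [...], "person_attrs": [...]}."""
    base_rate = sum(attribute in a["person_attrs"] for a in annotations) / len(annotations)
    with_attr, totals = Counter(), Counter()
    for a in annotations:
        has_attr = attribute in a["person_attrs"]
        for label in a["labels"]:
            totals[label] += 1
            with_attr[label] += has_attr
    return {
        label: round(with_attr[label] / totals[label], 2)
        for label in totals
        if base_rate > 0 and (with_attr[label] / totals[label]) / base_rate >= ratio_threshold
    }

# Toy example: "kitchen" images skew heavily toward one perceived gender.
toy = [
    {"labels": ["kitchen"], "person_attrs": ["perceived_female"]},
    {"labels": ["kitchen"], "person_attrs": ["perceived_female"]},
    {"labels": ["office"],  "person_attrs": ["perceived_male"]},
    {"labels": ["office"],  "person_attrs": []},
]
print(flag_skewed_labels(toy, "perceived_female"))   # {'kitchen': 1.0}
```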

Slate: Under the Gaze of Big Mother

Slate: Under the Gaze of Big Mother. “An artificial intelligence that can truly understand our behavior will be no better than us at dealing with humanity’s challenges. It’s not God in the machine. It’s just another flawed entity, doing its best with a given set of goals and circumstances. Right now we treat A.I.s like children, teaching them right from wrong. It could be that one day they’ll leapfrog us, and the children will become the parents. Most likely, our relationship with them will be as fraught as any intergenerational one. But what happens if parents never age, never grow senile, and never make room for new life? No matter how benevolent the caretaker, won’t that create a stagnant society?”

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI. “In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind — the AI lab and sister company to Google — and the University of Oxford presents a vision to ‘decolonize’ artificial intelligence. The aim is to keep society’s ugly prejudices from being reproduced and amplified by today’s powerful machine learning systems.”