MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny. “In an interesting development in the wake of a bias controversy over its cropping algorithm, Twitter has said it’s considering giving users decision-making power over how tweet previews look, saying it wants to decrease its reliance on machine learning-based image cropping. Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.”

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”

Slate: Under the Gaze of Big Mother

Slate: Under the Gaze of Big Mother. “An artificial intelligence that can truly understand our behavior will be no better than us at dealing with humanity’s challenges. It’s not God in the machine. It’s just another flawed entity, doing its best with a given set of goals and circumstances. Right now we treat A.I.s like children, teaching them right from wrong. It could be that one day they’ll leapfrog us, and the children will become the parents. Most likely, our relationship with them will be as fraught as any intergenerational one. But what happens if parents never age, never grow senile, and never make room for new life? No matter how benevolent the caretaker, won’t that create a stagnant society?”

Forward: Search ‘Jewish baby carriage,’ Google will return images of ovens

Forward: Search ‘Jewish baby carriage,’ Google will return images of ovens. “Enter ‘Jewish baby carriages’ into a Google Search and the first results to appear are images of ovens. Historical images of Jewish women pushing strollers and more recent images of Hasidic Jewish women are interspersed with disturbing photos of large black ovens.”

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI

Engadget: DeepMind and Oxford University researchers on how to ‘decolonize’ AI. “In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind — the AI lab and sister company to Google — and the University of Oxford presents a vision to ‘decolonize’ artificial intelligence. The aim is to keep society’s ugly prejudices from being reproduced and amplified by today’s powerful machine learning systems.”

TechCrunch: We need a new field of AI to combat racial bias

TechCrunch: We need a new field of AI to combat racial bias. “Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to ‘put in place stronger regulations to govern the ethical use of facial recognition technology.’ But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and accept the embrace of the entire community.”

The Register: Boffins bash Google Translate for sexism

The Register: Boffins bash Google Translate for sexism. “In a research paper distributed through the pre-print service ArXiv, ‘Assessing Gender Bias in Machine Translation – A Case Study with Google Translate,’ Marcelo Prates, Pedro Avelar, and Luis Lamb from Brazil’s Federal University of Rio Grande do Sul, explore how Google Translate renders gender pronouns in English from sentences written in a dozen different gender-neutral languages.”
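The probing approach the paper describes is easy to picture: take occupation sentences written in a gender-neutral language, translate them to English, and tally which gendered pronoun the translation chooses. The sketch below is a minimal illustration of that idea, not the authors' code; the `translate` function is a stand-in for whatever machine-translation service you wire in, and the Hungarian template and occupation words are assumptions chosen for demonstration.

```python
import re
from collections import Counter

def translate(text: str, src: str, dest: str) -> str:
    # Stand-in for a real machine-translation call (hypothetical for this sketch);
    # replace with whatever MT API you have access to.
    raise NotImplementedError("wire this to your translation service")

# Hungarian is gender-neutral: the pronoun "ő" can mean either "he" or "she".
# Template: "ő egy <occupation>." roughly means "he/she is a <occupation>."
OCCUPATIONS_HU = {
    "ápoló": "nurse",
    "mérnök": "engineer",
    "tanár": "teacher",
    "orvos": "doctor",
}

def pronoun_choices() -> dict:
    """Count which English pronoun the translator picks for each occupation."""
    counts = {occ: Counter() for occ in OCCUPATIONS_HU}
    for occ in OCCUPATIONS_HU:
        english = translate(f"ő egy {occ}.", src="hu", dest="en").lower()
        for pronoun in ("he", "she", "they"):
            # Match the pronoun as a whole word at the start of the translation.
            if re.match(rf"\b{pronoun}\b", english):
                counts[occ][pronoun] += 1
    return counts
```

Aggregating counts like these across many occupations and several gender-neutral source languages gives the kind of pronoun distribution the study examines when asking how often the translator defaults to "he" rather than "she."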