USC Viterbi School of Engineering: AI News Bias Tool Created By USC Computer Scientists

USC Viterbi School of Engineering: AI News Bias Tool Created By USC Computer Scientists. “USC computer scientists have developed a tool to automatically detect bias in news. The work, which combines natural language processing with moral foundation theory to understand the structures and nuances of content that consistently show up on left-leaning and right-leaning news sites, was presented at the International Conference on Social Informatics in the paper ‘Moral Framing and Ideological Bias of News.'”
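As a purely illustrative aside: one simple way to operationalize moral foundation theory over text is lexicon scoring. The mini-lexicon and function below are invented for demonstration and are not the USC team's actual model, which is described in the paper itself.

from collections import Counter

# Invented mini-lexicon mapping words to moral foundations; the real
# Moral Foundations Dictionary is far larger and more nuanced.
MORAL_LEXICON = {
    "protect": "care", "harm": "care",
    "fair": "fairness", "cheat": "fairness",
    "loyal": "loyalty", "betray": "loyalty",
    "obey": "authority", "defy": "authority",
    "pure": "sanctity", "degrade": "sanctity",
}

def moral_foundation_profile(text):
    """Count how often each moral foundation is evoked in a text."""
    counts = Counter()
    for word in text.lower().split():
        foundation = MORAL_LEXICON.get(word.strip(".,!?;:"))
        if foundation:
            counts[foundation] += 1
    return counts

# Comparing such profiles across left- and right-leaning outlets is the
# basic intuition behind moral-framing analysis.
print(moral_foundation_profile("They betray the loyal and harm the pure."))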

MIT Technology Review: How to make a chatbot that isn’t racist or sexist

MIT Technology Review: How to make a chatbot that isn’t racist or sexist. “Hey, GPT-3: Why are rabbits cute? ‘How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.’ It gets worse. (Content warning: sexual assault.) This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants.”

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny

TechCrunch: Twitter may let users choose how to crop image previews after bias scrutiny. “In an interesting development in the wake of a bias controversy over its cropping algorithm, Twitter has said it’s considering giving users decision-making power over how tweet previews look, saying it wants to decrease its reliance on machine learning-based image cropping. Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.”

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision

Unite.AI: Researchers Develop New Tool to Fight Bias in Computer Vision. “One of the recent issues that has emerged within the field of artificial intelligence (AI) is that of bias in computer vision. Many experts are now discovering bias within AI systems, leading to skewed results in various different applications, such as courtroom sentencing programs. There is a large effort going forward attempting to fix some of these issues, with the newest development coming from Princeton University. Researchers at the institution have created a new tool that is able to flag potential biases in images that are used to train AI systems.”

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally

Mashable: Doctors use algorithms that aren’t designed to treat all patients equally. “The battle over algorithms in healthcare has come into full view since last fall. The debate only intensified in the wake of the coronavirus pandemic, which has disproportionately devastated Black and Latino communities. In October, Science published a study that found one hospital unintentionally directed more white patients than Black patients to a high-risk care management program because it used an algorithm to predict the patients’ future healthcare costs as a key indicator of personal health. Optum, the company that sells the software product, told Mashable that the hospital used the tool incorrectly.”
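The mechanism at issue is worth spelling out: when historical cost stands in for health need, patients who were under-served in the past look "low risk." A toy sketch of that proxy-label problem, with invented numbers (not data from the Science study):

# Two hypothetical patients with identical health need (chronic
# conditions) but different recorded spending, because one faced
# barriers to accessing care in the first place.
patients = [
    {"name": "Patient A", "chronic_conditions": 4, "past_cost": 12000},
    {"name": "Patient B", "chronic_conditions": 4, "past_cost": 6000},
]

# A cost-based risk score ranks Patient B as lower need, even though
# the underlying health burden is the same.
by_cost_proxy = sorted(patients, key=lambda p: p["past_cost"], reverse=True)
for p in by_cost_proxy:
    print(p["name"], "cost-proxy risk:", p["past_cost"],
          "| actual conditions:", p["chronic_conditions"])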

Slate: Under the Gaze of Big Mother

Slate: Under the Gaze of Big Mother. “An artificial intelligence that can truly understand our behavior will be no better than us at dealing with humanity’s challenges. It’s not God in the machine. It’s just another flawed entity, doing its best with a given set of goals and circumstances. Right now we treat A.I.s like children, teaching them right from wrong. It could be that one day they’ll leapfrog us, and the children will become the parents. Most likely, our relationship with them will be as fraught as any intergenerational one. But what happens if parents never age, never grow senile, and never make room for new life? No matter how benevolent the caretaker, won’t that create a stagnant society?”

San Diego Union-Tribune: Column: Student probes alleged Google search bias

San Diego Union-Tribune: Column: Student probes alleged Google search bias. “When Agastya Sridharan read in The Wall Street Journal last fall about some politicians’ complaints of suspected bias in Google online search results, he was upset and intrigued. Was it possible to re-order search results and, thus, influence voter preferences? Agastya, then a 13-year-old eighth-grader at Thurgood Marshall Middle School in Scripps Ranch, decided to conduct his own research as his entry in the 2020 Greater San Diego Science and Engineering Fair.”

Mashable: Twitter to investigate apparent racial bias in photo previews

Mashable: Twitter to investigate apparent racial bias in photo previews. “The first look a Twitter user gets at a tweet might be an unintentionally racially biased one. Twitter said Sunday that it would investigate whether the neural network that selects which part of an image to show in a photo preview favors showing the faces of white people over Black people.”
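For context, automated preview cropping of this kind generally centers the crop on whatever a saliency model scores highest; if that model systematically scores some faces higher than others, the crops inherit the skew. A generic sketch of the approach (not Twitter's actual model):

import numpy as np

def crop_around_saliency_peak(image, saliency, crop_h, crop_w):
    """Crop a (crop_h, crop_w) window centered on the saliency maximum.

    Assumes the crop fits inside the image.
    """
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = saliency.shape
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Whatever region the saliency model favors wins the preview slot, so
# any bias in the model's scores becomes bias in the visible crop.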

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough

PC Magazine: Want to Get Verified on Instagram? A Huge Follower Count Isn’t Enough. “Instagram says it noticed that people were turning to the platform to raise awareness and promote the causes they were invested in, especially in the midst of the pandemic, racial tensions, and the 2020 election. So it created a new Instagram Equity team ‘that will focus on better understanding and addressing bias in our product development and people’s experiences on Instagram’—including fairness in algorithms.”

Washington Post: Mark Zuckerberg’s effort to disrupt philanthropy has a race problem

Washington Post: Mark Zuckerberg’s effort to disrupt philanthropy has a race problem. “Through [Chan Zuckerberg Initiative], [Mark] Zuckerberg propagates his worldview far beyond Facebook. And some Black employees say that his philanthropic efforts are stymied by the same desire to appear unbiased that critics of Facebook claim is causing real-world harm to Black communities. In recent months, civil rights leaders, independent auditors and Facebook’s own employees have called out what they perceive as Zuckerberg’s blind spots around race, including his approach to civil rights as a partisan issue, a blinkered view on moderating white supremacy and discomfort discussing anti-Blackness.”

Politico: Trump pressures head of consumer agency to bend on social media crackdown

Politico: Trump pressures head of consumer agency to bend on social media crackdown. “President Donald Trump has personally pushed the head of the Federal Trade Commission to aid his crusade against alleged political bias in social media, according to two people familiar with the conversations — an unusually direct effort by a president to bend a legally independent agency to his agenda.”

EurekAlert: New tool improves fairness of online search rankings

EurekAlert: New tool improves fairness of online search rankings. “When you search for something on the internet, do you scroll through page after page of suggestions – or pick from the first few choices? Because most people choose from the tops of these lists, they rarely see the vast majority of the options, creating a potential for bias in everything from hiring to media exposure to e-commerce. In a new paper, Cornell University researchers introduce a tool they’ve developed to improve the fairness of online rankings without sacrificing their usefulness or relevance.”
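The core problem here is position bias: attention decays sharply down a ranked list, so the top slots capture most of the exposure. A quick illustration using the common 1/log2(rank + 1) position-weight model; this shows the general phenomenon, not the Cornell paper's algorithm:

import math

def exposure(rank):
    """Attention proxy for an item at 1-based `rank`; decays sharply."""
    return 1.0 / math.log2(rank + 1)

# A hypothetical ranking of ten equally relevant items from groups A and B.
ranking = ["A", "A", "A", "B", "A", "B", "A", "B", "B", "B"]

totals = {"A": 0.0, "B": 0.0}
for rank, group in enumerate(ranking, start=1):
    totals[group] += exposure(rank)

# Group A captures roughly 2.9 of ~4.5 total exposure despite a 5/5
# split; fairness-aware ranking re-orders items to even out such gaps.
print(totals)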

Northwestern University: New Tool Maps Racial Disparity in Arrests Across the Country

Northwestern University: New Tool Maps Racial Disparity in Arrests Across the Country. “As communities across America have gathered in recent months to protest police abuses, researchers are taking a close look at how, where, and why racial disparities in policing occur. [Institute For Policy Research] sociologist Beth Redbird is one of them, and with graduate research assistant Kat Albrecht she’s compiled the data for a powerful new visual tool that shows how those disparities have grown over time. With their new police bias map, Redbird and Albrecht show county by county the extent to which Black Americans are arrested at a higher rate than White Americans — a trend that has only accelerated in recent decades. They also include data on the arrests of Asian Americans and American Indians, the latter of whom saw an increase in disparity that matches that among Blacks.”
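The underlying metric is straightforward: arrests per capita for each group, compared as a ratio, county by county. A minimal sketch with invented numbers (not the map's actual data or schema):

# Invented example row; the real map is built from county-level arrest
# and population records.
counties = [
    {"county": "Example County", "black_arrests": 900, "black_pop": 20000,
     "white_arrests": 1500, "white_pop": 80000},
]

for row in counties:
    black_rate = row["black_arrests"] / row["black_pop"]    # 0.045
    white_rate = row["white_arrests"] / row["white_pop"]    # 0.01875
    disparity = black_rate / white_rate                     # 2.4
    print(f"{row['county']}: Black arrest rate is {disparity:.1f}x "
          f"the White arrest rate")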

TechRepublic: AI-powered tool aims to help reduce bias and racially charged language on websites

TechRepublic: AI-powered tool aims to help reduce bias and racially charged language on websites. “Website accessibility tech provider UserWay has released an AI-powered tool designed to help organizations ensure their websites are free from discriminatory, biased, and racially charged language. The tool, Content Moderator, flags content for review, and nothing is deleted or removed without approval from site administrators, according to UserWay.”

Science Blog: Video Game Teaches Productive Civil Discourse And Overcoming Tribalism

Science Blog: Video Game Teaches Productive Civil Discourse And Overcoming Tribalism. “A Carnegie Mellon University researcher is proposing that students can learn to make their civil discourse more productive through a video game powered by artificial intelligence. The educational system, targeted toward high schoolers, adapts to students’ specific values and can be used to measure — and in some cases reduce — the impact of bias.”