Stanford Engineering: How to design algorithms with fairness in mind

Stanford Engineering: How to design algorithms with fairness in mind. “In this episode of Stanford Engineering’s The Future of Everything, computer science professor Omer Reingold explains how we can create definitions of fairness that can be incorporated into computer algorithms. Reingold and host Russ Altman, a bioengineer, also discuss how flawed historical data may result in algorithms making unfair decisions and how a technique called multi-group fairness can improve health predictions for individuals.” Audio link and YouTube video with excellent captions.
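For readers curious about the mechanics: multi-group fairness is typically formalized as requiring a predictor to be well calibrated not just overall but on every subgroup in a rich, possibly overlapping collection. Below is a minimal, hypothetical sketch of such an audit in Python; the simulated data and group definitions are our own illustration, not Reingold's implementation.

```python
# Minimal sketch of a multi-group calibration audit (hypothetical, not
# Reingold's implementation): a predictor is "multicalibrated" if, on
# every group in a rich collection and at every prediction level, the
# average outcome matches the average prediction.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.integers(20, 80, n)
sex = rng.integers(0, 2, n)                  # 0/1, hypothetical encoding
risk = 1 / (1 + np.exp(-(age - 50) / 10))    # "true" risk, simulation only
y = rng.random(n) < risk                     # observed outcomes
pred = np.clip(risk + 0.15 * sex * (age > 60), 0, 1)  # predictor that is
                                             # miscalibrated on one subgroup

# Overlapping subgroups: a multi-group audit checks all of them, not
# just one protected attribute at a time.
groups = {
    "all": np.ones(n, bool),
    "sex=1": sex == 1,
    "age>60": age > 60,
    "sex=1 & age>60": (sex == 1) & (age > 60),
}

bins = np.linspace(0, 1, 11)
for name, mask in groups.items():
    worst = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        cell = mask & (pred >= lo) & (pred < hi)
        if cell.sum() >= 50:                 # skip tiny cells
            worst = max(worst, abs(y[cell].mean() - pred[cell].mean()))
    print(f"{name:16s} worst calibration gap: {worst:.3f}")
```

Running this flags the intersectional group (sex=1 & age>60) with a much larger calibration gap than the population as a whole, which is exactly the kind of failure a single-attribute audit can miss.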

USC Viterbi: Busting Anti-Queer Bias in Text Prediction

USC Viterbi: Busting Anti-Queer Bias in Text Prediction. “A team of researchers from the USC Viterbi School of Engineering’s Information Sciences Institute and the USC Annenberg School for Communication and Journalism, led by Katy Felkner, a Ph.D. student in computer science at USC Viterbi and a National Science Foundation Graduate Research Fellowship recipient, has developed a system to quantify and fix anti-queer bias in the artificial intelligence behind text prediction.”
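The article doesn't detail the system's internals, but one common way to quantify this kind of bias in text prediction is to compare the completions a masked language model offers across paired identity terms. A sketch of that general approach follows; it is not the USC team's method, and the model and template are our own choices.

```python
# A common way to quantify social bias in text prediction (a sketch,
# not the USC system): compare the completions a masked language model
# produces across paired identity terms.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "The {} couple next door are [MASK] neighbors."
pairs = [("straight", "gay")]

for a, b in pairs:
    for ident in (a, b):
        # Top predictions for this identity; a systematic difference in
        # the sentiment of completions is one signal of learned bias.
        top = fill(template.format(ident), top_k=3)
        words = [t["token_str"] for t in top]
        print(f"{ident:9s} -> {words}")
```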

SlashGear: Study Shows Robots Using Internet-Based AI Exhibit Racist And Sexist Tendencies

SlashGear: Study Shows Robots Using Internet-Based AI Exhibit Racist And Sexist Tendencies. “A new study claims robots exhibit racist and sexist stereotyping when the artificial intelligence (AI) that powers them is modeled on data from the internet. The study, which researchers say is the first to prove the concept, was led by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, and published by the Association for Computing Machinery (ACM).”

NBC News: Facebook’s 2018 algorithm change boosted local GOP groups, research finds

NBC News: Facebook’s 2018 algorithm change boosted local GOP groups, research finds. “A change to Facebook’s recommendation system likely accounted for a disproportionate boost in visibility and engagement to conservative political groups on the social media platform starting in 2018, according to research published Wednesday. The research, published in the journal Research & Politics, looked at posts from the pages of nearly every county party in the U.S. and found a marked increase in shares, comments and reactions to Republican posts.”

WIRED: Feds Warn Employers Against Discriminatory Hiring Algorithms

WIRED: Feds Warn Employers Against Discriminatory Hiring Algorithms. “Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard. All of this has gone widely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act.”

The Verge: Google is using a new way to measure skin tones to make search results more inclusive

The Verge: Google is using a new way to measure skin tones to make search results more inclusive. “The tech giant is working with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST Scale is designed to replace outdated skin tone scales that are biased towards lighter skin. When these older scales are used by tech companies to categorize skin color, it can lead to products that perform worse for people with darker coloring, says Monk.”
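In practice, using a scale like MST comes down to mapping a sampled color to the nearest of its ten reference swatches. A minimal sketch is below; the hex values are placeholders, not the official published swatches, and the distance metric is deliberately crude.

```python
# A minimal sketch of applying a 10-point skin tone scale like MST:
# map a sampled RGB color to the nearest reference swatch. The hex
# values below are PLACEHOLDERS, not the official Monk Skin Tone data.
from math import dist

MST_PLACEHOLDER = {
    1: "#f6ede4", 2: "#f3e7db", 3: "#f7ead0", 4: "#eadaba", 5: "#d7bd96",
    6: "#a07e56", 7: "#825c43", 8: "#604134", 9: "#3a312a", 10: "#292420",
}

def hex_to_rgb(h: str) -> tuple[int, int, int]:
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def nearest_mst(rgb: tuple[int, int, int]) -> int:
    # Euclidean distance in RGB is a crude stand-in; production systems
    # would compare in a perceptual space such as CIELAB.
    return min(MST_PLACEHOLDER,
               key=lambda k: dist(rgb, hex_to_rgb(MST_PLACEHOLDER[k])))

print(nearest_mst((150, 100, 70)))  # maps a sample to a mid-range tone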

Motherboard: Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias

Motherboard: Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias. “Facebook and its parent company, Meta, recently released a new tool that can be used to quickly develop state-of-the-art AI. But according to the company’s researchers, the system has the same problem as its predecessors: It’s extremely bad at avoiding results that reinforce racist and sexist stereotypes.”

University of Maryland: Researchers Work to Make Artificial Intelligence Genuinely Fair

University of Maryland: Researchers Work to Make Artificial Intelligence Genuinely Fair. “Artificial intelligence (AI) algorithms help make online shopping seamless, calculate credit scores, navigate vehicles and even offer judges criminal sentencing guidelines. But as the use of AI increases exponentially, so does the concern that biased data can result in flawed decisions or prejudiced outcomes. Now, backed by a combined $1.6 million in funding from the National Science Foundation (NSF) and Amazon, two teams of University of Maryland researchers are working to eliminate those biases by developing new algorithms and protocols that can improve the efficiency, reliability and trustworthiness of AI systems.”

Tech Xplore: ‘Off label’ use of imaging databases could lead to bias in AI algorithms, study finds

Tech Xplore: ‘Off label’ use of imaging databases could lead to bias in AI algorithms, study finds. “Significant advances in artificial intelligence (AI) over the past decade have relied upon extensive training of algorithms using massive, open-source databases. But when such datasets are used ‘off label’ and applied in unintended ways, the results are subject to machine learning bias that compromises the integrity of the AI algorithm, according to a new study by researchers at the University of California, Berkeley, and the University of Texas at Austin.”

New York Times: How Native Americans Are Trying to Debug A.I.’s Biases

New York Times: How Native Americans Are Trying to Debug A.I.’s Biases. “Ms. [Chamisa] Edmo explained that tagging results are often ‘outlandish’ and ‘offensive,’ recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. [Davar] Ardalan noted as an example, because of the abundance of data on the topic. As Mr. [Tracy] Monteith put it, A.I. is only as good as the data it is fed. And data on cultures that have long been marginalized, like Native ones, are simply not at the levels they need to be.”

CBC: Can better tech really fix darker-skin bias in smartphone cameras? Google thinks so

CBC: Can better tech really fix darker-skin bias in smartphone cameras? Google thinks so. “The tech giant Google used the biggest platform it could find to make a statement during Black History Month. In a one-minute ad that cost millions, Google told Super Bowl fans about something Black people have known for a long time: most cameras aren’t great at capturing darker skin.”

University of Washington: Google’s ‘CEO’ image search gender bias hasn’t really been fixed

University of Washington: Google’s ‘CEO’ image search gender bias hasn’t really been fixed. “The researchers showed that for four major search engines from around the world, including Google, this bias is only partially fixed, according to a paper presented in February at the AAAI Conference on Artificial Intelligence. A search for an occupation, such as ‘CEO,’ yielded results with a ratio of cis-male and cis-female presenting people that matches the current statistics. But when the team added another search term — for example, ‘CEO + United States’ — the image search returned fewer photos of cis-female presenting people. In the paper, the researchers propose three potential solutions to this issue.”
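At its core, the disparity the researchers measured reduces to comparing the share of female-presenting people in a result set against a baseline occupational statistic. A toy illustration follows; the labels and baseline are hypothetical, and the paper's methodology is more involved.

```python
# A sketch of the kind of disparity measurement the study describes:
# compare the share of female-presenting people in image results for a
# query against a baseline labor statistic. Labels and the baseline
# here are hypothetical.
def representation_gap(result_labels: list[str], baseline_share: float) -> float:
    """result_labels: per-image presentation labels, e.g. 'f' or 'm'."""
    share = sum(1 for lab in result_labels if lab == "f") / len(result_labels)
    return share - baseline_share  # negative = under-representation

# Hypothetical: 100 results for "CEO" vs. "CEO + United States".
print(representation_gap(["f"] * 28 + ["m"] * 72, baseline_share=0.30))  # -0.02
print(representation_gap(["f"] * 15 + ["m"] * 85, baseline_share=0.30))  # -0.15
```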

The Conversation: Artificial intelligence can discriminate on the basis of race and gender, and also age

The Conversation: Artificial intelligence can discriminate on the basis of race and gender, and also age. “AI is often assumed to be more objective than humans. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary. Current research on bias in AI focuses mainly on gender and race. But what about age-related bias — can AI be ageist?”