Brookings: Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. “In the pre-algorithm world, humans and organizations made decisions in hiring, advertising, criminal sentencing, and lending. These decisions were often governed by federal, state, and local laws that regulated the decision-making processes in terms of fairness, transparency, and equity. Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies.”
Harvard Business Review: Voice Recognition Still Has Significant Race and Gender Biases. “Voice AI is becoming increasingly ubiquitous and powerful. Forecasts suggest that voice commerce will be an $80 billion business by 2023. Google reports that 20% of their searches are made by voice query today — a number that’s predicted to climb to 50% by 2020. In 2017, Google announced that their speech recognition had a 95% accuracy rate. While that’s an impressive number, it begs the question: 95% accurate for whom?”
CNET: Border officials don’t have data to address racial bias in facial recognition tech. “Facial recognition technology is prone to errors, but when it comes to racial bias at airports, there’s a good chance it’s not learning from its mistakes. Debra Danisek, a privacy officer with the US Customs and Border Protection, talked to an audience Friday at the International Association of Privacy Professionals Summit about what data its facial recognition tech collects — but more importantly, about what data it doesn’t collect.”
Harvard Business Review: All the Ways Hiring Algorithms Can Introduce Bias. “Our analysis of predictive tools across the hiring process helps to clarify just what ‘hiring algorithms’ do, and where and how bias can enter into the process. Unfortunately, we found that most hiring algorithms will drift toward bias by default. While their potential to help reduce interpersonal bias shouldn’t be discounted, only tools that proactively tackle deeper disparities will offer any hope that predictive technology can help promote equity, rather than erode it.”
The Conversation: Google’s algorithms discriminate against women and people of colour. “At the start of Black History Month 2019, Google designed its daily-changing homepage logo to include an image of African-American activist Sojourner Truth, the great 19th-century abolitionist and women’s rights activist. But what would Truth say about Google’s continual lack of care and respect toward people of colour? While bringing more attention to Sojourner Truth is venerable, Google can do better. As a professor and researcher of digital cultures, I have found that a lack of care and investment by tech companies towards users who are not white and male allows racism and sexism to creep into search engines, social networks and other algorithmic technologies.”
News@Northeastern: Your Gender And Race Might Be Determining Which Facebook Ads You See. “The research was troubling. It showed that the group of users to whom Facebook chose to show ads can be skewed along gender and racial lines, in potential violation of federal laws that prevent discrimination in ads for employment, housing, and credit. A Northeastern team tested Facebook’s advertising system with a series of online advertisements. As the researchers tweaked the images, Facebook’s system presented the ads more predominantly to specific racial and gender groups.” This was not the researchers intentionally microtargeting; Facebook’s own algorithm produced the skew.
Washington Post: Senate Republicans renew their claims that Facebook, Google and Twitter censor conservatives. “Republicans led by Sen. Ted Cruz on Wednesday pilloried Facebook, Google and Twitter over allegations they censor conservative users and content online, threatening federal regulation in response to claims that Democrats long have described as a hoax and a distraction.”