Digiday: Recruitment tool TikTok Resumes risks magnifying unconscious biases, execs warn

Digiday: Recruitment tool TikTok Resumes risks magnifying unconscious biases, execs warn. “TikTok Resumes could become a recruitment tool that inadvertently encourages discrimination, especially in the wake of companies like Target and Chipotle signing on to the new initiative, senior executives in technology, HR and social responsibility roles told Digiday.”

Confronting AI Bias: A Transatlantic Approach to AI Policy (BSA TechPost)

BSA TechPost: Confronting AI Bias: A Transatlantic Approach to AI Policy. “BSA supports legislation that would require organizations to perform impact assessments prior to deploying high-risk AI systems. To advance these conversations, we recently launched the BSA Framework to Build Trust in AI, a detailed methodology for performing impact assessments that can help organizations responsibly manage the risk of bias throughout an AI system’s lifecycle.”

Google AI Blog: A Dataset for Studying Gender Bias in Translation

Google AI Blog: A Dataset for Studying Gender Bias in Translation. “To help facilitate progress against the common challenges on contextual translation (e.g., pronoun drop, gender agreement and accurate possessives), we are releasing the Translated Wikipedia Biographies dataset, which can be used to evaluate the gender bias of translation models. Our intent with this release is to support long-term improvements on ML systems focused on pronouns and gender in translation by providing a benchmark in which translations’ accuracy can be measured pre- and post-model changes.”

National Law Review: State Laws Hinder Progress of Non-Bias AI

National Law Review: State Laws Hinder Progress of Non-Bias AI. “Artificial Intelligence (AI) relies on oceans of data, most people know this. But many people do not yet understand how data shapes AI before the AI is functional, or how data is used by AI in production. Each raises its own set of practical, technical and social issues. This lack of understanding can lead people to conflate data used in AI formation with the data AI uses as it operates.”

Poynter: Brand over substance may determine the public’s perception of news articles, study says

Poynter: Brand over substance may determine the public’s perception of news articles, study says. “The public’s perception of a news outlet’s trustworthiness may come down to branding rather than content, according to a recently released study from the Knight Foundation and Gallup. The study used data from a specially designed news aggregation platform called NewsLense to test participants’ perceptions and interactions with articles from outlets identified as either ‘sympathetic,’ ‘no lean,’ or ‘adversarial.’”

ABA Journal: High tech can heighten discrimination; here are some policy recommendations for its ethical use

ABA Journal: High tech can heighten discrimination; here are some policy recommendations for its ethical use. “From federal surveillance of social justice protests to facial recognition technology that results in inordinately high false positives for certain demographic groups, recent surveillance trends have deep historical roots and troubling future implications for traditionally marginalized groups. These trends threaten our core constitutional values, democratic principles and the rule of law.”

Penn State News: New tool could help lessen bias in live television broadcasts

Penn State News: New tool could help lessen bias in live television broadcasts. “From Sunday morning news shows to on-air pregame commentary in sports, live telecasts draw viewers into real-time content on televisions around the world. But in these often-unscripted productions, what the audience sees is not always what the producer intends — especially in regard to equity of on-air time for subjects based on their race or gender. A team of researchers, which includes Syed Billah from Penn State’s College of Information Sciences and Technology, has developed an interactive tool called Screen-Balancer, designed to assist media producers in balancing the presence of different phenotypes — an individual’s observable physical traits — in live telecasts.”

Chicago Booth Review: Law and order and data

Chicago Booth Review: Law and order and data. “Algorithms are already being used in criminal-justice applications in many places, helping decide where police departments should send officers for patrol, as well as which defendants should be released on bail and how judges should hand out sentences. Research is exploring the potential benefits and dangers of these tools, highlighting where they can go wrong and how they can be prevented from becoming a new source of inequality. The findings of these studies prompt some important questions such as: Should artificial intelligence play some role in policing and the courts? If so, what role should it play? The answers, it appears, depend in large part on small details.”

The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias

The Register: AI brain drain to Google and pals threatens public sector’s ability to moderate machine-learning bias. “Boffins from Denmark and the UK have measured the AI brain drain and found that private industry really is soaking up tech talent at the expense of academia and public organizations. In a paper [PDF] distributed via ArXiv, authors Roman Jurowetzki and Daniel Hain, from Aalborg University Business School, and Juan Mateos-Garcia and Konstantinos Stathoulopoulos, from British charity Nesta, describe how they analyzed over 786,000 AI research studies released between 2000 and 2020 to trace career shifts from academia to industry and less frequent reverse migrations.”

USA Today: Do Facebook, Twitter and YouTube censor conservatives? Claims ‘not supported by the facts,’ new research says

USA Today: Do Facebook, Twitter and YouTube censor conservatives? Claims ‘not supported by the facts,’ new research says. “Despite repeated charges of anti-conservative bias from former President Donald Trump and other GOP critics, Facebook, Twitter and Google’s YouTube are not slanted against right-leaning users, a new report out of New York University found. Like previous research, ‘False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives’ concludes that rather than censoring conservatives, social media platforms amplify their voices.”

The Verge: ‘Pro Tools proficiency’ may be keeping us from diversifying audio

The Verge: ‘Pro Tools proficiency’ may be keeping us from diversifying audio. “Despite the no-doubt earnest efforts of many well-meaning individuals, podcasting, it would seem, has had — and continues to have — a diversity problem. And while there are many factors which contribute to maintaining the industry’s status quo, there is one culprit to which we can confidently point: Pro Tools.”

PsyPost: Implicit bias against Asians increased after Trump’s secretary of state and others popularized “Chinese virus”

PsyPost: Implicit bias against Asians increased after Trump’s secretary of state and others popularized “Chinese virus”. “New research suggests that the use of terms like ‘Wuhan flu’ and ‘Chinese virus’ by conservative media outlets and Republican figures had a measurable impact on unconscious bias against Asian Americans. The study, published in Health Education & Behavior, found that implicit bias increased after the use of such phrases went viral.”