Cornell University: Gamers help highlight disparities in algorithm data

Cornell University: Gamers help highlight disparities in algorithm data. “Is The Witcher immersive? Is The Sims a role-playing game? Gamers from around the world may have differing opinions, but this diversity of thought makes for better algorithms that help audiences everywhere pick the right games, according to new research from Cornell, Xbox and Microsoft Research.”

Yale Insights: A Better Algorithm Can Bring Volunteers to More Organizations

Yale Insights: A Better Algorithm Can Bring Volunteers to More Organizations. “An online platform was connecting millions of volunteers with opportunities—but many organizations were not finding any volunteers at all. Yale SOM’s Vahideh Manshadi and her collaborators found that the platform was steering volunteers toward a small group of opportunities. By building equity into the algorithm, they were able to help more organizations find the volunteers they need.”
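The excerpt doesn’t describe the paper’s mechanism, but one common way to build equity into a recommendation ranking is to discount each opportunity’s relevance score by the exposure it has already received, so volunteers get steered toward under-served organizations instead of piling onto a few popular ones. A hypothetical sketch of that general idea, not the Yale team’s actual method:

```python
# Hypothetical sketch of exposure-aware ranking: discount each
# opportunity's relevance score by how often it has already been shown,
# so recommendations spread across organizations. This illustrates the
# general idea of "building equity into the algorithm," not the
# specific method in the Yale study.

from collections import defaultdict

impressions = defaultdict(int)  # opportunity_id -> times recommended so far

def equitable_rank(scored_opportunities, alpha=0.1):
    """scored_opportunities: list of (opportunity_id, relevance_score).
    Returns ids ranked by relevance discounted for prior exposure."""
    adjusted = [
        (opp_id, score / (1 + alpha * impressions[opp_id]))
        for opp_id, score in scored_opportunities
    ]
    adjusted.sort(key=lambda pair: pair[1], reverse=True)
    for opp_id, _ in adjusted[:3]:   # record exposure for the top results
        impressions[opp_id] += 1
    return [opp_id for opp_id, _ in adjusted]

print(equitable_rank([("food_bank", 0.9), ("tutoring", 0.8), ("shelter", 0.7)]))
```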

El País: ‘When regulating artificial intelligence, we must place race and gender at the center of the debate’

El País: ‘When regulating artificial intelligence, we must place race and gender at the center of the debate’. “One of the most recent research projects organized by 32-year-old Brazilian anthropologist Fernanda K. Martins found that platforms such as Spotify recommend more male artists to users than women, regardless of the musical genre being searched for. This is what academics call ‘algorithmic discrimination.’”

Cornell University: Library gets grant to raise algorithmic literacy

Cornell University: Library gets grant to raise algorithmic literacy. “Cornell University Library has been awarded a grant by the Institute of Museum and Library Services (IMLS) to support a project aimed at creating open educational resources on algorithmic literacy—building the public’s knowledge about what algorithms are, how they function, and how they shape modern life.”

WIRED: How to Escape the YouTube Algorithm

WIRED: How to Escape the YouTube Algorithm. “We’ve all heard about how the YouTube algorithm can go terribly wrong, mostly in the context of radicalization. But even without that extreme result, the algorithm is a huge time sink—a system designed to keep you watching videos for as long as possible. Some people like this, but if you want more control over how you spend your time, I recommend avoiding the algorithm entirely. And Google just made that easier: Now, if you turn off your YouTube watch history, there will be no recommended videos on the homepage. Here’s how to set that up.”

NewsWise: New study shows algorithms promote bias–and that consumers cooperate

NewsWise: New study shows algorithms promote bias–and that consumers cooperate. “Every time you engage with Amazon, Facebook, Instagram, Netflix and other online sites, algorithms are busy behind the scenes chronicling your activities and queuing up recommendations tailored to what they know about you. The invisible work of algorithms and recommendation systems spares people from a deluge of information and ensures they receive relevant responses to searches. But Sachin Banker says a new study shows that subtle gender biases shape the information served up to consumers.”

The Conversation: Heritage algorithms combine the rigors of science with the infinite possibilities of art and design

The Conversation: Heritage algorithms combine the rigors of science with the infinite possibilities of art and design. “The model of democracy in the 1920s is sometimes called ‘the melting pot’ – the dissolution of different cultures into an American soup. An update for the 2020s might be ‘open source,’ where cultural mixing, sharing and collaborating can build bridges between people rather than create divides. Our research on heritage algorithms aims to build such a bridge. We develop digital tools to teach students about the complex mathematical sequences and patterns present in different cultures’ artistic, architectural and design practices.”

Stanford Law School: Rethinking Algorithmic Decision-Making

Stanford Law School: Rethinking Algorithmic Decision-Making. “In a new paper, Stanford University authors, including Stanford Law Associate Professor Julian Nyarko, illuminate how algorithmic decisions based on ‘fairness’ don’t always lead to equitable or desirable outcomes.”

Search Engine Roundtable: Google Search Ranking Algorithm Update & Volatility Explodes This Weekend

Search Engine Roundtable: Google Search Ranking Algorithm Update & Volatility Explodes This Weekend. “I often post about Google ranking volatility and search ranking algorithm updates but I rarely post about them on a weekend. But I just had to this Sunday morning; the tools are literally all reporting massive and explosive volatility this weekend and the SEO chatter is also very high.”

MIT News: A new way to look at data privacy

MIT News: A new way to look at data privacy. “MIT researchers created a new data privacy metric, Probably Approximately Correct (PAC) Privacy, and built an algorithm based on this metric that can automatically determine the minimal amount of randomness that needs to be added to a machine-learning model to protect sensitive data, like sensitive lung scan images, from an adversary.”
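The contribution of PAC Privacy, per the excerpt, is automatically determining the minimal amount of randomness to add. As a rough illustration of the noise-addition step it calibrates, here is a sketch in which the noise scale sigma is simply assumed rather than computed by the PAC Privacy algorithm:

```python
import numpy as np

# Illustrative sketch of the noise-addition step that privacy metrics
# like PAC Privacy calibrate. PAC Privacy's contribution is computing
# the *minimal* sigma automatically; here sigma is an assumed input.

def privatize_weights(weights: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to model weights before release."""
    rng = np.random.default_rng(seed)
    return weights + rng.normal(loc=0.0, scale=sigma, size=weights.shape)

weights = np.array([0.42, -1.37, 2.05])   # toy "trained model" parameters
released = privatize_weights(weights, sigma=0.1)
print(released)
```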

Nature: Computer algorithms infer gender, race and ethnicity. Here’s how to avoid their pitfalls

Nature: Computer algorithms infer gender, race and ethnicity. Here’s how to avoid their pitfalls. “Publications don’t usually include demographic data such as the gender, race and ethnicity of their authors; researchers impute them from people’s names using algorithms: ‘Molly’ is probably a woman, ‘Jeff’ is probably a man, and so on. Outside academia, these algorithms are widely used as well, to study harassment in online forums and infer the demographics of political donors, for instance. But what do these algorithms really do? And how reliable are they? We take a deep dive into this technology and its limitations in an article that we published in April in Nature Human Behaviour.”
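The imputation technique the article examines boils down to a probabilistic lookup: match a name against reference data (often census or birth records) and return the majority gender when the estimate is confident enough. A minimal sketch, with hypothetical toy probabilities standing in for real reference data:

```python
# Minimal sketch of name-based gender imputation, the technique the
# Nature article examines. The probabilities below are hypothetical
# stand-ins; real tools derive them from census or birth-record data.

# name -> (inferred gender, proportion of record-holders with that gender)
NAME_TABLE = {
    "molly": ("female", 0.99),
    "jeff": ("male", 0.99),
    "jordan": ("male", 0.72),   # ambiguous names get low-confidence labels
}

def infer_gender(name: str, threshold: float = 0.90):
    """Return (gender, confidence), or None if the name is unknown
    or the inference falls below the confidence threshold."""
    entry = NAME_TABLE.get(name.strip().lower())
    if entry is None:
        return None              # name not in reference data
    gender, confidence = entry
    if confidence < threshold:
        return None              # too ambiguous to label safely
    return gender, confidence

print(infer_gender("Molly"))   # ('female', 0.99)
print(infer_gender("Jordan"))  # None: below threshold
```

This simplicity is exactly why the limitations matter: coverage and accuracy depend entirely on the reference data, and ambiguous or unlisted names are silently dropped or mislabeled.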

Chronicle of Philanthropy: Seeking to Curb Racial Bias in Medicine, Doris Duke Fund Awards $10 Million to Health Groups

Chronicle of Philanthropy: Seeking to Curb Racial Bias in Medicine, Doris Duke Fund Awards $10 Million to Health Groups. “The Doris Duke Charitable Foundation is awarding more than $10 million to five health organizations to reconsider the use of race in medical algorithms, which research shows can lead to potentially dangerous results for patients of color.”

The Hill: Social media algorithms are not protected speech

The Hill: Social media algorithms are not protected speech. “Platforms claim the recommendations they deliver to users are a form of free speech protected by the First Amendment. That argument fails to distinguish between the videos posted to the platform and the output of the AI algorithms. The former typically do enjoy First Amendment protection, even where they promote harmful reactions. But the latter — the actual recommendations and their manner of delivery — are products of autonomous machines.”

MIT Technology Review: Google DeepMind’s game-playing AI just found another way to make code faster

MIT Technology Review: Google DeepMind’s game-playing AI just found another way to make code faster. “DeepMind’s run of discoveries in fundamental computer science continues. Last year the company used a version of its game-playing AI AlphaZero to find new ways to speed up the calculation of a crucial piece of math at the heart of many different kinds of code, beating a 50-year-old record. Now it has pulled the same trick again—twice.”
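The 50-year-old record the excerpt alludes to is Strassen’s 1969 matrix-multiplication scheme, which multiplies 2×2 matrices with seven scalar multiplications instead of the naive eight; DeepMind’s search found even shorter recipes for some matrix sizes. For reference, a sketch of Strassen’s classic construction, not the new algorithm DeepMind discovered:

```python
# Strassen's 1969 construction: multiply two 2x2 matrices with seven
# scalar multiplications instead of the naive eight. This is the classic
# record DeepMind's AI improved on for certain matrix sizes, not the
# new algorithm it found.

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```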