The Guardian: How can we stop algorithms telling lies? “The recent proliferation in big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.” This is not one of those “oo oo algos are bad hide the children” articles, but an in-depth piece with a lot of examples. Recommended read.
Chris Aldrich: The Facebook Algorithm Mom Problem. “I can post about arcane areas like Lie algebras or statistical thermodynamics, and my mom, because she’s my mom, will like all of it–whether or not she understands what I’m talking about. And isn’t this what moms do?! What they’re supposed to do? Of course it is! She’s my mom, she’s supposed to love me unconditionally this way! The problem is: Facebook, despite the fact that they know she’s my mom, doesn’t take this fact into account in their algorithm.”
University of Texas at Austin: Comparison of algorithms for Twitter sentiment analysis. “Sentiment Analysis has gained attention in recent years owing to the massive increase in personal statements made at the individual level, spread across vast geographic and demographic ranges. That data has become vastly more accessible as micro-blog sites such as Twitter and Facebook have released public, free interfaces. This research seeks to understand the processes behind Sentiment Analysis and to compare statistical methodologies for classifying Twitter sentiments.” This is a Master’s thesis.
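For readers curious what a sentiment classifier actually looks like under the hood, here is a minimal, hypothetical sketch of one of the standard statistical methods such a thesis would compare: a bag-of-words naive Bayes classifier with Laplace smoothing. The toy training data and function names are my own illustration, not taken from the thesis.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label) pairs. Returns a simple naive Bayes model."""
    label_counts = Counter()                 # prior counts per sentiment label
    word_counts = defaultdict(Counter)       # per-label bag-of-words counts
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(model, text):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label, count in label_counts.items():
        logprob = math.log(count / total)    # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Laplace (add-one) smoothing so unseen words don't zero out the score
            logprob += math.log((word_counts[label][word] + 1) / denom)
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

# Toy usage with made-up tweets:
model = train([
    ("love this great day", "pos"),
    ("great happy love", "pos"),
    ("hate this awful day", "neg"),
    ("awful sad hate", "neg"),
])
print(classify(model, "what a great love"))  # → pos
```

Real comparisons like the one in the thesis would pit this kind of model against alternatives (e.g. logistic regression or SVMs) on large labeled tweet corpora, not four hand-written examples.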
The Next Web: Googling ‘black baby portraits’ reveals yet another problem with AI. “In today’s episode of AI Screws Up Again, try typing in ‘black baby portraits’ into Google. You will not get many photos of black babies.” The article points out that the same issue occurs with Bing.
Columbia Journalism Review: How to report on algorithms even if you’re not a data whiz. “THERE’S A NEW BEAT in town: algorithms. From formulas that determine what you see on social media to equations that dictate government operations, algorithms are increasingly powerful and pervasive. As an important new field of influence, algorithms are ripe for journalistic investigation. But investigating computer code can come across as dry and technical. Researchers often talk about ‘auditing’ and ‘reverse engineering’ algorithms—activities requiring heavy data analysis. But algorithmic accountability reporting projects don’t have to be this way. There are many possible approaches that draw on traditional reporting as well.”
TechCrunch: Facebook News Feed change demotes sketchy links overshared by spammers. “Technically, Facebook can’t suspend people’s accounts just for sharing 50-plus false, sensational or clickbaity news articles per day. It doesn’t want to trample anyone’s right to share. But there’s nothing stopping it from burying those links low in the News Feed so few people ever see them.”
New Scientist: DeepMind’s neural network teaches AI to reason about the world. “The world is a confusing place, especially for an AI. But a neural network developed by UK artificial intelligence firm DeepMind that gives computers the ability to understand how different objects are related to each other could help bring it into focus.”