The Verge: Facebook shelved a feature intended to promote civil political discourse. “The Wall Street Journal reports that Facebook had begun working on a feature that would encourage users of opposing political beliefs to interact in a more positive way. But the project — known as ‘Common Ground’ — was reportedly halted after Facebook’s global head of policy raised concerns that it could lead to accusations that the site was biased against conservatives.”

The New Yorker: The Search for Anti-Conservative Bias on Google. “Algorithmic neutrality is a common Silicon Valley refrain. But an algorithm built without favoring one political party or another, or constructed without intentionally championing a particular ideology, is actually designed to deliver culturally biased results. A search engine runs on algorithms and artificial intelligence to instantaneously sift through the Internet’s nearly two billion Web sites. Google’s engineers have embedded something they call ‘authoritativeness’ into their search algorithm to deliver its results, though what this is, exactly, is challenging to understand, because it appears to be based on a tautology: an authoritative source is a source that a lot of other sources consider to be authoritative.”
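
The recursive definition quoted here, authority conferred by other authoritative sources, is the core idea behind link-analysis algorithms such as PageRank, the best-documented public instance of it. Google has not published how its current ‘authoritativeness’ signal works, so the following is a minimal illustrative sketch of the public concept (the four-site link graph is invented), not Google’s actual code:

```python
import numpy as np

# Hypothetical toy link graph: adjacency[i][j] = 1 if site i links to site j.
adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def authority_scores(adj, damping=0.85, iters=50):
    """PageRank-style power iteration: a page is authoritative in
    proportion to the authority of the pages that link to it."""
    n = adj.shape[0]
    # Row-normalize so each page splits its vote among its outlinks.
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0          # keep division safe for pages with no outlinks
    transition = adj / out
    scores = np.full(n, 1.0 / n)  # start with uniform authority
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (scores @ transition)
    return scores

print(authority_scores(adjacency))
```

The apparent tautology the article points to is resolved mathematically: the iteration converges to a fixed point (the principal eigenvector of the damped transition matrix), so the circular definition still yields a well-defined ranking.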

Nieman Lab: Few people are actually trapped in filter bubbles. Why do they like to say that they are? “We’re not trapped in filter bubbles, but we like to act as if we are. Few people are in complete filter bubbles in which they only consume, say, Fox News, Matt Grossmann writes in a new report for Knight (and there’s a summary version of it on Medium here). But the ‘popular story of how media bubbles allegedly undermine democracy’ is one that people actually seem to enjoy clinging to.”

The Verge: The long, tortured quest to make Google unbiased. “More than any other infrastructure, search engines reshape the web in profound and often invisible ways. It’s a potentially frightening power, particularly when 90 percent of the market belongs to a single company. So it’s understandable to ask Google to be impartial — but can a search engine, whose goal is ranking pages, ever be meaningfully neutral? If it can, should a government be in charge of regulating it? And if it can’t, what recourse do sites have if Google decides to remake the web without them in it?”

TechCrunch: Three ways to avoid bias in machine learning. “Because AI can help expose truth inside messy data sets, it’s possible for algorithms to help us better understand bias we haven’t already isolated, and spot ethically questionable ripples in human data so we can check ourselves. Exposing human data to algorithms exposes bias, and if we are considering the outputs rationally, we can use machine learning’s aptitude for spotting anomalies.”
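
One concrete way to act on that anomaly-spotting aptitude is to surface large gaps in outcome rates between demographic groups for human review. Below is a minimal sketch of such a check; the data, column names, and the four-fifths threshold (a common heuristic from employment-discrimination practice) are all assumptions for illustration, not anything the TechCrunch piece specifies:

```python
import pandas as pd

# Hypothetical historical outcome data; column names are assumptions.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "hired": [1,   1,   0,   0,   0,   1,   0,   1],
})

def flag_outcome_gaps(data, group_col, outcome_col, threshold=0.8):
    """Compare each group's positive-outcome rate to the best-off group's.
    Rates below `threshold` of the reference rate (the 'four-fifths rule'
    heuristic) are flagged as anomalies worth a closer human look."""
    rates = data.groupby(group_col)[outcome_col].mean()
    reference = rates.max()
    flagged = {g: r for g, r in rates.items() if r < threshold * reference}
    return rates, flagged

rates, flagged = flag_outcome_gaps(df, "group", "hired")
print(rates)
print("Flagged groups:", flagged)
```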

EurekAlert: Can social media lead to labor market discrimination? “A new Journal of Economics & Management Strategy study investigates whether social media may be used as a source of information for recruiters to discriminate against job applicants. For the study, researchers set up an experiment that involved sending more than 800 applications from two fictitious applicants who differed in their cities of origin, a typical French town (Brive-la-Gaillarde) or Marrakesh, Morocco. This information was available only on their Facebook profiles, not on the resumes or the cover letters sent to recruiters. The investigators selected job openings published over several months in mid-2012 on the website of Pôle emploi, the French public employment agency.”
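
The design described is a correspondence audit: near-identical applications differing only in a signal of origin, with discrimination measured as the gap in callback rates. A sketch of the standard analysis, a two-proportion z-test, follows; the counts are invented for illustration, and the study’s real figures are in the paper:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical callback counts for the two fictitious applicants;
# the actual numbers are reported in the study itself.
sent = {"Brive-la-Gaillarde": 400, "Marrakesh": 400}
callbacks = {"Brive-la-Gaillarde": 70, "Marrakesh": 45}

p1 = callbacks["Brive-la-Gaillarde"] / sent["Brive-la-Gaillarde"]
p2 = callbacks["Marrakesh"] / sent["Marrakesh"]

# Pooled two-proportion z-test: is the callback gap larger than chance?
pooled = sum(callbacks.values()) / sum(sent.values())
se = sqrt(pooled * (1 - pooled)
          * (1 / sent["Brive-la-Gaillarde"] + 1 / sent["Marrakesh"]))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Callback rates: {p1:.1%} vs {p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```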

Google Blog: A new course to teach people about fairness in machine learning. “As [machine learning] practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographics of people will be affected by a model’s predictions) in the forefront of their minds. Additionally, they should proactively develop strategies to identify and ameliorate the effects of algorithmic bias. To help practitioners achieve these goals, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now available publicly as part of our popular Machine Learning Crash Course (MLCC).”
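
As a taste of what such fairness training covers, one standard evaluation is equality of opportunity: a model’s true-positive rate should not differ sharply across demographic groups. The sketch below illustrates that metric with invented labels and predictions; it is an illustration of the concept, not material from the Google module itself:

```python
import numpy as np

# Hypothetical model outputs on a held-out set; all arrays are assumptions.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

def true_positive_rate_by_group(y_true, y_pred, group):
    """Equal-opportunity check: among actual positives, how often does
    the model predict positive, computed separately per group?"""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
    return rates

print(true_positive_rate_by_group(y_true, y_pred, group))
```

A large gap between the per-group rates would be exactly the kind of effect the course urges practitioners to identify and ameliorate before deployment.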