MIT Technology Review: Giving algorithms a sense of uncertainty could make them more ethical

MIT Technology Review: Giving algorithms a sense of uncertainty could make them more ethical. “Algorithms are increasingly being used to make ethical decisions. Perhaps the best example of this is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car’s control software choose who lives and who dies?”
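
For readers wondering what “a sense of uncertainty” might look like in code: one common approach is to compare the spread of a model’s estimates rather than trusting a single number, and to defer when the options cannot be told apart. The scenario, scores, and decide() helper below are purely illustrative assumptions, not anything described in the MIT piece.

```python
import statistics

def decide(option_scores, threshold=1.0):
    """Pick the option with the lowest estimated harm, but defer to a human
    when the estimates are too uncertain to distinguish the options.

    option_scores maps an option name to a list of harm estimates
    (e.g. from an ensemble of models). All values here are made up.
    """
    summaries = {
        name: (statistics.mean(scores), statistics.stdev(scores))
        for name, scores in option_scores.items()
    }
    ranked = sorted(summaries.items(), key=lambda kv: kv[1][0])
    (best, (best_mean, best_sd)), (runner_up, (ru_mean, ru_sd)) = ranked[0], ranked[1]

    # If the gap between the two best options is small relative to the
    # spread of the estimates, the system should not pretend to know the answer.
    if (ru_mean - best_mean) < threshold * (best_sd + ru_sd):
        return "defer to human"
    return best

# Hypothetical ensemble outputs: lower score = less estimated harm.
print(decide({"swerve left": [0.4, 0.6, 0.5], "swerve right": [0.5, 0.7, 0.45]}))
```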

The New Yorker: The Search for Anti-Conservative Bias on Google

The New Yorker: The Search for Anti-Conservative Bias on Google. “Algorithmic neutrality is a common Silicon Valley refrain. But an algorithm built without favoring one political party or another, or constructed without intentionally championing a particular ideology, is actually designed to deliver culturally biased results. A search engine runs on algorithms and artificial intelligence to instantaneously sift through the Internet’s nearly two billion Web sites. Google’s engineers have embedded something they call ‘authoritativeness’ into their search algorithm to deliver its results, though what this is, exactly, is challenging to understand, because it appears to be based on a tautology: an authoritative source is a source that a lot of other sources consider to be authoritative.”
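
The “tautology” the piece describes is, mechanically, a fixed-point computation: a page’s authority is defined in terms of the authority of the pages linking to it, and the circularity is resolved by iterating until the scores settle. Here is a minimal PageRank-style sketch on an invented link graph; Google’s actual ranking signals are of course far more complex and not public.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration ranking: a page's score is fed by the scores of
    the pages that link to it, so 'authority' is defined in terms of itself
    and resolved by iterating to a fixed point. The graph below is invented.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing) if outgoing else 0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical link graph: each key links to the pages in its list.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))
```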

Washington Post: Facebook, Twitter crack down on AI babysitter-rating service

Washington Post: Facebook, Twitter crack down on AI babysitter-rating service. “Predictim, a California-based start-up, analyzes babysitters’ online histories, including on Facebook and Twitter, and offers ratings of whether they are at risk of drug abuse, bullying or having a ‘bad attitude.’ Facebook said it dramatically limited Predictim’s access to users’ information on Instagram and Facebook a few weeks ago for violating a ban on developers’ use of personal data to evaluate a person for decisions on hiring or eligibility.”

Harvard Business Review: Why We Need to Audit Algorithms

Harvard Business Review: Why We Need to Audit Algorithms. “Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls ‘data fundamentalism’ — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.”
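
One concrete form such an audit can take is simply comparing how often an algorithm approves members of different groups. The sketch below computes a disparate-impact ratio on made-up data, loosely following the “four-fifths rule” used in US employment guidance; the data, group labels, and threshold are assumptions, not anything prescribed by the HBR authors.

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs. Returns per-group approval rates."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest; values below
    roughly 0.8 are often treated as a flag that a system needs closer review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Made-up audit sample: (group label, did the algorithm approve?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio, rates = disparate_impact(sample)
print(rates, ratio)
```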

Independent: Airlines Face Crackdown On Use Of ‘Exploitative’ Algorithm That Splits Up Families On Flights

Independent: Airlines Face Crackdown On Use Of ‘Exploitative’ Algorithm That Splits Up Families On Flights. “Algorithms used by airlines to split up those travelling together unless they pay more to sit next to each other have been called ‘exploitative’ by a government minister. Speaking to a parliamentary communications committee, Digital Minister Margot James described the software as ‘a very cynical, exploitative means… to hoodwink the general public’.”

Make Tech Easier: What Does an Algorithm Look Like?

Make Tech Easier: What Does an Algorithm Look Like? “We know that Facebook, Google, and Amazon have algorithms that give us updates, search results, and product recommendations, but what does that actually mean? What qualifies as an algorithm? Can you write one down? What would it look like if you did? Since they run so many parts of our daily lives, it’s important to have a basic sense of what exactly is going on under the hood – and it’s really not as intimidating as it often seems.”
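
To make that concrete: written down, an algorithm is just an unambiguous sequence of steps. Here is a minimal, invented product-recommendation example in Python: count which categories a shopper buys from, score each product by those counts, and return the top few. It bears no resemblance to the real systems at Facebook, Google, or Amazon.

```python
def recommend(purchased_categories, catalog, top_n=3):
    """A toy recommendation algorithm: score each product by how often the
    shopper has bought from its category, then return the highest scorers.
    Every product and category here is made up.
    """
    counts = {}
    for category in purchased_categories:
        counts[category] = counts.get(category, 0) + 1
    scored = [(counts.get(category, 0), name) for name, category in catalog]
    scored.sort(reverse=True)                 # highest score first
    return [name for score, name in scored[:top_n]]

history = ["books", "books", "kitchen"]
catalog = [("novel", "books"), ("blender", "kitchen"),
           ("headphones", "electronics"), ("cookbook", "books")]
print(recommend(history, catalog))
```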