Online Journalism Blog: 10 principles for data journalism in its second decade

Online Journalism Blog: 10 principles for data journalism in its second decade. “In 2007 Bill Kovach and Tom Rosenstiel published The Elements of Journalism. With the concept of ‘journalism’ increasingly challenged by the fact that anyone could now publish to mass audiences, their principles represented a welcome platform-neutral attempt to articulate exactly how journalism could be untangled from the vehicles that carried it and the audiences it commanded. In this extract from a forthcoming book chapter* I attempt to use Kovach and Rosenstiel’s principles (outlined in part 1 here) as the basis for a set that might form a basis for (modern) data journalism as it enters its second and third decades.”

Engadget: Google’s comment-ranking system will be a hit with the alt-right

Engadget: Google’s comment-ranking system will be a hit with the alt-right. “A recent, sprawling Wired feature outlined the results of its analysis on toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who’s ever heard the phrase ‘don’t read the comments.’ According to ‘The Great Tech Panic: Trolls Across America,’ Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, ‘is the least-toxic city in the US.’ There’s just one problem.”

New York Times: How to Regulate Artificial Intelligence

New York Times: How to Regulate Artificial Intelligence. “I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the ‘three laws of robotics’ that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws. These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.”

Fast Company Design: This Breakthrough Tool Detects Racism And Sexism In Software

Fast Company Design: This Breakthrough Tool Detects Racism And Sexism In Software. “Last year, Amazon was figuring out where it should offer free same-day delivery service to reach the greatest number of potential Prime customers. So the company did what you’d expect: It used software to analyze all sorts of undisclosed metrics about each neighborhood, ultimately selecting the ‘best’ based on its calculations. But soon journalists discovered that, time and time again, Amazon was excluding black neighborhoods.”

Wired: When Government Rules by Software, Citizens Are Left in the Dark

Wired: When Government Rules by Software, Citizens Are Left in the Dark. “In July, San Francisco Superior Court Judge Sharon Reardon considered whether to hold Lamonte Mims, a 19-year-old accused of violating his probation, in jail. One piece of evidence before her: the output of algorithms known as PSA that scored the risk that Mims, who had previously been convicted of burglary, would commit a violent crime or skip court. Based on that result, another algorithm recommended that Mims could safely be released, and Reardon let him go. Five days later, police say, he robbed and murdered a 71-year-old man. On Monday, the San Francisco District Attorney’s Office said staffers using the tool had erroneously failed to enter Mims’ prior jail term. Had they done so, PSA would have recommended he be held, not released.”

MIT Technology Review: AI Programs Are Learning to Exclude Some African-American Voices

MIT Technology Review: AI Programs Are Learning to Exclude Some African-American Voices. “All too often people make snap judgments based on how you speak. Some AI systems are also learning to be prejudiced against some dialects. And as language-based AI systems become ever more common, some minorities may automatically be discriminated against by machines, warn researchers studying the issue.”

Wired: Google’s New Algorithm Perfects Photos Before You Even Take Them

Wired: Google’s New Algorithm Perfects Photos Before You Even Take Them. “Taking Instagram-worthy photos is one thing—editing them is another. Most of us just upload a pic, tap a filter, tweak the saturation, and post. If you want to make a photo look good without the instant gratification of the Reyes filter, enlist a professional. Or a really smart algorithm.”