ZDNet: AI’s big problem: Lazy humans just trust the algorithms too much. “It’s all well and good to recommend that humans consistently monitor the decisions made by AI systems, especially if those decisions impact decisive fields like warfare or policing. But in reality, how good are humans at catching the flaws of those systems?”

MuckRock: Smarter government or data-driven disaster: the algorithms helping control local communities. “Does handing government decisions over to algorithms save time and money? Can algorithms be fairer or less biased than human decision making? Do they make us safer? Automation and artificial intelligence could improve the notorious inefficiencies of government, and it could exacerbate existing errors in the data being used to power it. MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) have compiled a collection of algorithms used in communities across the country to automate government decision-making.”

TechCrunch: AI desperately needs regulation and public accountability, experts say. “Artificial intelligence systems and creators are in dire need of direct intervention by governments and human rights watchdogs, according to a new report from researchers at Google, Microsoft and others at AI Now. Surprisingly, it looks like the tech industry just isn’t that good at regulating itself.”

Penn State News: IST researchers develop tool to expand deep learning into security domains. “Deep learning is a segment of artificial intelligence that focuses on algorithms that can learn the characteristics of text, images or sound from annotated examples provided to it. The team’s technique, named LEMNA, could help security analysts and machine-learning developers to establish trust in deep learning models by correctly identifying, explaining and correcting errors that the models make.”

Wired: Free Speech Is Not The Same As Free Reach. “…the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.”

The Regulatory Review: Improving Federal Regulation of Medical Algorithms. “In emergency situations, doctors have little time to save the lives of trauma patients. Gunshot wounds, car crashes, and other life-threatening harms often cause severe blood loss, which is the leading cause of preventable death when trauma puts patients’ lives on the line. To manage the demands of these emergency cases, physicians today complement their medical skill-set with a new tool: algorithms. But in a recent paper, a legal scholar argues that federal regulatory reforms must occur to unleash the full lifesaving potential of algorithms in health care. Nicholson Price, a professor at University of Michigan Law School, claims that the U.S. Food and Drug Administration (FDA) lacks the necessary expertise in computer science to apply its current regulations to medical algorithms and, as a result, could discourage much-needed innovation.”