Wired: AI Algorithms Need FDA-style Drug Trials

“Intelligent systems at scale need regulation because they are an unprecedented force multiplier for the promotion of the interests of an individual or a group. For the first time in history, a single person can customize a message for billions and share it with them within a matter of days. A software engineer can create an army of AI-powered bots, each pretending to be a different person, promoting content on behalf of political or commercial interests. Unlike broadcast propaganda or direct marketing, this approach also uses the self-reinforcing qualities of the algorithm to learn what works best to persuade and nudge each individual.”

Reuters: U.S. senators say social media letting algorithms ‘run wild’

“A U.S. Senate panel on Tuesday questioned how major social media companies like Facebook Inc and Alphabet Inc’s Google unit use algorithms and artificial intelligence to serve up new content to keep users engaged.”

Sydney Morning Herald: Google search ranking boss warns against algorithm oversight

“Search giant Google has warned that the Australian competition watchdog’s proposal for a regulator to oversee its algorithm could increase risks from spammers. One of Google’s top executives, vice-president of search Pandu Nayak, said the Australian Competition and Consumer Commission’s proposal to impose oversight on the way search engines rank information and news articles through a review authority could invite trouble.”

KSEN: MSU Researchers Receive Grant To Build ‘Algorithmic Awareness’ As Form Of Digital Literacy

“To help increase awareness of algorithms, the [Montana State University] Library received a $50,000 grant for ‘Unpacking the Algorithms That Shape our User Experience.’ The project includes three main parts, all with a goal of introducing ‘algorithmic awareness’ as a form of digital literacy: researching algorithms and writing a report for users, developing a teaching tool in order to give transparency to common algorithms, and creating a curriculum and pilot class.”

Ars Technica: Yes, “algorithms” can be biased. Here’s why

“Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition ‘algorithms’ (and by extension all ‘algorithms’) ‘always have these racial inequities that get translated’ and that ‘those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.’ She was mocked for this claim on the grounds that ‘algorithms’ are ‘driven by math’ and thus can’t be biased—but she’s basically right. Let’s take a look at why.”

New York Times Magazine: How Secrecy Fuels Facebook Paranoia

“The biggest internet platforms are businesses built on asymmetric information. They know far more about their advertising, labor and commerce marketplaces than do any of the parties participating in them. We can guess, but can’t know, why we were shown a friend’s Facebook post about a divorce, instead of another’s about a child’s birth. We can theorize, but won’t be told, why YouTube thinks we want to see a right-wing polemic about Islam in Europe after watching a video about travel destinations in France. Everything that takes place within the platform kingdoms is enabled by systems we’re told must be kept private in order to function. We’re living in worlds governed by trade secrets. No wonder they’re making us all paranoid.”

Harvard Business Review: Why We Need to Audit Algorithms

“Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls ‘data fundamentalism’ — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.”