First Draft News: Why we need a Google Trends for Facebook, Instagram, Twitter, TikTok and Reddit

First Draft News: Why we need a Google Trends for Facebook, Instagram, Twitter, TikTok and Reddit. “When it comes to data voids, a distinction is usually drawn between search engines and social media platforms. Whereas the primary interface of search engines is the search bar, the primary interface of social media platforms is the feed: algorithmic encounters with posts based on general interest, not a specific question you’re searching to answer. It’s therefore easy to miss the fact that data voids exist here, too: Even though search isn’t the primary interface, it’s still a major feature. And with billions of users, they may be creating major social vulnerabilities.”

Sydney Morning Herald: Google and Facebook face fines and algorithm transparency under new code

Sydney Morning Herald: Google and Facebook face fines and algorithm transparency under new code. “Google and Facebook will have three months to agree to revenue-sharing deals with Australian media companies before independent arbitrators intervene under a new landmark code designed to tackle the market power amassed by the US tech giants. Draft laws unveiled by the Morrison government and competition watchdog on Friday will impose a raft of conditions on the digital platforms, forcing them to compensate news media businesses for using their content and be more transparent about their data and algorithms.”

Search Engine Journal: India Proposes Access to Google and Amazon Algorithms

Search Engine Journal: India Proposes Access to Google and Amazon Algorithms. “India’s government has rules in draft form that will require tech companies like Google, Amazon and Facebook to provide source code and algorithms. The goal of the proposed rules is to build a wall against unfair monopolistic practices and create a more competitive business environment for local businesses.”

ZDNet: Google’s new AI tool could help decode the mysterious algorithms that decide everything

ZDNet: Google’s new AI tool could help decode the mysterious algorithms that decide everything. “While most people come across algorithms every day, not that many can claim that they really understand how AI actually works. A new tool unveiled by Google, however, hopes to help common humans grasp the complexities of machine learning.”

Harvard Business Review: When Algorithms Decide Whose Voices Will Be Heard

Harvard Business Review: When Algorithms Decide Whose Voices Will Be Heard. “Are we giving up our freedom of expression and action in the name of convenience? While we may have the perceived power to express ourselves digitally, our ability to be seen is increasingly governed by algorithms — with lines of codes and logic — programmed by fallible humans. Unfortunately, what dictates and controls the outcomes of such programs is more often than not a black box.”

Search Engine Land: Senate bill seeks to compel tech giants to offer ‘unfiltered’ versions of their content

Search Engine Land: Senate bill seeks to compel tech giants to offer ‘unfiltered’ versions of their content. “There’s a new bill circulating in the Senate that would require large internet companies to disclose that their results are using ‘opaque algorithms’ and offer consumers an option to see non-personalized search results or content, the Wall Street Journal (WSJ) first reported. It’s called ‘The Filter Bubble Transparency Act.'”

TechCrunch: Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

TechCrunch: Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage. “The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site. But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline.”
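The distinction the article draws between a chronological timeline and an engagement-optimized feed can be shown with a toy sketch. This is a hypothetical illustration, not Facebook’s actual ranking system: the posts, scores, and field names are invented, and real feed ranking uses far more signals. The point is simply that sorting by predicted engagement surfaces provocative content over recent content.

```python
from dataclasses import dataclass

# Hypothetical illustration of feed ranking (NOT Facebook's actual model).
# A feed ordered by predicted engagement promotes the most provocative post,
# regardless of recency -- the amplification dynamic the article describes.

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # larger = more recent
    predicted_engagement: float  # a model's guess at clicks/reactions

posts = [
    Post("alice", "photos from my trip", timestamp=3, predicted_engagement=0.2),
    Post("bob",   "outrageous hot take", timestamp=1, predicted_engagement=0.9),
    Post("carol", "lunch update",        timestamp=2, predicted_engagement=0.1),
]

# A strict chronological timeline: newest post first.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# An engagement-optimized feed: the oldest but most provocative post rises to the top.
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])      # newest first
print([p.author for p in engagement_ranked])  # highest predicted engagement first
```

In this sketch bob’s two-day-old “hot take” leads the engagement-ranked feed even though it would sit last in a chronological one, which is what the article means by amplification rather than equal voice.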

Wired: AI Algorithms Need FDA-style Drug Trials

Wired: AI Algorithms Need FDA-style Drug Trials. “Intelligent systems at scale need regulation because they are an unprecedented force multiplier for the promotion of the interests of an individual or a group. For the first time in history, a single person can customize a message for billions and share it with them within a matter of days. A software engineer can create an army of AI-powered bots, each pretending to be a different person, promoting content on behalf of political or commercial interests. Unlike broadcast propaganda or direct marketing, this approach also uses the self-reinforcing qualities of the algorithm to learn what works best to persuade and nudge each individual.”

Reuters: U.S. senators say social media letting algorithms ‘run wild’

Reuters: U.S. senators say social media letting algorithms ‘run wild’. “A U.S. Senate panel on Tuesday questioned how major social media companies like Facebook Inc and Alphabet Inc’s Google unit use algorithms and artificial intelligence to serve up new content to keep users engaged.”

Sydney Morning Herald: Google search ranking boss warns against algorithm oversight

Sydney Morning Herald: Google search ranking boss warns against algorithm oversight. “Search giant Google has warned that the Australian competition watchdog’s proposal for a regulator to oversee its algorithm could increase risks from spammers. One of Google’s top executives, vice-president of search Pandu Nayak, said the Australian Competition and Consumer Commission’s proposal to impose oversight on the way search engines rank information and news articles through a review authority could invite trouble.”

KSEN: MSU Researchers Receive Grant To Build ‘Algorithmic Awareness’ As Form Of Digital Literacy

KSEN: MSU Researchers Receive Grant To Build ‘Algorithmic Awareness’ As Form Of Digital Literacy. “To help increase awareness of algorithms, the [Montana State University] Library received a $50,000 grant for ‘Unpacking the Algorithms That Shape our User Experience.’ The project includes three main parts, all with a goal of introducing ‘algorithmic awareness’ as a form of digital literacy: researching algorithms and writing a report for users, developing a teaching tool in order to give transparency to common algorithms, and creating a curriculum and pilot class.”

Ars Technica: Yes, “algorithms” can be biased. Here’s why

Ars Technica: Yes, “algorithms” can be biased. Here’s why. “Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition ‘algorithms’ (and by extension all ‘algorithms’) ‘always have these racial inequities that get translated’ and that ‘those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.’ She was mocked for this claim on the grounds that ‘algorithms’ are ‘driven by math’ and thus can’t be biased—but she’s basically right. Let’s take a look at why.”
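Ocasio-Cortez’s point that “if you don’t fix the bias, then you are just automating the bias” can be made concrete with a deliberately trivial sketch. Everything here is invented for illustration: the groups, the decisions, and the “model” (a majority-vote rule over past decisions) are hypothetical. The math itself is neutral, yet the learned rule faithfully reproduces whatever skew the historical labels contain.

```python
from collections import Counter, defaultdict

# Toy illustration of "automating the bias" (all data invented).
# Historical decisions skew against group_b; a rule "learned" from them
# encodes that skew, even though the procedure is purely mathematical.

history = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "deny"),    ("group_b", "deny"),    ("group_b", "approve"),
]

# "Train": for each group, adopt the most common past decision.
by_group = defaultdict(Counter)
for group, decision in history:
    by_group[group][decision] += 1

model = {group: counts.most_common(1)[0][0] for group, counts in by_group.items()}

print(model)  # the biased pattern in the labels becomes the automated rule
```

The rule is “driven by math” in exactly the sense the mocking critics meant, and it is still biased, because the bias lives in the training data, not in the arithmetic.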

New York Times Magazine: How Secrecy Fuels Facebook Paranoia

New York Times Magazine: How Secrecy Fuels Facebook Paranoia. “The biggest internet platforms are businesses built on asymmetric information. They know far more about their advertising, labor and commerce marketplaces than do any of the parties participating in them. We can guess, but can’t know, why we were shown a friend’s Facebook post about a divorce, instead of another’s about a child’s birth. We can theorize, but won’t be told, why YouTube thinks we want to see a right-wing polemic about Islam in Europe after watching a video about travel destinations in France. Everything that takes place within the platform kingdoms is enabled by systems we’re told must be kept private in order to function. We’re living in worlds governed by trade secrets. No wonder they’re making us all paranoid.”

Harvard Business Review: Why We Need to Audit Algorithms

Harvard Business Review: Why We Need to Audit Algorithms. “Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls ‘data fundamentalism’ — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.”

USA Today: Google employees discussed changing search results after Trump travel ban

USA Today: Google employees discussed changing search results after Trump travel ban. “Google employees debated ways to alter search results to direct users to pro-immigration organizations and to contact lawmakers and government agencies after President Donald Trump’s immigration travel ban against predominantly Muslim countries.”