ZDNet: On Facebook, quoting ‘Dune’ gets you suspended while posting COVID and vaccine misinformation gets you recommended

“Quoting movies doesn’t hurt or result in the death of anyone. But do you know what does? Spreading misinformation about vaccines and COVID-19. That absolutely will kill people. On July 20, the internet news watchdog NewsGuard presented a report to the World Health Organization. The report’s conclusion: Not only has Facebook failed to be proactive in the removal of misinformation about vaccines and COVID-19, but the social platform is actively enabling and accelerating its spread.”

Lawfare Blog: Is the Facebook Oversight Board an International Human Rights Tribunal?

“Key Oversight Board design features—such as its ability to issue binding rulings and nonbinding recommendations, as well as the standards it applies—resemble those of international human rights tribunals. In addition, the board is developing answers to procedural questions that resemble the responses these institutions have adopted. The Trump decision also reveals that the board faces challenges to its authority and legitimacy similar to those that new international review bodies have confronted.”

BuzzFeed News: Instagram Labeled One Of Islam’s Holiest Mosques A Terrorist Organization

“Instagram removed posts and blocked hashtags about one of Islam’s holiest mosques because its content moderation system mistakenly associated the site with a designation the company reserves for terrorist organizations, according to internal employee communications seen by BuzzFeed News.”

Wired: Here’s how to fix online harassment. No, seriously

“This entire framing of the problem of ‘content moderation’ is flawed. Someone’s experience on a platform is much more than the abuse-likelihood score of each piece of content they see. It is affected by every feature and design choice. Explicit product decisions and machine learning algorithms determine what is given distribution and prominence in timelines and recommendation modules. Prompts and nudges like text composers and big buttons are designed to encourage certain behavior – which is not always good, for instance if they end up motivating quickly-fired retorts and thoughtless replies.”

BuzzFeed News: Facebook Stopped Employees From Reading An Internal Report About Its Role In The Insurrection. You Can Read It Here.

“Titled ‘Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement,’ the report is one of the most important analyses of how the insurrectionist effort to overturn a free and fair US presidential election spread across the world’s largest social network — and how Facebook missed critical warning signs. The report examines how the company was caught flat-footed as the Stop the Steal Facebook group supercharged a movement to undermine democracy, and concludes the company was unprepared to stop people from spreading hate and incitement to violence on its platform.”

The Atlantic: What Facebook Did for Chauvin’s Trial Should Happen All the Time

“Discussion about content moderation tends to focus on binary decisions concerning whether individual pieces of content are left up or taken down. But content moderation is much more about knobs and dials that regulate the overall flow of posts. An individual piece of content is a mere drop in the ocean of Facebook content; the underlying systems that move this content around are the tides. The public discussion about content moderation typically fixates on the drops—what should Facebook have done with Donald Trump’s posts?—but when you’re weathering a storm, what matters is the tides.”

Politico: Facebook’s ‘supreme court’ struggles to set global free speech rules

“Roughly two months since a group of outside experts started ruling on what people could post on Facebook, cracks in the so-called Oversight Board are already starting to show. So far, the independent body of human rights experts, free speech supporters and legal scholars that rules on what content Facebook must take down or put back up has reversed the social media giant’s decisions in four out of its first five cases.”

The New Republic: How a Bunch of Revolutionary War Reenactors Got Caught Up in Facebook’s Purge of Militia Groups

“[Rory] Nolan belongs to historical reenactment groups that sometimes dramatize Revolutionary War-era militias (you can begin to see the problem), and he manages the Facebook and Instagram pages for several of them. He tried to establish new accounts under new email addresses, but they didn’t last long before getting swept up in the same moderation process. Again, they were banned with no possibility of appeal. And like that, Nolan’s social media presence—and much of his social life—quietly winked out of existence.”

New York Times: For Political Cartoonists, the Irony Was That Facebook Didn’t Recognize Irony

“In recent years, the company has become more proactive at restricting certain kinds of political speech, clamping down on posts about fringe extremist groups and on calls for violence. In January, Facebook barred Mr. Trump from posting on its site altogether after he incited a crowd that stormed the U.S. Capitol. At the same time, misinformation researchers said, Facebook has had trouble identifying the slipperiest and subtlest of political content: satire. While satire and irony are common in everyday speech, the company’s artificial intelligence systems — and even its human moderators — can have difficulty distinguishing them.”