Techdirt: Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is ‘Dangerous Or Derogatory’

Techdirt: Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is ‘Dangerous Or Derogatory’. “Well, well. A few weeks back I had a big post all about the impossibility of moderating large content platforms at scale. It got a fair bit of attention, and has kicked off multiple discussions that are continuing to this day. However, earlier this week, it appears that Google’s ad content moderation team decided to help prove my point about the impossibility of moderating content at scale when… it decided that post was somehow ‘dangerous or derogatory.’” Hmmm. I think ResearchBuzz needs a “womp womp” tag.

Unpaid and abused: Moderators speak out against Reddit (Engadget)

Engadget: Unpaid and abused: Moderators speak out against Reddit. “Somewhere out there, a man wants to rape Emily. She knows this because he was painfully clear in typing out his threat. In fact, he’s just one of a group of people who wish her harm.
For the past four years, Emily has volunteered to moderate the content on several sizable subreddits — large online discussion forums — including r/news, with 16.3 million subscribers, and r/london, with 114,000 subscribers. But Reddit users don’t like to be moderated.”

Wired: Free Speech Is Not The Same As Free Reach

Wired: Free Speech Is Not The Same As Free Reach. “…the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.”

The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People (Motherboard)

Motherboard: The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People. “Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.”

Majority of Americans think social media platforms censor political views: Pew survey (KFGO)

KFGO: Majority of Americans think social media platforms censor political views: Pew survey. “About seven out of ten Americans think social media platforms intentionally censor political viewpoints, the Pew Research Center found in a study released on Thursday. The study comes amid an ongoing debate over the power of digital technology companies and the way they do business. Social media companies in particular, including Facebook Inc and Alphabet Inc’s Google, have recently come under scrutiny for failing to promptly tackle the problem of fake news as more Americans consume news on their platforms.” This doesn’t surprise me at all, considering how inconsistently social media platforms apply their own rules.

Motherboard: Leaked Documents Show Facebook’s Post-Charlottesville Reckoning with American Nazis

Motherboard: Leaked Documents Show Facebook’s Post-Charlottesville Reckoning with American Nazis. “‘James Fields did nothing wrong,’ the post on Facebook read, referring to the man who drove a car through a crowd protesting against white supremacy in Charlottesville in August 2017, killing one. The post accompanied an article from Squawker.org, a conservative website. In training materials given to its army of moderators, Facebook says the post is an example of content ‘praising hate crime,’ and it and others like it should be removed. But after Charlottesville Facebook had something of an internal reckoning around hate speech, and pushed to re-educate its moderators about American white supremacists in particular, according to a cache of Facebook documents obtained by Motherboard.”