Techdirt: Florida Presents Its Laughable Appeal For Its Unconstitutional Social Media Content Moderation Law

Techdirt: Florida Presents Its Laughable Appeal For Its Unconstitutional Social Media Content Moderation Law . “Now that Texas has signed its unconstitutional social media content moderation bill into law, the action shifts back to Florida’s similar law that was already declared unconstitutional in an easy decision by the district court. Florida has filed its opening brief in its appeal before the 11th Circuit and… it’s bad. I mean, really, really bad.”

The New Republic: How a Bunch of Revolutionary War Reenactors Got Caught Up in Facebook’s Purge of Militia Groups

The New Republic: How a Bunch of Revolutionary War Reenactors Got Caught Up in Facebook’s Purge of Militia Groups. “[Rory] Nolan belongs to historical reenactment groups that sometimes dramatize Revolutionary War-era militias (you can begin to see the problem), and he manages the Facebook and Instagram pages for several of them. He tried to establish new accounts under new email addresses, but they didn’t last long before getting swept up in the same moderation process. Again, they were banned with no possibility of appeal. And like that, Nolan’s social media presence—and much of his social life—quietly winked out of existence.”

May it Please the Court: Exploring Facebook’s Oversight Board Formation and Decisions (Loyola University Chicago School of Law)

Loyola University Chicago School of Law: May it Please the Court: Exploring Facebook’s Oversight Board Formation and Decisions. “Last Friday, Facebook’s Oversight Board (‘the Board’) issued its latest verdict, overturning the company’s decision to remove a post that moderators alleged violated Facebook’s Violence and Incitement Community Standard. This judgment brings the Board’s total number of decisions to seven, with the Board overturning Facebook’s own decision in five out of the six substantive rulings it has issued. The Board’s cases have covered several topics so far, including nudity and hate speech. Because Facebook’s Oversight Board does not have any modern equivalents, it is worth exploring what went into this experiment’s formation.”

Techdirt: A Few More Thoughts On The Total Deplatforming Of Parler & Infrastructure Content Moderation

Techdirt: A Few More Thoughts On The Total Deplatforming Of Parler & Infrastructure Content Moderation. “I’ve delayed writing deeper thoughts on the total deplatforming of Parler, in part because there was so much else happening (including some more timely posts about Parler’s lawsuit regarding it), but more importantly because for years I’ve been calling for people to think more deeply about content moderation at the infrastructure layer, rather than at the edge. Because those issues are much more complicated than the usual content moderation debates.”

MIT Technology Review: Why social media can’t keep moderating content in the shadows

MIT Technology Review: Why social media can’t keep moderating content in the shadows. “In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served the role of information gatekeeper, it was saddled with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on those same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderated election-related content.”

“So many times we forget to listen”: How Spaceship Media moderated a Facebook group of 400 political women without it going off the rails (Nieman Lab)

Nieman Lab: “So many times we forget to listen”: How Spaceship Media moderated a Facebook group of 400 political women without it going off the rails. “When I spoke with Spaceship Media’s cofounders a year ago, they were about to embark on creating arguably the most ambitious news-centric Facebook group in existence: A goal of 5,000 women with diverse views in one group, talking about politics without everything self-imploding.” Forum moderation is an incredibly tough and thankless job, but when it’s done well, it’s amazing.

BBC News: Facebook bans Britain First pages

BBC News: Facebook bans Britain First pages. “Facebook has removed the pages of the anti-Islamic group Britain First and its leaders. The social media company said the group had repeatedly violated its community standards. Earlier this month, Britain First’s leader and deputy leader, Paul Golding and Jayda Fransen, were jailed after being found guilty of religiously aggravated harassment.”

Gizmodo: How a Video Game Chat Client Became the Web’s New Cesspool of Abuse

Gizmodo: How a Video Game Chat Client Became the Web’s New Cesspool of Abuse. “Over 25 million users have flocked to Discord, a text and voice platform for gamers, since its launch in May of 2015. Despite the company raising at least $30 million in venture capital funding, the company has only five ‘customer experience’ personnel and no moderators on its staff.” This is called a recipe for disaster.

The Guardian: Civil rights groups urge Facebook to fix ‘racially biased’ moderation system

The Guardian: Civil rights groups urge Facebook to fix ‘racially biased’ moderation system. “Facebook allows white supremacists to spread violent threats while censoring Black Lives Matter posts and activists of color, according to civil rights groups that called on the technology company to fix its ‘racially biased’ moderation system.”

YouTube Kicks Off “YouTube Heroes” Program

YouTube is asking for help in moderating itself. “The company has announced the launch of a new, crowdsourced moderation program called ‘YouTube Heroes,’ which asks volunteers to perform tasks like flagging inappropriate content, adding captions and subtitles, and responding to questions on the YouTube Help forum, among other things.” I thought Google was making huge strides in AI and machine learning. Why is this necessary? And if it needs more human eyes to review content, why not hire paid moderators instead of relying on volunteers?