The Verge: Telepath is a buzzy new social network trying to fix what’s broken on Twitter

The Verge: Telepath is a buzzy new social network trying to fix what’s broken on Twitter. “The app, which like Clubhouse is available only in private beta and requires an invitation to use, resembles a hybrid of Twitter and Reddit. As on Twitter, the app opens to a central scrolling feed of updates from people and topics that you follow. And as on Reddit, every post must be created within a group, which Telepath calls a ‘network.’ But what stands out about Telepath is its approach to moderation — which is both more aggressive and more constructive than any I have ever seen in a venture-backed social app at this stage of development.”

Mashable: YouTube puts human content moderators back to work

Mashable: YouTube puts human content moderators back to work. “YouTube is re-assigning the work of content moderation to more actual humans, Neal Mohan, YouTube’s chief product officer, told the Financial Times. At the start of the pandemic, YouTube had to reduce the staff and workload of in-office human moderators. So rather than relying on that 10,000-person workforce, the company gave broader content moderation power to automated systems that are able to recognize videos with harmful content and remove them immediately.”

CNBC: Google is tightening rules on internal message boards as ‘new world creates urgency’

CNBC: Google is tightening rules on internal message boards as ‘new world creates urgency’. “Google is asking employees to take a more active role in moderating internal message boards, as those discussions get more heated and employees remain working from home, according to documents obtained by CNBC.”

Poynter: In the battle over content moderation, transparency seems to be all anyone can agree on

Poynter: In the battle over content moderation, transparency seems to be all anyone can agree on. “This week brought two new visions for how to reform Section 230 of the Communications Decency Act. The law, which shields large tech companies from legal liability for content posted by third parties on their platforms, has drawn fire from politicians in both parties.”

Washington Post: Burnout, splinter factions and deleted posts: Unpaid online moderators struggle to manage divided communities

Washington Post: Burnout, splinter factions and deleted posts: Unpaid online moderators struggle to manage divided communities. “From Facebook, Reddit and Nextdoor to homes for more niche topics like fan fiction, many online communities and groups are kept afloat by volunteer armies of moderators. The people who moderate these groups often start as founders or enthusiastic members, interested in helping shape and police the communities they’re already a part of. They are both cleaning crew and den parent. Moderators take down spam and misinformation. They mediate petty disagreements and volatile civil wars. They carefully decide between reminding people of the rules, freezing conversations, removing members or letting drama subside on its own.”

BuzzFeed News: Facebook Said It Removed A Militia Event Page Threatening Violence In Kenosha. It Didn’t.

BuzzFeed News: Facebook Said It Removed A Militia Event Page Threatening Violence In Kenosha. It Didn’t. “[Sandra] Fiehrer’s complaint was one of the 455 sent to Facebook warning of a militia event violating the company’s policies. Together, they inspired four manual and numerous automated reviews of the event page by Facebook’s content moderators, which all concluded it did not violate the company’s rules. CEO Mark Zuckerberg would later tell employees it was ‘an operational mistake.’ In those same remarks, which were made public after being reported by BuzzFeed News, Zuckerberg suggested to employees that the company had removed the event and militia page from the platform the next day. But internal company discussions obtained by BuzzFeed News show that’s not true. The event was actually deleted the day after the shooting, not by Facebook, but by a page administrator for the Kenosha Guard.”

Economic Times: Officials debate whether India should have its own social media content moderation rules

Economic Times: Officials debate whether India should have its own social media content moderation rules. “India provides immunity, or safe harbour, to intermediaries under Section 79(2) of the I-T Act on the condition that the platforms do not modify the content in any form. India is concerned about the lack of transparency around the moderation practices followed by social media platforms.”

CNN: YouTube is banking on tech to clean up controversial content, as moderators stay home

CNN: YouTube is banking on tech to clean up controversial content, as moderators stay home. “YouTube said Tuesday that it is increasingly relying on technology to moderate content, resulting in a sharp rise in removed videos, including some that didn’t violate its policies. The Google-owned company said that between April and June it removed more than 11.4 million videos for violating its policies. That’s more than double what it took down in the previous three months.” Oh, why not. Auto-regulating content HAS WORKED SO WELL SO FAR…

Reuters: Exclusive: Facebook employees internally question policy after India content controversy – sources, memos

Reuters: Exclusive: Facebook employees internally question policy after India content controversy – sources, memos. “The world’s largest social network is battling a public-relations and political crisis in India after the Wall Street Journal reported that [Ankhi] Das opposed applying the company’s hate-speech rules to a politician from Prime Minister Narendra Modi’s party who had in posts called Muslims traitors.”

The Verge: Facebook chose not to act on militia complaints before Kenosha shooting

The Verge: Facebook chose not to act on militia complaints before Kenosha shooting. “In the wake of an apparent double murder Tuesday night in Kenosha, Facebook has faced a wave of scrutiny over posts by a self-proclaimed militia group called Kenosha Guard, which issued a ‘call to arms’ in advance of the protest. Facebook took down Kenosha Guard’s Facebook page Wednesday morning, identifying the posts as violating community standards. But while the accounts were ultimately removed, new evidence suggests the platform had ample warning about the account before the shooting brought the group to prominence.”

Slate: Confederate Groups Are Thriving on Facebook. What Does That Mean for the Platform?

Slate: Confederate Groups Are Thriving on Facebook. What Does That Mean for the Platform? “In the wake of Black Lives Matter protests, demands for Facebook to address hate speech have escalated, coinciding with a nationwide movement to remove Confederate statues and flags from cities, states, and institutions long imbued with Confederate symbolism…. These movements, intertwined and mutually reinforcing, pose a particular threat to those who consider themselves present-day Confederates. From their perspective, Facebook has become more essential than ever to amplifying their message at a critical moment in history—just as Facebook has shown a new willingness to police their speech.”

Los Angeles Times: Reddit moderators spent years asking for help fighting hate. The company may finally be listening

Los Angeles Times: Reddit moderators spent years asking for help fighting hate. The company may finally be listening. “When [Jefferson] Kelley, a Reddit moderator, booted hateful users off threads where Black people discussed sensitive personal experiences, racial slurs piled up in his inbox. Crude remarks about women filled the comment sections under his favorite ‘Star Trek’ GIFs. The proliferation of notorious forums, including one that perpetuated a vicious racist stereotype about Black fathers, stung Kelley, a Black father himself. Kelley and other moderators repeatedly pleaded with the company to back them up and take stronger action against harassment and hate speech. But Reddit never quite came through. Then, all of a sudden, that seemed to change.”

CNN: Facebook’s future keeps getting murkier

CNN: Facebook’s future keeps getting murkier. “It’s not the first time Facebook’s content moderation policies have been under the microscope, but this time feels different. Voices inside the company have publicly expressed dismay over its actions, and hundreds of corporations are using the power of their ad dollars to lobby for change from the outside. The pressure could challenge CEO Mark Zuckerberg’s long-held desire to preserve free expression on the platform, especially by public figures. But Facebook’s size and power — and Zuckerberg’s outsized influence within the company — mean it’s not yet clear to what extent things might change.”

E&E News: Denial expands on Facebook as scientists face restrictions

E&E News: Denial expands on Facebook as scientists face restrictions. “A climate scientist says Facebook is restricting her ability to share research and fact-check posts containing climate misinformation. Those constraints are occurring as groups that reject climate science increasingly use the platform to promote misleading theories about global warming. The groups are using Facebook to mischaracterize mainstream research by claiming that reduced consumption of fossil fuels won’t help address climate change. Some say the planet and people are benefitting from the rising volume of carbon dioxide that’s being released into the atmosphere.”

The Drum: We’re having the wrong conversation to fix social media

The Drum: We’re having the wrong conversation to fix social media. “Debates are raging around social media boycotts, algorithmic biases, and content moderation. While most people seem to agree that they want ‘bad content’ removed, it’s less clear what ‘bad’ actually is and what the consequence of that removal would be. Clearly things need to change, and systemic reforms are needed, yet the problem is, we’re all debating the wrong issue. We need to stop arguing about freedom of speech vs. content moderation. The real problem is freedom of reach.”