Bloomberg: Twitter’s trust and safety head ditches protocol for Elon Musk’s whims

Bloomberg: Twitter’s trust and safety head ditches protocol for Elon Musk’s whims. “Twitter’s decisions are often later probed by politicians and regulators, and so they are typically made with careful documentation pointing to specific policy justifications for the action, the current and former employees say. But now, internal documentation shows a decision-making process amounting to little more than unilateral directives issued by Twitter’s new owner. In late November, an account belonging to the leftist activist Chad Loder was banned from the platform. In Twitter’s internal system, a note read, ‘Suspension: direct request from Elon Musk,’ according to a screenshot viewed by Bloomberg.”

Bloomberg: Twitter Sued in Germany Over Vetting of Anti-Semitic Posts

Bloomberg: Twitter Sued in Germany Over Vetting of Anti-Semitic Posts. “Twitter Inc. was hit by a lawsuit in Germany by an anti-hate speech organization and a European group of Jewish students in a bid to force the platform to remove antisemitic content. HateAid and the European Union of Jewish Students filed the suit against Twitter to require the platform to clarify basic obligations when moderating criminal content, according to a statement on Wednesday.”

Financial Times: Can Big Tech make livestreams safe?

Financial Times: Can Big Tech make livestreams safe? “As well as self-harm and child sexual exploitation, livestreaming also featured in the racially motivated killing of 10 black people in Buffalo, New York, last year and the deadly mosque shootings of 51 in Christchurch, New Zealand, in 2019. These issues are coming to a head in the UK in particular, as the government plans new legislation this year to force internet companies to police illegal content, as well as material that is legal but deemed harmful to children.” This article contains disturbing content including references to harm and self-harm.

Washington Post: Hate speech rises on Twitter in its largest markets after Musk takeover

Washington Post: Hate speech rises on Twitter in its largest markets after Musk takeover. “Musk has fired or accepted resignations from about three-fourths of Twitter’s employees since his $44 billion takeover at the end of October. He has also terminated thousands of contractors who were monitoring the site for slurs and threats. Those cuts went deepest outside North America, where more than 75 percent of the company’s 280 million daily users live and where Twitter already had fewer moderators who understood local languages and cultural references and where the political landscape could be chaotic and prone to violence.”

Wall Street Journal: Facebook Wanted Out of Politics. It Was Messier Than Anyone Expected.

Wall Street Journal: Facebook Wanted Out of Politics. It Was Messier Than Anyone Expected. “The plan was in line with calls from some of the company’s harshest critics, who have alleged that Facebook is either politically biased or commercially motivated to amplify hate and controversy. For years, advertisers and investors have pressed the company to clean up its messy role in politics, according to people familiar with those discussions. It became apparent, though, that the plan to mute politics would have unintended consequences, according to internal research and people familiar with the project.”

TechCrunch: Questions linger over Facebook, Twitter, TikTok’s commitment to uphold election integrity in Africa, as countries head to polls

TechCrunch: Questions linger over Facebook, Twitter, TikTok’s commitment to uphold election integrity in Africa, as countries head to polls. “A dozen countries in Africa, including Nigeria, the continent’s biggest economy and democracy, are expected to hold their presidential elections next year, and questions linger on how well social media platforms are prepared to curb misinformation and disinformation after claims of botched content moderation during Kenya’s polls last August.”

MIT News: Empowering social media users to assess content helps fight misinformation

MIT News: Empowering social media users to assess content helps fight misinformation. “Using their findings, the researchers developed a prototype platform that enables users to assess the accuracy of content, indicate which users they trust to assess accuracy, and filter posts that appear in their feed based on those assessments. Through a field study, they found that users were able to effectively assess misinforming posts without receiving any prior training. Moreover, users valued the ability to assess posts and view assessments in a structured way.”
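
The mechanism the MIT researchers describe — user-submitted accuracy assessments, a per-user list of trusted assessors, and a feed filtered by those trusted assessments — can be sketched roughly as below. This is only an illustrative approximation of the idea, not the researchers’ prototype; the names (`Post`, `assessments`, `filter_feed`), the majority-vote rule, and the threshold are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    # Accuracy assessments submitted by users: user_id -> True (accurate) / False (misinforming).
    assessments: dict[str, bool] = field(default_factory=dict)

def filter_feed(posts: list[Post], viewer_trusted: set[str], threshold: float = 0.5) -> list[Post]:
    """Hide a post when more than `threshold` of the viewer's trusted assessors flag it as inaccurate.

    Hypothetical filtering rule, assumed here for illustration only.
    """
    visible = []
    for post in posts:
        trusted_votes = [ok for uid, ok in post.assessments.items() if uid in viewer_trusted]
        if trusted_votes:
            inaccurate_share = trusted_votes.count(False) / len(trusted_votes)
            if inaccurate_share > threshold:
                continue  # hidden: trusted assessors mostly marked it misinforming
        visible.append(post)
    return visible

# Example: the viewer trusts "bob" and "carol"; a post they both flag is filtered out.
posts = [
    Post("1", "Miracle cure announced", assessments={"bob": False, "carol": False}),
    Post("2", "Local election results", assessments={"bob": True}),
]
print([p.post_id for p in filter_feed(posts, viewer_trusted={"bob", "carol"})])  # ['2']
```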

TechCrunch: Musk’s impact on content moderation at Twitter faces early test in Germany

TechCrunch: Musk’s impact on content moderation at Twitter faces early test in Germany. “A German law requiring social media platforms to promptly respond to reports of hate speech — and in some cases remove illegal speech within 24 hours of it being brought to their attention — looks like it will provide an early test for whether Elon Musk-owned Twitter will face meaningful legal consequences over how recklessly he’s operating the company.”

WIRED: Twitter’s Moderation System Is in Tatters

WIRED: Twitter’s Moderation System Is in Tatters. “Even before Twitter cut some 4,400 contract workers on November 12, the platform was showing signs of strain. After Elon Musk bought the company and laid off 7,500 full time employees, disinformation researchers and activists say, the team that took down toxic and fake content vanished. Now, after years of developing relationships within those teams, researchers say no one is responding to their reports of disinformation on the site, even as data suggests Twitter is becoming more toxic.”