Engadget: YouTube’s tweaks to recommend fewer conspiracy videos seem to be working. “As of January of 2019 — and after facing public backlash — YouTube promised to curb the amount of conspiracy videos it pushes to users. A study published by the University of California, Berkeley states that these efforts do seem to be working, and that their analyses show a 40% reduction in the likelihood of YouTube suggesting conspiracy-based content.”

New York Times: UK to Make Social Media Platforms Responsible for Harmful Content. “Britain said it would make social media companies such as Facebook, Twitter and Snap responsible for blocking or removing harmful content on their platforms. A duty of care will be imposed to ensure all companies had systems in place to react to concerns over harmful content and improve the safety for their users, the government said.”

TechCrunch: Study of YouTube comments finds evidence of radicalization effect. “The study, carried out by researchers at Switzerland’s Ecole polytechnique fédérale de Lausanne and the Federal University of Minas Gerais in Brazil, found evidence that users who engaged with a middle ground of extreme right-wing content migrated to commenting on the most fringe far-right content.”

First Monday: Report and repeat: Investigating Facebook’s hate speech removal process. “Social media is rife with hate speech. Although Facebook prohibits this content on its site, little is known about how much of the hate speech reported by users is actually removed by the company. Given the enormous power Facebook has to shape the universe of discourse, this study sought to determine what proportion of reported hate speech is removed from the platform and whether patterns exist in Facebook’s decision-making process. To understand how the company is interpreting and applying its own Community Standards regarding hate speech, the authors identified and reported hundreds of comments, posts, and images featuring hate speech to the company (n=311) and recorded Facebook’s decision regarding whether or not to remove the reported content. A qualitative content analysis was then performed on the content that was and was not removed to identify trends in Facebook’s content moderation decisions about hate speech. Of particular interest was whether the company’s 2018 policy update resulted in any meaningful change.”

BBC: Facebook and YouTube moderators sign PTSD disclosure. “Content moderators are being asked to sign forms stating they understand the job could cause post-traumatic stress disorder (PTSD), according to reports. The Financial Times and The Verge reported moderators for Facebook and YouTube, hired by the contractor Accenture, were sent the documents.”

Techdirt: Content Moderation At Scale Is Impossible: YouTube Says That Frank Capra’s US Government WWII Propaganda Violates Community Guidelines. “The film, which gives a US government-approved history of the lead up to World War II, includes a bunch of footage of Adolf Hitler and the Nazis. Obviously, it wasn’t done to glorify them. The idea is literally the opposite. However, as you may recall, last summer when everyone was getting mad (again) at YouTube for hosting ‘Nazi’ content, YouTube updated its policies to ban ‘videos that promote or glorify Nazi ideology.’ We already covered how this was shutting down accounts of history professors. And, now, it’s apparently leading them to take US propaganda offline as well.”

BBC: Twitter apologises for letting ads target neo-Nazis and bigots. “Twitter has apologised for allowing adverts to be micro-targeted at certain users such as neo-Nazis, homophobes and other hate groups. The BBC discovered the issue and that prompted the tech firm to act.”