Mashable: This website lets you see how conspiracy theorists fall down the YouTube rabbit hole

Mashable: This website lets you see how conspiracy theorists fall down the YouTube rabbit hole. “Ever wonder how your dear Aunt Karen got radicalized into believing the bizarre conspiracy theories she shares on social media? What about your apolitical college buddy who suddenly can’t seem to stop complaining about social justice and ‘cancel culture’? Well, there’s a good chance they fell down the YouTube rabbit hole. And a new website, TheirTube, wants to show you how that happened.”

Engadget: YouTube’s tweaks to recommend fewer conspiracy videos seem to be working

Engadget: YouTube’s tweaks to recommend fewer conspiracy videos seem to be working. “As of January of 2019 — and after facing public backlash — YouTube promised to curb the amount of conspiracy videos it pushes to users. A study published by the University of California, Berkeley states that these efforts do seem to be working, and that their analyses show a 40% reduction in the likelihood of YouTube suggesting conspiracy-based content.”

New York Times: UK to Make Social Media Platforms Responsible for Harmful Content

New York Times: UK to Make Social Media Platforms Responsible for Harmful Content. “Britain said it would make social media companies such as Facebook, Twitter and Snap responsible for blocking or removing harmful content on their platforms. A duty of care will be imposed to ensure all companies had systems in place to react to concerns over harmful content and improve the safety for their users, the government said.”

TechCrunch: Study of YouTube comments finds evidence of radicalization effect

TechCrunch: Study of YouTube comments finds evidence of radicalization effect. “The study, carried out by researchers at Switzerland’s Ecole polytechnique fédérale de Lausanne and the Federal University of Minas Gerais in Brazil, found evidence that users who engaged with a middle ground of extreme right-wing content migrated to commenting on the most fringe far-right content.”

First Monday: Report and repeat: Investigating Facebook’s hate speech removal process

First Monday: Report and repeat: Investigating Facebook’s hate speech removal process. “Social media is rife with hate speech. Although Facebook prohibits this content on its site, little is known about how much of the hate speech reported by users is actually removed by the company. Given the enormous power Facebook has to shape the universe of discourse, this study sought to determine what proportion of reported hate speech is removed from the platform and whether patterns exist in Facebook’s decision-making process. To understand how the company is interpreting and applying its own Community Standards regarding hate speech, the authors identified and reported hundreds of comments, posts, and images featuring hate speech to the company (n=311) and recorded Facebook’s decision regarding whether or not to remove the reported content. A qualitative content analysis was then performed on the content that was and was not removed to identify trends in Facebook’s content moderation decisions about hate speech. Of particular interest was whether the company’s 2018 policy update resulted in any meaningful change.”

BBC: Facebook and YouTube moderators sign PTSD disclosure

BBC: Facebook and YouTube moderators sign PTSD disclosure. “Content moderators are being asked to sign forms stating they understand the job could cause post-traumatic stress disorder (PTSD), according to reports. The Financial Times and The Verge reported moderators for Facebook and YouTube, hired by the contractor Accenture, were sent the documents.”

Techdirt: Content Moderation At Scale Is Impossible: YouTube Says That Frank Capra’s US Government WWII Propaganda Violates Community Guidelines

Techdirt: Content Moderation At Scale Is Impossible: YouTube Says That Frank Capra’s US Government WWII Propaganda Violates Community Guidelines. “The film, which gives a US government-approved history of the lead up to World War II includes a bunch of footage of Adolf Hitler and the Nazis. Obviously, it wasn’t done to glorify them. The idea is literally the opposite. However, as you may recall, last summer when everyone was getting mad (again) at YouTube for hosting ‘Nazi’ content, YouTube updated its policies to ban ‘videos that promote or glorify Nazi ideology.’ We already covered how this was shutting down accounts of history professors. And, now, it’s apparently leading them to take US propaganda offline as well.”

BBC: Twitter apologises for letting ads target neo-Nazis and bigots

BBC: Twitter apologises for letting ads target neo-Nazis and bigots. “Twitter has apologised for allowing adverts to be micro-targeted at certain users such as neo-Nazis, homophobes and other hate groups. The BBC discovered the issue and that prompted the tech firm to act.”

Mashable: Does YouTube radicalize users? This study says not — but it’s deeply flawed.

Mashable: Does YouTube radicalize users? This study says not — but it’s deeply flawed. “A new study of YouTube’s algorithm attracting mainstream attention this weekend claims that the online video giant ‘actively discourages’ radicalization on the platform. And if that sounds suspect to you, it should.”

ThePrint: In 2020, Google will have to wrangle the beast it created with Youtube

ThePrint: In 2020, Google will have to wrangle the beast it created with Youtube. “As 2020 begins, the largest online video service is being dragged deeper into political fights over privacy, copyright and content moderation. In response, YouTube is trying to preserve the sanctity of its status as an online platform with little liability for what happens on its site. Instead, that burden is increasingly falling on the shoulders of regulators, video creators and other partners.”

The Daily Beast: Instagram Won’t Pull These Racist, Violent, Russian-Inspired Accounts

The Daily Beast: Instagram Won’t Pull These Racist, Violent, Russian-Inspired Accounts. “Memes published by some of the worst Kremlin-backed trolls of the 2016 campaign are being echoed online by American neo-Confederates. The Russian accounts, overseen by the Russian Internet Research Agency (IRA), have since been taken down. But American parrot accounts running some of the same racist crap—and worse—are still live on Instagram, an investigation by The Daily Beast and the Atlantic Council’s Digital Forensic Research Lab found. At least one of these live accounts claims to belong to a Russian network persona.”

The Verge: The Terror Queue

The Verge: The Terror Queue. “Peter, who has done this job for nearly two years, worries about the toll that the job is taking on his mental health. His family has repeatedly urged him to quit. But he worries that he will not be able to find another job that pays as well as this one does: $18.50 an hour, or about $37,000 a year. Since he began working in the violent extremism queue, Peter noted, he has lost hair and gained weight. His temper is shorter. When he drives by the building where he works, even on his off days, a vein begins to throb in his chest.” This is about moderating content at Google and YouTube. It left me in tears. A disturbing but important read.

Ars Technica: Why can’t Internet companies stop awful content?

Ars Technica: Why can’t Internet companies stop awful content? “Many of us are baffled by the degradation of the Internet. We have the ingenuity to put men on the Moon (unfortunately, only men so far), so it defies logic that the most powerful companies on Earth can’t fix this. With their wads of cash and their smart engineers, they should nerd harder. So why does the Internet feel like it’s getting worse, not better? And, more importantly, what do we do about it?”

Mashable: 8chan returns with a new name and a reminder not to do illegal stuff

Mashable: 8chan returns with a new name and a reminder not to do illegal stuff. “Controversial imageboard 8chan has been revived under a new name, 8kun — with the front page of the site now bearing a warning that ‘Any content that violates the laws of the United States of America will be deleted and the poster will be banned.'”

Al Monitor: Can Egypt’s newest search engine root out extremism?

Al Monitor: Can Egypt’s newest search engine root out extremism? “Dar al-Ifta, Egypt’s Islamic authority with the power to issue fatwas (religious edicts), recently launched a search engine to track fatwas from terrorist groups and extremists and to help Al-Azhar scholars tackle Islamophobia. The data collected will enable clerics to develop indicators to help decision-makers better understand terrorist networks and guide policymakers in formulating effective counterterrorism strategies.”