New York Times: Germany Acts to Tame Facebook, Learning From Its Own History of Hate. “Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: ‘Everybody without a badge is a potential spy!’ Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week. They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.”

Forbes: The Problem With Using AI To Fight Terrorism On Social Media. “Social media has a terrorism problem. From Twitter’s famous 2015 letter to Congress that it would never restrict the right of terrorists to use its platform, to its rapid about-face in the face of public and governmental outcry, Silicon Valley has had a change of heart in how it sees its role in curbing the use of its tools by those who wish to commit violence across the world. Today Facebook released a new transparency report that emphasizes its efforts to combat terroristic use of its platform and the role AI is playing in what it claims are significant successes. Yet, that narrative of AI success has been increasingly challenged, from academic studies suggesting that not only is content not being deleted, but that other Facebook tools may actually be assisting terrorists, to a Bloomberg piece last week that demonstrates just how readily terrorist content can still be found on Facebook. Can we really rely on AI to curb terroristic use of social media?”

Tubefilter: Cisco Announces YouTube Ad Boycott, Citing Fear Of A “Brand-Tarnishing Experience”. “After 300 brands — including telecom giant Cisco — were discovered to have run YouTube ads last month on videos promoting Nazism, pedophilia, and conspiracy theories, the video giant is once again facing advertiser fallout. Cisco announced in a blog post Wednesday that it was pulling all ads from YouTube due to brand safety concerns — though it promptly removed and re-edited the post 24 hours later.”

The Drum: Google removes Singaporean YouTuber Amos Yee channel over brand safety fears. “YouTube vlogger Amos Yee is in the news again after his YouTube channel was taken down by Google over brand safety concerns after he posted videos defending paedophilia. Two years ago, the Singaporean was charged with six counts related to the anti-religion posts on his YouTube channel and two for failing to show up in court. He then sought asylum in the United States after finishing his sentence.”

New York Times: How Everyday Social Media Users Become Real-World Extremists. “When they talk about incitement to violence on Facebook — a growing problem in developing markets — representatives and critics of the platform alike tend to describe it as a problem created by small factions of extremists. The extremists, in this view, push out rumors and inflammatory claims to everyday users, who become ideologically infected. So stopping the violence should be as simple as silencing the extremists…. But a reconstruction of how Facebook-based misinformation and hate speech contributed to anti-Muslim riots in Sri Lanka last month, along with research on how people use social media, suggests that those who set out to be provocateurs are not the only danger — or even the biggest one.”

Neowin: YouTube removed over eight million videos in last quarter of 2017. “YouTube released a transparency report on how it is enforcing its community guidelines, which do not allow content related to ‘pornography, incitement to violence, harassment, or hate speech’, for example. With the help of machine learning algorithms, the company announced it removed almost 8.3 million videos from its platform in the period covering October to December of 2017.”