The Telegraph: Government develops artificial intelligence program to stop online extremism

The Telegraph: Government develops artificial intelligence program to stop online extremism. “The £600,000 software can automatically detect Isil propaganda and stop it from going online, and ministers claim the new tool can detect 94 per cent of Isil propaganda with 99.9 per cent accuracy.” For the purposes of this article, Isil = ISIS, as far as I can tell.
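Claims like "94 per cent detection with 99.9 per cent accuracy" are easy to misread, because what matters at scale is the base rate of the target content. Here is a minimal back-of-the-envelope sketch; every number in it (upload volume, how much content is actually propaganda, and reading "99.9 per cent accuracy" as a 0.1 per cent false positive rate) is an illustrative assumption, not a figure from the article:

```python
# Base-rate sketch: what "94% detection, 99.9% accuracy" could mean at scale.
# All numbers below are illustrative assumptions, not figures from the article.

daily_uploads = 1_000_000      # assumed volume of videos screened per day
propaganda = 100               # assumed number of actual Isil videos among them
recall = 0.94                  # "detects 94 per cent of Isil propaganda"
false_positive_rate = 0.001    # one loose reading of "99.9 per cent accuracy"

true_positives = propaganda * recall
false_positives = (daily_uploads - propaganda) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"Correctly flagged: {true_positives:.0f}")        # ~94
print(f"Innocent videos flagged: {false_positives:.0f}") # ~1000
print(f"Share of flags that are real propaganda: {precision:.1%}")  # ~8.6%
```

Under these assumptions, a classifier that sounds extremely accurate still flags roughly ten innocent videos for every genuine one, simply because the target content is rare. Several of the stories below are, in effect, about this arithmetic playing out in practice.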

YouTube: Expanding our work against abuse of our platform

YouTube: Expanding our work against abuse of our platform. “In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies. Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content. Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”
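YouTube has not published the internals of its system, but the idea of "machine learning technology to scale the efforts of our human moderators" generally means score-based triage: the model acts alone on clear-cut cases and routes only the uncertain middle band to people. A generic sketch, with invented thresholds, names, and scores:

```python
# Generic sketch of ML-assisted moderation triage. The thresholds, class
# names, and scores are hypothetical illustrations, not YouTube's pipeline.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    abuse_score: float  # hypothetical classifier output in [0, 1]

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: high-confidence violations removed outright
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: borderline cases routed to moderators

def triage(video: Video) -> str:
    """Route a video based on its classifier score."""
    if video.abuse_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # machine acts alone on clear-cut cases
    if video.abuse_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # humans spend time only where the model is unsure
    return "publish"

for v in [Video("a", 0.99), Video("b", 0.75), Video("c", 0.10)]:
    print(v.video_id, "->", triage(v))
```

The design point is that a fixed enforcement team only ever sees the middle band, which is how review capacity keeps up with upload volume; the failure stories in the rest of this roundup mostly happen at the two thresholds.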

The Guardian: Artist’s ‘sexual’ robin redbreast Christmas cards banned by Facebook

The Guardian: Artist’s ‘sexual’ robin redbreast Christmas cards banned by Facebook. “Facebook has blocked the sale of a pack of Christmas cards featuring a robin redbreast because of its ‘sexual’ and ‘adult’ nature. The artist, Jackie Charley, said she ‘could not stop laughing’ when she discovered the reason the social media company would not approve the product last month.”

Julia Reda: When filters fail: These cases show we can’t trust algorithms to clean up the internet

Julia Reda: When filters fail: These cases show we can’t trust algorithms to clean up the internet. “Installing censorship infrastructure that surveils everything people upload and letting algorithms make judgement calls about what we all can and cannot say online is an attack on our fundamental rights. But there’s another key question: Does it even work? The [European Commission] claims that where automatic filters have already been implemented voluntarily – like YouTube’s Content ID system – ‘these practices have shown good results’. Oh, really? Here are examples of filters getting it horribly wrong, ranging from hilarious to deeply worrying…”

CNBC: Facebook shuts down 1 million accounts per day but can’t stop all ‘threat actors,’ security chief says

CNBC: Facebook shuts down 1 million accounts per day but can’t stop all ‘threat actors,’ security chief says. “Facebook turns off more than 1 million accounts a day as it struggles to keep spam, fraud and hate speech off its platform, its chief security officer says. Still, the sheer number of interactions among its 2 billion global users means it can’t catch all ‘threat actors,’ and it sometimes removes text posts and videos that it later finds didn’t break Facebook rules, says Alex Stamos.”

Techdirt: Google Report: 99.95 Percent Of DMCA Takedown Notices Are Bot-Generated Bullshit Buckshot

From Techdirt, which does not mince words: Google Report: 99.95 Percent Of DMCA Takedown Notices Are Bot-Generated Bullshit Buckshot. “…Google noted that more than half the takedown notices it was receiving in 2009 were mere attempts by one business targeting a competitor, while over a third of the notices contained nothing in the way of a valid copyright dispute. But if those numbers were striking in 2009, Google’s latest comment to the Copyright Office (see our own comment here) on what’s happening in the DMCA 512 notice-and-takedown world shows some stats for takedown notices received through its Trusted Copyright Removal Program… and makes the whole ordeal look completely silly.”

Forbes: Do Social Media Platforms Really Care About Online Abuse?

Forbes: Do Social Media Platforms Really Care About Online Abuse? “Each time the platforms miss something, the typical response from the companies tends to be along the line of limited resources – that the platforms process so much content that they simply lack the human review resources to go through all that content. Yet, when it comes to other fields like food safety, we don’t argue that salmonella outbreaks are perfectly acceptable because it would cost too much for companies to invest in the equipment, training and processes to avoid it. We understand that there is always a risk of an outbreak, but we expect that food processing companies will pay the costs to avoid it to the best that technology and human capability permits today.”