TechCrunch: Twitter algorithm changes will hide more bad tweets and trolls . “Twitter is making some new changes that call on how the collective Twitterverse is responding to tweets to influence how often people see them. With these upcoming changes, tweets in conversations and search will be ranked based on a greater variety of data that takes into account things like the number of accounts registered to that user, whether that tweet prompted people to block the accounts and the IP address.”
Forbes: The Problem With Using AI To Fight Terrorism On Social Media. “Social media has a terrorism problem. From Twitter’s famous 2015 letter to Congress that it would never restrict the right of terrorists to use its platform, to its rapid about-face in the face of public and governmental outcry, Silicon Valley has had a change of heart in how it sees its role in curbing the use of its tools by those who wish to commit violence across the world. Today Facebook released a new transparency report that emphasizes its efforts to combat terroristic use of its platform and the role AI is playing in what it claims are significant successes. Yet, that narrative of AI success has been increasingly challenged, from academic studies suggesting that not only is content not being deleted, but that other Facebook tools may actually be assisting terrorists, to a Bloomberg piece last week that demonstrates just how readily terrorist content can still be found on Facebook. Can we really rely on AI to curb terroristic use of social media?”
BuzzFeed: Silicon Valley Can’t Be Trusted With Our History. “It’s the paradox of the internet age: Smartphones and social media have created an archive of publicly available information unlike any in human history — an ocean of eyewitness testimony. But while we create almost everything on the internet, we control almost none of it. In the summer of 2017, observers of the Syrian civil war realized that YouTube was removing dozens of channels and tens of thousands of videos documenting the conflict. The deletions occurred after YouTube announced that it had deployed ‘cutting-edge machine learning technology … to identify and remove violent extremism and terrorism-related content.’ But the machines went too far.”
Techdirt: Again, Algorithms Suck At Determining ‘Bad’ Content, Often To Hilarious Degrees. Warning: there is a swear word in this quote. “A few weeks back, Mike wrote a post detailing how absolutely shitty algorithms can be at determining what is ‘bad’ or ‘offensive’ or otherwise ‘undesirable’ content. While his post detailed failings in algorithms judging such weighty content as war-crime investigations versus terrorist propaganda, and Nazi hate-speech versus legitimate news reporting, the central thesis in all of this is that relying on platforms to host our speech and content when those platforms employ very, very imperfect algorithms as gatekeepers is a terrible idea. And it leads to undesirable outcomes at levels far below those of Nazis and terrorism.”
The Outline: How Platforms Alter History. “After Nasim Aghdam opened fire at YouTube’s San Bruno headquarters last week, injuring three and killing herself, social media platforms swiftly moved to scour her from the web. Her four YouTube channels disappeared, replaced by a message saying they’d been removed for ‘multiple or severe violations’ of the site’s policies. Instagram and Facebook both deleted her profiles as well. Even her personal website is gone.”
The Telegraph: Government develops artificial intelligence program to stop online extremism. “The £600,000 software can automatically detect Isil propaganda and stop it from going online, and ministers claim the new tool can detect 94 per cent of Isil propaganda with 99.9 per cent accuracy.” For the purposes of this article, Isil = ISIS, as far as I can tell.
YouTube: Expanding our work against abuse of our platform. “In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies. Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content. Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”