KFGO: Majority of Americans think social media platforms censor political views: Pew survey. “About seven out of ten Americans think social media platforms intentionally censor political viewpoints, the Pew Research Center found in a study released on Thursday. The study comes amid an ongoing debate over the power of digital technology companies and the way they do business. Social media companies in particular, including Facebook Inc and Alphabet Inc’s Google, have recently come under scrutiny for failing to promptly tackle the problem of fake news as more Americans consume news on their platforms.” This doesn’t surprise me at all, considering how inconsistently social media platforms apply their own rules.
Motherboard: Leaked Documents Show Facebook’s Post-Charlottesville Reckoning with American Nazis. “‘James Fields did nothing wrong,’ the post on Facebook read, referring to the man who drove a car through a crowd protesting against white supremacy in Charlottesville in August 2017, killing one. The post accompanied an article from Squawker.org, a conservative website. In training materials given to its army of moderators, Facebook says the post is an example of content ‘praising hate crime,’ and it and others like it should be removed. But after Charlottesville Facebook had something of an internal reckoning around hate speech, and pushed to re-educate its moderators about American white supremacists in particular, according to a cache of Facebook documents obtained by Motherboard.”
The Verge: Twitter is treating Bulgarians tweeting in Cyrillic like Russian bots. “A week ago, Twitter announced it would become more aggressive in pursuing trolls on its service, a move which seems to have had some unforeseen consequences, judging by the present upheaval in the Bulgarian Twitter community. An increasingly large and unhappy number of people have had their Twitter accounts suspended and messages filtered out of conversations, apparently for the offense of merely tweeting in Cyrillic.”
New York Times: Germany Acts to Tame Facebook, Learning From Its Own History of Hate. “Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: ‘Everybody without a badge is a potential spy!’ Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week. They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.”
TechCrunch: Twitter algorithm changes will hide more bad tweets and trolls. “Twitter is making some new changes that calls on how the collective Twitterverse is responding to tweets to influence how often people see them. With these upcoming changes, tweets in conversations and search will be ranked based on a greater variety of data that takes into account things like the number of accounts registered to that user, whether that tweet prompted people to block the accounts and the IP address.”
From Motherboard, with a warning for the headline: Leaked Documents Show Facebook’s Struggles With Dick Pics. “Facebook’s moderators are grappling with the challenges of revenge porn, sextortion, and people sharing unsolicited dick pics with users, according to newly leaked documents obtained by Motherboard. The company’s moderators have recently been told to stop punishing people who complain about receiving dick pics, which shows how Facebook’s own policies around nudity are constantly evolving.”
BuzzFeed: Silicon Valley Can’t Be Trusted With Our History. “It’s the paradox of the internet age: Smartphones and social media have created an archive of publicly available information unlike any in human history — an ocean of eyewitness testimony. But while we create almost everything on the internet, we control almost none of it. In the summer of 2017, observers of the Syrian civil war realized that YouTube was removing dozens of channels and tens of thousands of videos documenting the conflict. The deletions occurred after YouTube announced that it had deployed ‘cutting-edge machine learning technology … to identify and remove violent extremism and terrorism-related content.’ But the machines went too far.”