The Verge: Twitter is treating Bulgarians tweeting in Cyrillic like Russian bots

The Verge: Twitter is treating Bulgarians tweeting in Cyrillic like Russian bots. “A week ago, Twitter announced it would become more aggressive in pursuing trolls on its service, a move which seems to have had some unforeseen consequences, judging by the present upheaval in the Bulgarian Twitter community. An increasingly large and unhappy number of people have had their Twitter accounts suspended and messages filtered out of conversations, apparently for the offense of merely tweeting in Cyrillic.”
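
To illustrate why this kind of filter sweeps up Bulgarians along with Russians (a toy sketch of the failure mode on my part, not Twitter’s actual detection logic): Bulgarian and Russian are different languages written in the same Cyrillic script, so any check that keys on the script alone cannot tell them apart.

```python
import unicodedata

def dominant_script(text):
    """Crude script detector: counts Cyrillic vs. Latin letters.
    Illustrative only -- not Twitter's actual detection logic."""
    cyrillic = latin = 0
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name.startswith("CYRILLIC"):
                cyrillic += 1
            elif name.startswith("LATIN"):
                latin += 1
    return "Cyrillic" if cyrillic > latin else "Latin"

# Both print "Cyrillic" -- the script alone can't separate the languages.
print(dominant_script("Здравей, свят"))  # Bulgarian ("Hello, world")
print(dominant_script("Привет, мир"))    # Russian ("Hello, world")
```

Any bot-hunting heuristic built on top of a check like this would flag Bulgarian tweets exactly as readily as Russian ones.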

New York Times: Germany Acts to Tame Facebook, Learning From Its Own History of Hate

New York Times: Germany Acts to Tame Facebook, Learning From Its Own History of Hate. “Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: ‘Everybody without a badge is a potential spy!’ Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week. They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.”

TechCrunch: Twitter algorithm changes will hide more bad tweets and trolls

TechCrunch: Twitter algorithm changes will hide more bad tweets and trolls. “Twitter is making some new changes that call on how the collective Twitterverse is responding to tweets to influence how often people see them. With these upcoming changes, tweets in conversations and search will be ranked based on a greater variety of data that takes into account things like the number of accounts registered to that user, whether that tweet prompted people to block the accounts and the IP address.”
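
To make the kinds of signals TechCrunch describes concrete, here is a minimal, hypothetical sketch of signal-weighted ranking; the field names, weights, and scoring function are my own illustrative assumptions, not Twitter’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class TweetSignals:
    """Hypothetical behavioral signals of the kind TechCrunch describes.
    Field names and weights below are illustrative, not Twitter's."""
    accounts_on_same_ip: int  # accounts registered from the same IP
    blocks_prompted: int      # viewers who blocked the author after seeing it
    mutes_prompted: int       # viewers who muted the author after seeing it

def troll_score(s: TweetSignals) -> float:
    """Higher score = more likely to be downranked in search/conversations."""
    return (0.5 * s.accounts_on_same_ip
            + 1.0 * s.blocks_prompted
            + 0.5 * s.mutes_prompted)

def rank(tweets: dict) -> list:
    """Order tweet IDs so the least troll-like surface first."""
    return sorted(tweets, key=lambda t: troll_score(tweets[t]))

signals = {
    "tweet_a": TweetSignals(accounts_on_same_ip=1, blocks_prompted=0, mutes_prompted=1),
    "tweet_b": TweetSignals(accounts_on_same_ip=40, blocks_prompted=12, mutes_prompted=30),
}
print(rank(signals))  # ['tweet_a', 'tweet_b'] -- tweet_b gets pushed down
```

The point of the sketch is just that each of these signals is noisy on its own, which is how legitimate accounts end up downranked.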

Motherboard: Leaked Documents Show Facebook’s Struggles With Dick Pics

From Motherboard, with a warning for the headline: Leaked Documents Show Facebook’s Struggles With Dick Pics. “Facebook’s moderators are grappling with the challenges of revenge porn, sextortion, and people sharing unsolicited dick pics with users, according to newly leaked documents obtained by Motherboard. The company’s moderators have recently been told to stop punishing people who complain about receiving dick pics, which shows how Facebook’s own policies around nudity are constantly evolving.”

BuzzFeed: Silicon Valley Can’t Be Trusted With Our History

BuzzFeed: Silicon Valley Can’t Be Trusted With Our History. “It’s the paradox of the internet age: Smartphones and social media have created an archive of publicly available information unlike any in human history — an ocean of eyewitness testimony. But while we create almost everything on the internet, we control almost none of it. In the summer of 2017, observers of the Syrian civil war realized that YouTube was removing dozens of channels and tens of thousands of videos documenting the conflict. The deletions occurred after YouTube announced that it had deployed ‘cutting-edge machine learning technology … to identify and remove violent extremism and terrorism-related content.’ But the machines went too far.”

Inc.: Facebook Released Its Content Guidelines for the First Time. Here’s What You Need to Know

Inc.: Facebook Released Its Content Guidelines for the First Time. Here’s What You Need to Know. “For years, Facebook has faced harsh criticism for not doing enough to moderate hate speech, content promoting terrorism, or broadcast violence on its site. Now it hopes to clear up any confusion about its post-removing policies, guidelines for which were just released.” This is fine, I suppose, but my problem is that Facebook never consistently followed the content removal guidelines THAT WERE ALREADY AVAILABLE.

Techdirt: Again, Algorithms Suck At Determining ‘Bad’ Content, Often To Hilarious Degrees

Techdirt: Again, Algorithms Suck At Determining ‘Bad’ Content, Often To Hilarious Degrees. Warning: there is a swear word in this quote. “A few weeks back, Mike wrote a post detailing how absolutely shitty algorithms can be at determining what is ‘bad’ or ‘offensive’ or otherwise ‘undesirable’ content. While his post detailed failings in algorithms judging such weighty content as war-crime investigations versus terrorist propaganda, and Nazi hate-speech versus legitimate news reporting, the central thesis in all of this is that relying on platforms to host our speech and content when those platforms employ very, very imperfect algorithms as gatekeepers is a terrible idea. And it leads to undesirable outcomes at levels far below those of Nazis and terrorism.”