Forbes: Why Don’t Social Media Companies Stop Violent Imagery?

“The intense media coverage this past week of the so-called ‘Facebook killer’ drew attention once again to the horrific ways in which social media platforms can provide a global audience to people who wish to do themselves or others grievous harm and indeed begs the question of whether in the absence of such instant fame would at least some of these acts have been prevented?”

The Atlantic: Social Media’s Silent Filter

“Thus far, much of the post-election discussion of social-media companies has focused on algorithms and automated mechanisms that are often assumed to undergird most content-dissemination processes online. But algorithms are not the whole story. In fact, there is a profound human aspect to this work. I call it commercial content moderation, or CCM.” This is the second large-scale story I have read about content moderators having to view absolutely heinous material with no kind of support – and, as I see from this article, with an NDA forbidding them to discuss what they’re seeing – but I haven’t heard a peep from the large social media networks. Have you?