The Verge: YouTube limits moderators to viewing four hours of disturbing content per day. “YouTube CEO Susan Wojcicki said today that the video platform has started limiting the number of hours its part-time content moderators can view disturbing videos to four hours per day. The news, announced during a Q&A session during Wojcicki’s South by Southwest Interactive talk here in Austin, comes as companies like YouTube are struggling to parse through the sheer volume of user-uploaded content and ensure it abides by its policies. Platforms including YouTube, Facebook, Reddit, and Twitter have faced criticism for subjecting low-paid contractors to content that can be extremely disturbing.”
Ars Technica: YouTube to crack down on inappropriate content masked as kids’ cartoons. “Recent news stories and blog posts highlighted the underbelly of YouTube Kids, Google’s children-friendly version of the wide world of YouTube. While all content on YouTube Kids is meant to be suitable for children under the age of 13, some inappropriate videos using animations, cartoons, and child-focused keywords manage to get past YouTube’s algorithms and in front of kids’ eyes. Now, YouTube will implement a new policy in an attempt to make the whole of YouTube safer: it will age-restrict inappropriate videos masquerading as children’s content in the main YouTube app.”
ReviewGeek: PSA: Parents, YouTube Is Littered with Creepy Pseudo “Kid-Friendly” Videos. “The issue recently came to our attention when a friend with small children mentioned that he was increasingly finding very weird videos, on both the general YouTube site and on the YouTube Kids app, while searching for kid-friendly content. What kind of weird? Dozens and dozens of videos that looked otherwise kid-friendly but with popular characters acting violent, getting hurt, or engaging in inappropriate behavior no parent would want their child to emulate.” Kind of surprised these would end up on YouTube Kids.
Wired: When YouTube Removes Violent Videos, It Impedes Justice. “When the International Criminal Court issued an arrest warrant for Mahmoud al-Werfelli in August for the war crime of murder in Libya, it marked a watershed moment for open-source investigations. For those of us who embrace the promise of the digital landscape for justice and accountability, it came as welcome validation that content found on Facebook and YouTube form a good deal of the evidence before the Court. But this relatively new path to justice is at risk of becoming a dead-end.”
Buzzfeed: Violence On Facebook Live Is Worse Than You Thought. “Facebook Live has a violence problem, one far more troubling than national headlines make clear. At least 45 instances of violence — shootings, rapes, murders, child abuse, torture, suicides, and attempted suicides — have been broadcast via Live since its debut in December 2015, a new BuzzFeed News analysis found. That’s an average rate of about two instances per month.”
Reuters: Google tightens measures to remove extremist content on YouTube. “Google said it would take a tougher position on videos containing supremacist or inflammatory religious content by issuing a warning and not monetizing or recommending them for user endorsements, even if they do not clearly violate its policies.”
The Harvard Crimson: Harvard Rescinds Acceptances for At Least Ten Students for Obscene Memes. “Harvard College rescinded admissions offers to at least ten prospective members of the Class of 2021 after the students traded sexually explicit memes and messages that sometimes targeted minority groups in a private Facebook group chat. A handful of admitted students formed the messaging group—titled, at one point, ‘Harvard memes for horny bourgeois teens’—on Facebook in late December, according to two incoming freshmen.”