TechCrunch: TikTok joins Europe’s code on tackling hate speech. “TikTok, the popular short video sharing app, has joined the European Union’s Code of Conduct on Countering Illegal Hate Speech. In a statement on joining the code, TikTok’s head of trust and safety for EMEA, Cormac Keenan, said: ‘We have never allowed hate on TikTok, and we believe it’s important that internet platforms are held to account on an issue as crucial as this.'”
New York Times: Trolls Flood Social Media in Pakistan Amid Virus Lockdown. “Toxic trending on Twitter has also taken aim at minorities, blaming the ethnic Hazaras for allegedly bringing the coronavirus to Pakistan from neighboring Iran. Like most Iranians, Hazaras are Shiites, and traditionally make pilgrimages to holy sites in Iran, which has the deadliest virus outbreak in the region. Some Pakistani pilgrims returning home were among the first reported cases of COVID-19 in Pakistan.”
New Zealand Herald: How Facebook, Google algorithms feed on hate speech, rage. “Notice how those unsavoury posts liked by some long-forgotten friend always seem to float to the top of your curated social media feeds? Wonder how an incitement to violence can stay on your screen for days? What about that infuriating conspiracy that keeps getting forced down your throat? According to an Australian digital security researcher, it’s no bug. It’s a feature. It’s a subliminal mechanism designed to extract maximum revenue out of your inbox.”
Reuters: Internet giants could be fined up to $12 million under Austrian hate speech law. “Austria plans to oblige large internet platforms like Facebook and Google to delete illegal content within days and impose fines of up to 10 million euros ($12 million) in case of non-compliance, the government said on Thursday.”
TIME: How Far-Right Personalities and Conspiracy Theorists Are Cashing in on the Pandemic Online. “[Nick] Fuentes, 22, a prolific podcaster who on his shows has compared the Holocaust to a cookie-baking operation, argued that the segregation of Black Americans ‘was better for them,’ and that the First Amendment was ‘not written for Muslims,’ is doing better than O.K. during the COVID-19 pandemic. He’s part of a loose cohort of far-right provocateurs, white nationalists and right-wing extremists who have built large, engaged audiences on lesser-known platforms like DLive after being banned from mainstream sites for spreading hate speech and conspiracy theories.”
Stephenville Empire-Tribune: Coronavirus-inspired racism sows fear, anger among local Asian community. “Hurt. Angry. Unsafe. That’s how Jasmine Yuan says she felt in March when a stranger in a car yelled ‘corona!’ at her while driving by in a grocery store parking lot in North Austin. Yuan, 39, said she loves Austin and considers it a diverse, multicultural city, but lamented that she now fears portions of town that used to be part of her everyday life. After the incident, Yuan said she has avoided public places that she feels might put her at risk of being targeted again.”
CNET: Twenty state AGs press Facebook to do more to combat hate speech. “In a letter to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg on Wednesday, the AGs said they believed the company had ‘fallen short’ on its civil rights record and urged the company to ‘aggressively enforce’ its policies prohibiting hate speech and hate-based organizations. Other steps the AGs’ letter suggested Facebook take include allowing public, third-party audits of hate content and enforcement, as well as expanding its policies on limiting ads that disparage minorities.”
London School of Economics and Political Science: Facebook, language and the difficulty of moderating hate speech. “In March 2018, the Sri Lankan government blocked access to Facebook, citing the spread of hate speech on the platform and tying it to the incidents of mob violence in Digana, Kandy. In this post by Yudhanjaya Wijeratne, a senior researcher at Asia Pacific think-tank LIRNEasia, the difficulties of responding to hate speech are unpacked based on research that his Data, Algorithms and Policy team recently completed.”
Axios: Hate speech has soared online since George Floyd’s death. “On June 3, at the height of nationwide protests, DoubleVerify, which uses its own technology to scan pages online so advertisers can avoid objectionable content, says instances of hate speech were more than 4.5 times higher than usual — the highest-ever rate it has measured to date.”
Los Angeles Times: Reddit moderators spent years asking for help fighting hate. The company may finally be listening. “When [Jefferson] Kelley, a Reddit moderator, booted hateful users off threads where Black people discussed sensitive personal experiences, racial slurs piled up in his inbox. Crude remarks about women filled the comment sections under his favorite ‘Star Trek’ GIFs. The proliferation of notorious forums, including one that perpetuated a vicious racist stereotype about Black fathers, stung Kelley, a Black father himself. Kelley and other moderators repeatedly pleaded with the company to back them up and take stronger action against harassment and hate speech. But Reddit never quite came through. Then, all of a sudden, that seemed to change.”
OneZero: Months Before Reddit Purge, The_Donald Users Created a New Home. “Monday, Reddit banned thousands of subreddits including The_Donald, a conservative community of nearly 800,000 members accused of inciting violence, spreading white supremacist propaganda, and other repeat offenses since its creation in 2015. The takedown marked Reddit’s latest push to curb hate speech on the platform, and The_Donald was a ripe target for moderation. But while the community was purged from Reddit, its members have been relocating to an alternate website for months now, suggesting that users were expecting the ban — and serving as a reminder that ‘deplatforming’ is only so useful.”
Neowin: Facebook will have its hate speech controls audited. “Media Rating Council (MRC), a nonprofit organization that manages accreditation for media research and rating purposes, will conduct the audit, and evaluate how the firm safeguards advertisers from appearing next to harmful content. Additionally, the firm will assess the accuracy of Facebook’s reporting in specific domains. Facebook hasn’t decided when the audit will take place or what will be its scope.” Uh-huh.
MSN News: Democratic Senators Ask Zuckerberg to Act on White Supremacy. “Facebook Inc. Chief Executive Officer Mark Zuckerberg faces demands from Senate Democrats for answers about hate groups on the platform at the same time a growing number of companies are pulling advertising from its sites over harmful content. In a letter to Zuckerberg Tuesday, three Democratic senators question what they call the company’s ‘lack of action to prevent white supremacist groups from using the platform as a recruitment and organizational tool’ despite Facebook’s stated policies on hate speech.”
The Conversation: Social media helps reveal people’s racist views – so why don’t tech firms do more to stop hate speech?. “As Black Lives Matter continues to draw attention to racism – and trigger pushback from people using social media to express sentiments against people of colour – it’s time internet companies did more to tackle all forms of bigotry. A few years ago, I conducted research on online Islamophobia following the 2013 Woolwich terror attack, identifying eight types of offender on Twitter who could be classed as racist. Most were not members of a far-right group. They included builders, plumbers, teachers and even local councillors. But many used the cover of social media to spread their own conspiracy theories and an ‘us and them’ narrative.”
CNN: French parliament passes law requiring social media companies delete certain content within an hour. “The French parliament passed a controversial hate speech law on Wednesday that would fine social media companies if they fail to remove certain illegal content within 24 hours — and in some cases, as little as one hour.”