Mozilla Blog: Examining AI’s Effect on Media and Truth. “It’s a complicated issue, but this much is certain: The artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong. Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we’re announcing the winners: Eight projects that highlight how AI like machine learning impacts our understanding of the truth.”

Wired: Optimize Algorithms to Support Kids Online, Not Exploit Them. “I cannot imagine how I would have learned what I have learned or met the many, many people who’ve enriched my life and work without the internet. So I know first-hand how, today, the internet, online games, and a variety of emerging technologies can significantly benefit children and their experiences. That said, I also know that, in general, the internet has become a more menacing place than when I was in school.”

New York Times: YouTube, the Great Radicalizer. “At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations. Soon I noticed something peculiar. YouTube started to recommend and ‘autoplay’ videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.”

Inverse: How to Create Socially Responsible Algorithms, According to AI Institute. “AI Now’s report, Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies, outlines the need for transparency when it comes to deploying algorithms. Algorithms have a huge impact on our daily lives, but their impact sometimes goes unnoticed. Because they are baked into the infrastructure of social media and video platforms, for example, it’s easy to forget that programs often determine what content is pushed to internet users. It’s only when something goes wrong, like a conspiracy theory video reaching the top of YouTube’s trending list, that we scrutinize the automated decision procedures that shape online experiences.”

Medium: Algorithmic Consumer Protection. “This March, Facebook announced a remarkable initiative that detects people who are most at risk of suicide and directs support to them from friends and professionals. As society entrusts our safety and well-being to AI systems like this one, how can we ensure that the outcomes are beneficial?”