University of Exeter: Network of channels tried to saturate YouTube with pro-Bolsonaro content during 2022 Brazil election

University of Exeter: Network of channels tried to saturate YouTube with pro-Bolsonaro content during 2022 Brazil election. “Experts have identified coordinated efforts to saturate YouTube’s recommender algorithm, flooding users with pro-Bolsonaro content during the 2022 Brazil election. Researchers from the University of Exeter and Instituto Vero have uncovered a complex, web-like influencer system of channels that shaped political narratives during this period. This is in addition to YouTube’s own recommender algorithm which also generates suggestions based on users’ viewership patterns.”

Platformer: Instagram’s co-founders are mounting a comeback

Platformer: Instagram’s co-founders are mounting a comeback. “Kevin Systrom and Mike Krieger are back. The Instagram co-founders, who departed Facebook in 2018 amid tensions with their parent company, have formed a new venture to explore ideas for next-generation social apps. Their first product is Artifact, a personalized news feed that uses machine learning to understand your interests and will soon let you discuss those articles with friends.”

Wall Street Journal: Facebook Wanted Out of Politics. It Was Messier Than Anyone Expected.

Wall Street Journal: Facebook Wanted Out of Politics. It Was Messier Than Anyone Expected. “The plan was in line with calls from some of the company’s harshest critics, who have alleged that Facebook is either politically biased or commercially motivated to amplify hate and controversy. For years, advertisers and investors have pressed the company to clean up its messy role in politics, according to people familiar with those discussions. It became apparent, though, that the plan to mute politics would have unintended consequences, according to internal research and people familiar with the project.”

Rolling Stone: Jan. 6 Committee Experiment Found TikTok Went From Zero To Nazi in 75 Minutes

Rolling Stone: Jan. 6 Committee Experiment Found TikTok Went From Zero To Nazi in 75 Minutes. “When the Jan. 6 committee wanted to test how easy it was for TikTok users to wander down a far-right rabbit hole, they tried an experiment. They created Alice, a fictional 41-year-old from Acton, Massachusetts, gave her a TikTok account, and tracked what the social media app showed her. To their surprise, it only took 75 minutes of scrolling — with no interaction or cues about her interests — for the platform to serve Alice videos featuring Nazi content, following a detour through clips on the Amber Heard-Johnny Depp defamation suit, Donald Trump, and other right-wing culture war flashpoints.”

Washington Post: They came to TikTok for fun. They got stuck with sexualized videos

Washington Post: They came to TikTok for fun. They got stuck with sexualized videos. “We spoke to five people who have struggled to get sexual content out of their feeds, and tested each of the apps ourselves as new users with no history on the sites. We found that sexual content was suggested by default to new users on four of the five apps, although the material rarely violated community guidelines.”

Engadget: TikTok will explain why it recommends videos on its ‘For You’ page

Engadget: TikTok will explain why it recommends videos on its ‘For You’ page. “The algorithm that powers TikTok’s ‘For You’ page has long been a source of fascination and suspicion. Fans often remark on the app’s eerie accuracy, while TikTok critics have at times speculated the company could subtly manipulate its algorithm to influence its users in more nefarious ways.”

Swansea University: Samaritans And University Report Reveals Dangers Of Social Media’s Self-Harm Content

Swansea University: Samaritans And University Report Reveals Dangers Of Social Media’s Self-Harm Content. “Social media sites are still not doing enough to tackle self-harm content being pushed to users on their sites, says Samaritans. The warning comes as new research from the charity and Swansea University found 83 per cent of social media users surveyed were recommended self-harm content on their personalised feeds, such as Instagram’s ‘explore’ and TikTok’s ‘for you’ pages, without searching for it.”

Wall Street Journal: The Surprising Reason Your Amazon Searches Are Returning More Confusing Results than Ever

Wall Street Journal: The Surprising Reason Your Amazon Searches Are Returning More Confusing Results than Ever. “If you want to be reminded just how tiny you are, you could travel to a remote part of the world and behold the night sky, or stand atop a mountain and contemplate its immensity, or you could try to find the best garlic press on Amazon… Granted, there are many more stars in the night sky than the 300 or so garlic presses visible on Amazon’s U.S. site. But wading through page after page of those listings, for items with tens of thousands of collective reviews, is, like many searches on Amazon, increasingly an exercise in frustration, despair and confusion.”

MIT News: Technique protects privacy when making online recommendations

MIT News: Technique protects privacy when making online recommendations. “Algorithms recommend products while we shop online or suggest songs we might like as we listen to music on streaming apps. These algorithms work by using personal information like our past purchases and browsing history to generate tailored recommendations. The sensitive nature of such data makes preserving privacy extremely important, but existing methods for solving this problem rely on heavy cryptographic tools requiring enormous amounts of computation and bandwidth. MIT researchers may have a better solution.”

Tech Xplore: New music recommendation system includes long-tail songs

Tech Xplore: New music recommendation system includes long-tail songs. “Music recommendation systems commonly offer users songs that others have enjoyed in the genres that the user requests. This can lead to popular songs becoming more popular. However, it neglects the less well-known songs, the long-tail songs that users may well enjoy just as much but have less chance of hearing because of the way the recommendation algorithms work. New work in the International Journal of Computational Systems Engineering offers an approach to a music recommendation system that neglects the popular in favor of the long-tail and so could open users to new music.”

University of South Florida: Researchers find new way to amplify trustworthy news content on social media without shielding bias

University of South Florida: Researchers find new way to amplify trustworthy news content on social media without shielding bias. “Social media sites continue to amplify misinformation and conspiracy theories. To address this concern, an interdisciplinary team of computer scientists, physicists and social scientists led by the University of South Florida (USF) has found a solution to ensure social media users are exposed to more reliable news sources. In their study published in the journal Nature Human Behaviour, the researchers focused on the recommendation algorithm that is used by social media platforms to prioritize content displayed to users.”
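The approach described — adjusting a feed-ranking algorithm so reliable news sources get more exposure — amounts to blending a reliability signal into the ranking score. The sketch below is illustrative only (the field names, the linear blend, and the `alpha` weight are assumptions, not the USF team’s published method).

```python
def rank_feed(posts, alpha=0.5):
    """posts: dicts with 'id', 'engagement' (0-1), 'reliability' (0-1).
    alpha trades engagement against source trustworthiness:
    alpha=1 is pure engagement ranking, alpha=0 is pure reliability."""
    key = lambda p: alpha * p["engagement"] + (1 - alpha) * p["reliability"]
    return sorted(posts, key=key, reverse=True)

feed = [
    {"id": "viral-rumor", "engagement": 0.9, "reliability": 0.1},
    {"id": "wire-report", "engagement": 0.5, "reliability": 0.9},
]
print([p["id"] for p in rank_feed(feed)])  # the reliable item wins
```

With `alpha=1.0` the same feed ranks the viral rumor first, which is the pure-engagement behavior the study pushes back against.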

Washington Post: How Facebook neglected the rest of the world, fueling hate speech and violence in India

Washington Post: How Facebook neglected the rest of the world, fueling hate speech and violence in India. “In February 2019, not long before India’s general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company’s largest market…. At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp. Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech.”

Media Matters: Instagram’s suggestion algorithm is promoting accounts that share misinformation

Media Matters: Instagram’s suggestion algorithm is promoting accounts that share misinformation. “A Media Matters analysis found that Instagram’s ‘similar account suggestions’ feature, a drop-down widget that appears on users’ profiles and suggests accounts to follow, reliably shepherds users who show an interest in anti-vaccine misinformation and other harmful content (some of which the platform claims to ban) toward similar types of content.”

Nature Communications: Neutral bots probe political bias on social media

Nature Communications: Neutral bots probe political bias on social media. “Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots who start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections.”

Cornell Chronicle: ‘Dislike’ button would improve Spotify’s recommendations

Cornell Chronicle: ‘Dislike’ button would improve Spotify’s recommendations. “Spotify’s whole business model relies on keeping you listening and being able to predict what songs you’ll want to hear next. But Cornell researchers recently asked the question: Why do they still not let you vote down a song? The research team recently developed a recommendation algorithm that shows just how much more effective Spotify would be if it could, in the style of platforms like Pandora, incorporate both likes and dislikes.” I wish they would let you block songs. Surely I’m not the only one who has bad memory songs they never want to hear again?
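The Cornell finding — that dislikes carry signal a likes-only system throws away — can be illustrated with a toy affinity score where liked songs pull a candidate up and disliked songs push it down. This is a hypothetical sketch, not the researchers’ algorithm; the feature vectors are invented for the example.

```python
def score(song_vec, liked, disliked):
    """Average dot-product affinity to liked songs, minus affinity
    to disliked ones. A likes-only system would drop the second term."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    pos = sum(dot(song_vec, v) for v in liked) / max(len(liked), 1)
    neg = sum(dot(song_vec, v) for v in disliked) / max(len(disliked), 1)
    return pos - neg

liked = [[1.0, 0.0]]     # e.g. acoustic tracks the user upvoted
disliked = [[0.0, 1.0]]  # e.g. songs the user voted down
print(score([0.9, 0.1], liked, disliked))  # positive: matches liked taste
print(score([0.1, 0.9], liked, disliked))  # negative: matches disliked taste
```

Dropping the `neg` term makes both candidates score positive, so the system keeps recommending songs the user actively dislikes — exactly the gap the dislike button would close.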