The Guardian: ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies

The Guardian: ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies. “AI tools rate photos of women as more sexually suggestive than those of men, especially if nipples, pregnant bellies or exercise is involved.” Considering Facebook’s longstanding history of incorrectly moderating anything vaguely resembling a breast, I can’t say I’m shocked.

Yorkshire Bylines: Musk’s reader suppression about voter suppression

Yorkshire Bylines: Musk’s reader suppression about voter suppression. “In the latest twist of logic for Elon Musk, his chaotic version of Twitter has now decided to flag Byline Times as an ‘unsafe’ site. Worse still, the article selected as ‘potentially spammy or unsafe’ is an article by Josiah Mortimer on voter suppression, entitled ‘VOTER ID “It’s Far Worse than Any US State”’. Mortimer’s article examines the rushed roll out of mandatory voter ID for next May’s local elections, which have been widely condemned as voter suppression, particularly when it comes to young people.”

Ars Technica: Facebook reverses permanent ban on Holocaust movie after outcry

Ars Technica: Facebook reverses permanent ban on Holocaust movie after outcry. “Facebook moderators told Newton that his film was banned because the company’s ad policy restricts content that ‘includes direct or indirect assertions or implications about a person’s race.’ Because Newton’s movie in the US is titled Beautiful Blue Eyes, Facebook moderators banned its promotion in Facebook ads, seemingly reading the title as hinting at race.” Sometimes in the course of doing ResearchBuzz I end up yelling at the monitor. This is one of those times.

Washington Post: Twitter labeled factual information about covid-19 as misinformation

Washington Post: Twitter labeled factual information about covid-19 as misinformation. “Many of the tweets have since had the misinformation labels removed, and the suspended accounts have been restored. But the episode has shaken many scientific and medical professionals, who say Twitter is a key way they try to publicize the continuing risk of covid to a population that has grown weary of more than two years of shifting claims about the illness.”

New York Times: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.

New York Times: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. “Because technology companies routinely capture so much data, they have been pressured to act as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of sexual abuse imagery. But it can entail peering into private archives, such as digital photo albums — an intrusion users may not expect — that has cast innocent behavior in a sinister light in at least two cases The Times has unearthed.”

Daily Dot: Unfair Instagram moderation of women’s bodies highlighted in a new exhibit

Daily Dot: Unfair Instagram moderation of women’s bodies highlighted in a new exhibit. “Getting your content—or worse—your profile removed from a social media platform without explanation or recourse is an alienating feeling that a growing number of people are experiencing. To reflect what it can mean for people’s community, mental health, and even livelihood, London-based creative agency RANKIN launched a project meant to re-platform hundreds of people whose content had been removed from spaces like Instagram, TikTok, Facebook, YouTube, and Twitter.”

Carscoops: Google’s AI Finds This Strosek Porsche Interior Photo Sexually Arousing

Carscoops: Google’s AI Finds This Strosek Porsche Interior Photo Sexually Arousing. “We regularly feature some down and dirty sexy cars on this site, but we want Carscoops to be a place for fans of all ages (and backgrounds, races, religions and genders,) so usually try to come up with a reacharound, I mean workaround, when faced with the prospect of publishing potentially offensive content. Unless it’s a shockingly ugly modified Ferrari. But this week Google clearly thought we weren’t sticking to our own rules as it flagged a story we’d written about a modified Porsche, claiming it contained sexually explicit material. Google being Google, though, it didn’t tell us exactly what it was about the post that had upset its AI guardians.”

New York Times: Facebook’s Unglamorous Mistakes

New York Times: Facebook’s Unglamorous Mistakes. “…ordinary people, businesses and groups serving the public interest like news organizations suffer when social networks cut off their accounts and they can’t find help or figure out what they did wrong. This doesn’t happen often, but a small percentage of mistakes at Facebook’s size add up. The Wall Street Journal calculated that Facebook might make roughly 200,000 wrong calls a day.”

Bleeping Computer: Google Drive flags nearly empty files for ‘copyright infringement’

Bleeping Computer: Google Drive flags nearly empty files for ‘copyright infringement’. “Dr. Chris Jefferson, Ph.D., an AI and mathematics researcher at the University of St Andrews, was also able to reproduce the issue when uploading multiple computer-generated files to Drive. Jefferson generated over 2,000 files, each containing just a number between -1000 and 1000. The files containing the digits 173, 174, 186, 266, 285, 302, 336, 451, 500, and 833 were shortly flagged by Google Drive for copyright infringement.”
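If you want a sense of how trivial those test files are, here is a rough Python sketch of the kind of files Jefferson describes; this is my illustration, not his code, the folder and file names are made up, and uploading the results to Google Drive would be a separate step not shown here.

```python
# Illustrative sketch only: generate just over 2,000 tiny text files,
# each containing a single number between -1000 and 1000, like the
# near-empty files described above. Output folder and file names are
# assumptions; the Google Drive upload is not part of this snippet.
from pathlib import Path

out_dir = Path("drive_test_files")  # hypothetical local folder
out_dir.mkdir(exist_ok=True)

for n in range(-1000, 1001):
    (out_dir / f"number_{n}.txt").write_text(str(n))
```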