Mashable: Reddit partners with Crisis Text Line to give users mental health support

Mashable: Reddit partners with Crisis Text Line to give users mental health support . “The partnership makes it possible for a Reddit user to flag someone they feel is struggling with serious self-harm or suicide. That will trigger an immediate private message from Reddit to the person in distress. The message will include mental health resources and a suggestion to use their phone to text the phrase CHAT to connect with a Crisis Text Line counselor. This tool will only be available for users based in the United States.”

CU Anschutz Medical Campus: Study Shows Promising New Web Approach to Prevent Firearm Suicide

CU Anschutz Medical Campus: Study Shows Promising New Web Approach to Prevent Firearm Suicide. “Access to firearms and other lethal methods of suicide during periods of risk can make it more likely that a suicide attempt will end in death. Yet many patients with suicidal thoughts or behaviors receive no counseling about this from healthcare providers, and many have questions about options for firearm or medication storage. To address the issue, clinicians and researchers at the University of Colorado School of Medicine at the Anschutz Medical Campus partnered with Grit Digital Health. The team created Lock to Live, a web resource to help suicidal adults – and family, friends or providers – make decisions about reducing access to firearms, medications, and other potential suicide methods.”

BBC: Facebook removes 11.6 million child abuse posts

BBC: Facebook removes 11.6 million child abuse posts. “Facebook has released the latest figures in its efforts to remove harmful content from its platforms. They reveal 11.6 million pieces of content related to child nudity and child sexual exploitation were taken down between July and September 2019. For the first time, Facebook is also releasing figures for Instagram and including numbers for posts related to suicide and self-harm.”

University of New Mexico Health Sciences: The Devil is in the Data

University of New Mexico Health Sciences, and I really, really, really hate this headline: The Devil is in the Data. “In a paper published last month in the Journal of the American Medical Informatics Association, the team reported their finding that instances of self-harm among people with major mental illness seeking medical care might actually be as much as 19 times higher than what is reported in the billing records.”

BBC: The woman who tracks ‘dark’ Instagram accounts

BBC: The woman who tracks ‘dark’ Instagram accounts. “Intervening to help suicidal Instagram users is not a role Ingebjørg [Blindheim] would have chosen for herself. She doesn’t work for the social media site, and she isn’t paid for what she does. Nor is she formally qualified to offer help, having received no training in mental healthcare. Instead she feels compelled to act, realising she’s often the last chance of help for those posting their despair online.”

BBC: Molly Russell: Instagram extends self-harm ban to drawings

BBC: Molly Russell: Instagram extends self-harm ban to drawings. “Instagram has pledged to remove images, drawings and even cartoons showing methods of self-harm or suicide. The move is its latest response to the public outcry over the death of British teenager Molly Russell.”

The Next Web: Pinterest says AI reduced self-harm content on its platform by 88%

The Next Web: Pinterest says AI reduced self-harm content on its platform by 88%. “Yesterday, on international World Mental Health Day, Pinterest announced in a blogpost that for the past year, it’s been using machine learning techniques to identify and automatically hide content that displays, rationalizes, or encourages self-injury. Using this technology, the social networking company says it has achieved an 88 percent reduction in reports of self-harm content by users, and it’s now able to remove harmful content three times faster than ever before.”

TechCrunch: Facebook tightens policies around self-harm and suicide

TechCrunch: Facebook tightens policies around self-harm and suicide. “Timed with World Suicide Prevention Day, Facebook is tightening its policies around some difficult topics, including self-harm, suicide and eating disorder content after consulting with a series of experts on these topics. It’s also hiring a new Safety Policy Manager to advise on these areas going forward. This person will be specifically tasked with analyzing the impact of Facebook’s policies and its apps on people’s health and well-being, and will explore new ways to improve support for the Facebook community.”

University of Washington: Suicidal thoughts? Therapy-oriented website might help

University of Washington: Suicidal thoughts? Therapy-oriented website might help. “Researchers asked more than 3,000 website visitors how they felt before they got to the website compared with a few minutes after arriving. Nearly one-third were significantly less suicidal, and the intensity of their negative emotions had also decreased. Findings were published in the Journal of Medical Internet Research, an open-access publication.” This site apparently launched in 2014, but it’s new to me.

Ars Technica: Suicide instructions spliced into kids’ cartoons on YouTube and YouTube Kids

Ars Technica: Suicide instructions spliced into kids’ cartoons on YouTube and YouTube Kids. “Tips for committing suicide are appearing in children’s cartoons on YouTube and the YouTube Kids app. The sinister content was first flagged by doctors on the pediatrician-run parenting blog pedimom.com and later reported by the Washington Post. An anonymous ‘physician mother’ initially spotted the content while watching cartoons with her son on YouTube Kids as a distraction while he had a nosebleed. Four minutes and forty-five seconds into a video, the cartoon cut away to a clip of a man, who many readers have pointed out resembles Internet personality Joji (formerly Filthy Frank). He walks onto the screen and simulates cutting his wrist. ‘Remember, kids, sideways for attention, longways for results,’ he says and then walks off screen. The video then quickly flips back to the cartoon.”

Wired: When Algorithms Think You Want to Die

Wired: When Algorithms Think You Want to Die. “Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it. And recommendation is a different animal than mere availability. A growing academic literature bears this out: Whether it’s self-harm, misinformation, terrorist recruitment, or conspiracy, platforms do more than make this content easily found—in important ways they help amplify it.”

Refinery29: Self-Harm & Suicide Content Is Still Alarmingly Easy To Find on Social Media

Refinery29: Self-Harm & Suicide Content Is Still Alarmingly Easy To Find on Social Media. “While other harmful topics appear to have been blocked completely on Instagram, searching for ‘self harm’ still brings up handles which contain the words. Some accounts are private, others are not. Within a minute of browsing through these accounts, you can find alternative self-harm hashtags in image captions that are currently in use, which are often amalgams of similar words and phrases.”

Ubergizmo: Self-Harm Images Will Be Hidden Behind ‘Sensitivity Screens’ On Instagram

Ubergizmo: Self-Harm Images Will Be Hidden Behind ‘Sensitivity Screens’ On Instagram. “Instagram wants to clamp down on images that depict suicide or self-harm. The Facebook-owned company has now decided to hide self-harm images behind ‘sensitivity screens.’ This feature is going to blur the image until the user makes a decision to view it and taps on the image.”

The Week: Social media firms face ban over suicide images

The Week: Social media firms face ban over suicide images. “Matt Hancock has written to social media bosses at Facebook, Instagram, Twitter, Snapchat, Pinterest, Google and Apple warning them to ‘purge’ material promoting self-harm and suicide to ensure they do not breach the policies of internet providers.”

Michael K. Spencer: Facebook’s Suicide Algorithms are Invasive

Michael K. Spencer: Facebook’s Suicide Algorithms are Invasive. “We think of artificial intelligence as something that should better humanity, but user monitoring is an invasion of privacy. Facebook’s incessant experiments on us, whether with dating or blockchain, are going to take a toll on us. But to be rated by how likely we are to self-harm? That’s state monitoring at its worst. It’s worse, I think, than Chinese parents wanting GPS smart clothing for their kids. There’s a place for AI to benefit people, but it’s not for a company like Facebook to warn us or our loved ones if we are suicidal.”