Using AI to Identify Mental Health Issues on Social Media

Stevie Chancellor, a CS+X postdoctoral fellow at Northwestern, examined the promises and perils of social media platforms' use of artificial intelligence to prevent suicide.

Imagine you were scrolling through the feed of your favorite social media channel one day when you came across a concerning post. A friend shared a picture of herself wrapping her hands around her thigh and complained that she had an important event next weekend and was worried she wouldn't fit into her dress. At the end of her post, she added #anorexic and #eatingdisorder. 

What would you do?

Maybe you would like the post or add a supportive comment. Perhaps you would send a direct message to the friend and ask if they were OK. You might flag the post for the channel to review, or you might just continue to scroll.

"All of these are valid options because quite frankly, we don't know how to respond in these situations and support individuals," said Stevie Chancellor, a CS+X postdoctoral fellow in computer science at Northwestern.

Chancellor shared this scenario during "The promises and perils of AI for mental health on social media," part of a larger discussion on AI and Human-Computer Interaction hosted by Northwestern Engineering's Master of Science in Artificial Intelligence (MSAI) program. In her presentation, Chancellor discussed her own research and examined how different social platforms are attempting to use AI to provide social and emotional support to users. These initiatives have produced positive outcomes, such as interventions before potential suicide attempts, but they also carry a host of concerns and risks.

In 2017, Facebook implemented an algorithm to identify users who might be considering harming themselves. The following year, NPR ran an article titled "Facebook Increasingly Reliant on AI To Predict Suicide Risk" that quoted Facebook Global Head of Safety Antigone Davis saying that the algorithm led Facebook to contact emergency responders for approximately 10 users every day.

The World Health Organization reported that suicide is the second leading cause of death among 15- to 29-year-olds, the age group that, according to the Pew Research Center, also has the highest share of people using at least one social media channel. The American Foundation for Suicide Prevention reported that more than 48,000 Americans died by suicide in 2018. That same year, there were approximately 1.4 million suicide attempts.

"When someone is expressing thoughts of suicide, it’s important to get them help as quickly as possible," said Facebook Director of Product Management Catherine Card in this post about using AI to prevent suicide. "Because friends and family are connected through Facebook, we can help a person in distress get in touch with people who can support them." 

Chancellor has extensively researched and written about using AI and machine learning to identify high-risk behaviors. For example, she applied machine learning to 1.5 million Reddit posts to see how people self-direct recovery from opioid addiction. She also reviewed 2.5 million Instagram posts to examine how users posted about eating disorders and whether the platform's content moderation was an effective form of intervention.

"We've been able to predict mental illness using social media data for about seven or eight years," she said. "And when I say predict, we get accuracies in the 85 to 95% [range], for things as varied as stress, depression, eating disorders and the risk of somebody self-injuring or self-harming themselves."

While those numbers are encouraging, Chancellor recognized a number of fundamental risks and downsides of this type of AI intervention, including the following (a short numeric sketch after the list shows how some of these risks can arise even when accuracy is high):

  • Erroneous machine learning models
  • Bad scientific standards
  • Improper causal assumptions
  • Incorrect diagnosis and/or intervention
  • Discrimination and injustice
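
To see why erroneous models and incorrect diagnoses sit on this list, consider a back-of-the-envelope calculation. The base rate, sensitivity, and specificity below are assumed purely for illustration and are not figures from the talk: when the condition being predicted is rare, even a model that is right 95 percent of the time will mostly flag people who are not actually at risk.

    # Illustrative arithmetic with assumed numbers, in Python.
    base_rate = 0.01          # assumed fraction of users actually at risk
    sensitivity = 0.95        # assumed true positive rate
    specificity = 0.95        # assumed true negative rate
    population = 100_000      # hypothetical number of users screened

    at_risk = population * base_rate                    # 1,000 users
    not_at_risk = population - at_risk                  # 99,000 users

    true_positives = at_risk * sensitivity              # 950 correct flags
    false_positives = not_at_risk * (1 - specificity)   # 4,950 incorrect flags

    precision = true_positives / (true_positives + false_positives)
    print(f"Share of flagged users actually at risk: {precision:.1%}")  # about 16%

Under these assumptions, roughly five out of six flagged users would not be at risk, which is why headline accuracy alone says little about whether an intervention will reach the right people.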

In 2014, an app designed to alert Twitter users when people they follow posted concerning or potentially suicidal content was suspended less than two weeks after its debut. People used the app to bully individuals who were already in a vulnerable state.

This episode, along with Facebook's current suicide prevention algorithm, brings concerns over data privacy to the forefront. What types of authorization should be required of users if the content they are posting is already public? What responsibility do these platforms have to disclose these types of behind-the-scenes uses of AI and machine learning? The short answer is, it's complicated.

"Things that work well are going to inherently push us to reconsider the roles of diagnosing and treating mental illness," Chancellor said. "We need responsible, thoughtful, socially aware practices within AI to come up with ways that we can ethically use social media to diagnose and treat mental illness." 

The next step in this work is to examine how these intelligent technologies can go beyond identifying problems and instead power systems that partner with healthcare professionals to both detect issues and intervene as soon as they are noticed.

To watch the complete AI and Human-Computer Interaction presentation, which also featured Maia Jacobs talking about "Bringing AI to the Bedside with User-Centered Design," visit the link here.
