AI model uses emotional ‘fingerprints’ to spot mental disorders in social posts
Dartmouth researchers say they are the first to analyze emotional states rather than text
A team of researchers from Dartmouth College built an AI model that can detect mental disorders by analyzing Reddit posts.
What makes this model different from other AI approaches is that it focuses entirely on emotional states in the posts rather than analyzing the text itself, according to the university. The researchers said they believe they are the first to use this method.
This emotion-based approach also appears to hold up better over time than text-based models, according to the paper, “Emotion-based Modeling of Mental Disorders on Social Media,” by Xiaobo Guo, Yaojia Sun and Soroush Vosoughi.
One in four people suffer from a mental disorder at some point in their lives, a problem the pandemic has exacerbated, the paper said. But nearly 67% of people with a known mental disorder do not seek help, deterred by stigma, discrimination and neglect, the researchers said.
With some prompting, however, people needing help could be encouraged to seek it, the paper said. That is where screening tools like the one the researchers developed could be useful.
Emotional ‘fingerprints’
Researchers focused on four disorders marked by emotional distress: major depression, anxiety, post-traumatic stress disorder and bipolar disorder. They studied Reddit posts from 2011 to 2019 representing nearly 8,000 users, looking both at users who disclosed having one of these disorders and at a control group of users without them.
Researchers trained their AI model to label the emotion expressed in each post and to record changes in the user’s emotions from one post to the next, say from joy to sadness. These changes were recorded in a matrix showing how likely the user was to shift from one emotional state to another.
These emotional transition patterns become emotional “fingerprints” of users, which are then compared to established markers for emotional disorders to spot people in distress.
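The paper itself does not publish code here, but the idea of an emotion transition matrix can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ implementation; the emotion label set, the classifier that produces the labels, and the sample post sequence below are all assumptions for the sake of the example.

    # Minimal sketch (not the researchers' code) of an emotion "fingerprint":
    # given a user's posts already labeled with emotions, count transitions
    # between consecutive posts and normalize into a probability matrix.
    EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]  # assumed label set

    def transition_matrix(emotion_sequence):
        """Return a row-normalized matrix where entry [i][j] estimates the
        probability that a post labeled EMOTIONS[i] is followed by one
        labeled EMOTIONS[j]."""
        idx = {e: i for i, e in enumerate(EMOTIONS)}
        counts = [[0] * len(EMOTIONS) for _ in EMOTIONS]
        for prev, curr in zip(emotion_sequence, emotion_sequence[1:]):
            counts[idx[prev]][idx[curr]] += 1
        matrix = []
        for row in counts:
            total = sum(row)
            matrix.append([c / total if total else 0.0 for c in row])
        return matrix

    # Hypothetical example: one user's posts in chronological order,
    # already labeled by some emotion classifier.
    posts = ["joy", "joy", "sadness", "sadness", "fear", "sadness"]
    fingerprint = transition_matrix(posts)

In a screening setting, a matrix like this could then be compared against the transition patterns typical of a given disorder, which is the comparison the researchers describe.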
Figure 1: Doctoral candidate Xiaobo Guo, left, and Soroush Vosoughi, assistant professor of computer science at Dartmouth. (Source: Dartmouth College. Photo credit: Robert Gill)
The researchers believe their approach avoids one problem that other AI models may encounter when analyzing only the text in social posts: information leakage. This is where the AI may mistakenly associate some words with a mental state.
For example, a text-based model could learn to correlate the word ‘Covid’ with sadness or anxiety. A scientist who posts frequently about Covid could then be wrongly flagged as having anxiety. The Dartmouth researchers said their focus on emotions avoids this faulty linkage.
“Our approach, different from content-based representations influenced by topic, domain, and information leakage, is more robust and has better interpretability,” the researchers said.