Machine learning algorithm detects signals of child depression through speech
Anxiety and depression are inherently tricky conditions to diagnose, with clinicians turning to exams and questionnaires to make their assessments. For this reason, the search is on for clearer, more straightforward ways to identify the conditions, and scientists at the University of Vermont believe they have unearthed another with the help of a newly developed AI algorithm that can detect telltale signals in a subject's speech.
Diagnosing depression through typical means can not only be confronting for the patient, it can also be drawn out and costly. These shortcomings have driven a range of research initiatives aimed at uncovering a better way of doing things, and we've seen some promising steps forward in this area.
One study from way back in 2014 discovered a link between serotonin levels in the blood and the depression network in the brain, raising the prospect of a blood test for the condition. One from 2017 even hinged on a computer program that could detect signs of depression via subjects' Instagram images, while another from earlier this year explored how a machine learning algorithm could identify the condition with the help of wearable motion sensors.
The new research also makes use of a machine learning algorithm, but this time tuned to detect markers of depression and anxiety, conditions known together as "internalizing disorders," through the speech of young children. What drew the researchers to this problem was the trouble children under the age of eight have in communicating their mental well-being, and the consequences of the conditions going undiagnosed in developing brains. This can lead to problems like substance abuse and suicidal tendencies down the track.
"We need quick, objective tests to catch kids when they are suffering," says Ellen McGinnis, a clinical psychologist at the University of Vermont Medical Center's Vermont Center for Children, Youth and Families and lead author of the study. "The majority of kids under eight are undiagnosed."
McGinnis and her team enlisted a group of 71 children between the ages of three and eight and subjected them to a variation of the Trier Social Stress Task, a mood induction exercise designed to trigger feelings of stress and anxiety. In it, the children were tasked with improvising an interesting three-minute story while a stern judge watched on and gave them only neutral or negative feedback.
"The task is designed to be stressful, and to put them in the mindset that someone was judging them," says McGinnis.
Meanwhile, a machine learning algorithm was put to work analyzing their speech. This analysis was then compared to traditional diagnoses obtained through a clinical interview and questionnaire, and the researchers were able to tease out some characteristics of the children's speech that seemed to serve as indicators of depression and anxiety. Three features stood out: a low-pitched voice, repeatable speech inflections, and a higher-pitched response to a buzzer.
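The paper's exact feature pipeline isn't described here, but pitch-based markers like these are typically derived from the fundamental frequency of short frames of audio. As a purely illustrative sketch (not the study's implementation), a hypothetical `estimate_pitch` helper can recover the pitch of a voiced frame by finding the lag at which the signal best correlates with a delayed copy of itself:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of one mono audio frame
    using simple autocorrelation. Illustrative only, not the study's method."""
    n = len(samples)
    # Only search lags that correspond to a plausible human pitch range.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        # Correlate the frame with itself shifted by `lag` samples.
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    # The best lag is one pitch period; convert it back to a frequency.
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 220 Hz tone standing in for a voiced speech frame.
sr = 8000
frame = [math.sin(2 * math.pi * 220 * t / sr) for t in range(1024)]
pitch = estimate_pitch(frame, sr)  # roughly 220 Hz
```

In practice a speech pipeline would compute this (or a more robust estimator such as YIN) over many overlapping frames, then feed summary statistics of the pitch contour, alongside other acoustic features, to a classifier.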
"A low-pitched voice and repeatable speech elements mirrors what we think about when we think about depression: speaking in a monotone voice, repeating what you're saying," says McGinnis.
One clear benefit of the approach, albeit only demonstrated in a small study at this stage, is its efficiency, with the algorithm able to offer its analysis within seconds of the task ending. With these positive results, the team is now exploring how it could potentially be deployed as a universal screening tool for clinical use, possibly in the form of a smartphone app.
"The algorithm was able to identify children with a diagnosis of an internalizing disorder with 80 percent accuracy, and in most cases that compared really well to the accuracy of the parent checklist," says Ryan McGinnis, a biomedical engineer at the University of Vermont and member of the team.
The research was published in the IEEE Journal of Biomedical and Health Informatics.
Source: University of Vermont