Psychologists have long known that certain mental health issues can be detected by listening not only to what a person says but how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine.
With depressed patients, Dr. Espinola said, “their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”
Patients with anxiety feel more tension in their bodies, which can also change the way their voice sounds, she said. “They tend to speak faster. They have more difficulty breathing.”
Today, these types of vocal features are being leveraged by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. Deep-learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that might not be evident even to trained experts.
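To make that concrete, here is a minimal sketch of how a few of the vocal markers Dr. Espinola describes (pitch range, volume and pausing) could be pulled from a short recording with the open-source librosa library. The feature choices, thresholds and function name are illustrative assumptions, not the method used by Kintsugi or any other company mentioned here.

```python
# Illustrative only: rough acoustic proxies for the vocal markers
# described above, computed from a short recording with librosa.
import numpy as np
import librosa

def vocal_features(path):
    # Load a short mono recording at 16 kHz.
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Pitch track; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]

    # Loudness proxy: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Pauses: everything outside the detected speech intervals
    # (the 30 dB silence threshold is a guess, not a clinical standard).
    speech = librosa.effects.split(y, top_db=30)
    voiced_samples = sum(end - start for start, end in speech)
    pause_seconds = (len(y) - voiced_samples) / sr

    return {
        "pitch_range_hz": float(f0.max() - f0.min()) if f0.size else 0.0,
        "mean_volume": float(rms.mean()),
        "pause_seconds": float(pause_seconds),
        "speech_segments_per_second": len(speech) / (len(y) / sr),
    }
```

A deep-learning system would typically go further, learning its own representation directly from the raw waveform or spectrogram rather than relying on a hand-built checklist like this, which is how it can surface patterns that are not obvious to a human listener.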
Other technologies add a potentially helpful layer of human interaction, like Kintsugi, a company based in Berkeley, Calif., that raised $20 million in Series A funding earlier this month. Kintsugi is named for the Japanese practice of mending broken pottery with veins of gold.
Founded by Grace Chang and Rima Seiilova-Olson, who bonded over their shared experience of struggling to access mental health care, Kintsugi develops technology for telehealth and call-center providers that can help them identify patients who might benefit from further support.
By using Kintsugi’s voice-analysis program, a nurse might be prompted, for example, to take an extra minute to ask a harried parent with a colicky infant about his own well-being.
One concern with the development of these types of machine learning technologies is the issue of bias — ensuring the programs work equitably for all patients, regardless of age, gender, ethnicity, nationality and other demographic criteria.
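One routine way engineers probe for that kind of inequity is to break a model's performance out by demographic group rather than reporting a single aggregate accuracy number. The short sketch below assumes screening results collected in a pandas table with hypothetical column names; it illustrates the check itself, not how any of these companies audit their systems.

```python
import pandas as pd

def per_group_rates(results: pd.DataFrame) -> pd.DataFrame:
    # `results` is assumed to have hypothetical columns:
    #   "group" (demographic label), "label" (clinician assessment, 0/1),
    #   "pred" (model's screening flag, 0/1).
    rows = []
    for group, sub in results.groupby("group"):
        at_risk = sub[sub["label"] == 1]
        not_at_risk = sub[sub["label"] == 0]
        rows.append({
            "group": group,
            "n": len(sub),
            # Share of at-risk patients the model actually flags.
            "sensitivity": (at_risk["pred"] == 1).mean() if len(at_risk) else float("nan"),
            # Share of not-at-risk patients it flags anyway.
            "false_positive_rate": (not_at_risk["pred"] == 1).mean() if len(not_at_risk) else float("nan"),
        })
    return pd.DataFrame(rows)
```

Large gaps in sensitivity between groups would mean the model systematically misses at-risk patients in some populations, which is precisely the failure mode this concern describes.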