Wouldn’t it be amazing if an app on your phone could diagnose health problems by identifying changes in your speech? Although it may seem futuristic, diagnosing diseases by voice analysis is an exciting development. With this technology, we could receive diagnoses earlier than traditionally possible and monitor chronic conditions. Additionally, monitoring vocal patterns could help doctors track disease progress and make appropriate treatment decisions.
It turns out that certain diseases, like those affecting our heart, lungs, or brain, can alter our voices. Therefore, subtle changes can indicate early signs of disease. Using artificial intelligence (AI), software can detect “vocal biomarkers” – signatures made up of a single voice feature, or a combination of features, that can be used to diagnose patients, monitor patient health, or grade the severity of a disease.
Simply put, by analyzing a person’s voice, an algorithm could identify vocal patterns that are characteristic of certain diseases. Importantly, diagnosing diseases by voice analysis could provide fast, accurate, easy, cost-effective screenings that can take place in almost any location.
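To make the idea of a vocal biomarker concrete, here is a minimal, purely illustrative Python sketch of the kind of low-level voice features such systems start from (pitch and loudness). It runs on a synthetic tone rather than real speech, and real products use far richer feature sets and trained models; everything here is an assumption for illustration, not any company's actual pipeline.

```python
import math

SAMPLE_RATE = 16000  # Hz, a common rate for speech analysis

def synth_vowel(f0=120.0, seconds=0.25):
    """Generate a toy 'aaah'-like tone (a pure sine at f0) as a stand-in for a recording."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * f0 * i / SAMPLE_RATE) for i in range(n)]

def rms_energy(x):
    """Root-mean-square energy: a loudness-related feature."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def estimate_f0(x, fmin=60, fmax=400):
    """Estimate the fundamental frequency (pitch) from the autocorrelation peak."""
    lag_min = SAMPLE_RATE // fmax
    lag_max = SAMPLE_RATE // fmin
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return SAMPLE_RATE / best_lag

signal = synth_vowel(f0=120.0)
features = {"f0_hz": estimate_f0(signal), "rms": rms_energy(signal)}
print(features)  # the pitch estimate lands near the true 120 Hz
```

A real system would extract dozens of such features (jitter, shimmer, spectral measures, pause timing) from actual recordings and feed them to a trained model.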
Read below for a few of the ways in which we may benefit from this technology.
Rapid, accurate diagnosis of COVID-19 cases is critical to getting the pandemic under control. Unfortunately, long lines and longer waits for appointments made getting a diagnosis challenging and frustrating. How great would it be if we could get an accurate diagnosis, at home, by speaking or coughing into an app?
An MIT team developed an algorithm that correctly identified people with COVID-19 based solely on the sound of their coughs. They trained the algorithm with thousands of cough recordings submitted through their website during April and May of 2020.
Interestingly, the algorithm correctly identified COVID-19 infections in 98.5% of people who had received an official positive COVID-19 test. Moreover, the algorithm was also quite successful in identifying COVID-19 among asymptomatic people.
MIT scientist and study co-author Brian Subirana said: “The way you produce sound changes when you have COVID, even if you’re asymptomatic.”
Similarly, the startup Sonaphi aims to use vocal biomarkers combined with machine learning to gain insights into health and wellness. Currently, the company is gathering data to identify vocal biomarkers for COVID-19 infections. Like the MIT program, they are asking people to share voice samples and test results on their app. They hope to use this database of vocal biomarkers to create AI that can detect COVID through voice recordings.
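As an illustration of the second stage, turning extracted voice features into a yes/no screening result, here is a toy nearest-centroid classifier in Python. The feature names, numbers, and data are all invented for illustration; the MIT and Sonaphi systems use far more sophisticated models trained on real recordings.

```python
import random

random.seed(0)

def centroid(rows):
    """Average each feature across a group of samples."""
    n = len(rows[0])
    return [sum(r[k] for r in rows) / len(rows) for k in range(n)]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical 2-D feature vectors (e.g., cough duration, spectral tilt),
# drawn from made-up distributions for the two groups.
healthy  = [[random.gauss(1.0, 0.1), random.gauss(0.0, 0.1)] for _ in range(50)]
positive = [[random.gauss(2.0, 0.1), random.gauss(1.0, 0.1)] for _ in range(50)]

c_h, c_p = centroid(healthy), centroid(positive)

def classify(x):
    """Label a new sample by whichever group centroid it is closer to."""
    return "positive" if dist2(x, c_p) < dist2(x, c_h) else "healthy"

print(classify([1.9, 0.9]))  # a sample near the 'positive' centroid
```

In practice the training data would be thousands of labeled recordings, and the model would be validated against official test results before any clinical use.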
Voice disorders are very frequent among people with Parkinson’s disease – seen in as many as 89% of cases. These voice disorders are mostly related to articulation, including pitch variations, breathiness, mumbling, and slurring. Interestingly, research shows changes in voice features in up to 78% of early-stage Parkinson’s patients, even though both doctors and patients often don’t notice these changes.
Importantly, experts believe that screening for vocal biomarkers will likely help with early diagnosis and with tracking disease progression, supplementing the traditional physical exam. Hopefully, doctors will be able to use this technology for early diagnosis, to determine when to start treatment, and to monitor the effectiveness of treatments.
Parkinson’s Voice Initiative
Importantly, their research shows that lab-quality digital recordings of ‘aaah’ vocal sounds can detect Parkinson’s disease with almost 99% accuracy. However, the Initiative now wants to test whether “regular” digital audio lines, like those in our homes, will provide accurate diagnoses. To that end, they aim to collect 10,000 ‘aaah’ vocal sounds. Those with and without Parkinson’s are encouraged to participate – visit their website to record your ‘aaah’ to help with this project (currently for Android phone users only).
While the results of the lab tests are encouraging, recordings in non-lab environments will be trickier to accurately analyze. For instance, factors such as background noise and personal behaviors cannot be controlled.
However, if they succeed, we will have a fast, very inexpensive, accessible method to test ourselves at home.
Alzheimer’s disease and mild cognitive impairment.
Unsurprisingly, subtle changes in voice and language can appear years before the typical symptoms of Alzheimer’s disease or mild cognitive impairment. Both conditions can impact speech in a slew of ways. For instance, patients may speak slowly, repeat words, frequently use filler sounds (for example, um, er), and may be incoherent.
In one study, published in 2016, researchers analyzing vocal samples correctly identified Alzheimer’s disease in 81% of subjects.
Because there are so many voice features that change with Alzheimer’s disease and mild cognitive impairment, screening for vocal biomarkers could potentially become a simple, noninvasive method for early diagnosis.
Unfortunately, patients with multiple sclerosis often struggle with voice impairment and dysarthria. (Dysarthria is when the muscles that produce speech are damaged, paralyzed, or weakened, making it hard to control the tongue or voice box, sometimes leading to slurred speech). Experts believe vocal screenings could be used to diagnose patients, monitor disease progression, and make treatment decisions.
Coronary Artery Disease.
Interestingly, researchers at the Mayo Clinic identified several vocal features associated with a history of coronary artery disease. Building on that knowledge, Mayo Clinic researchers recently tested their algorithm for vocal biomarkers related to coronary artery disease.
Excitingly, their findings, published in 2022, showed their AI accurately predicted which patients were more likely to have clogged arteries that led to further heart problems.
For this study, researchers analyzed voice recordings taken with a smartphone app to predict the risk for coronary artery disease. After several years, those with a high vocal biomarker score were more likely to have severe chest pain or heart issues that led to hospital or ER visits. Additionally, this group was more likely to have a positive stress test and to have coronary artery blockage found during follow-up testing.
Similarly, a study in Israel, published in 2020, found high vocal biomarkers were associated with adverse outcomes (including increased risk of death) for patients with congestive heart failure.
Although it’s early in the process, AI voice analysis technology may provide a low-cost, noninvasive tool to easily predict heart disease.
Researchers identified differences in vocal characteristics between people with and without type 2 diabetes. Additionally, people with type 2 diabetes who have poor glycemic control or neuropathy show more vocal straining, voice weakness, and a different overall voice grade.
In a 2020 report, researchers demonstrated an association between voice signals and blood glucose levels. They state that vocal biomarkers could provide a non-invasive tool for people with diabetes to check daily blood glucose levels. Additionally, they suggest vocal biomarker screening could identify people with prediabetes or those at risk of developing diabetes.
Interestingly, it’s possible that mental health issues could be diagnosed and/or monitored with vocal biomarkers as well.
For instance, by analyzing speech patterns, researchers were able to identify PTSD (post-traumatic stress disorder) in 89% of subjects. The study, published in 2019, found the probability of PTSD was higher for certain vocal biomarkers, including slower, more monotonous speech. Interestingly, they found that depression symptoms, alcohol use disorder, and traumatic brain injury had no significant influence on the outcome.
Schizophrenia and bipolar disorders.
Since there are well-established changes in voice and facial expressions among those with schizophrenia spectrum disorders and bipolar disorders, identifying these changes could help doctors diagnose patients. Interestingly, in a study published in 2022, researchers accurately identified symptoms of these disorders by analyzing voice and facial expressions.
In general, they found that movements of the cheek-raising and chin-raising muscles were the strongest predictors for men. In contrast, voice features were the most common indicators for women.
Importantly, the ability to accurately identify symptoms can provide a more objective method to diagnose these conditions. Currently, doctors must rely on patient reports of symptoms, which can be inaccurate. This technology, still in its early stages, can also help doctors monitor progress over time to determine treatment protocols.
When people experience clinical depression, their speech becomes lower, more monotone, and more labored, with more stops, starts, and pauses. Furthermore, as depression worsens, speech becomes more gravelly, hoarse, and less fluent. Interestingly, a study published in 2007 found that speech samples obtained over the phone correlated significantly with depression severity.
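To illustrate the kinds of speech measurements described above, here is a small Python sketch that computes two depression-related markers, the fraction of pauses and pitch variability (low variability suggests monotone speech), from made-up per-frame pitch and energy tracks. The thresholds and values are illustrative assumptions, not taken from any study.

```python
import statistics

# Hypothetical per-frame tracks (one value per 10 ms frame) as a pitch
# tracker might produce; 0 in the pitch track marks an unvoiced frame or pause.
f0_track = [118, 120, 119, 0, 0, 121, 117, 0, 120, 119]
energy   = [0.8, 0.9, 0.7, 0.01, 0.02, 0.8, 0.9, 0.01, 0.85, 0.8]

def pause_ratio(energy, threshold=0.05):
    """Fraction of frames quiet enough to count as a pause."""
    return sum(e < threshold for e in energy) / len(energy)

def pitch_variability(f0_track):
    """Standard deviation of pitch over voiced frames; low values suggest monotone speech."""
    voiced = [f for f in f0_track if f > 0]
    return statistics.pstdev(voiced)

print(round(pause_ratio(energy), 2), round(pitch_variability(f0_track), 2))
```

A real analysis would compute these over minutes of speech and track how they drift as symptoms change, rather than classify from ten frames.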
One start-up is making progress on diagnosing diseases by voice analysis.
Start-up Novoic is developing AI algorithms to detect neurological diseases by analyzing people’s speech. Patients will use the app to record a few minutes of speech; Novoic’s algorithm will then analyze the recording and quickly report the results to doctors.
Currently, they are validating their concept and applying for regulatory clearance to use their technology for screening for cognitive disorders. Additionally, they are in various stages of evaluating the technology to screen for Alzheimer’s, mood disorders, and motor disorders.
When will doctors start diagnosing diseases by voice analysis?
Importantly, right now, neither the US Food and Drug Administration nor the European Medicines Agency has approved any vocal biomarkers.
As with other medical innovations, identifying a vocal biomarker is just the first step towards widespread use. In addition to the normal challenges associated with medical innovations, vocal biomarkers come with additional challenges. For instance, developers must address the issue of accents and foreign languages. Additionally, currently there are no standards for collecting vocal biomarker samples and creating repositories for clinical use.
Once a company develops an algorithm, they must create a user-friendly interface (for example, a phone app, medical device connector, etc.). Next, they must conduct a feasibility study, one or more clinical trials, and real-world studies. All of this can take a considerable amount of time, so don’t expect to see this technology soon!
The future may bring the addition of video analysis.
Experts surmise that the future will bring visual analysis by video, combined with vocal pattern analysis, to diagnose diseases. Using visual images could help identify emotions and other health characteristics, making it easier to diagnose and monitor patients.
Whether you receive your diagnosis using traditional diagnostic methods, or your disease is diagnosed using voice analysis, engaging in your care will help you get the best outcome possible. Read these posts for more tips:
- Why Are Second Opinions so Important?
- 10 Steps to Reduce Your Risk of Diagnostic Error
- 10 Tips for a Better Medical Appointment.
- The Dangers of Too Many Medical Tests and Treatments.
- How Can You Get the Best Healthcare? Actively Participate!