Your smartphone could tell how drunk you are by listening to you talk
Anyone who's been to karaoke night at a bar knows just how dramatically altered our voices get after throwing back a few drinks. Scientists have now shown that analyzing these vocal changes is a surprisingly good way to see just how drunk we are.
In efforts to protect ourselves from ourselves, scientists have been pursuing various self-analyzing options over the years to determine when we've had too much to drink. We've seen a smartphone app that analyzed eye patterns; a pressure-sensing driver's seat; sweat-analyzing sensors; and even talking urinal cakes, all designed to gauge our level of inebriation and – hopefully – help us make better choices, like calling an Uber instead of driving if we're too drunk.
Now, researchers at Stanford Medicine and the University of Toronto believe they may have found another way to provide instant feedback on our states of inebriation. They conducted a small study in which they gave each of 18 adults a dose of alcohol based on their body weight. Each participant read a tongue-twister aloud while being recorded by a smartphone placed one to two feet away – once before drinking, then hourly afterwards. Their blood alcohol levels were also monitored every 30 minutes over the seven hours of the study.
Next, they broke the vocal recordings down into one-second increments and analyzed them using metrics including pitch and frequency. Once the database was built, the researchers found that their model could correctly predict intoxication levels an impressive 98% of the time.
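The basic idea of slicing a recording into one-second increments and pulling out a frequency-based feature from each can be sketched roughly as follows. This is a minimal illustration on synthetic audio using NumPy; the study's actual feature set and machine-learning model are not detailed in this article, so the simple dominant-frequency estimate here merely stands in for the kinds of acoustic features the researchers analyzed:

```python
import numpy as np

def dominant_frequency(window: np.ndarray, sample_rate: int) -> float:
    """Estimate the strongest frequency component of one audio window via FFT."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def per_second_features(audio: np.ndarray, sample_rate: int) -> list:
    """Split a recording into one-second increments and extract a pitch proxy for each."""
    features = []
    for start in range(0, len(audio) - sample_rate + 1, sample_rate):
        window = audio[start:start + sample_rate]
        features.append(dominant_frequency(window, sample_rate))
    return features

# Synthetic three-second "recording": a tone that drifts upward each second,
# standing in for real speech audio.
sr = 8000
t = np.arange(sr) / sr
audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in (220.0, 230.0, 240.0)])
print(per_second_features(audio, sr))  # one pitch estimate per second
```

In a real system, each per-second feature vector (pitch, frequency content, and similar metrics) would be fed to a trained classifier that maps them to an intoxication estimate.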
"The accuracy of our model genuinely took me by surprise," said lead researcher Brian Suffoletto, associate professor of emergency medicine at Stanford. "While we aren’t pioneers in highlighting the changes in speech characteristics during alcohol intoxication, I firmly believe our superior accuracy stems from our application of cutting-edge advancements in signal processing, acoustic analysis, and machine learning."
The researchers believe that incorporating their system into smartphones and allowing it access to the microphone could be a way to monitor users' intoxication levels, and send alerts when it senses that someone is too drunk to drive.
"While one solution could be to frequently check in with someone to gauge their alcohol consumption, doing so could backfire by being annoying at best, or by prompting drinking at worst," said Suffoletto. "So, imagine if we had a tool capable of passively sampling data from an individual as they went about their daily routines and surveil for changes that could indicate a drinking episode to know when they need help."
Suffoletto says such a system could also be combined with other phone capabilities, such as using the accelerometer to check for unusual gait patterns, or a system that analyzes texts for changes in communication patterns that might indicate inebriation. Doing so could create an intervention while someone is still capable of responding to it, he says.
"Timing is paramount when targeting the optimal moment for receptivity and the relevance of real-time support," he says. "For instance, as someone initiates drinking, a reminder of their consumption limits can be impactful. However, once they’re significantly intoxicated, the efficacy of such interventions diminishes."
Suffoletto says more studies are needed to expand upon his findings, and to create a database with more vocal samples from a broader range of participants.
The current findings have been published in the Journal of Studies on Alcohol and Drugs.