
Selective hearing: AI-powered listening device picks a voice out of a crowd

How we pick out single voices from noisy environments is what scientists call the "cocktail party problem"

We humans can be pretty adept at tuning into a single voice – in, say, a busy bar or restaurant – but hearing aids and similar devices can quickly become overwhelmed by all the background noise. Scientists call this the "cocktail party problem," and with the help of a new form of artificial intelligence, they believe they have come up with a solution that could be of huge benefit to those with hearing impairments.

Today's hearing devices are designed to counter this so-called cocktail party problem, but they generally rely on boosting the volume of all sounds rather than just the ones the listener is trying to hear. This problem not only plagues those relying on hearing aids, but can also hamper the functionality of voice assistants like Amazon's Alexa and Google Assistant.

A team led by Nima Mesgarani, a neuroengineer at Columbia University, sought answers to this problem by tapping into the listening abilities of the human brain. More specifically, they used electrodes implanted in the brains of epilepsy patients to monitor the electrical activity as multiple people spoke, observing that the brainwaves tracked only the voice of the speaker at the center of the listener's attention, effectively blocking out the background noise.
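As a rough illustration of that principle (and not the team's actual decoder), the sketch below scores each candidate voice by how well its amplitude envelope correlates with a measured neural response, then picks the best match. The `neural_signal` and `speaker_audios` inputs, and the use of a plain correlation score, are assumptions made for the example.

```python
import numpy as np
from scipy.signal import hilbert, resample

def envelope(audio, target_len):
    # Broadband amplitude envelope, resampled to the neural recording's length.
    env = np.abs(hilbert(audio))
    return resample(env, target_len)

def attended_speaker(neural_signal, speaker_audios):
    # Hypothetical decoder: the attended voice is the one whose envelope
    # best matches the brain's response (a stand-in for the paper's method).
    scores = [np.corrcoef(neural_signal, envelope(a, len(neural_signal)))[0, 1]
              for a in speaker_audios]
    return int(np.argmax(scores))
```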

This brain-activity signature then served as a "soundprint" of sorts, which the team used as a template to develop artificial intelligence that works in a similar way and built into an experimental hearing aid.

Using a deep neural network and this signature of brain activity, the device can determine which voice the listener is trying to tune into and selectively boost the volume of that particular voice over the others in the vicinity. In testing, the device was able to distinguish between up to three voices, and could even adapt in real time as new, unknown voices entered the conversation.
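Continuing the illustrative sketch above, the amplification step could be as simple as remixing the separated voices with extra gain on the attended one. The per-speaker streams are assumed to come from a source-separation network upstream, and the 9 dB boost is an arbitrary example value, not a figure from the study.

```python
import numpy as np

def remix(speaker_audios, attended_idx, boost_db=9.0):
    # Boost the attended voice relative to the others, then normalize
    # the mix to avoid clipping. boost_db is illustrative only.
    gain = 10 ** (boost_db / 20.0)
    mix = np.zeros_like(speaker_audios[0])
    for i, audio in enumerate(speaker_audios):
        mix += audio * (gain if i == attended_idx else 1.0)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix

# Putting the two sketches together:
# idx = attended_speaker(neural_signal, speaker_audios)
# output = remix(speaker_audios, idx)
```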

"So far, we've only tested it in an indoor environment," said Mesgarani. "But we want to ensure that it can work just as well on a busy city street or a noisy restaurant, so that wherever wearers go, they can fully experience the world and people around them."

To that end, the scientists are now working to improve the technology, building it into a microphone that can differentiate between voices. They are also trying to find a way of tapping into the listener's brainwaves non-invasively, perhaps through a sensor worn over the ear or on the scalp, rather than one implanted in the brain.

"We hope in the next five years this will become a real product," says Mesgarani.

The research was published in the journal Science Advances.

Source: AAAS, Zuckerman Institute

3 comments
notarichman
i really hope this succeeds. i have that exact problem. don't get me wrong, when the environment is quiet i can hear faint sounds. i just can't distinguish when multiple people are talking.
noteugene
After 40 yrs of deafness, I'll just say, about time. What took you guys so long?
ljaques
I'm in the same predicament, notarichman. I, too, hope it comes to production and works as stated. I'll need one soon enough.