Hearing aids are often stymied by the "cocktail party" problem: they can't amplify one person's voice without also boosting the voices of everyone else in the room. A new AI system, however, could help focus the devices' attention where it's needed.
There already are artificial-intelligence-based systems that can determine which of several voices someone is listening to. In a nutshell, they treat each voice as a distinct sound signal; when the listener's brainwaves track one of those signals more strongly than the others, the system knows that's the voice it should isolate and amplify.
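In the research literature, this kind of decoding is typically done by reconstructing the attended speech envelope from the EEG and correlating it with each voice. The sketch below illustrates that idea; it is a minimal example assuming NumPy, a pre-trained linear decoder, and precomputed audio envelopes, with hypothetical names, not KU Leuven's code.

```python
import numpy as np

def attended_speaker(eeg, decoder, envelopes):
    """Guess which of several voices a listener is attending to.

    eeg       -- EEG window, shape (samples, channels)
    decoder   -- pre-trained linear reconstruction filter, shape (channels,)
    envelopes -- one audio envelope per candidate voice, each shape (samples,)
    """
    # Reconstruct an estimate of the attended speech envelope from the EEG.
    reconstruction = eeg @ decoder
    # The attended voice is assumed to be the one whose envelope
    # correlates most strongly with the reconstruction.
    scores = [np.corrcoef(reconstruction, env)[0, 1] for env in envelopes]
    return int(np.argmax(scores))
```

Because that correlation only becomes reliable when computed over a long stretch of signal, decisions of this kind take on the order of seconds, which is the delay described next.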
According to scientists at Belgium's KU Leuven, the problem with such setups is that they can take 10 to 20 seconds to do their job – impractically long in fast-moving conversations, particularly those involving more than two people. The researchers therefore developed a much quicker AI-based brainwave-reading system that doesn't listen to the voices at all.
"We trained our system to determine whether someone is listening to a speaker on their left or their right," says Prof. Alexander Bertrand. "Once the system has identified the direction, the acoustic camera [an array of microphones] redirects its aim, and the background noise is suppressed. On average, this can now be done within less than one second. That’s a big leap forward, as one second constitutes a realistic timespan to switch from one speaker to the other."
Currently, the electroencephalogram (EEG) data used by the system has to be gathered via an electrode-equipped skull cap. It is hoped, however, that with further development the EEG technology could instead be built into a compact hearing aid with integrated electrodes.
A paper on the research was recently published in the journal IEEE Transactions on Biomedical Engineering.
Source: KU Leuven