Despite advances in the field, one continuing problem with hearing aids is that they amplify background sound along with people's voices. While our brains are reasonably good at distinguishing between speech and distracting ambient noise, hearing aid users get the noise and the voice presented to them in one often-incomprehensible package. Researchers at The Ohio State University, however, may have a solution. They've developed a noise-filtering algorithm that's been shown to improve test subjects' recognition of spoken words by up to 90 percent.
The machine learning-based algorithm uses a deep neural network, which, like the human brain, has a layered structure that allows it to learn. Part of its "training" involves getting it to pick out known words as they're played back against a noisy background. Segments in which the voice dominates the noise are amplified, while segments in which the noise dominates are discarded.
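One common way to realize this kind of speech/noise separation is time-frequency masking: the audio is split into short overlapping frames, each frame is transformed into frequency bins, and each time-frequency unit is kept or discarded depending on whether speech or noise dominates it. The sketch below, using only NumPy, illustrates that principle with an "ideal" binary mask computed from known speech and noise signals; in a real system like the one described here, the trained network would have to estimate such a mask from the noisy mixture alone. All function names and parameters are illustrative assumptions, not the team's code.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Split a signal into overlapping windowed frames and FFT each one."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([x[i*hop : i*hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def istft(spec, frame_len=256, hop=128):
    """Overlap-add the inverse FFTs back into a time-domain signal."""
    frames = np.fft.irfft(spec, n=frame_len, axis=1)
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    for i, f in enumerate(frames):
        out[i*hop : i*hop + frame_len] += f
    return out

def ideal_binary_mask(speech_spec, noise_spec, snr_threshold_db=0.0):
    """Keep time-frequency units where speech power exceeds noise power
    by the threshold; zero out the rest."""
    snr_db = 10 * np.log10((np.abs(speech_spec)**2 + 1e-12) /
                           (np.abs(noise_spec)**2 + 1e-12))
    return (snr_db > snr_threshold_db).astype(float)

# Toy demo: a pure tone stands in for a voice, white noise for babble.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
speech = np.sin(2 * np.pi * 440 * t)        # stand-in for a voice
noise = 0.5 * rng.standard_normal(len(t))   # stand-in for background noise
mixture = speech + noise

S, N, X = stft(speech), stft(noise), stft(mixture)
mask = ideal_binary_mask(S, N)              # oracle mask, for illustration
cleaned = istft(X * mask)

# Compare reconstructions: masking should leave us much closer to the
# clean speech than the raw mixture is.
ref_speech = istft(S)
err_noisy = np.mean((istft(X) - ref_speech)**2)
err_clean = np.mean((cleaned - ref_speech)**2)
print(err_clean < err_noisy)
```

The hard part, of course, is not applying the mask but predicting it without access to the clean speech, which is exactly what the deep neural network is trained to do.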
In tests of the technology, 12 partially-deaf volunteers removed their hearing aids, then were asked to identify as many words as they could in recordings of speech obscured by background noise. They then retook the test, this time listening to the same recordings after they had been "cleaned up" using the algorithm.
Their word comprehension increased from an average of 25 percent to almost 85 percent in cases where the speech had previously been obscured by random "background babble" (such as the noise produced by other people's voices), and from 35 to 85 percent when it had previously been obscured by more consistent "stationary noise" (such as the sound of an air conditioner).
When 12 students with full hearing listened to the speech obscured by noise, they actually scored lower than the first group did when listening to the enhanced speech. "That means that hearing-impaired people who had the benefit of this algorithm could hear better than students with no hearing loss," says Prof. Eric Healy, who is leading the research.
The algorithm was created by a team led by Prof. DeLiang "Leon" Wang. It is currently being commercialized, and is available for licensing from the university. Ultimately, it is hoped that it could find use in tiny digital hearing aids, or perhaps even in systems where the user's smartphone performs all the processing, then transmits the audio wirelessly to a paired earpiece.
A paper on the research has just been published in the Journal of the Acoustical Society of America.
Examples of how the technology works can be heard in the video below.
Source: The Ohio State University