In a restaurant or at a party, background noise can make it hard to hear people talking, even up close. But soon we could be wearing headphones that use AI to filter out noise that’s more than a few feet away, creating a “sound bubble” that lets you focus on your own conversation.
Developed by engineers at the University of Washington, the device is essentially a pair of noise-canceling headphones, equipped with six extra microphones along the headband. A small onboard computer runs a neural network trained to analyze the distance of different sound sources, filtering out noise coming from farther away and amplifying sounds closer to the user.
The end result is a kind of sound bubble, as the team describes it, which can be customized with a radius of 1 to 2 m (3.3 to 6.6 ft). The idea is that you can clearly hear people talking from within that bubble, while noises outside of it are suppressed by an average of 49 decibels. If someone else enters the bubble, they can join the conversation too.
“Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far,” said Shyam Gollakota, senior author of the study. “Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself.”
The neural network was originally trained on data gathered in 22 different indoor environments, such as offices and living spaces. The headphones were placed on a mannequin head and rotated while noises were played from different distances. The algorithm appears to compare the phase of each sound frequency as it arrives at the different microphones to estimate how far away each source is, and block sounds accordingly.
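To give a feel for why inter-microphone phase carries distance information, here is a minimal sketch. It is illustrative only, not the team's actual system: the microphone spacing, source positions, and test frequency are all made-up values. For two sources at the same bearing, a nearby source's curved wavefront produces a different phase offset between two microphones than a distant source's nearly planar wavefront, and that residual is the kind of cue a network could learn.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air

def phase_difference(src, mic_a, mic_b, freq_hz):
    """Phase difference (radians) of a pure tone between two microphones,
    based on the difference in path length from the source to each mic."""
    d_a = np.linalg.norm(np.asarray(src) - np.asarray(mic_a))
    d_b = np.linalg.norm(np.asarray(src) - np.asarray(mic_b))
    return 2 * np.pi * freq_hz * (d_a - d_b) / SPEED_OF_SOUND

# Two mics 15 cm apart (hypothetical headband spacing)
mic_a, mic_b = (0.0, 0.0), (0.15, 0.0)

# Two sources at the same bearing (45 degrees) but different distances
near = (0.7 * np.cos(np.pi / 4), 0.7 * np.sin(np.pi / 4))  # ~0.7 m, inside bubble
far = (4.0 * np.cos(np.pi / 4), 4.0 * np.sin(np.pi / 4))   # ~4 m, outside bubble

f = 1000.0  # 1 kHz test tone
p_near = phase_difference(near, mic_a, mic_b, f)
p_far = phase_difference(far, mic_a, mic_b, f)

# A distant source approaches the plane-wave limit; a near source deviates from it.
p_plane = 2 * np.pi * f * 0.15 * np.cos(np.pi / 4) / SPEED_OF_SOUND
print(f"near: {p_near:.3f} rad, far: {p_far:.3f} rad, plane-wave limit: {p_plane:.3f} rad")
```

Running this shows the far source's phase offset sits close to the plane-wave prediction while the near source's does not, so even with the bearing fixed, phase alone separates the two distances. A real system would, of course, have to learn this pattern across many frequencies, microphone pairs, and reverberant rooms.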
This is just the latest version of the tech, which the team has been developing for a while now. An iteration from last year used a swarm of small robots that would move around a room on their own, taking measurements to create separate audio streams for different sources, allowing the user to mute certain areas on demand. Just a few months ago, the researchers demonstrated a version that could single out the voice of one person just by looking at them.
This sound bubble version could end up being the most practical iteration of the tech, allowing you to hold a clear conversation in a bar with the people at your table. It could be even more useful if integrated into smaller devices like hearing aids or earbuds, and thankfully the team is already working on that, as well as founding a startup to commercialize the tech.
The research was published in the journal Nature Electronics. The team demonstrates the sound bubble tech in the video below.
Source: University of Washington