
EarCommand tech lets users silently mouth commands to devices

Perhaps someday soon, your EarCommand-enabled earbuds could allow you to speak to your devices without speaking out loud

While it's handy to be able to control our devices via voice commands, speaking those commands out loud can be problematic. The EarCommand system offers an alternative, as it "reads" users' silently mouthed words by monitoring their ear canal.

First of all, what are the drawbacks to speaking to our smartphones or smartwatches? Well, spoken commands can be overheard by other people (raising privacy concerns), they may be drowned out by loud background noise, and it can be difficult for devices to recognize a wide variety of different users' voices.

That's where EarCommand comes in.

The technology is being developed by a team from New York's University at Buffalo, led by Dr. Zhanpeng Jin. It's based on the observation that as we silently mouth different words, the accompanying muscle and bone actions cause the ear canal to deform in distinctive ways – that means specific deformation patterns can be matched to specific words.

Hardware-wise, EarCommand consists of an earbud-like device that uses an inward-facing speaker to emit inaudible near-ultrasound signals into the wearer's ear canal. As those signals are reflected back off the canal's inner surface, their echoes are picked up by an inward-facing microphone. A linked computer then analyzes those echoes, using a custom algorithm to determine how the ear canal has deformed – and thus which word was mouthed.
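The researchers' actual algorithm isn't detailed here, but the general pipeline – emit a probe signal, capture its echo, turn the echo into a deformation "fingerprint," and match that fingerprint against stored word templates – can be sketched in a few lines. The Python below is a minimal illustration using synthetic data only; the sampling rate, frequency band, chirp probe, matched-filter profile and nearest-template classifier are all assumptions made for the sake of the example, not EarCommand's actual method.

```python
import numpy as np

SAMPLE_RATE = 48_000            # Hz; assumed earbud sampling rate
PROBE_BAND = (18_000, 21_000)   # Hz; illustrative "near-ultrasound" band

def make_probe(duration_s=0.01):
    """Short linear chirp sweeping the probe band (the signal the speaker would emit)."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    f0, f1 = PROBE_BAND
    phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2.0 * duration_s))
    return np.sin(phase)

def echo_profile(probe, recording):
    """Matched-filter the microphone recording against the probe to estimate how
    the ear canal reflected the signal, then normalize the resulting profile."""
    corr = np.abs(np.correlate(recording, probe, mode="valid"))
    return corr / (np.linalg.norm(corr) + 1e-12)

def classify(profile, templates):
    """Pick the word whose stored echo-profile template best matches this profile."""
    def similarity(a, b):
        n = min(len(a), len(b))
        return float(np.dot(a[:n], b[:n]))
    return max(templates, key=lambda word: similarity(profile, templates[word]))

# --- toy usage with synthetic data (no real ear-canal measurements) ----------
rng = np.random.default_rng(0)
probe = make_probe()

# Fake "impulse responses" standing in for two different canal deformations.
canal_shapes = {"play": rng.exponential(0.2, 40), "stop": rng.exponential(0.2, 40)}

# Enrollment: store one echo profile per word as its template.
templates = {w: echo_profile(probe, np.convolve(probe, h)) for w, h in canal_shapes.items()}

# Recognition: a new, noisy recording captured while the user mouths "play".
echo = np.convolve(probe, canal_shapes["play"])
recording = echo + 0.01 * rng.standard_normal(echo.size)
print(classify(echo_profile(probe, recording), templates))   # -> "play"
```

In this toy version the "words" are just two made-up reflection patterns; the real system has to distinguish dozens of commands from far subtler deformation differences, which is where the custom algorithm comes in.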

In tests conducted so far, users mouthed 32 different word-level commands, and 25 different sentence-level commands. The system understood most of what they were saying, with a word-level error rate of 10.2% and a sentence-level error rate of 12.3% – those figures should improve as the technology is developed further. What's more, it worked even when the users were wearing a mask or were in noisy environments, plus unlike some other silent-speech-reading systems, it doesn't incorporate a camera.

A paper on the research was recently published in the journal Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

And as an interesting side note, U Buffalo's related EarHealth system also incorporates signal-emitting, echo-reading earbuds, although it's designed to detect ear problems such as earwax blockages, ruptured ear drums and otitis media, which is a common ear infection.

Source: University at Buffalo via New Scientist
