Noisy environments pose a challenge to deaf people, particularly when they're trying to discern what a specific person within such a setting is saying. AirCaps glasses are designed to help, by providing real-time captioning to the wearer.
The glasses were developed by a team of engineers and designers from several universities, including Cornell University computer science student Nirbhay Narang, and the technology is now being commercialized via his spinoff company, AirCaps. Looking much like a set of traditional thick-framed spectacles, they're linked via Bluetooth to an app on the user's iPhone (an Android app is in the works).
As the phone's mic picks up the voice of a chosen person near the user, an AI-based speech-to-text algorithm converts their spoken words into text, which is relayed back to the glasses. That text is projected onto the inside of the lenses, where the wearer can read it. As a backup, it's also displayed on the phone's screen.
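The flow described above (mic audio in, speech-to-text, captions fanned out to the glasses and the phone screen) can be sketched roughly as follows. This is purely illustrative: names like `transcribe` and `CaptionRelay` are assumptions, not AirCaps' actual API, and a real app would call an on-device or cloud speech-recognition model rather than the stand-in used here.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CaptionRelay:
    """Fans each caption out to every display (glasses via Bluetooth, phone screen)."""
    displays: List[list]

    def push(self, text: str) -> None:
        for display in self.displays:
            display.append(text)

def caption_stream(audio_chunks, transcribe: Callable[[bytes], str],
                   relay: CaptionRelay) -> None:
    # For each chunk of mic audio, run speech-to-text and relay any result.
    for chunk in audio_chunks:
        text = transcribe(chunk)
        if text:
            relay.push(text)

# Stand-in transcriber: maps fake audio chunks to words for demonstration.
fake_stt = {b"chunk1": "Hello", b"chunk2": "there"}
glasses, phone = [], []
relay = CaptionRelay(displays=[glasses, phone])
caption_stream([b"chunk1", b"chunk2"], lambda c: fake_stt.get(c, ""), relay)
print(glasses)  # ['Hello', 'there'] — the phone screen gets the same captions
```

The point of the relay abstraction is that the glasses display and the phone-screen backup stay in sync by construction, since both receive every caption from the same push.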
According to the company, the app currently recognizes languages such as English, Spanish, French, Italian, Chinese, German and Portuguese. Technical terms, slang, or other words not currently in its vocabulary can be manually added by users.
The AirCaps glasses are currently only available to US buyers, via the company website. They're being offered at an introductory price of US$699, which is $100 off the planned retail price – prescription lenses are sold separately for $150. They should ship in six to eight weeks.
It is worth noting that ongoing costs are required in order to actually use the glasses. Buyers can opt for a $49 monthly plan that gets them unlimited hours of speech recognition, or they can pay $2 an hour on a flexible plan.
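A quick back-of-the-envelope calculation, using the prices quoted above, shows where the two plans break even:

```python
# Break-even point between AirCaps' two captioning plans (prices from the article).
MONTHLY_UNLIMITED = 49  # US$ per month, unlimited hours
HOURLY_RATE = 2         # US$ per hour on the flexible plan

break_even_hours = MONTHLY_UNLIMITED / HOURLY_RATE
print(break_even_hours)  # 24.5
```

In other words, anyone expecting to use less than about 24.5 hours of captioning per month comes out ahead on the flexible plan.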
Potential customers might also want to check out the similar TranscribeGlass glasses, although they have yet to reach production. The speech-captioning Voicee glasses, which were announced two years ago, failed to meet their crowdfunding goal.
Sources: AirCaps, Cornell University