Speech-to-text systems already exist, as do augmented-reality displays. Now, a group of New York City teens led by Daniil Frants (who interned at the MIT Media Lab at age 14) has combined the two technologies into the Live Time Closed Captioning System (LTCCS). Once up and running, it could revolutionize the way in which deaf people communicate with the hearing world.
The system consists of three components: a compact microphone that clips onto the user's clothing, news anchor-style; a smartphone-sized Raspberry Pi/Adafruit-powered microcomputer that's carried in a pocket; and a Google Glass-like display.
The mic is calibrated to pick up human speech, even in environments with considerable background noise. The microcomputer processes that audio, converting it to text and wirelessly transmitting the result to the display. The display, which clips onto an existing pair of third-party glasses, then shows the user that text, superimposed over their view of the speaker.
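To illustrate the capture-transcribe-transmit loop described above, here's a minimal sketch in Python. It is not the LTCCS team's actual code: it assumes the generic SpeechRecognition library for transcription and a plain TCP socket to the display, and the host address and port are purely illustrative.

```python
# Hypothetical sketch of the LTCCS-style pipeline: listen on a clip-on mic,
# convert speech to text, push each phrase to a heads-up display over TCP.
# Library choice, host and port are assumptions, not the project's design.
import socket
import speech_recognition as sr

DISPLAY_HOST = "192.168.4.1"  # hypothetical address of the glasses display
DISPLAY_PORT = 9000           # hypothetical port

def run_captioning():
    recognizer = sr.Recognizer()
    # Raise the energy threshold so ambient noise is less likely to be
    # treated as speech (the LTCCS mic is similarly tuned for voices).
    recognizer.energy_threshold = 4000

    with socket.create_connection((DISPLAY_HOST, DISPLAY_PORT)) as display, \
         sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
        while True:
            # Capture one phrase at a time to keep latency low.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_google(audio)
            except sr.UnknownValueError:
                continue  # nothing intelligible; keep listening
            # Send the transcribed phrase to the display, newline-delimited.
            display.sendall((text + "\n").encode("utf-8"))

if __name__ == "__main__":
    run_captioning()
```

Processing each short phrase separately, rather than waiting for whole sentences, is one simple way a system like this could keep the delay between speech and on-screen text small.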
There's reportedly very little lag between the words being spoken and their appearing on the display.
The team members have created a working proof-of-concept model, and are now building a functioning prototype. To that end, they've turned to Indiegogo to raise development funds. A pledge of US$650 will get you an LTCCS of your own, when and if it reaches production.
More information is available in the pitch video below.
Source: Indiegogo