MIT unveils colorful solution for cheap, accessible gesture-based computing
They're not a failed attempt at Belgian jigsaw camouflage or a trophy from clown school: these colorful lycra gloves are the vital component in a new gestural user input system developed by researchers at MIT. When the glove is used with a standard webcam and some clever software, the wearer's hand movements are instantly translated into on-screen commands or control gestures. Commercial development of the system could lead to widespread availability of cheap, easy-to-use spatial gesture interfaces.
MIT's Robert Y. Wang and Jovan Popovic have developed a gestural tracking system that uses just a standard webcam, a multi-colored cloth glove and some clever software, including a new algorithm for rapidly searching a database for visual matches. Instead of tracking reflective or colored tape attached to a user's digits, as in other setups, this system can track and register a 3D representation of the whole gloved hand.
The apparently random configuration of 20 irregular colored shapes was in fact specifically chosen, according to Wang, to be "distinctive and facilitate robust tracking." As a result, background objects can be ignored and the system works in various lighting conditions whilst avoiding reading errors from color/shape collisions.
The webcam captures an image of the gloved hand and the software crops out the background. The image is then reduced to a tiny 40-by-40-pixel digital model. A specially developed algorithm searches through megabytes of information in a database for a visual match and then produces the corresponding hand shape and position on the display. All of this happens in a fraction of a second, so the lag between the gloved hand and its virtual counterpart is minimal.
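The lookup step described above can be sketched in a few lines of toy Python. Everything here is illustrative, not the authors' actual code: the function names are invented, the "database" is a handful of hand-made entries, and a plain nearest-neighbor scan stands in for the researchers' much faster search algorithm over a far larger database.

```python
# Toy sketch of the pipeline: downsample the captured glove image to a
# tiny thumbnail, then find the closest stored thumbnail and return the
# hand pose recorded with it. (Illustrative only; the real system uses
# a large precomputed pose database and a fast search algorithm.)

def downsample(image, size=40):
    """Shrink an image (a list of rows of pixel values) to size x size
    by simple block averaging."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            # Average the block of source pixels that maps to (i, j).
            ys = range(i * h // size, max(i * h // size + 1, (i + 1) * h // size))
            xs = range(j * w // size, max(j * w // size + 1, (j + 1) * w // size))
            block = [image[y][x] for y in ys for x in xs]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def distance(a, b):
    """Sum of squared per-pixel differences between two thumbnails."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

def nearest_pose(thumbnail, database):
    """Return the hand pose whose stored thumbnail is closest.
    `database` is a list of (thumbnail, pose) pairs."""
    return min(database, key=lambda entry: distance(thumbnail, entry[0]))[1]
```

The point of the tiny thumbnail is that comparing 40-by-40 images is cheap enough to do against a huge database many times per second.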
Different hand sizes don't present too much of an issue either. The one-size-fits-all stretchy lycra glove just needs to be recalibrated in the software for any new users, a process that takes only a few seconds, and then it's good to go.
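One plausible way to picture that quick per-user calibration is as learning reference colors for each glove patch and then labeling camera pixels by their nearest reference. This is a hypothetical sketch of the idea, not the authors' method; the patch names and functions are invented for illustration.

```python
# Hypothetical calibration sketch: sample each glove patch once for a
# new user, store its mean color, then classify later pixels by the
# nearest stored color. (Not the authors' actual calibration code.)

def calibrate(samples):
    """samples maps a patch name to the (r, g, b) pixels seen during
    calibration. Returns the mean color for each patch."""
    refs = {}
    for name, pixels in samples.items():
        n = len(pixels)
        refs[name] = tuple(sum(p[c] for p in pixels) / n for c in range(3))
    return refs

def classify(pixel, refs):
    """Label a pixel with the patch whose reference color is nearest."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(refs, key=lambda name: d2(pixel, refs[name]))
```

Recomputing the reference colors is fast, which would explain why switching users takes only seconds.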
OK, so it's not as visually appealing as Tom Cruise standing in front of a transparent screen manipulating videos with a wave of his hand, but the research promises to bring gestural user input within consumer reach. Beyond potential gaming applications, Wang sees future use in the 3D modeling scenarios now fairly common in engineering and design.
The Wang and Popovic system was first shown at a computer graphics conference last year, but it was a little buggy and took too long to set up. Since then, enhancements have made the system much faster, more stable and more flexible. Wang is now looking to extend the system to the whole body, which would make converting live actor movement into digital animation, or evaluating athletic performance, a whole lot cheaper and easier, at the expense of the wearer looking somewhat ridiculous.
In the following video, Wang gives an overview of several possible applications of the gesture recognition system: