Typealike tech uses computers' existing webcams for gesture control

The experimental Typealike system is currently 97 percent accurate at identifying 36 different hand gestures

While it is already possible to control computers via hand gestures, doing so typically involves using peripheral electronic devices or special embedded hardware. The Typealike system, however, brings such functionality to existing computers, no added electronics required.

Currently under development at Canada's University of Waterloo, Typealike consists of two components: a small downward-angled mirror placed over the computer's webcam, and machine-learning-based software running on that computer.

When the user wishes to turn the speaker volume up – just one example of the system's capabilities – they simply make a thumbs-up gesture with a hand resting beside the keyboard. The software recognizes that gesture and increases the volume accordingly.
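As a rough illustration of that loop – grab a webcam frame, classify the hand posture, trigger the matching action – the Python sketch below shows one way it could be wired together. The classify_gesture placeholder, the gesture names, and the amixer volume command are assumptions made for the example, not details of the researchers' actual system.

```python
# Hypothetical sketch of a Typealike-style interaction loop: read frames from the
# existing webcam (whose view the clip-on mirror redirects toward the hands),
# classify the hand posture, and map recognized gestures to system actions.
import subprocess
import cv2  # pip install opencv-python

def classify_gesture(frame):
    """Placeholder: a trained model would return one of the 36 gesture
    labels (or None) for a frame showing the hands beside the keyboard."""
    return None

# Illustrative mapping from gesture label to action; "thumbs_up" raises the
# volume here via ALSA's amixer on Linux, purely as an example.
ACTIONS = {
    "thumbs_up": lambda: subprocess.run(["amixer", "set", "Master", "5%+"]),
}

cap = cv2.VideoCapture(0)  # the laptop's built-in webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gesture = classify_gesture(frame)
    if gesture in ACTIONS:
        ACTIONS[gesture]()
```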

"It started with a simple idea about new ways to use a webcam," says Nalin Chhibber, a recent master’s graduate from Waterloo’s Cheriton School of Computer Science. "The webcam is pointed at your face, but most interaction happening on a computer is around your hands. So, we thought, what could we do if the webcam could pick up hand gestures?”

Additionally, because the mirror angles the webcam's view down at the hands, users avoid the privacy concerns that would come with Typealike constantly viewing their face.

Because everyone's hands look and move differently, the researchers used 30 volunteers to create a video database of the 36 gestures currently used by the system. A machine learning algorithm was then trained on that database, learning to consistently recognize each gesture despite variations in hands and lighting conditions.
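Purely as an illustration of that training step, the sketch below shows how a classifier could be fitted to frames extracted from such a video database. The folder layout, the ResNet-18 backbone, and the color-jitter augmentation (standing in for robustness to lighting variation) are assumptions made for the example, not details of the team's model.

```python
# Minimal sketch of training a 36-way gesture classifier on labeled frames,
# assuming they are stored one folder per gesture: gesture_frames/<label>/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # vary lighting
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("gesture_frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Small pretrained backbone with a new 36-class head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 36)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```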

The system is presently 97 percent accurate at identifying the gestures. It is believed that once developed further, Typealike could also be used in virtual reality environments, making hand-held controllers unnecessary.

"We’re always setting out to make things people can easily use," says Assoc. Prof. Daniel Vogel. "People look at something like Typealike, or other new tech in the field of human-computer interaction, and they say it just makes sense. That’s what we want."

The research is described in a paper recently published in Proceedings of the ACM on Human-Computer Interaction.

Source: University of Waterloo

1 comment
MarkGovers
I'll bet this is pretty good at watching hands type passwords :) Be careful out there.