Microsoft gets hands-on with gesture-based computer interfaces
Almost every object you encounter day-to-day has been designed to work with the human hand, so it's no wonder so much research is being conducted into tracking hand gestures to create more intuitive computer interfaces, such as Purdue University's DeepHand or the consumer Leap Motion controller. Now Microsoft has outlined some projects that deal with hand tracking, haptic feedback and gesture input.
"How do we interact with things in the real world?" asks Jamie Shotton, a Microsoft researcher in the labs at Cambridge, UK. "Well, we pick them up, we touch them with our fingers, we manipulate them. We should be able to do exactly the same thing with virtual objects. We should be able to reach out and touch them."
The researchers believe that gesture tracking is the next big thing in how humans interact with computers and smart devices. Combining gestures with voice commands and traditional physical input methods like touchscreens and keyboards will allow ambient computer systems, such as Internet of Things devices, to better anticipate our needs.
The first hurdle is a big one: the human hand is extremely complex, and tracking all the possible configurations it can form is a massive undertaking. That's the focus of Handpose, a research project underway at Microsoft's Cambridge lab, which uses the Kinect sensor packaged with the Xbox console to track a user's hand movements in real time and display virtual hands that mimic everything the real ones do.
The tool is precise enough to allow users to operate digital switches and dials with the dexterity you'd expect of physical hands, and can be run on a consumer device, like a tablet.
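Tracking like this is often framed as a model-fitting problem: the system renders candidate hand poses and keeps the one that best explains what the sensor sees. The sketch below is a deliberately tiny, hypothetical version of that idea (a single finger-curl angle standing in for a full hand model, and a point observation standing in for Kinect depth data); it is illustrative only and not Microsoft's Handpose algorithm.

```python
import math

# Toy analysis-by-synthesis pose fitter (illustrative assumption, not
# the Handpose implementation). A "pose" is one finger-curl angle; the
# real system optimizes dozens of joint parameters against depth data.

def render_fingertip(curl):
    """Forward model: fingertip position for a one-joint finger of length 1."""
    return (math.cos(curl), math.sin(curl))

def fit_pose(observed, candidates):
    """Pick the candidate pose whose rendered fingertip best matches
    the observed point (squared-distance error)."""
    def error(curl):
        x, y = render_fingertip(curl)
        ox, oy = observed
        return (x - ox) ** 2 + (y - oy) ** 2
    return min(candidates, key=error)

# Observe a fingertip at 45 degrees of curl and recover the pose.
observed = render_fingertip(math.radians(45))
candidates = [math.radians(a) for a in range(0, 91, 5)]
best = fit_pose(observed, candidates)
print(round(math.degrees(best)))  # 45
```

A production tracker would replace the brute-force candidate search with a learned initializer plus continuous optimization, which is what makes real-time rates on consumer hardware feasible.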
"We're getting to the point that the accuracy is such that the user can start to feel like the avatar hand is their real hand," says Shotton.
Another key part of the sensation that digital hands are really your own comes through the sense of touch. Users of Handpose's virtual switches and dials reported feeling immersed even without any haptic feedback, but a Microsoft team in Redmond, Washington, is experimenting with something more hands-on.
This system is able to recognize that a physical button, one not wired to anything, has been pushed simply by reading the movement of the hand. A retargeting system then allows multiple, context-sensitive virtual commands to be layered over the same physical control.
This means that a limited set of physical controls on a small real-world panel is enough to interact with a complex wall of virtual knobs and sliders, such as an airplane cockpit. The dumb physical buttons and dials help make virtual interfaces feel more real, the researchers report.
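The retargeting trick can be sketched in one dimension: the real hand always travels to the single physical prop, but the rendered hand is progressively warped so it appears to land on whichever virtual control is currently targeted. The positions and blending rule below are my simplified assumptions, not the team's published implementation.

```python
# Toy haptic-retargeting sketch (a simplified 1-D assumption, not
# Microsoft's implementation). Real and virtual contact happen at the
# same moment because the warp grows with reach progress.

PHYSICAL_BUTTON = 0.5   # position of the one real prop (1-D, arbitrary units)

def retarget(real_hand, virtual_target, start=0.0):
    """Render the hand at its real position plus a warp that ramps from
    zero (at the start of the reach) to the full offset between the
    physical prop and the targeted virtual control (at contact)."""
    total = PHYSICAL_BUTTON - start
    progress = 0.0 if total == 0 else (real_hand - start) / total
    progress = max(0.0, min(1.0, progress))
    offset = virtual_target - PHYSICAL_BUTTON
    return real_hand + progress * offset

# No warp at the start of the reach; on contact with the physical
# button, the virtual hand lands exactly on the virtual control.
print(retarget(0.0, 0.8))  # 0.0
print(retarget(0.5, 0.8))  # 0.8
```

Because the warp is spread across the whole reach, each frame's distortion stays below what the user notices, which is what lets one prop stand in for many virtual controls.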
The third project comes out of Microsoft's Advanced Technologies Lab in Israel. The research on Project Prague aims to enable software developers to incorporate hand gestures for various functions in their apps and programs. So, miming the turn of a key could lock a computer, or pretending to hang up a phone might end a Skype call.
The researchers built the system by feeding millions of hand poses into a machine learning algorithm, training it to recognize specific gestures. It uses hundreds of micro-artificial-intelligence units to build a complete picture of a user's hand positions, as well as their intent, and it scans the hands using a consumer-level 3D camera.
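At its simplest, recognizing a gesture from a tracked pose is a matter of comparing the observed hand configuration against labelled examples. The toy classifier below makes that concrete; the five-value "finger curl" features, the pose labels, and the nearest-neighbour lookup are all illustrative assumptions, far simpler than Project Prague's trained models.

```python
# Toy gesture classifier (illustrative sketch, not Project Prague's
# pipeline). A hand pose is reduced to five per-finger curl values in
# [0, 1] (thumb first), and a nearest-neighbour lookup over labelled
# training poses stands in for the machine-learned recognizers.

TRAINING_POSES = {
    "fist":      (1.0, 1.0, 1.0, 1.0, 1.0),
    "open_hand": (0.0, 0.0, 0.0, 0.0, 0.0),
    "point":     (1.0, 0.0, 1.0, 1.0, 1.0),  # index finger extended
    "thumbs_up": (0.0, 1.0, 1.0, 1.0, 1.0),  # thumb extended
}

def classify(pose):
    """Return the label of the training pose closest to the observed
    one, by squared Euclidean distance over the curl features."""
    def dist(label):
        ref = TRAINING_POSES[label]
        return sum((a - b) ** 2 for a, b in zip(pose, ref))
    return min(TRAINING_POSES, key=dist)

# A slightly noisy pointing gesture is still recognized.
print(classify((0.9, 0.1, 0.8, 1.0, 0.9)))  # point
```

An application would then map recognized labels to actions, so that, say, a "hang up" gesture ends a call, which is the developer-facing hook the Project Prague work is aiming to provide.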
In addition to gaming and virtual reality, the team believes the technology would have applications for everyday work tasks, including browsing the web and creating and giving presentations.
In the videos below, the researchers demonstrate Handpose and Project Prague.
Source: Microsoft blog