Last November, German tech firm Evoluce unveiled a Kinect-based prototype multi-touch system that allows users to navigate through Windows 7 applications simply by moving their hands in the air. While that system utilizes the Kinect unit's RGB camera and depth sensor to track the user's hands, a new technology developed at Texas A&M University's Interface Ecology Lab uses a matrix of infrared light beams to do essentially the same thing. It's called ZeroTouch, and it was presented at last week's 2011 Conference on Human Factors in Computing Systems in Vancouver.
Unlike the Evoluce system, ZeroTouch incorporates an open, picture-frame-like sensing apparatus that the user reaches into. It can sit on a desktop, surround the computer screen, or hang in the air with the screen visible beyond it. Around the frame's four edges is an array of infrared LEDs, whose invisible beams shine across the open interior. Mixed in with those LEDs are 256 modulated infrared sensors, each of which registers the beams from the lights located across from it.
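To picture how such a frame could be modeled in software, here is a minimal sketch (not the Interface Ecology Lab's code) that places emitters and sensors around a rectangular border and enumerates the straight-line beams between opposing edges. The dimensions, the per-edge count, and the all-pairs beam layout are illustrative assumptions, not details from the published system.

```python
# Illustrative model of a ZeroTouch-style sensing frame: points spaced
# around a rectangular border, with beams drawn between opposing edges.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Beam:
    emitter: tuple[float, float]   # (x, y) position on one frame edge
    sensor: tuple[float, float]    # (x, y) position on the opposing edge

def border_points(width: float, height: float, per_edge: int):
    """Evenly spaced points along each of the four frame edges."""
    xs = [width * (i + 0.5) / per_edge for i in range(per_edge)]
    ys = [height * (i + 0.5) / per_edge for i in range(per_edge)]
    top    = [(x, height) for x in xs]
    bottom = [(x, 0.0) for x in xs]
    left   = [(0.0, y) for y in ys]
    right  = [(width, y) for y in ys]
    return top, bottom, left, right

def build_beams(width: float, height: float, per_edge: int):
    """Pair every point on one edge with every point on the opposite edge."""
    top, bottom, left, right = border_points(width, height, per_edge)
    beams  = [Beam(e, s) for e, s in product(top, bottom)]
    beams += [Beam(e, s) for e, s in product(left, right)]
    return beams

# 64 sensing points per edge gives 256 in total, matching the sensor count
# mentioned for ZeroTouch (the pairing scheme here is only a guess).
beams = build_beams(width=40.0, height=30.0, per_edge=64)
print(len(beams), "candidate emitter-sensor beams")
```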
When a user places one or more fingers or other objects within the frame, intersecting the grid of light beams, the system's software calculates the size, shape and location of those objects and maps them to corresponding points on a Windows 7 computer screen. The technique, known as point-to-point visual hull sensing, can track more than 20 objects at once.
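The core idea behind visual hull sensing can be illustrated with a short space-carving sketch. This is not the published ZeroTouch algorithm, just one plausible way to turn a list of blocked and unblocked beams into an estimate of object shape and position: start with every cell of a grid marked as possibly occupied, then carve away any cell crossed by a beam that still reaches its sensor. The grid resolution and sampling step are arbitrary choices for the example.

```python
# Space-carving sketch: cells crossed by unobstructed beams are cleared;
# whatever remains approximates the visual hull of the objects in the frame.
import numpy as np

def carve_visual_hull(beams, blocked, width, height, res=128):
    """beams: list of ((x0, y0), (x1, y1)) emitter-sensor pairs.
    blocked: parallel list of booleans, True if the beam was occluded."""
    hull = np.ones((res, res), dtype=bool)           # everything starts "occupied"
    for ((x0, y0), (x1, y1)), hit in zip(beams, blocked):
        if hit:
            continue                                  # an occluded beam carves nothing
        # Sample points along the clear beam and mark those cells as empty.
        for t in np.linspace(0.0, 1.0, 2 * res):
            cx = int((x0 + t * (x1 - x0)) / width * (res - 1))
            cy = int((y0 + t * (y1 - y0)) / height * (res - 1))
            hull[cy, cx] = False
    return hull                                       # True cells = estimated hull
```

Grouping the remaining True cells into connected regions would then give a per-object estimate of size and position, which is one way a system like this could keep more than 20 objects apart.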
The Texas A&M team demonstrated three ZeroTouch applications in Vancouver: intangibleCanvas, in which users "paint" pictures using their elbows, arms, head and fingers; Hand + Pen in Hand Command, a real-time strategy game played via multi-touch and stylus; and ArtPiles, a curatorial tool that lets museums and art galleries organize large collections.