Does anyone remember the animated version of Star Trek from the 1970s? The Emmy Award-winning series was the very first outing for the now familiar Holodeck, although back then it was called the recreation room. Despite some landmark advances in holographic technology in the years since - such as the University of Tokyo's Airborne Ultrasound Tactile Display - nothing has come close to offering the kind of physical interactivity with virtual objects in a 3D environment promised by the collective imaginations of past sci-fi writers. While we're not at the Holodeck level just yet, members of the Sensors and Devices group at Microsoft Research have developed a new system called HoloDesk that lets users pick up, move and even shoot virtual 3D objects. The system also recognizes and responds to the presence of inanimate real-world objects, such as a sheet of paper or an upturned cup.
Unfortunately, the research team hasn't revealed too much about how its new natural user interface system works, but here's what we do know. It's about the size of a filing cabinet and is made up of an overhead screen that projects a 2D image through a half-silvered beam splitter into a viewing area beneath. A Kinect camera keeps tabs on a user's hand position within the 3D virtual environment, a webcam tracks the user's face to help with placement accuracy, and custom algorithms bring everything together in (something very close to) real time.
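Microsoft hasn't published the code behind that pipeline, but a rough per-frame loop along these lines would tie the pieces together: the depth camera supplies a hand position, the webcam supplies an eye position for perspective correction, and the renderer draws the virtual scene onto the overhead display that the beam splitter reflects into the viewing area. This is only a minimal sketch in Python; the function names, sensor readings and coordinate frame are illustrative placeholders, not HoloDesk's actual API.

```python
import random
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def read_hand_position():
    """Placeholder for the depth-camera (Kinect-style) hand tracker.
    Returns an estimated fingertip position in the desk's coordinate frame."""
    return Vec3(random.uniform(-0.2, 0.2), random.uniform(0.0, 0.3), random.uniform(-0.2, 0.2))


def read_eye_position():
    """Placeholder for the webcam face tracker; the estimated eye position
    drives the perspective correction so graphics line up with real objects."""
    return Vec3(0.0, 0.45, 0.35)


def render_frame(eye, hand, scene):
    """Stand-in for the renderer: draws the scene from the viewpoint 'eye'
    onto the overhead screen, which the beam splitter reflects into view."""
    print(f"render from eye={eye} with hand at {hand}, {len(scene)} virtual objects")


def frame_loop(scene, frames=3):
    for _ in range(frames):
        hand = read_hand_position()   # depth-sensed hand position
        eye = read_eye_position()     # webcam-based face tracking
        # The real system's custom algorithms would fuse these streams and
        # resolve hand/object contact here before rendering.
        render_frame(eye, hand, scene)


if __name__ == "__main__":
    frame_loop(scene=["ball", "cube"])
```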
The user looks down through a transparent display into the viewing area, where holographic objects can be picked up and stacked on top of real-world ones, and real hands can juggle virtual balls, shoot them at targets, or play with a non-existent smartphone. The researchers also appear to have included support for remote collaboration on shared multi-user virtual projects. Interestingly, objects in the virtual world still appear to obey the laws of real-world physics, but they don't have to - the beauty of a virtual world is surely that anything is possible.
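To give a feel for what "obeying real-world physics" means here, below is a minimal sketch of the kind of rule a system like this could apply to a virtual ball: gravity pulls it down each frame until it meets whatever real surface the depth camera reports beneath it, at which point it bounces with some energy loss. The function, constants and surface height are assumptions for illustration, not HoloDesk's own simulation code.

```python
def step_ball(position_y, velocity_y, surface_y, dt=1.0 / 30.0, gravity=-9.81, restitution=0.6):
    """Advance a virtual ball by one frame under real-world-style gravity.
    surface_y is the height of whatever real object (desk, cup, hand) the
    depth camera reports underneath the ball."""
    velocity_y += gravity * dt
    position_y += velocity_y * dt
    if position_y <= surface_y:                  # ball has met the tracked real surface
        position_y = surface_y
        velocity_y = -velocity_y * restitution   # bounce back with some energy loss
    return position_y, velocity_y


# Drop a ball from 30 cm above the desk and let it fall and bounce for one second.
y, v = 0.30, 0.0
for frame in range(30):
    y, v = step_ball(y, v, surface_y=0.0)
print(f"ball height after 1 s: {y:.3f} m")
```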
As you can see from the following proof-of-concept Microsoft Research video, the system does suffer from some jerkiness and image dilution when real-world objects enter the viewing area, and there are a few placement and tracking issues. Still, it's a major step forward, and even at its current stage of development it might find immediate use in gaming, education and design.