We've all seen gigantic touch screens on the news or in movies, but what if you could achieve the same type of interface by simply replacing the bulb in your desk lamp? That's the idea behind LuminAR, developed by a team led by Natan Linder at the MIT Media Lab's Fluid Interfaces Group. It combines a pico projector, camera, and wireless computer to project interactive images onto any surface – and is small enough to screw into a standard light fixture.
The LuminAR project (the capitalized "AR" nods to augmented reality) has two separate but interconnected components. The LuminAR Bulb is a stand-alone unit that allows users to interact with its projection through simple hand gestures for zoom, position control, and content manipulation. It can plug into any fixture, but takes on even more functionality when combined with the LuminAR Lamp – an articulated robotic arm (similar to the Pinokio Lamp) that lets you move the projected image around.
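To give a sense of how gesture input might drive a projected window, here is a minimal, purely illustrative sketch. The gesture names and the `ProjectedWindow` class are assumptions for the example; LuminAR's actual gesture pipeline has not been published.

```python
# Hypothetical sketch: dispatching recognized hand gestures to actions
# on a projected window (zoom, reposition). Names are illustrative only.
from dataclasses import dataclass

@dataclass
class ProjectedWindow:
    x: float = 0.0      # horizontal position on the surface
    y: float = 0.0      # vertical position on the surface
    scale: float = 1.0  # projection zoom factor

    def apply_gesture(self, gesture: str, amount: float) -> None:
        # Map a detected gesture to a change in the projection.
        if gesture == "pinch":
            self.scale = max(0.1, self.scale * amount)
        elif gesture == "drag_x":
            self.x += amount
        elif gesture == "drag_y":
            self.y += amount

win = ProjectedWindow()
win.apply_gesture("pinch", 2.0)   # zoom in
win.apply_gesture("drag_x", 5.0)  # slide the image to the right
```

In a real system the gesture strings would come from the camera's hand-tracking stage, and the window state would feed the projector's rendering loop.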
The LuminAR Lamp remembers where you've moved different applications, allowing you to organize your workspace accordingly – putting your Twitter feed in a less distracting location, for example, or projecting a Skype session onto a wall. The Lamp can also take snapshots of the work area, allowing you to quickly scan and share work documents seamlessly across multiple devices.
Besides tracking your hands and fingers, the camera and image processing software could detect objects in the work space, such as a canned soft drink, and automatically display targeted advertising around it. One potential application would be projecting rich media, including product information, in a retail setting. In effect, browsing a store's display could incorporate the same media and interactivity as a product web site.
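The retail scenario above boils down to mapping a recognized object to contextual content for the projector to overlay. The sketch below assumes a detector that emits object labels; the labels and content table are hypothetical, not part of the LuminAR system.

```python
# Illustrative sketch only: choosing projected content for a recognized
# object. The camera/image-processing detector is abstracted into a label.
from typing import Optional

# Hypothetical table of object labels -> content to project nearby.
CONTENT_FOR_OBJECT = {
    "soda_can": "Product info: 140 calories, also in citrus flavor",
    "coffee_mug": "Refill offer: 10% off at the in-store cafe",
}

def projected_content(detected_label: str) -> Optional[str]:
    # Return the content to display next to the object, if any is known.
    return CONTENT_FOR_OBJECT.get(detected_label)

print(projected_content("soda_can"))
# An unrecognized object simply gets no overlay:
print(projected_content("stapler"))
```

The same lookup could just as easily serve product pages in a store display, which is the web-site-like interactivity the article describes.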
The LuminAR project has been in development since 2010, and was showcased earlier this year at the ACM CHI Conference on Human Factors in Computing Systems.
See how it works in this video summarizing its development.