May 31, 2007 More than four decades after the conception of the mouse, computer input devices are only just beginning to tap the promise of truly interactive technology that facilitates a two-way conversation between machine and user, rather than simply reacting to a limited set of commands. This search for a more intuitive and efficient mode of input is now well underway with Microsoft’s ground-breaking foray into the new category of Surface Computing. The Surface is a 30-inch “coffee table” display that not only enables direct interaction with digital content, but also responds to natural gestures and physical objects.
We are familiar with touch-screens that facilitate access to content without the use of a separate keyboard or mouse, but this new concept expands the level of interaction by enabling users to “grab” digital information and manipulate it by touch and gesture. The technology also facilitates multi-touch interaction: not only can both hands be used to manipulate content more intuitively, but with as many as a dozen points of contact recognized by the system simultaneously, several people can use the Surface at any given time, making real-time collaborative input a reality.
One of the most compelling features of the technology is its ability to recognize physical objects through identification tags similar to bar codes and then trigger different types of responses, including transfer of digital content. The potential for this application is huge. For example, by simply placing a digital camera on the table top, its contents could automatically be displayed. Similarly, non-digital objects could also be tagged to render useful information - details of a particular wine could be relayed to the table when the bottle is set down, along with recommendations linked to the restaurant menu on what meals complement the beverage, or even images of the vineyard where it originated. Add to this the fact that you don’t have to leave the table to order your next drink or to pay for the meal - just drop a credit card on the Surface - and you have a very civilized interactive dining experience.
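Microsoft hasn’t published how the Surface software handles tagged objects, but the idea described above - a tag ID read from the table top triggering a specific response - can be sketched in miniature. Everything here (the tag IDs, handler names, and `on_object_placed` function) is a hypothetical illustration, not the Surface’s actual API:

```python
# Hypothetical sketch: a recognized tag ID is looked up in a dispatch table
# and mapped to a response, such as displaying the object's related content.

def show_camera_photos(label):
    # e.g. pull images off the camera and spread them across the display
    return f"Displaying photos from {label}"

def show_wine_details(label):
    # e.g. tasting notes, vineyard images, and menu pairings
    return f"Showing details and menu pairings for {label}"

# Identification tags (read optically, like bar codes) mapped to handlers.
TAG_HANDLERS = {
    "camera-0042": show_camera_photos,
    "wine-0917": show_wine_details,
}

def on_object_placed(tag_id, label):
    """Called when the system detects a tagged object on the table."""
    handler = TAG_HANDLERS.get(tag_id)
    if handler is None:
        return f"Unrecognized object: {label}"
    return handler(label)

print(on_object_placed("camera-0042", "digital camera"))
print(on_object_placed("wine-0917", "Pinot Noir"))
```

The dispatch-table pattern is the point here: adding a new kind of tagged object means registering one more handler, which is what makes the “place anything on the table” scenarios in the article plausible to build.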
A roll-out of the product is planned for late 2007, and initially the Surface is aimed at public spaces in hotels, retail establishments, restaurants and public entertainment venues, where it becomes a type of interactive information portal. It will contain basic applications, including photos, music and “virtual concierge” functions that will enable hotel guests to view maps, browse restaurant menus or book tickets without leaving the table.
In the retail sector, applications could include access to information on a particular product just by placing it on the table. T-Mobile is investigating the possibility of applying this to phones, where product features, prices and phone plans would appear when the device was placed on the counter.
Gaming is another area where this type of product has serious potential. Beyond this, Microsoft also sees applications for the technology within the home, where you could potentially access digital content while brushing your teeth in front of the bathroom mirror, or go shopping online using the kitchen counter.
Microsoft’s launch represents the culmination of a project first conceived back in 2001 and is the first major marketing of this technology, although others have been working in the field of multi-touch screens for some time - notably NYU’s Jeff Han, who has developed a 16-foot-long rear-projected interactive display that can sense multiple points of touch simultaneously. Apple’s iPhone also boasts multi-touch capabilities.
Microsoft Surface runs on the Windows Vista OS, but as you might expect, there is a lot more going on beneath the surface to enable the complex touch input, including five infrared cameras that identify objects touching the surface and a DLP projector that produces the on-screen image.
Consumer versions are expected to be three to four years away, with cost estimates ranging anywhere from $5,000 to $10,000.