Mobile Technology

Google's Project Tango: A smartphone that sees your surroundings


Smartphones are, well, smart, but they aren't very aware of their surroundings. This may seem trivial, but when it comes to working with people in what we like to call the “real” world, a mobile device that doesn't understand much outside of its camera view has only limited usefulness. With these limitations in mind, Google’s Project Tango is working on a smartphone that can map its environment in 3D in real time, giving user and device some common ground.

People use a host of visual and non-visual cues to understand their world and navigate in it. In fact, there are so many of these cues and they interact so seamlessly that we’re almost unaware of it. We rarely notice when our depth perception ends and we start relying on perspective to judge distance, or how we avoid bumping into things using peripheral vision and our sense of movement, shape, and body image.

The electronic fly in the digital ointment is that while we’re used to living in three dimensions, our smartphones and other devices are basically two-dimensional, with no sense of the world outside of their screen view. Drawing on a decade of technological advances in the fields of robotics and computer vision, the academic and industrial partners that make up Google’s Project Tango aim to create a new mobile phone that is capable of building up a 3D map of the world around it as a way of enhancing the interaction between device and user.

Diagram of the Tango prototype

“Project Tango strives to give mobile devices a human-like understanding of space and motion through advanced sensor fusion and computer vision, enabling new and enhanced types of user experiences – including 3D scanning, indoor navigation and immersive gaming,” says Johnny Lee, Technical Program Lead, Advanced Technology & Projects at Google, in a press release from Tango partner Movidius. “Movidius has provided a key component towards enabling access to these features in a small mobile platform with a chip designed with visual sensing and battery life in mind. We look forward to continuing our collaboration with Movidius as these new applications evolve and new device designs come to market.”

According to Google, the idea behind Tango is to make a phone that doesn't rely on simple mapping technology to sort out where it is and what’s going on, but instead builds up a detailed 3D model that it can navigate through. For example, it could build a 3D map of a room that could be consulted while furniture shopping; directions to someone’s door could tell you to go through the gate and down the stairs at the back of the house; shopping lists could locate the tinned peaches down to the shelf; and the visually impaired would have a digital guide “dog.”
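The sensor-fusion idea underpinning this can be sketched in a few lines: each camera frame's depth measurements are expressed relative to the device, and the device's estimated pose (from motion tracking) maps them into a shared world frame, so the map grows as the phone moves. The sketch below is purely illustrative and assumes nothing about Tango's actual APIs; `PointCloudMap`, `pose_to_matrix`, and the flat-list point cloud are hypothetical simplifications.

```python
# Hypothetical sketch (not Tango's actual API): fusing per-frame depth
# points with the device's estimated pose to build a world-space map.
import math

def pose_to_matrix(yaw, tx, ty, tz):
    """A simple 3x4 pose: rotation about the vertical axis plus translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0, tx],
            [s,  c, 0, ty],
            [0,  0, 1, tz]]

def transform(point, pose):
    """Map a camera-frame point into the world frame using the pose."""
    x, y, z = point
    return tuple(r[0] * x + r[1] * y + r[2] * z + r[3] for r in pose)

class PointCloudMap:
    """Accumulates depth measurements from many frames into one world frame."""
    def __init__(self):
        self.points = []

    def add_frame(self, depth_points, pose):
        # Each frame's points are relative to the camera; the device pose
        # (from motion tracking) places them consistently in world space.
        self.points.extend(transform(p, pose) for p in depth_points)

# A device at the origin, and the same device after moving 1 m forward,
# see the same wall point at different camera-frame depths:
world = PointCloudMap()
world.add_frame([(0.0, 0.0, 2.0)], pose_to_matrix(0.0, 0.0, 0.0, 0.0))
world.add_frame([(0.0, 0.0, 1.0)], pose_to_matrix(0.0, 0.0, 0.0, 1.0))
print(world.points)  # both observations land on (0.0, 0.0, 2.0) in world space
```

The key property shown is that the map is pose-anchored: two views of the same surface, taken from different positions, coincide once transformed into the world frame.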

Google is offering 200 prototype development kits to qualified applicants to help improve the device and come up with new apps to exploit its capabilities

Currently, Project Tango is a 5-inch Android prototype packed with new technology that makes it capable of building maps of its surroundings using a quarter-million 3D measurements per second. According to Google, this is nowhere near a marketable device, but is aimed at further development. As part of this process, Google is offering 200 prototype development kits to qualified applicants to help improve the device and come up with new apps to exploit its capabilities.
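As a back-of-the-envelope check on that figure, a quarter-million 3D measurements per second adds up quickly. Assuming (our assumption, not a Tango specification) that each point is stored as three 32-bit floats:

```python
# Rough data-rate estimate for the stated sensing throughput.
points_per_second = 250_000
bytes_per_point = 3 * 4                      # x, y, z as 32-bit floats
rate = points_per_second * bytes_per_point
print(rate)                                  # 3000000 bytes/s, i.e. ~3 MB/s raw
```

Roughly 3 MB of raw point data per second, which is why doing this kind of processing on a phone takes purpose-built, power-efficient vision hardware such as the Movidius chip mentioned above.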

Google expects to distribute all 200 units by March 14.

The video below explains Project Tango.

Sources: Project Tango, Movidius
