Google Lens puts AI to work identifying the world around you
The first new product announced during the Google I/O keynote today is Google Lens, which combines image recognition, machine learning and Google's vast preexisting platform to help you understand your surroundings. With Google Lens, you can point your phone at an object to automatically access more information about it.
Google CEO Sundar Pichai gave a few examples: Pretend you're looking at a flower and you'd like to find out what kind it is. Open Google Lens, point the camera at the flower, and Google responds with the type and background information (great for allergy sufferers). Or, point the camera at a brick-and-mortar storefront. Google Lens can "read" the sign and tap into Maps and business listings to provide hours, descriptions, reviews and more. You could even point Google Lens at a Wi-Fi network name and password to prompt your phone to automatically connect to the network.
In conjunction with Google Assistant (which Google also confirmed is coming to iOS), Google Lens can help you automatically perform tasks like translating signs and keeping track of events you spot over the course of the day. Point the camera at an event marquee, for example, to learn more about the performer, add the event to your calendar or purchase tickets.
Google Lens will be incorporated into Google Assistant and Photos first (where it can identify things like buildings and pieces of art from your photo album), and is then expected to hit other apps.
Though Google didn't mention it, this kind of AI service sounds like a natural fit for future augmented reality smartglasses.
Google's I/O event is happening now, and we expect to learn much more about the tech giant's new undertakings in the coming days. We'll post more details as they become available.