Wearables

Google to trial new augmented reality glasses in public next month

Google has previously demonstrated the potential of its AR glasses to translate voice to text
Google

Seven years after the death knell was sounded for Google Glass, the tech giant is returning to the well of augmented reality to produce another set of smart eyewear. Promising that "it's early, and we want to get this right, so we’re taking it slow," the company is now preparing its new prototype AR glasses for public trials to ascertain their real-world potential.

Google has actually been developing this new set of AR glasses for a while, and back in May offered a glimpse of its progress so far. A video shared by the company showed the eyewear translating speech to text in real time and projecting it onto the wearer's view, helping them understand speech in other languages.

Google is set to begin publicly trialing its new AR glasses next month
Google

The scope of the new AR glasses may extend well beyond that, however. Google says there are limits to what it can learn from testing the glasses in a laboratory setting, and to offer functionality such as AR-guided navigation it needs to account for real-world factors like weather and traffic.

The trial will also help Google understand how the glasses might be used in the real world. The eyewear will feature in-lens displays, microphones and cameras, but the company insists it won't be used to collect photos and videos. Rather, that onboard hardware will be used for things like translating a menu or offering directions to a nearby cafe.

To begin with, these glasses will be publicly trialed by a handful of Google staff and selected testers, with the program to kick off next month.

You can check out the previous video on voice-to-text translation below.

Source: Google

5 comments
paul314
Does the tiny amount of hardware that fits in those frames really have enough storage and horsepower to do image processing on text and live translation, or location detection and navigation? Somehow I suspect that the video and sound aren't being stored explicitly, but there's still a fat data stream being sent to and from a server somewhere. (It's possible that you could train the hardware and store enough data for navigation and such within a very limited region, the way that old-school GPS units used to make you choose where you were before they could do any mapping or nav.)
AngryPenguin
@paul314 - I'm betting that anything that doesn't NEED to be attached to the glasses will be outsourced to the user's smartphone.
Claudio
Too bad they'll very likely NOT work in China, where Google doesn't work (unless you use a VPN).
Aermaco
I’m hoping this wonderful advance can use a virtual focal plane for the text display that is variable by the user to match the user's real focal plane, to avoid the nausea caused by accommodation and convergence disparity. It would be set at infinity while driving, and at whatever the conversational distance may be while face to face, etc.
Erik
Hmm, seems like a waste of money. They are just going to sell data about you collected from the glasses like they do for everything else.