For augmented reality systems to recognize locations from a user's ground-level perspective, they typically first have to be "trained" using ground-level images of those same places. Sturfee's City AR system, however, works faster by using satellite imagery instead.
Announced this Tuesday, City AR begins by building a 3D digital mesh model of a city, based on high-resolution 2D satellite images that capture ground features such as building geometries, trees and roads. Each location within that model is in turn assigned a "visual fingerprint," resulting in a machine-readable "fingerprint map."
When a City AR-enabled smartphone app subsequently images one of those locations from the ground, the system's cloud-based computer vision algorithms are able to match the visual information from the app to one of the fingerprints on the map. It is thus able to determine where within the city the user is, and which buildings their phone is "looking" at, allowing text or graphics to appear onscreen in the appropriate places.
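Sturfee hasn't published the details of its algorithms, but the general idea of localizing a camera against a fingerprint map can be illustrated with a toy sketch. The snippet below is purely conceptual: the function names (skyline_fingerprint, build_fingerprint_map, localize), the binary "building mask" views standing in for renderings of the city mesh, and the simple nearest-neighbor matching are all assumptions for illustration, not Sturfee's actual method.

```python
import numpy as np

def skyline_fingerprint(view: np.ndarray, bins: int = 36) -> np.ndarray:
    """Reduce a rendered building-mask view (H x W) to a compact descriptor:
    for each of `bins` vertical slices, record the topmost row where
    geometry appears -- a crude 'skyline' signature of the scene."""
    h, w = view.shape
    skyline = []
    for cols in np.array_split(np.arange(w), bins):
        occupied = np.nonzero(view[:, cols].max(axis=1) > 0.5)[0]
        skyline.append(occupied.min() / h if occupied.size else 1.0)
    return np.asarray(skyline)

def build_fingerprint_map(rendered_views: dict) -> dict:
    """Map each pose (e.g. a lat/lon/heading tuple) to the fingerprint of the
    view rendered from the satellite-derived city model at that pose."""
    return {pose: skyline_fingerprint(v) for pose, v in rendered_views.items()}

def localize(query_view: np.ndarray, fingerprint_map: dict):
    """Return the pose whose fingerprint is closest (L2 distance) to the
    fingerprint of the query camera frame."""
    q = skyline_fingerprint(query_view)
    return min(fingerprint_map,
               key=lambda pose: np.linalg.norm(fingerprint_map[pose] - q))

# Toy usage: two synthetic 'rendered' views and a noisy query frame.
rng = np.random.default_rng(0)
view_a = np.triu(np.ones((64, 64)))   # stand-in for the rendering at pose A
view_b = np.tril(np.ones((64, 64)))   # stand-in for the rendering at pose B
fmap = build_fingerprint_map({("A", 0): view_a, ("B", 0): view_b})
query = np.clip(view_a + rng.normal(0, 0.1, view_a.shape), 0, 1)
print(localize(query, fmap))          # -> ('A', 0)
```

A production system would of course use far richer descriptors and full 6-degree-of-freedom pose estimation, but the core loop is the same: pre-compute signatures from the satellite-derived model, then match each incoming camera frame against them in the cloud.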
According to Sturfee CEO Anil Cheriyadat, it's a much more efficient technique than using camera-equipped vehicles or other methods to visually map cities from the ground.
"These are operationally-intensive approaches and very costly to scale," he says. "With our technology, we can create a machine-readable version of San Francisco in just a week, and detect and update any city changes even quicker."
Sturfee has so far mapped 15 cities across three continents, and has also signed a multi-year licensing agreement with Japan's second-largest cellular service provider, KDDI Corp.
City AR can be seen in use in the video below.
Source: Sturfee