It may be based on apparently familiar technology, but Y Combinator startup Matterport reckons it's putting its 3D scanner, which it claims can capture real environments as 3D digital models 20 times faster than the competition, to innovative use.
"We turn reality into 3D models and our scanner is 20 times faster and 18 times cheaper than any other tool on the market," Matterport co-founder Michael Beebe claimed at the Y Combinator 2012 demo day at the end of March. And though that claim might be pushing it slightly - 3D scanners have been around for the better part of two decades - the technology demonstrated in Matterport's demo video is remarkable.
The handheld scanner, which at first glance might be mistaken for a Kinect sensor, is simply waved at the object or interior environment to be scanned in such a way as to take in the object's entire surface. The technology is not only speedy but also easy to use, apparently requiring no particular precision.
But so far there has been precious little hard information on how the Matterport scanner actually works, and its demo video is curiously lacking in in-focus close-up shots of the scanner itself. Two things do appear clear, though: the scanner does not emit any visible light, and in addition to capturing 3D forms, it is able to apply relatively accurate colors and textures to the surfaces it reconstructs, reflecting the real object's appearance.
An old website for the technology, dating from before the project was renamed Matterport, reveals that a Kinect sensor was indeed used as the basis for early prototypes. Though the scanner featured in Matterport's promo is clearly not a Kinect, it seems more than likely that the same principles are at work: an infrared laser projector and camera for depth and form sensing, and an RGB camera for surface detail. But getting from a Kinect-class sensor to the technology apparently on display in the promotional video must require some serious software behind the scenes.
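To give a rough idea of the underlying principle (an illustrative sketch only, not Matterport's actual pipeline), the snippet below shows how a single depth frame from a Kinect-class sensor could be back-projected into a colored 3D point cloud using an aligned RGB frame. The camera intrinsics and input arrays here are assumed placeholder values.

```python
# Minimal sketch (not Matterport's pipeline): back-project a depth frame from a
# Kinect-class sensor into 3D points and color them from an aligned RGB frame.
# The intrinsics (fx, fy, cx, cy) and the frames below are placeholder values.
import numpy as np

def depth_rgb_to_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """depth_m: HxW depth in meters (0 = no reading); rgb: HxWx3 aligned color."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth_m > 0                              # drop missing readings
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx                     # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)             # Nx3 points in meters
    colors = rgb[valid]                              # Nx3 per-point color
    return points, colors

# Synthetic data standing in for real sensor frames:
depth = np.full((480, 640), 2.0)                     # flat wall 2 m away
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
pts, cols = depth_rgb_to_point_cloud(depth, rgb, fx=575.8, fy=575.8, cx=319.5, cy=239.5)
print(pts.shape, cols.shape)                         # (307200, 3) (307200, 3)
```

Stitching many such per-frame point clouds, captured from different viewpoints, into a single textured model is presumably where that serious software comes in.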
Matterport is currently working with a handful of "beta partners" in fields such as real estate and video game development. We've reached out for more technical info on what makes this tick, and if we find out more, you'll be the first to know. Check out Matterport's promo video below, if you're curious.
Sources: Matterport, 3dcapture.it, VentureBeat
However, it looks like the quality is WAY lower than the ZPrinter's thousandth-of-an-inch resolution.
Wonder how the price compares? It looks like this device, as we see it here, would be good for making a basic, low-quality scan of an environment, but not good enough to scan in faces and such for computer graphics applications, unless the quality we're seeing in the video falls far short (by about 100x) of the finest it can handle. I'd like to know, though.
http://www.faro.com/focus/us
@Alex L. – Using the Xtion, we have a range limit of around 15 feet. That just means that, yes, you have to walk around to capture your rooms. By the way, no more carrying around a laptop in the latest version.
With regard to scanning a cathedral, we do have partners who will attempt such things with our system, so never say never.
We are still working out details on price, but as stated in the article, it will remain significantly cheaper to create a 3D model with our product than with existing tools.
Our focus is on reconstructing spaces. Scanning faces is really cool, but it's a different problem. You know who does this really well? ShapeShot (www.shapeshot.com). Their founder, Michael Raphael, is a great guy.
@Dave A. – Our product video will focus on explaining the end-to-end workflow so you can see that the entire process (scan to textured 3D model) occurs in the timeframe of minutes versus hours or days. Thanks for the kind words; we look forward to showing off the real product too!