Disney Research has developed an algorithm that can generate 3D computer models from 2D images in detail that, it says, is sufficient to meet the needs of video game and film makers. The technology requires multiple images that capture the scene from a variety of vantage points.
The 3D model is somewhat limited in that it is only coherent within the field of view encompassed by the original images. It does not appear to fill in data.
However, judging from Disney Research's demo video, the detail achieved is incredibly impressive. The team used a hundred 21-megapixel photos for each of its demo models, captured by moving the camera along a straight line between shots. Though this approach makes the data easier to process, the team says the algorithm can also be applied to less regimented sets of images.
Unlike other systems, the algorithm calculates a depth value for every pixel, and it proves most effective at the edges of objects.
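To make the per-pixel idea concrete, here is a heavily simplified, hypothetical sketch of dense disparity estimation between two views. It is not Disney's actual method (which fuses around a hundred high-resolution images); it only illustrates what "a depth value for every pixel" means, since disparity is inversely proportional to depth:

```python
# Toy per-pixel disparity search on a pair of 1-D scanlines.
# Illustrative sketch only -- Disney's actual algorithm works on dense
# sets of ~100 high-resolution photos, not a single stereo pair.

def disparity_map(left, right, max_disp=4, window=1):
    """For every pixel in `left`, find the horizontal shift into `right`
    that best matches a small window around it (sum of absolute differences).
    Larger disparity = closer object; depth is proportional to 1/disparity."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            cost = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x + w - d
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
                else:
                    cost += 255  # penalize out-of-bounds samples
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities

# A bright "edge" at index 5 in the left view appears at index 3 in the
# right view, i.e. a disparity of 2.
left  = [10, 10, 10, 10, 10, 200, 200, 10, 10, 10]
right = [10, 10, 10, 200, 200, 10, 10, 10, 10, 10]
print(disparity_map(left, right)[5])  # prints 2
```

Note that the matching signal is strongest exactly where intensities change, which is one intuition for why depth edges can be localized well at object boundaries.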
The algorithm demands less of computer hardware than would ordinarily be the case when constructing 3D models from high-res images, in part because it does not require all of the input data to be held in memory at once.
The system is not yet perfect. Depth measurements are less accurate than they would be if captured with a laser scanner, and the researchers admit that more work is needed to handle surfaces which vary in reflectance.
Alexander Sorkine-Hornung of Disney Research suggests that the algorithm could also be used in the manipulation of 2D images, by removing backgrounds or creating new 3D scenes from a combination of source images, for instance.
The team will demonstrate the technology this week at SIGGRAPH 2013. Its video is below.
Source: Disney Research
http://www.123dapp.com/catch
With the native app you can supposedly improve on its guess at stitching the photos together (which I find pretty poor) by manually identifying points on the photos that are coincident. If, however, the points you tell it are coincident deviate too much from its current guess, it responds with a big red notice that in effect says, "Sorry, but that just can't be right," and it rejects your specification. Incredibly frustrating. In nearly every case so far, manual stitching produces a worse result than its poor guess. This program is far from ready for prime time. I'm not sure what they could be thinking releasing it for public use. It's a black eye rather than a feather in the cap.
Granted, it is a hard problem, but they haven't solved it. There are many supposed successes in a gallery on their site, but they only appear successful because the photos are projected onto the captured mesh. Look at the mesh by itself to see the real quality of the capture. It ain't sterling, and it completely lacks the detail that the projected photos fool you with.
Caution: if you use it for a small object, shooting with a macro lens or setting, you absolutely _must_ remove all pincushion or other lens distortion (which may not be at all evident from your pictures) with some correction utility. GIMP can do it if you photograph and correct a rectangular grid and then apply the result to all photos, but it is slow and you must correct one shot at a time, which takes forever. Otherwise your attempts to manually stitch will be an exercise in maddening, futile perversity: the app simply refuses to accept your input or modifies it in bizarre ways, and any results will be worthless. I wasted an enormous amount of time before the possibility of distortion pollution dawned on me. Mea culpa for that one.
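For anyone curious what that correction actually does, here is a minimal sketch of one-term radial distortion and its numerical inverse. This is not what 123D Catch or GIMP use internally (real tools use fuller models with more coefficients and a calibrated image center); it just shows the basic idea that each pixel is displaced radially by an amount growing with distance from the center:

```python
# Minimal sketch of one-term radial lens distortion (a Brown-Conrady
# style model truncated to a single coefficient k1). Hypothetical and
# simplified -- real correction tools fit several radial and tangential
# coefficients from a calibration grid.

def distort(x, y, k1, cx=0.0, cy=0.0):
    """Apply radial distortion about center (cx, cy).
    k1 > 0 gives pincushion, k1 < 0 gives barrel distortion."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale

def undistort(x, y, k1, cx=0.0, cy=0.0, iters=20):
    """Invert `distort` by fixed-point iteration: guess the undistorted
    point, re-distort the guess, and nudge it by the residual."""
    ux, uy = x, y
    for _ in range(iters):
        dxp, dyp = distort(ux, uy, k1, cx, cy)
        ux += x - dxp
        uy += y - dyp
    return ux, uy

# Round-trip check: undistorting a distorted point recovers the original.
x0, y0 = 0.3, 0.4
xd, yd = distort(x0, y0, k1=0.1)
xu, yu = undistort(xd, yd, k1=0.1)
print(round(xu, 6), round(yu, 6))  # prints 0.3 0.4
```

Even a small k1 shifts matched points by many pixels near the image corners, which is exactly where stitching constraints get rejected, so correcting distortion before manual point-matching is the right instinct.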