Disney Research has developed an algorithm that generates detailed 3D computer models from 2D images, detailed enough, it says, to meet the needs of video game and film makers. The technology requires multiple images to capture the scene from a variety of vantage points.
The 3D model is somewhat limited in that it is only coherent within the field of view encompassed by the original images; the system does not appear to fill in areas the cameras never saw.
However, judging from Disney Research's demo video, the detail achieved is incredibly impressive. The team used a hundred 21-megapixel photos for each of their demo models. These were captured by moving the camera along a straight line for each shot. Though this approach makes it easier to process the data, the team says that the algorithm can be applied to less regimented sets of images.
Unlike other systems, the algorithm calculates depth for every pixel, proving most effective at the edges of objects.
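The article doesn't spell out the math, but with a camera translated along a straight line, a scene point's apparent shift between neighboring shots (its disparity) is inversely proportional to its depth. As a purely illustrative sketch, not Disney Research's actual algorithm, a per-pixel depth map could be derived from a per-pixel disparity map like this (function and parameter names are our own):

```python
import numpy as np

# Toy illustration: for a camera moved along a straight line,
# depth = focal_length * baseline / disparity for each pixel.
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Per-pixel depth (metres) from a per-pixel disparity map (pixels)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)  # zero disparity -> "at infinity"
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Four pixels whose disparities shrink with distance:
disparity = np.array([[4.0, 2.0],
                      [1.0, 0.5]])  # pixels of shift per camera step
print(depth_from_disparity(disparity, focal_length_px=2000.0, baseline_m=0.01))
# -> [[ 5. 10.]
#     [20. 40.]]
```

The hard part of the real system, estimating a reliable disparity for every single pixel, especially at object edges, is exactly what the toy above takes as given.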
The algorithm demands less of computer hardware than would ordinarily be the case when constructing 3D models from high-res images, in part because it does not require all of the input data to be held in memory at once.
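The memory-saving idea described above, streaming the input rather than holding roughly a hundred 21-megapixel images in RAM at once, can be sketched in a few lines. This is a generic out-of-core pattern under our own assumptions, not the paper's implementation; here each image is loaded, folded into a running result, and discarded:

```python
import numpy as np

# Hedged sketch of out-of-core processing: load one image at a time and
# fold it into an accumulator, so peak memory is ~one image, not all of them.
def accumulate_mean(image_names, load):
    """Running mean over many images, holding only one in memory at a time."""
    total = None
    count = 0
    for name in image_names:
        img = load(name).astype(np.float64)  # loaded on demand, then dropped
        total = img.copy() if total is None else total + img
        count += 1
    return total / count

# Usage with stand-in "images" produced on the fly:
fake = {f"img{i}": np.full((2, 2), float(i)) for i in range(1, 5)}
print(accumulate_mean(list(fake), fake.__getitem__))  # mean of 1..4 = 2.5
```

In practice a pipeline like this would read each photo from disk (e.g. via memory mapping) inside `load`; the point is simply that the reduction never needs the whole image set resident at once.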
The system is not yet perfect. Depth measurements are less accurate than they would be if captured with a laser scanner, and the researchers admit that more work is needed to handle surfaces which vary in reflectance.
Alexander Sorkine-Hornung of Disney Research suggests that the algorithm could also be used in the manipulation of 2D images, by removing backgrounds or creating new 3D scenes from a combination of source images, for instance.
The team is demonstrating the technology this week at SIGGRAPH 2013; its video is below.
Source: Disney Research