Photography

Researchers unlock 3D vision from existing digital camera technology

Researchers have unlocked the previously unrecognized 3D imaging capability of regular camera technology
Simon Crisp/Gizmag.com

While there is no shortage of ways to shoot in 3D, these typically involve devices that are bulkier, more complex, and more expensive than traditional digital cameras. However, a team of engineers has discovered a way to unlock the previously unrecognized 3D imaging capability of ordinary digital camera technology by repurposing existing components.

In a proof-of-concept lab experiment, researchers from Duke University say they've shown how technology already present in many current digital cameras – namely the image stabilization and focus modules – could be used to extract depth information from a "single shot" image, without the need for additional hardware.

In their experiment, the engineers controlled a small deformable mirror (a reflective surface that can direct and focus light), paired with a beam splitter, to emulate the workings of a camera. While this setup resulted in a comparatively long exposure time, it allowed them to investigate how the equivalent image stabilization technology in modern cameras – which typically removes wobble or blur by moving a lens to counter camera movement – could instead help record 3D information.

The researchers say the key to achieving their 3D imaging is performing three functions simultaneously: moving the stabilization module relative to a fixed point, sweeping through the focus range, and collecting light on the sensor over a set period of time. This lets a single data-rich file preserve image details while giving each focus position a different optical response.
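As a rough illustration of that simultaneous capture – a hypothetical Python sketch under heavily simplified optics, not the Duke team's code – imagine the scene decomposed into a stack of depth planes. During one long exposure, the focus steps through the stack while the image is translated in lockstep, so each depth ends up encoded in the single recorded frame with its own blur-plus-shift signature. Here `scene_planes` is assumed to be a list of equal-sized grayscale float arrays, one per depth plane.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def translate(img, shift):
    """Shift an image by whole pixels (stand-in for moving the
    stabilization lens relative to a fixed point)."""
    return np.roll(img, shift, axis=(0, 1))

def render_at_focus(scene_planes, focus_idx, blur=1.5):
    """Toy optics model: the depth plane at focus_idx is sharp, and
    every other plane is blurred in proportion to its distance from
    the current focus position."""
    frame = np.zeros_like(scene_planes[0])
    for d, plane in enumerate(scene_planes):
        sigma = blur * abs(d - focus_idx)
        frame += gaussian_filter(plane, sigma) if sigma > 0 else plane
    return frame

def single_shot_capture(scene_planes, n_steps=32):
    """One long exposure: sweep the focus while the 'stabilization'
    translation runs in lockstep, accumulating everything into one
    data-rich frame in which each depth carries a unique optical
    response (a distinct blur-plus-shift combination)."""
    exposure = np.zeros_like(scene_planes[0])
    for step in range(n_steps):
        focus_idx = step * len(scene_planes) // n_steps
        frame = render_at_focus(scene_planes, focus_idx)
        exposure += translate(frame, (step, 0))  # shift grows with focus
    return exposure / n_steps
```

Because each depth plane is sharpest at a known focus step, and each focus step is tagged with a known translation, a decoding algorithm can in principle recover both the sharp image and the depth of each pixel from the one exposure.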

The files produced can then contain the all-in-focus, full-resolution 2D image, as well as a depth map that records the focus position of each pixel in the image. Using a commercial 3D graphics engine similar to those used to render 3D video games, the image can then be processed with the depth map to produce 3D imagery.
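To see how a depth map turns a flat photo into 3D imagery, here is a minimal depth-image-based rendering sketch – a hypothetical Python illustration, not the commercial engine the researchers used. Each pixel of the all-in-focus image is shifted horizontally in proportion to its depth, synthesizing the second view of a stereoscopic pair:

```python
import numpy as np

def synthesize_view(image, depth_map, max_disparity=8):
    """Depth-image-based rendering: shift each pixel horizontally by a
    disparity proportional to its depth, faking a second viewpoint
    from one image plus its depth map.
    image: (H, W, 3) float array; depth_map: (H, W) in [0, 1],
    where 1.0 is nearest to the camera."""
    h, w = depth_map.shape
    view = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Paint far-to-near so nearer pixels overwrite farther ones.
    order = np.argsort(depth_map, axis=None)
    ys, xs = np.unravel_index(order, depth_map.shape)
    for y, x in zip(ys, xs):
        nx = x + int(round(depth_map[y, x] * max_disparity))
        if nx < w:
            view[y, nx] = image[y, x]
            filled[y, nx] = True
    # Unfilled pixels are disocclusions: background the original
    # camera never saw, which a renderer must inpaint.
    return view, ~filled
```

The mask returned alongside the new view marks disocclusions – the edge artifacts the final comment below describes – which a production renderer would fill by inpainting from the surrounding background.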

This approach does not impact the quality of the 2D image, distinguishing it from the more traditional multiple-image method of shooting in 3D and from other single-shot approaches, which tend to produce poorer-quality 2D images or require significantly more complex hardware.

While the work is still at an early, lab-based stage and relies on surrogate technologies, the researchers say these techniques could one day be used in consumer products. In addition to offering 3D photography, the technology could also lead to a more efficient autofocusing process.

The paper "Image translation for single-shot focal tomography" was recently published in the journal Optica.

Source: Duke University

4 comments
xs400
So would this be possible via a firmware upgrade on a current camera?
Firehawk70
This is one of those things that I read and hit my forehead like, "Duh, of course, that makes perfect sense. Why hasn't anyone thought of this before?" The algorithms are somewhat complex, and perhaps it couldn't have really been executed until we had really good image stabilization and auto-focus devices, as well as the 3D rendering techniques for output. But the solution really seems fairly simple overall. If you manually focus your camera from far to near at a low f-stop (narrow depth of field) you can see how each level of depth is only in focus at a certain point in the range. Therefore, the piece in focus at each depth belongs there in the final 3D. I wonder if this technique would only work at certain focal lengths with a low f-stop, or if the image stabilization part makes up for that?
Oun Kwon
This sounds just like Lytro lightfield camera. Check it out https://www.lytro.com/
pjspot
This is what I call a two and a half dimensional approach and not true 3D. This process gives a depth map and a color image which can be used to make a 3D like image. This technique is not a true 3D image because a depth map always connects the foreground objects to the background. So when you go to create a 3D stereoscopic pair from the depth map where the perspective has to shift in one of the two images, you always get artifacts along the edges of the foreground objects where they are connected to the background, which does not look so good. When you have true 3D and you look behind the foreground objects, you can actually see what's behind them. This is not the case with depth mapping techniques and true 3D can't be done with a single lens. You need two lenses or a lens array to capture true 3D.