While there is no shortage of ways to shoot in 3D, doing so typically involves devices that are bulkier, more complex, and more expensive than traditional digital cameras. However, a team of engineers has discovered a way to unlock the previously unrecognized 3D imaging capability of ordinary digital camera technology, by repurposing existing components.
Demonstrated in a proof-of-concept lab experiment, the researchers from Duke University say they’ve been able to show how the technology already present in many current digital cameras – namely the image stabilization and focus modules – could be used to extract depth-of-field information from a "single shot" image, without the need for additional hardware.
In their experiment, the engineers controlled a small deformable mirror (a reflective surface which can direct and focus light) using a beam splitter to emulate the workings of a camera. While this resulted in a comparatively long exposure time, it allowed them to investigate how the equivalent image stabilization technology in modern cameras, which typically removes wobbles or blur by moving a lens to counter movement, could instead help record 3D information.
The researchers say the key to achieving their 3D imaging is performing three functions simultaneously: activating the stabilization module to move relative to a fixed point, sweeping through the focus range with a sensor, and collecting light over a set period of time. This allows a single data-rich file to preserve image details while also granting each focus position a different optical response.
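The idea of a single exposure accumulating light while the focus position sweeps can be illustrated with a toy simulation. The sketch below (assumptions, not the paper's method: a 1D "scene" of two point sources, a made-up blur model where blur width grows with the distance between the focus position and an object's depth) shows how one data-rich exposure ends up encoding a different optical response for each depth:

```python
import numpy as np

def gaussian_kernel(sigma, radius=10):
    """Normalized 1D Gaussian blur kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

# Toy 1D "scene": two point sources at different (assumed) depths.
scene_len = 64
sources = {20: 0.2, 45: 0.8}  # pixel position -> normalized depth

# Sweep the focus range while the shutter stays open: each focus
# position blurs each depth differently, and all of the light
# accumulates into a single exposure.
focus_positions = np.linspace(0.0, 1.0, 8)
exposure = np.zeros(scene_len)

for f in focus_positions:
    frame = np.zeros(scene_len)
    for pos, depth in sources.items():
        # Illustrative model: sharper when the focus matches the depth.
        sigma = 0.3 + 4.0 * abs(f - depth)
        impulse = np.zeros(scene_len)
        impulse[pos] = 1.0
        frame += np.convolve(impulse, gaussian_kernel(sigma), mode="same")
    exposure += frame  # light collected over the whole sweep

exposure /= len(focus_positions)
```

Because each source is only sharp near its own in-focus position, the shape of the accumulated blur around each source encodes its depth, which is the information a depth-recovery step could later exploit.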
Each file produced contains both the all-in-focus, full-resolution 2D image and a depth map describing the focus position of each pixel. Using a commercial 3D graphics engine similar to those used to render 3D video games, the image can then be processed with the depth map to produce 3D imagery.
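Turning an image plus a per-pixel depth map into 3D geometry can be sketched as back-projecting each pixel into space. The snippet below is a minimal illustration, not the team's pipeline: it assumes a simple pinhole camera with hypothetical intrinsics (`fx`, `fy`) and treats the depth map as metric depth, whereas the paper's map records focus positions:

```python
import numpy as np

def depth_map_to_points(image, depth, fx=500.0, fy=500.0):
    """Back-project pixels through a pinhole model (hypothetical intrinsics).

    Returns an (N, 3) array of 3D points and the matching pixel values,
    which a 3D engine could render as a colored point cloud or mesh.
    """
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0          # assume the principal point is centered
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx              # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = image.reshape(points.shape[0], -1)
    return points, colors

# Toy example: a flat gray 4x4 image over a depth gradient.
img = np.full((4, 4), 0.5)
depth = np.linspace(1.0, 2.0, 16).reshape(4, 4)
pts, cols = depth_map_to_points(img, depth)
```

In practice the graphics engine handles this projection and shading internally; the sketch only shows why a 2D image and a depth map together are sufficient input for 3D rendering.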
This approach, which does not impact the quality of the 2D image, differs from the more traditional multiple-image method of shooting in 3D, or other single-shot approaches, which tend to result in poorer quality 2D images, or require significantly more complex hardware.
While the work is still at the early lab-based stage and relies on surrogate technologies, the researchers say these techniques could one day be used in consumer products. In addition to offering 3D photography, the technology could also lead to a more efficient autofocusing process.
The paper "Image translation for single-shot focal tomography" was recently published in the journal Optica.
Source: Duke University