A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than relying on expensive hardware, the technique uses a mathematical model to generate images with depth, and could find use in a wide range of applications, from more informative microscopy to a more immersive experience in movie theaters.
Nowadays, shooting a picture in 3D requires specialized hardware. Commercial cameras such as the Fujifilm W3 use two lenses to capture a subject from slightly different perspectives, while others like the Lytro use microlens arrays and absorbing masks to record the angle at which light hits the lens. With this information, the camera can do things like change the focus or perspective of a picture even after it has been taken.
The technique developed by Kenneth Crozier and Anthony Orth at SEAS achieves the same result, but using software only. Their algorithm creates a 3D movie using two pictures taken from a stationary camera but at different focus depths.
3D on the cheap
Our eyes perceive depth either through binocular parallax (viewing the same object from two slightly different perspectives, one per eye) or through motion parallax (the apparent shift in an object's position as the viewer's vantage point changes).
Attempting to create a 3D image from a single, stationary camera is extremely challenging because neither binocular nor motion parallax can be used to infer depth. It would be like trying to judge depth with only one eye, without moving your head at all.
The researchers' workaround was to use a mathematical model to calculate the angle at which light strikes each pixel. The model does this by comparing the slight differences between two images taken from the same position but focused at different depths. The two images can then be stitched together into an animation that gives the impression of a stereo image.
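To make the idea concrete, here is a minimal NumPy sketch of this kind of reconstruction. It is an illustrative reimplementation based on the description above, not the researchers' published code: the difference between the two images approximates how intensity changes with focus depth, a continuity relation links that change to the average ray angle (the "moment") at each pixel, and a Fourier-domain Poisson solve recovers those moments. All function names and the finite-difference/FFT details are assumptions for this sketch.

```python
import numpy as np

def light_field_moments(img_a, img_b, dz=1.0):
    """Estimate per-pixel average ray angles (Mx, My) from two images
    of the same scene focused at depths separated by dz.

    Assumed model: the focal derivative dI/dz satisfies a continuity
    equation dI/dz = -div(I * M). Writing I*M as the gradient of a
    scalar potential phi turns this into a Poisson equation, which is
    solved here with an FFT under periodic boundary conditions."""
    I = 0.5 * (img_a + img_b) + 1e-9            # mean intensity (avoid /0)
    dI_dz = (img_b - img_a) / dz                # finite-difference derivative

    # Solve laplacian(phi) = -dI/dz in the Fourier domain.
    ny, nx = I.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    denom = -(2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                           # avoid division by zero at DC
    phi_hat = np.fft.fft2(-dI_dz) / denom
    phi_hat[0, 0] = 0.0                         # pick the zero-mean solution
    phi = np.real(np.fft.ifft2(phi_hat))

    gy, gx = np.gradient(phi)                   # I*M = grad(phi)
    return gx / I, gy / I                       # normalized moments Mx, My

def shifted_view(img, Mx, My, s):
    """Render a perspective-shifted view by displacing each pixel along
    its average ray angle, scaled by the virtual viewpoint shift s."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    src_x = np.clip(np.round(xx - s * Mx).astype(int), 0, nx - 1)
    src_y = np.clip(np.round(yy - s * My).astype(int), 0, ny - 1)
    return img[src_y, src_x]
```

Rendering `shifted_view` for a small range of shift values `s` and playing the frames in sequence produces the back-and-forth "wobble" animation that creates the impression of depth.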
The technique, which the researchers have dubbed "light-field moment imaging," allows single-lens cameras to produce 3D images; however, not all cameras will be up to the job. The crucial factor is that the camera's aperture must be wide enough to let in light from a wide range of angles. While a smartphone camera is too small, the researchers say a standard 50 mm lens on a single-lens reflex (SLR) camera will do the job nicely.
While this technique won't allow you to see around corners, it does a very good job of imaging translucent objects such as living cells, and so it could give biologists an interesting new platform for studying cell behavior under the microscope.
Beyond biology, the technology could also be used to create motion parallax in movie theaters so that by moving your head you would actually be able to see a slightly different perspective, thereby giving the impression of 3D depth.
"Using light-field moment imaging, we're creating the perspective-shifted images that you'd fundamentally need to make that work – and just from a regular camera," Orth explains. "So maybe one day this will be a way to just use all of the existing cinematography hardware, and get rid of the glasses. With the right screen, you could play that back to the audience, and they could move their heads and feel like they're actually there."
The team's research appears in the journal Optics Letters.
The short video below shows two examples of how the imaging technique can be used to create the illusion of depth.