
New technique creates 3D images through a single lens

Harvard researchers have found a way to create 3D images by juxtaposing two images taken from the same angle but with different focus depths (Image: Harvard)

The image infers depth from two pictures taken from the same angle but with different focus depths (Image: Harvard)

A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than relying on expensive hardware, the technique uses a mathematical model to generate images with depth, and it could find use in a wide range of applications, from more compelling microscopy imaging to a more immersive experience in movie theaters.

Nowadays, shooting a picture in 3D requires specialized hardware. Commercial cameras such as the Fujifilm W3 use two lenses to capture a subject from slightly different perspectives, while others like the Lytro use microlens arrays and absorbing masks to record the angle at which light enters the camera. With this information, the camera can change the focus or shift the perspective of a picture even after it has been taken.

The technique developed by Kenneth Crozier and Anthony Orth at SEAS achieves the same result using software alone. Their algorithm creates a 3D movie from two pictures taken by a stationary camera at different focus depths.

3D on the cheap

Our eyes perceive depth either through binocular parallax (each eye views the same object from a slightly different perspective) or through motion parallax (an object's apparent position shifts as the viewing position changes).

Attempting to create a 3D image from a single, stationary camera is extremely challenging because neither binocular nor motion parallax can be used to infer depth. It would be like trying to judge depth with only one eye, without moving your head at all.

The researchers' workaround is a mathematical model that calculates the angle at which light strikes each pixel. It does this by comparing the slight differences between two images taken from the same position but focused at different depths. The perspective-shifted views generated from that angular information can then be stitched together into an animation that gives the impression of a stereo image.
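To make the idea concrete, below is a rough numerical sketch of this kind of two-focal-plane calculation. It follows the continuity-equation reading of light-field moment imaging (the change in intensity between nearby focal planes constrains a per-pixel average ray angle), but the FFT-based Poisson solve, the Gaussian angular profile used to synthesize views, and all function names and parameter values are illustrative assumptions of this sketch, not the authors' published code.

```python
import numpy as np

def lightfield_moment_sketch(I1, I2, dz=1.0, sigma=0.3):
    """Sketch: estimate a per-pixel light-field moment (mean ray angle) from
    two images of the same scene focused at depths separated by dz, then
    synthesize approximate perspective-shifted views.

    Returns a function view(u, v) -> 2D array, the scene as "seen" from
    angular coordinates (u, v).
    """
    I = 0.5 * (I1 + I2)          # intensity estimate between the two planes
    dIdz = (I2 - I1) / dz        # axial derivative of intensity

    # Continuity equation: dI/dz + div(I * M) = 0. Assuming the flux I*M is
    # curl-free (I*M = grad(phi)), phi satisfies the Poisson equation
    # lap(phi) = -dI/dz, solved here with a periodic FFT solver.
    ny, nx = I.shape
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0               # avoid division by zero; DC offset is irrelevant
    phi = np.real(np.fft.ifft2(np.fft.fft2(-dIdz) / (-k2)))

    # Normalized first angular moment (mean ray angle) at each pixel.
    gy, gx = np.gradient(phi)
    eps = 1e-6 * I.max()
    Mx, My = gx / (I + eps), gy / (I + eps)

    def view(u, v):
        # Weight each pixel by how close its mean ray angle is to (u, v);
        # the Gaussian angular spread is an assumption of this sketch.
        w = np.exp(-((u - Mx) ** 2 + (v - My) ** 2) / (2 * sigma**2))
        return I * w

    return view
```

Alternating between, say, view(0.4, 0) and view(-0.4, 0) would give the kind of back-and-forth "wiggle" animation described above.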

The technique, which the researchers have dubbed "light-field moment imaging," allows single-lens cameras to produce 3D images; however, not all cameras will be up to the job. The crucial factor is that the camera's aperture must be wide enough to let in light from a wide range of angles. While a smartphone camera's aperture is too small, the researchers say a standard 50 mm lens on a single-lens reflex (SLR) camera will do the job nicely.

Applications

While this technique won't allow you to see around corners, it does a very good job of imaging translucent objects such as living cells, and so it could give biologists a more effective way to study cell behavior under the microscope.

Beyond biology, the technology could also be used to create motion parallax in movie theaters so that by moving your head you would actually be able to see a slightly different perspective, thereby giving the impression of 3D depth.

"Using light-field moment imaging, we're creating the perspective-shifted images that you'd fundamentally need to make that work – and just from a regular camera," Orth explains. "So maybe one day this will be a way to just use all of the existing cinematography hardware, and get rid of the glasses. With the right screen, you could play that back to the audience, and they could move their heads and feel like they're actually there."

The team's research appears in the journal Optics Letters.

The short video below shows two examples of how the imaging technique can be used to create the illusion of depth.

Source: Harvard

Seeing depth through a single lens | Harvard School of Engineering and Applied Sciences

5 comments
professore
This sort of claim looks like little more than false advertising. 3D or stereoscopic vision is also called binocular vision - and for a very good reason, since it relies upon two eyes which have some separation. The resulting images with slightly different viewing angles produce the full depth information. A single viewing point cannot do this, and providing a narrow depth of field to ensure an out-of-focus background is hardly a substitute. Photographers have used this technique since the early days of photography merely by using precise focussing with the lens opened to its fullest aperture - it was often considered an "arty trick". A different ploy which is even worse is the claim by some TV makers that they can "convert your old 2D movies to 3D"....clearly impossible since half the necessary information was never recorded in the first place!
John Waller
T'ain't necessarily impossible to get 3d out of a 2d movie. It depends on the movie. You can even see 3D without glasses. Again, depending on the movie, the necessary information has been filmed. The ideal movie is one that pans across a scene, such as a cityscape or landscape, or even in a room. The panning records the slight differences necessary to create a 3d image, and because it is a moving image, the eyes receive the data at different times. The most famous example of this is the opening scene of "The Sound of Music". As it opens the camera pans across the mountains, and you will start to see them in 3d. Now, this is the 3D of a still photo, not the 3d of a hologram, so you cannot see around corners. The Lord of The Rings trilogy has scenes with this effect. You can be there watching it and at the right scene it will start popping out. Another common example of this is when a film or tv show flies over a city with the camera pointed down. As the cityscape slides below you, you get a decided feel of looking down on a 3D set.
This is because as the camera slides across the field of view each frame gets a slightly different view of the subject which creates the illusion of 3d, just as 3d photography does.
Look at the provided video, it definitely creates a 3d illusion. And with few exceptions, all 3d imaging is an illusion.
nomel_
How is this different from "depth from defocus"? Are they not estimating/using the point spread function or something?
professore said: > A single viewing point cannot do this and providing a narrow depth of field to ensure an out-of-focus background is hardly a substitute.
It's not a single viewpoint. By moving the lens forward or backwards (focus), you're changing the perspective. One perspective will appear to be closer than the other. I imagine this *only* works well with a single lens system.
WithoutVisionThePeopleAreLost
This is not new at all. See the 1993 CVPR paper from CMU titled "Depth from Focusing and Defocusing." And the basic concept pre-dates this paper by some years.
Yaman Asaf
What is the status of this software? Does anyone know if there is a commercial copy of it to test out?