Photography

Researchers develop genuine 3D camera

Researchers have developed a camera system that snaps multi-gigabit images at 30 frames per second over 360 degrees, then displays them as a single three-dimensional panorama.

Cameras that can shoot 3D images are nothing new, but they don't really capture three-dimensional moments at all - they actually record images in stereoscopic format, using two 2D images to create the illusion of depth. These photos and videos certainly offer a departure from their conventional two-dimensional counterparts, but if you shift your viewpoint, the picture remains the same. Researchers from Ecole Polytechnique Fédérale de Lausanne (EPFL) hope to change all that with the development of a strange-looking camera that snaps 360 degrees of simultaneous images and then reconstructs them in 3D.

The researchers have created two prototype models, both inspired by the multi-lens eyes of insects like the house fly. One has a lens head about the size of an orange and features over a hundred camera lenses - like the ones used in mobile phones - while the other is about the size of a golf ball and sports 15 lenses. Unlike stereoscopic photo or video cameras with a front-facing lens setup, the prototypes are able to record images from all around them.

The lenses point out through a hemispherical frame and are positioned in such a way that each captured image overlaps slightly with its neighbors. Sophisticated algorithms built into a dedicated hardware platform then judge the actual distance between the camera and the subjects in the frame and merge the many gigabits of photographic information captured at 30 frames per second into a 360-degree panorama.
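
The article doesn't disclose EPFL's algorithms, but the general principle - recovering distance from the disparity between overlapping views - can be illustrated with a toy block-matching sketch. Everything below (names, parameters, the grayscale numpy inputs) is an assumption for illustration, not the team's implementation:

```python
import numpy as np

def depth_from_overlap(left, right, baseline_m, focal_px,
                       max_disparity=32, block=8):
    """Estimate per-block depth where two adjacent lens images overlap,
    via naive block matching: depth = focal_px * baseline_m / disparity."""
    h, w = left.shape                      # grayscale images, same size
    depth = np.zeros((h, w))
    for y in range(0, h - block, block):
        for x in range(max_disparity, w - block, block):
            patch = left[y:y + block, x:x + block]
            # Slide along the same row of the other image; keep the best match
            costs = [np.abs(patch - right[y:y + block, x - d:x - d + block]).sum()
                     for d in range(1, max_disparity)]
            d = 1 + int(np.argmin(costs))  # disparity in pixels
            depth[y:y + block, x:x + block] = focal_px * baseline_m / d
    return depth
```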

"With this invention, we solved two major problems with traditional cameras," said Professor Pierre Vandergheynst. "The camera angle, which is no longer limited thanks to the camera's ability to film in 360 degrees and in real time; and the depth of field, which is no longer limiting thanks to the 3D reconstruction."

The researchers report that images are captured in real time and without distortion, and that users can choose to snap a single shot from a particular lens or have them all work together to produce the 360-degree, three-dimensional panorama.

The team's Professor Yusuf Leblebici said that the "work is likely to change the entire field of image acquisition, with a huge range of potential applications" including movie-making and immersive games design.

The project is a collaboration between EPFL's Signal Processing Laboratory - which authored the algorithms that calculate the distance between the camera and its subjects and assemble all of the images into one 360-degree panorama - and the Microelectronic Systems Laboratory - which developed the apparatus and took care of the processing needs.

In the following video, Vandergheynst gives a short explanation of the technology:

8 comments
jimbo92107
They stole my hat!
Joe Legeckis
But 3D requires two viewpoints of the same object - slightly off-angle views rotated about the object's axis, not the camera's axis.
This is interesting, but I don't see how this solves anything having to do with 3D films, as the film being projected would have to be displayed around your head, not to multiple people around one display.
Mindbreaker
It would seem better suited as a robot head or an airport facial recognition camera.
kalqlate
@jimbo92107 - You are correct... in terms of how it needs to be VIEWED by humans with two eyes.

However, for a computer that can capture a scene from a single perspective, but from many outward angles, the DIFFERENCE computed between the overlapping portions of the scene picked up by adjacent cameras will reveal the depth to the computer numerically. Now then, using this computed depth information, a STEREOSCOPIC version of the scene can be computed for any desired angle, which, of course, can be presented as stereoscopic 3D for humans.
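
A toy version of the reprojection step described above, assuming the per-pixel depth has already been computed. The function name, the fixed eye separation and focal length, and the naive forward warp are all assumptions - and the warp leaves holes where occluded surfaces should appear, which a real renderer would have to fill:

```python
import numpy as np

def synthesize_second_eye(image, depth, eye_sep_m=0.065, focal_px=1000):
    """Forward-warp one view into a virtual second-eye view using per-pixel
    depth: closer pixels (smaller depth) receive a larger horizontal shift."""
    h, w = image.shape[:2]
    right = np.zeros_like(image)
    disparity = (focal_px * eye_sep_m / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]       # shift pixel toward the virtual eye
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right                           # (image, right) form a stereo pair
```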

A few months ago, I actually designed my own version of this... the configuration of mine looks just like theirs. I'm not tooting my own horn, though - this is just the most reasonable design. Now, regarding your concern, they could create a system configured with two such 360 cameras and, without computation, take the two images from corresponding angles on each of the cameras to get immediate stereoscopic images, but that would limit the view to discrete viewing angles. To allow smooth flow from angle to angle, a set of computations would still be involved. So, they avoid the extra expense and solve the depth problem by computing it from the difference between the overlapping portions of two lenses.

They will have a REAL WINNER on their hands if they or someone else configures the deployment of such cameras to work like this:

1) One or more of these cameras, fitted with GPS (so they can be stationary or mobile), feed all of their raw images from all lenses to a server.

2) ANY NUMBER of viewers could independently control their view by continuously sending their desired 3D position (X, Y, and Z (height)) and 3D target, including focus, to the server.

3) The server would determine which images from which 360 cameras are required to compute the stereoscopic pair to be sent to any particular viewer.

4) If this set of images is sent to the viewer's device, and the viewer's device is then responsible for computing the final stereoscopic pair, this would greatly reduce the load on the servers and allow a large number of viewers to tap into such a computed-holographic event, each with independent navigation and view. (A rough sketch of this flow follows.)
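
If it helps, here is a minimal sketch of steps 1)-4); the message fields, the camera records, and the nearest-camera selection policy are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ViewRequest:
    """Step 2: sent continuously by each viewer - desired eye position,
    look-at target, and focus distance."""
    position: tuple    # (x, y, z) in meters
    target: tuple      # (x, y, z) of the point being looked at
    focus_m: float

def select_cameras(request, cameras, k=2):
    """Step 3, server side: pick the k GPS-located 360 cameras nearest the
    requested viewpoint; their raw lens images (step 1) go to the viewer."""
    def dist_sq(cam):
        return sum((c - p) ** 2 for c, p in zip(cam["gps_xyz"], request.position))
    return sorted(cameras, key=dist_sq)[:k]

# Step 4: the viewer's device receives those raw images and computes the
# final stereoscopic pair itself, keeping the per-viewer load off the server.
cams = [{"id": 1, "gps_xyz": (0.0, 0.0, 2.0)}, {"id": 2, "gps_xyz": (50.0, 0.0, 2.0)}]
req = ViewRequest(position=(5.0, 1.0, 1.7), target=(10.0, 0.0, 1.0), focus_m=8.0)
print(select_cameras(req, cams))           # camera 1 is nearest, so it comes first
```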
kelvint63
This would work well at race tracks; if the cameras are positioned in a linear fashion, every inch of the track can be covered at all times. For international races, all the broadcasters could pull their own live feeds from anywhere on the track and not have to rely on the host broadcaster and what it feels the rest of the world should be focusing on (bonus: never miss any crashes, either). It could even get to the point where individuals can log on and follow their favorite driver/rider for the entire race.
christopher
\"Genuine 3D\"? I don\'t think so - it generates a dataset that represents a 360-degree lateral and 108-degree vertical view from a particular point someplace. You might be able to semi-fake an actual 3D view with suitable headset or glasses and some software - but it would look like the edges of everything are wrong, and if you moved away from the spot the photo was taken - nothing would have any backs to it.

Eyes are a few inches apart. These lenses are a few millimeters apart. The correct edges of every non-flat object would not be visible to the camera, but would be visible to someone with normal eyes.
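
For a sense of scale (all numbers assumed): with disparity = focal_px * baseline / depth, a few-millimeter lens spacing yields a small fraction of the view separation our eyes get, so far less of each object's flanks is captured:

```python
# Disparity for a subject 2 m away, assuming a 1000 px focal length.
focal_px, depth_m = 1000, 2.0
for baseline_m in (0.065, 0.003):   # human eyes (~65 mm) vs. adjacent lenses (~3 mm)
    disparity = focal_px * baseline_m / depth_m
    print(f"baseline {baseline_m * 1000:.0f} mm -> disparity {disparity:.1f} px")
# baseline 65 mm -> disparity 32.5 px
# baseline 3 mm -> disparity 1.5 px
```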

They'd be better off writing software to take input from the 3D HD Video produced by the Finepix Real-3D W3 camera, and getting the photographer to turn around in a circle and/or walk around while filming - *that* would produce a real dataset with enough actual 3D data to reproduce the entire 3D scene "Genuinely".
Frank Harris
Interesting; however, this seems like a lot of manipulation of multiple camera images. It appears this camera captures 360 degrees horizontally and 180 vertically, and there is no way to view the entire image at once.
Check out this system https://sites.google.com/site/immersionvision/
SeekMocha
Right - it's not true 3D until I can walk around the image and see the back of it. For real-world photography, this seems to be an insurmountable problem unless you are willing to surround the object being photographed or videoed with a sphere of cameras.
Computer-generated images, where all angled views of the virtual object can be recomputed as the viewpoint changes, are more amenable to 3D display.