Cameras that can shoot 3D images are nothing new, but they don't really capture three-dimensional moments at all - they record images in stereoscopic format, using two 2D images to create the illusion of depth. These photos and videos certainly offer a departure from their conventional two-dimensional counterparts, but if you shift your viewpoint, the picture remains the same. Researchers from Ecole Polytechnique Fédérale de Lausanne (EPFL) hope to change all that with a strange-looking camera that snaps images in 360 degrees simultaneously and then reconstructs the scene in 3D.
The researchers have created two prototype models, both inspired by the multi-lens eyes of insects like the house fly. One has a lens head about the size of an orange and features over a hundred camera lenses - like the ones used in mobile phones - while the other is about the size of a golf ball and sports 15 lenses. Unlike stereoscopic photo or video cameras with a front-facing lens setup, the prototypes are able to record images from all around them.
The lenses point out through a hemispherical frame and are positioned so that each captured image overlaps slightly with its neighbors. Sophisticated algorithms built into a dedicated hardware platform then judge the actual distance between the camera and the subjects in the frame and merge the many gigabits of photographic information captured at 30 frames per second into a 360-degree panorama.
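EPFL hasn't published the details of its processing pipeline here, but the basic step of merging two overlapping lens images can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the team's actual algorithm; it assumes two neighboring frames with some overlap (the file names left.jpg and right.jpg are placeholders) and uses feature matching to warp one onto the other.

```python
# Minimal two-image panorama-stitching sketch using OpenCV.
# Illustration only - not EPFL's pipeline; file names are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")

# Detect and match features in the overlapping region
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

# Estimate the homography that maps the right image into the left image's frame
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp and paste: the result is a simple two-image panorama
pano = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
pano[:left.shape[0], :left.shape[1]] = left
cv2.imwrite("panorama.jpg", pano)
```

Repeating this pairwise across all of the lenses on the hemisphere, together with the depth estimation, is the sort of workload the dedicated hardware platform is built to handle in real time.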
"With this invention, we solved two major problems with traditional cameras," said Professor Pierre Vandergheynst. "The camera angle, which is no longer limited thanks to the camera's ability to film in 360 degrees and in real time; and the depth of field, which is no longer limiting thanks to the 3D reconstruction."
The researchers report that images are captured in real time and without distortion, and that users can choose to snap a single shot from a particular lens or have all of the lenses work together to produce the 360-degree, three-dimensional panorama.
The team's Professor Yusuf Leblebici said that the "work is likely to change the entire field of image acquisition, with a huge range of potential applications" including movie-making and immersive games design.
The project is a collaboration between EPFL's Signal Processing Laboratory - which authored the algorithms that calculate the distance between the camera and its subjects, as well as those tasked with assembling all of the images into one 360-degree panorama - and the Microelectronic Systems Laboratory - which developed the apparatus and took care of the processing needs.
In the following video, Vandergheynst gives a short explanation of the technology:
This is interesting, but I don't see how this solves anything having to do with 3D films, as the projected film would have to be displayed around your head, not shown to multiple people around one display.
However, for a camera system that can capture a scene from a single position but from many outward angles, the DIFFERENCE computed between the overlapping portions of the scene picked up by adjacent lenses reveals the depth to the computer numerically. Using this computed depth information, a STEREOSCOPIC version of the scene can then be computed for any desired angle, which, of course, can be presented as stereoscopic 3D for humans.
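A rough sketch of that difference-to-depth step, assuming rectified images from two adjacent lenses and made-up values for the focal length and lens spacing:

```python
# Depth from the "difference" (disparity) between two overlapping views.
# Illustrative only; assumes rectified images and made-up camera numbers.
import numpy as np

def disparity_at(left, right, row, col, block=5, max_disp=32):
    """Find how far a small patch in the left image has shifted in the right one."""
    half = block // 2
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_err = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        err = np.sum((patch.astype(float) - cand.astype(float)) ** 2)
        if err < best_err:
            best_err, best_d = err, d
    return best_d  # shift in pixels

# Depth follows from triangulation: z = f * B / disparity
focal_px = 800.0        # assumed focal length in pixels
baseline_m = 0.005      # assumed 5 mm spacing between adjacent lenses
disparity_px = 4        # e.g. a value returned by disparity_at(...)
depth_m = focal_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.2f} m")   # 1.00 m for these numbers
```

Once a depth value exists for each pixel, re-projecting the scene from two virtual eye positions gives the stereoscopic pair for whatever viewing angle is wanted.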
A few months ago I actually designed my own version of this... the configuration of mine looks just like theirs. I'm not tooting my own horn, though - this is just the most reasonable design. Now, regarding your concern: they could build a system with two such 360-degree cameras and, without computation, take the two images from corresponding angles on each camera to get immediate stereoscopic images, but that would limit the view to discrete viewing angles. To allow a smooth flow from angle to angle, a set of computations would still be involved. So it is better to avoid the extra expense and solve the depth problem by computing it from the difference between the overlapping portions of two lenses.
They will have a REAL WINNER on their hands if they or someone else configures the deployment of such cameras to work like this:
1) One or more of these cameras, fitted with GPS (so they can be stationary or mobile), feed all of their raw images from all lenses to a server.
2) ANY NUMBER of viewers could independently control their view by continuously sending the server their desired 3D position (X, Y, and Z (height)) and 3D target, including focus.
3) The server would determine which images from which 360-degree cameras are required to compute the stereoscopic pair for any particular viewer.
4) If this set of images is sent to the viewer's device, and the viewer's device is then responsible for computing the final stereoscopic pair, the load on the servers would be greatly reduced, allowing a large number of viewers to tap into such a computed-holographic event, each with independent navigation and view. (A rough sketch of this request flow follows below.)
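None of this exists yet, so what follows is only a minimal sketch of the request flow in steps 1-4 above; the names (ViewRequest, Camera, select_cameras) and the nearest-camera selection rule are all assumptions made for illustration.

```python
# Hypothetical viewer/server flow for the proposed deployment.
from dataclasses import dataclass
import math

@dataclass
class ViewRequest:
    position: tuple   # desired viewpoint (x, y, z), z = height
    target: tuple     # 3D point the viewer wants to look at
    focus: float      # desired focus distance

@dataclass
class Camera:
    cam_id: str
    position: tuple   # from the camera's GPS fix

# Step 1: cameras (stationary or mobile) registered with the server
CAMERAS = [Camera("roof-1", (0.0, 0.0, 10.0)),
           Camera("drone-2", (25.0, 5.0, 30.0))]

def select_cameras(req: ViewRequest, k: int = 2):
    """Step 3: pick the k cameras closest to the requested viewpoint;
    their overlapping lens images are enough to rebuild a stereo pair."""
    return sorted(CAMERAS, key=lambda cam: math.dist(cam.position, req.position))[:k]

def handle_request(req: ViewRequest):
    """Step 4: ship only the relevant raw lens frames to the viewer's
    device, which computes the final stereoscopic pair locally."""
    return {cam.cam_id: f"raw frames from {cam.cam_id}" for cam in select_cameras(req)}

# Step 2: a viewer continuously sends requests like this one
print(handle_request(ViewRequest(position=(10.0, 2.0, 1.7),
                                 target=(0.0, 0.0, 0.0),
                                 focus=5.0)))
```

Pushing the final stereo computation to the viewer's device is what keeps the server side cheap: it only has to route raw frames, not render a view per viewer.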
Eyes are a few inches apart; these lenses are a few millimeters apart. The sides and edges of a non-flat object that would be visible to someone with normal eyes would not be visible to the camera.
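To put rough, assumed numbers on that baseline gap: the image shift (disparity) that encodes depth scales linearly with the spacing between the two viewpoints, so millimeter-scale lens spacing yields only a fraction of the shift that a ~65 mm eye separation does.

```python
# Back-of-the-envelope comparison of disparity for two baselines.
# All numbers are assumptions for illustration only.
focal_px = 800.0            # assumed focal length in pixels
depth_m = 2.0               # object two meters away

for label, baseline_m in [("adjacent lenses (~5 mm)", 0.005),
                          ("human eyes (~65 mm)", 0.065)]:
    disparity_px = focal_px * baseline_m / depth_m
    print(f"{label}: {disparity_px:.1f} px of disparity")

# adjacent lenses (~5 mm): 2.0 px of disparity
# human eyes (~65 mm): 26.0 px of disparity
```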
They'd be better off writing software to take input from the 3D HD video produced by the FinePix Real 3D W3 camera and having the photographer turn in a circle and/or walk around while filming - *that* would produce a real dataset with enough actual 3D data to reproduce the entire 3D scene "genuinely".
Check out this system https://sites.google.com/site/immersionvision/
Computer-generated images, where all angled views of the virtual object can be recomputed as the viewpoint changes, are more amenable to 3D display.