New tech makes four-camera 3D shooting possible
When it comes to producing 3D TV content, the more cameras that are used to simultaneously record one shot, the better. At least two cameras (or one camera with two lenses) are necessary to provide the depth information needed to produce the left- and right-eye images for conventional 3D, but according to researchers at Germany's Fraunhofer Institute for Telecommunications, at least four cameras will be needed if we ever want to achieve glasses-free 3D TV. Calibrating that many cameras to one another could ordinarily take days, however ... which is why Fraunhofer has developed a system that reportedly cuts that time down to 30 to 60 minutes.
The STAN assistance system ensures that the optical axes, focal lengths and focal points are the same for each camera. That way, the four synchronized shots combine seamlessly into a single three-dimensional image, no matter how the viewer's head moves.
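As a toy illustration of the condition STAN's calibration aims to satisfy, the sketch below models each camera by a focal length and an optical-axis direction, and checks that all cameras in the rig agree with a reference camera within a tolerance. The class, field names, and tolerances are hypothetical, not Fraunhofer's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    focal_length_mm: float
    axis: tuple  # unit vector pointing along the optical axis

def aligned(cameras, tol_focal=0.05, tol_axis=0.01):
    """True if every camera matches the first one within tolerance."""
    ref = cameras[0]
    for cam in cameras[1:]:
        # focal lengths must agree, or the shots have different zoom levels
        if abs(cam.focal_length_mm - ref.focal_length_mm) > tol_focal:
            return False
        # optical axes must point the same way, or the shots won't fuse
        dev = sum((a - b) ** 2 for a, b in zip(cam.axis, ref.axis)) ** 0.5
        if dev > tol_axis:
            return False
    return True
```

In a real multi-camera rig the check would cover far more parameters (sensor alignment, color response, synchronization), but the principle is the same: calibrate until every camera's view is interchangeable with the reference.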
Objects that are visible in all four shots are identified using a feature detector function. Using these objects as references, STAN then calibrates the cameras so that they match one another. Because of slight imperfections in the lenses, however, some discrepancies may remain even after calibration. The system compensates for these electronically, for instance by digitally zooming in on one shot. Because this happens in real time, STAN could conceivably even be used for live broadcasts.
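The electronic-zoom correction can be sketched in a few lines: given feature points matched between a reference camera and a second camera, estimate the relative scale (zoom) factor by least squares and apply a digital zoom so the two shots line up. The point data and function names here are illustrative, not Fraunhofer's actual API, and a real system would also estimate rotation and translation, not just scale.

```python
def estimate_scale(ref_points, cam_points):
    """Scale factor mapping cam_points onto ref_points, measured as
    the ratio of each point set's spread about its own centroid."""
    def centroid(pts):
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    def spread(pts, c):
        # root of summed squared distances from the centroid
        return sum((x - c[0]) ** 2 + (y - c[1]) ** 2 for x, y in pts) ** 0.5

    rc, cc = centroid(ref_points), centroid(cam_points)
    return spread(ref_points, rc) / spread(cam_points, cc)

def digital_zoom(points, scale, center):
    """Scale points about a center -- the 'electronic zoom' correction."""
    cx, cy = center
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in points]
```

For example, if the second camera's lens renders the scene 5% smaller than the reference, `estimate_scale` recovers a factor of about 1.053, and `digital_zoom` applied with that factor brings the matched features back into registration.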
The Fraunhofer team is now developing a video encoding system that will compress the four streams of data enough to transmit them over conventional broadcasting infrastructure. The four-camera setup is already in use by members of the MUSCADE project, a consortium dedicated to advancing glasses-free 3D TV technology.