
Single-lens, light field 4D camera to give robots and autonomous cars better vision

Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom)
Stanford Computational Imaging Lab/Photonic Systems Integration Laboratory at UC San Diego

A robot is only as good as its sensors, so researchers at Stanford University and UC San Diego have developed a new "4D" camera that greatly enhances robotic vision. Billed as the first-ever single-lens, wide field of view light field camera, the new system uses a spherical lens and advanced algorithms to capture information across a 138-degree field of view, allowing robots not only to navigate, but also to better understand their environment.

Ever since modern robots began to emerge in the 1970s, engineers have confronted the problem of how to give such machines vision. Over the years, various solutions have been tried, including stereoscopic cameras, laser imaging, color analysis, pixel counting, and deep learning. Now the Stanford/UC San Diego team is turning to a new type of camera built around spherical lenses developed for DARPA's Soldier CENtric Imaging via Computational Cameras (SCENICC) program.

These lenses were produced to provide a field of view encompassing nearly a third of the circle around the camera, with the goal of building 360-degree images at a resolution of 125 megapixels per video frame. In the original version, the camera used fiber optic bundles to convert the spherical images into flat focal planes. The approach worked, but it was also expensive.

The new camera dispenses with the fiber bundles in favor of a combination of lenslets developed by UC San Diego and digital signal processing and light field photography technology from Stanford, which is what the team says gives the camera a "fourth dimension."
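
The lenslet array is what makes this possible: each tiny lens sits over a small patch of sensor pixels, so a pixel's position within its patch encodes the direction the light arrived from. The Python sketch below illustrates the idea with made-up sensor dimensions (the values of U, V, S, T and the random raw image are hypothetical, not the actual camera's parameters), unpacking a raw lenslet image into an explicit 4D light field.

```python
import numpy as np

# Hypothetical geometry, for illustration only (not the actual camera):
# a U x V grid of lenslets, each covering S x T sensor pixels.
U, V = 200, 300   # spatial samples (lenslet positions)
S, T = 9, 9       # angular samples (pixels under each lenslet)

# Raw sensor image: every S x T tile belongs to one lenslet.
raw = np.random.rand(U * S, V * T)

# Unpack into an explicit 4D light field L[s, t, u, v]:
# (s, t) = ray direction, (u, v) = position on the image plane.
lf = raw.reshape(U, S, V, T).transpose(1, 3, 0, 2)

# Fixing one direction gives a "sub-aperture" view -- an ordinary 2D
# photo as seen through one small region of the lens aperture.
center_view = lf[S // 2, T // 2]
print(lf.shape, center_view.shape)   # (9, 9, 200, 300) (200, 300)
```

Comparing these sub-aperture views against one another is what enables the refocusing and depth estimation described below.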

This light field technology records the two-axis direction of the light entering the lens along with the 2D image, for four dimensions of data in all. As with consumer light field cameras from the likes of Lytro, the image therefore contains far more information about the position and direction of light, allowing images to be refocused after they've been captured. It also helps a robot see through things that could obscure its vision, such as rain. The camera is also able to improve close-up images and better ascertain object distances and surface textures.
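
After-the-fact refocusing is commonly done by "shift and add": each sub-aperture view is translated in proportion to its angular offset and the results are averaged, so objects at the chosen depth align and sharpen while everything else blurs. Below is a minimal sketch of that textbook technique (the refocus function, the alpha parameter, and the toy light field are illustrative assumptions, not the team's published pipeline).

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[s, t, u, v]."""
    S, T, U, V = lf.shape
    sc, tc = (S - 1) / 2, (T - 1) / 2
    out = np.zeros((U, V))
    for s in range(S):
        for t in range(T):
            # Shift each sub-aperture view in proportion to its angular
            # offset; objects at the matching depth line up and sharpen.
            out += shift(lf[s, t], (alpha * (s - sc), alpha * (t - tc)))
    return out / (S * T)

# Toy 4D light field: 5x5 directions, 64x64 positions.
lf = np.random.rand(5, 5, 64, 64)
# Sweep the synthetic focal plane after capture -- no moving parts.
refocused = [refocus(lf, a) for a in (-1.0, 0.0, 1.0)]
```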

"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they're made of," says Gordon Wetzstein, electrical engineering professor at Stanford. "This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."

The camera is presently a proof-of-concept device, but the researchers believe that once the technology matures, it will help robots navigate small spaces, land drones, aid self-driving cars, and enable augmented and virtual reality systems to produce seamless, integrated renderings. The next step will be to install a more compact prototype in an actual robot.

The research was presented in July at the 2017 Conference on Computer Vision and Pattern Recognition (CVPR).

The video below shows the first images from the Wide-FOV Monocentric Light Field Camera.

Source: UC San Diego
