Electronics

New chip could turn phone cameras into high-res 3D scanners

A tiny, cheap chip may soon bring 3D imaging to smartphones, robotics, and many other areas (Image: Ali Hajimiri/Caltech)

As if smartphones can't already do enough, they may soon be able to scan three-dimensional objects and send the resulting high-resolution 3D images to a 3D printer for hyper-accurate replicas. This comes thanks to a small, inexpensive device called a nanophotonic coherent imager (NCI), developed by scientists at Caltech. The NCI could also bring 3D imaging to a range of other devices and applications, such as improving motion sensitivity in human-machine interfaces and driverless cars.

Unlike a conventional camera, the NCI chip determines both the appearance of and the distance to the part of a scene or object that each pixel represents. It uses an array of tiny LIDARs (scanning laser beams) to gather this information about an object's size and distance, exploiting an optical concept called coherence (wherein waves of the same frequency align perfectly) to make high-resolution images possible.

The coherent laser light from the NCI acts as a kind of ruler, measuring the precise distance of each point from the camera so that they can be mapped onto a 3D image of the scene.
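To see how coherent light can serve as a ruler, consider the frequency-swept ("FMCW") ranging scheme commonly used in coherent LIDAR: the returning light is mixed with a reference copy of the outgoing beam, and the resulting beat tone has a frequency proportional to the round-trip delay. The Python sketch below illustrates only that relationship; the sweep parameters and beat frequency are illustrative assumptions, not figures from the Caltech chip.

# Hedged sketch: coherent (FMCW-style) ranging, where a chirped laser acts as a ruler.
# The sweep bandwidth, sweep period, and beat frequency below are made-up example values.

C = 3.0e8  # speed of light, m/s

def distance_from_beat(beat_freq_hz, sweep_bandwidth_hz, sweep_period_s):
    """For a linearly chirped laser mixed with its own reflection, the round-trip
    delay appears as a beat frequency proportional to the target's distance."""
    chirp_rate = sweep_bandwidth_hz / sweep_period_s   # optical Hz swept per second
    round_trip_time = beat_freq_hz / chirp_rate        # seconds
    return C * round_trip_time / 2.0                   # one-way distance, meters

# Example: with a 10 GHz sweep over 1 ms, a ~33 kHz beat tone corresponds to ~0.5 m.
print(distance_from_beat(33.3e3, 10e9, 1e-3))          # ~0.5

Each pixel repeating this kind of measurement on its own patch of the scene is what lets the depth values be assembled into a 3D image.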

The researchers believe this enables 3D imaging with greater depth-measurement accuracy than has previously been achieved in silicon photonics, while the NCI's tiny size – just 300 microns across in their 16-pixel proof of concept – means it could be incorporated into even very small devices.

A 3D image of an American penny produced by the NCI chip, with the colors corresponding to height at various points (Image: Ali Hajimiri/Caltech)

The current limit of 16 coherent pixels didn't stop the researchers from imaging the front face of an American one-cent coin from half a meter (about 1.6 ft) away, scanning it in four-by-four-pixel increments.
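That scanning scheme is straightforward to picture: the 16-pixel imager captures one 4 x 4 patch at a time, and the patches are stepped across the object and stitched into a full depth map. The sketch below illustrates the tiling idea with a synthetic depth map standing in for real measurements; measure_tile() and the raster loop are assumptions for illustration, not the researchers' actual acquisition code.

# Hedged sketch: stitching a larger depth map from a small (4x4-pixel) imager that
# is stepped across the scene tile by tile. Purely illustrative; not the NCI's API.
import numpy as np

TILE = 4  # pixels per side of the proof-of-concept imager

def measure_tile(scene_depth, row, col):
    """Stand-in for one 4x4 acquisition: here it simply crops the 'true' depth map."""
    return scene_depth[row:row + TILE, col:col + TILE]

def scan_scene(scene_depth):
    """Raster the 4x4 imager over the scene and assemble the tiles into one map."""
    rows, cols = scene_depth.shape
    depth_map = np.zeros((rows, cols))
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            depth_map[r:r + TILE, c:c + TILE] = measure_tile(scene_depth, r, c)
    return depth_map

# Example: a synthetic 64 x 64 depth map with a coin-like circular bump.
yy, xx = np.mgrid[0:64, 0:64]
coin = ((xx - 32) ** 2 + (yy - 32) ** 2 < 24 ** 2) * 0.1  # 0.1 units of relief
assert np.allclose(scan_scene(coin), coin)  # the stitched map matches the scene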

The researchers see broad applications for their device, which they believe could readily be scaled up to arrays of hundreds of thousands of pixels – closer to what real-world, high-resolution 3D imaging through a camera lens would require. The NCI could find use in security, robotics, gesture recognition, biomedical imaging, personal electronics, and more.

A paper describing the research was published in the journal Optics Express.

Source: Caltech

2 comments
mach37
The illustrations with this article are not confidence-inspiring in the "high resolution image" claims for this product. Leaving this as a text-only article would have been just as informative, and less confusing.
christopher
The problem with optical scanning is that it can't scan black things, and single-colour scanning of multi-coloured stuff messes up the results.
There is already 3D reconstruction that works with a normal phone camera, using software to determine distance from the perspective shift between two or more sequential photos, so any market for lidar in phones (if there ever was one) is competing with existing tech as well...