While digital cameras such as the Hasselblad H4D-200MS and Nikon D800 have pushed the megapixel boundary in recent times, and Nokia’s inclusion of a 41-megapixel camera in its 808 PureView smartphone got plenty of attention, researchers at Duke University and the University of Arizona say the age of consumer gigapixel cameras is just around the corner – and they’ve created a prototype gigapixel camera to back up their claim.
The prototype camera was developed by electrical engineers from Duke University, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp., with support from DARPA. By synchronizing 98 tiny cameras, each with a 14-megapixel sensor, in a single device, the team created a camera that can capture images at a resolution of around one gigapixel. With the addition of extra microcameras, however, the researchers say it has the potential to capture images at resolutions of up to 50 gigapixels – the team points out this is five times better than 20/20 human vision over a 120-degree horizontal field.
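To see how the numbers fit together, here is a back-of-the-envelope pixel budget based only on the figures above (98 microcameras at 14 megapixels each). The overlap fraction is an assumption introduced here for illustration, not a figure from the researchers.

```python
# Rough pixel budget for the prototype. OVERLAP_FRACTION is an assumed value
# chosen for illustration; the article does not specify how much the
# microcameras' fields of view overlap.
MICROCAMERAS = 98              # microcameras in the prototype
SENSOR_PIXELS = 14_000_000     # pixels per 14-megapixel sensor
OVERLAP_FRACTION = 0.27        # assumed share of raw pixels lost to overlap

raw_pixels = MICROCAMERAS * SENSOR_PIXELS
composite_pixels = raw_pixels * (1 - OVERLAP_FRACTION)

print(f"Raw capture:     {raw_pixels / 1e9:.2f} gigapixels")      # ~1.37 GP
print(f"Composite image: {composite_pixels / 1e9:.2f} gigapixels")  # ~1.0 GP

# Scaling the same 14 MP sensors toward the stated 50-gigapixel ceiling
# (again, purely illustrative):
cameras_for_50gp = 50e9 / (SENSOR_PIXELS * (1 - OVERLAP_FRACTION))
print(f"Microcameras for ~50 GP: {cameras_for_50gp:.0f}")
```

Under these assumptions the 98 sensors capture roughly 1.4 gigapixels of raw data, which reduces to about one gigapixel once overlapping regions are merged into the final composite.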
Similar to the way the GigaPan Epic mount creates high-resolution panoramas by capturing multiple images that are then stitched together, each of the prototype device’s 98 cameras captures data from a specific area of the field of view; that data is then stitched together to form a single image offering incredible detail.
“A computer processor essentially stitches all this information into a single highly detailed image,” explains Duke’s David Brady, who led the team. “In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”
“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”
Because of this, the researchers believe that the continuing miniaturization of electronic components will see the next generation of gigapixel cameras becoming available to the general public within the next five years.
“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” said the University of Arizona’s Michael Gehm, who led the team responsible for the software that combines the input from the microcameras. “Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements.”
“A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual workstations,” Gehm adds. “Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
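The overlap Gehm describes is what lets the separate sub-images be merged without gaps. Below is a minimal sketch of that tile-mosaicking idea, assuming each microcamera’s sub-image and its position within the full field of view are already known; the actual AWARE processing pipeline is considerably more sophisticated than this simple averaging of overlapping regions.

```python
import numpy as np

def stitch_tiles(tiles, offsets, canvas_shape):
    """Place overlapping sub-images onto one canvas, averaging where they overlap.

    tiles        : list of 2-D arrays (grayscale sub-images from each microcamera)
    offsets      : list of (row, col) top-left positions of each tile in the mosaic
    canvas_shape : (rows, cols) of the final composite image
    """
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    weight = np.zeros(canvas_shape, dtype=np.float64)

    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile   # accumulate pixel values
        weight[r:r + h, c:c + w] += 1.0    # count contributing tiles per pixel

    weight[weight == 0] = 1.0              # avoid division by zero in uncovered areas
    return canvas / weight                 # average where tiles overlap

# Toy usage: four 100x100 tiles with a 20-pixel overlap form a 180x180 mosaic.
rng = np.random.default_rng(0)
tiles = [rng.random((100, 100)) for _ in range(4)]
offsets = [(0, 0), (0, 80), (80, 0), (80, 80)]
mosaic = stitch_tiles(tiles, offsets, (180, 180))
print(mosaic.shape)  # (180, 180)
```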
Although the current prototype camera measures 2.5 x 2.5 x 1.6 feet (76 x 76 x 51 cm), the optical elements account for only around three percent of the camera’s volume. The rest is taken up by the electronics and processors needed to process all the information, and the cooling components required to keep it from overheating. It is these areas the team believes can be miniaturized in the coming years, resulting in practical hand-held gigapixel cameras for everyday photographers.
The camera is described in a paper published online in the journal Nature.