Snapshots banged off on a smartphone, tablet or point-and-shoot camera could soon be looking a lot better, thanks to a new processor chip. Developed by researchers at MIT's Microsystems Technology Laboratory, the chip enhances images within milliseconds, and reportedly uses much less power than the image-processing software installed on some devices.
The chip works by dividing photos into a matrix of small blocks, known as a bilateral grid. A histogram (a graphical representation of data) is created for each block, with its X and Y axes representing the block's location within the photo as a whole. This is combined with another histogram for that same block, which represents its brightness levels.
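As a rough illustration of the idea, the sketch below (a simplified assumption, not MIT's actual implementation) divides a grayscale image into blocks and builds a brightness histogram for each one, indexed by the block's position in the photo:

```python
import numpy as np

def bilateral_grid(image, block=16, bins=8):
    """Divide an 8-bit grayscale image into square blocks and build a
    brightness histogram per block. The result is indexed by block
    position (rows, cols) plus a brightness axis (bins)."""
    h, w = image.shape
    rows, cols = h // block, w // block
    grid = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * block:(r + 1) * block,
                          c * block:(c + 1) * block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            grid[r, c] = hist
    return grid

# Usage with a random 64 x 64 "photo"
img = np.random.randint(0, 256, (64, 64))
g = bilateral_grid(img)
print(g.shape)  # (4, 4, 8)
```

The block size and bin count here are arbitrary choices for the demo; the point is simply that each grid cell records where it sits in the image and how brightness is distributed inside it.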
One of the things that the chip can do is create High Dynamic Range (HDR) images. Have you ever noticed how your eyes are able to simultaneously expose for the bright and dark elements of a high-contrast scene, whereas a camera has to either overexpose one or underexpose the other? Well, HDR is kind of like your eyes – the bright sky and the shady spot under a tree will both be properly exposed in an HDR shot.
To manage this, the chip actually records three Low Dynamic Range images of each shot – one normally-exposed image (like a camera would take in Auto mode), one that’s overexposed to pick up details in dark areas, and one that’s underexposed to properly capture bright elements. Those three images are then merged into one HDR photo – the whole process takes a few hundred milliseconds for a 10-megapixel photo, and could reportedly even be applied to video. The researchers say the chip uses considerably less power than existing software-based systems that rely on CPUs and GPUs for the number crunching.
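A minimal way to picture the merge step (this is a generic exposure-fusion sketch under my own assumptions, not the chip's actual algorithm) is to weight each pixel of each exposure by how close it is to mid-gray, so well-exposed pixels dominate the combined result:

```python
import numpy as np

def merge_hdr(exposures):
    """Merge differently exposed frames (pixel values in 0..1) by
    weighting each pixel by its distance from mid-gray: pixels near
    0.5 get high weight, blown-out or crushed pixels get almost none."""
    stack = np.stack(exposures)                  # shape (n, h, w)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0    # 1 at mid-gray, 0 at extremes
    weights += 1e-6                              # avoid divide-by-zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Usage: a normal, an overexposed, and an underexposed frame
normal = np.full((2, 2), 0.5)
over = np.clip(normal * 2.0, 0.0, 1.0)
under = normal * 0.5
hdr = merge_hdr([normal, over, under])
print(hdr.shape)  # (2, 2)
```

Real HDR pipelines are considerably more involved (alignment, tone mapping), but the weighted average captures the core idea of taking the best-exposed information from each frame.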
The chip is also able to enhance shots taken in dark environments, again using multiple images. In this case, it records two images of the scene – one using the flash, and one without. Each of those images is then divided into two layers – a base layer that just contains the large-scale ambient background features of the scene, and another that only contains the sharper details. The chip then combines the base layer of the non-flash shot (which would be underexposed in the flash shot), with the detailed layer of the flash shot (which would be grainy in the non-flash shot).
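The split-and-recombine step can be sketched as follows. This is an illustrative assumption on my part: the "base layer" here is just a crude box blur, and the detail layer is whatever the blur removes, which is one common way such decompositions are done:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude k x k box blur, used here to extract the large-scale
    'base layer' of an image."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_flash_no_flash(no_flash, flash):
    """Combine the base layer (ambient lighting) of the no-flash shot
    with the detail layer (fine structure) of the flash shot."""
    base = box_blur(no_flash)            # large-scale ambient background
    detail = flash - box_blur(flash)     # sharp detail from the flash shot
    return base + detail

# Usage: a flat ambient scene plus a brighter flash exposure
ambient = np.full((8, 8), 40.0)
flash = ambient + 100.0
fused = fuse_flash_no_flash(ambient, flash)
print(fused.shape)  # (8, 8)
```

On these flat test images the flash shot contributes no detail, so the fused result simply matches the ambient base layer; on a real photo, the detail layer would carry the sharp, well-lit structure from the flash exposure.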
Finally, in order to clean up noise in photos, the chip is able to smooth out the shot by blurring “undesired” pixels into the pixels adjacent to them. In order not to blur the edges of objects within the shot (as does occur in some noise-reduction software), the blurring function isn’t applied when neighboring pixels have significantly different brightness values.
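The edge-preserving rule described above can be sketched like this (a toy version assuming a 4-neighbor average and a fixed brightness threshold, not the chip's actual filter):

```python
import numpy as np

def edge_aware_smooth(img, thresh=30.0):
    """Average each pixel with its four neighbors, but only include
    neighbors whose brightness differs by less than `thresh`. Pixels
    across a strong edge are excluded, so edges stay sharp."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            total, count = img[y, x], 1
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(img[ny, nx] - img[y, x]) < thresh):
                    total += img[ny, nx]
                    count += 1
            out[y, x] = total / count
    return out

# Usage: a hard edge between a dark and a bright region survives smoothing
edge = np.zeros((4, 6))
edge[:, 3:] = 200.0
smoothed = edge_aware_smooth(edge)
```

Because the two sides of the edge differ by far more than the threshold, no averaging happens across it, which is exactly the behavior the article describes for avoiding blurred object edges.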
There’s no word yet on when the chip might start to appear in consumer devices.
Source: MIT