Researchers at MIT Media Lab have proposed a new camera technology which could see an end to overexposed images. The modulo camera would work by employing a sensor that resets a pixel's capacitor each time it saturates, paired with "unwrapping" algorithms to recover color information that would otherwise be lost in blown highlights.
There can't be many photographers who haven't suffered at the hands of tricky high-contrast scenes combining bright and dark areas, where it's all too easy to overexpose a shot and end up with washed-out white patches. As a result, many photographers are left relying on the dynamic range of their digital camera, trying to balance images in post-production, or combining multiple exposures into a single HDR image.
However, a team from MIT Media Lab has come up with a different approach, using a modulo sensor and new image-processing algorithms to make overexposed images a thing of the past. Working with MIT Lincoln Lab, the team recast the pixel architecture of Digital-pixel Focal Plane Array (DFPA) technology, which can perform on-the-fly digital signal processing, as a modulo sensor capable of making adjustments on a per-pixel basis.
As such, the modulo camera does not suffer from overexposure in the way traditional digital cameras do: whenever a pixel reaches its maximum capacity during photon collection, it is simply reset and continues exposing. An excess of light would normally result in unrecoverable overexposure of those pixels, but this way the camera can calculate how much light was actually received from how many times each pixel filled to capacity and was reset.
This information would produce the sort of psychedelic images seen in the middle of the simulated illustrations, until the newly proposed "unwrapping" algorithms are applied to tap into the larger dynamic range that was captured. The result is the ability to recover Unbounded High Dynamic Range (UHDR) images from scenes that traditional digital cameras could not have handled.
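To make the idea concrete, here is a toy sketch of the principle, not MIT's actual algorithm: a modulo sensor keeps only each pixel's value modulo an assumed full-well capacity (`CAP` below is a made-up figure), and an unwrapping step recovers the true intensities along a scanline by assuming neighboring pixels vary smoothly, much like phase unwrapping.

```python
CAP = 256  # hypothetical per-pixel saturation level (reset threshold)

def modulo_capture(scene):
    """Simulate the sensor: each pixel wraps around every time it hits capacity."""
    return [v % CAP for v in scene]

def unwrap(wrapped):
    """Recover true intensities along a 1-D scanline.

    Whenever the wrapped signal jumps by more than half the capacity
    between neighbors, assume a wrap occurred and adjust a running
    offset by a multiple of CAP (a standard phase-unwrapping heuristic).
    """
    out = [wrapped[0]]
    offset = 0
    for prev, cur in zip(wrapped, wrapped[1:]):
        diff = cur - prev
        if diff > CAP // 2:       # large upward jump: a wrap was undone
            offset -= CAP
        elif diff < -CAP // 2:    # large downward jump: a wrap occurred
            offset += CAP
        out.append(cur + offset)
    return out

scene = [40, 120, 230, 330, 440, 330, 230, 120, 40]  # true light levels
wrapped = modulo_capture(scene)   # what the sensor actually stores
recovered = unwrap(wrapped)       # recovered == scene for this smooth signal
```

The heuristic only works when true intensities change by less than half the capacity between neighbors; the published approach handles real images, where that assumption needs more sophisticated treatment.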
While tests have so far been limited to a low-resolution, lab-bound prototype camera, it's easy to see how the technology could develop and be deployed in DSLR or smartphone cameras. The researchers also see applications for real-time HDR cameras in robotic vision, such as a driverless car entering a tunnel, where the sudden change in contrast currently leaves most cameras temporarily blind.
You can check out a video explaining the tech below.
Source: MIT Media Lab