Even casual photographers nowadays have some knowledge of the benefits of a little image retouching here and there. Adding a filter, tweaking the contrast or heightening the colors before slinging the photo into the digital world is almost part of everyone's process now. A research collaboration between MIT and Google has taken the idea of computational photography to a new level by creating a system that can automatically retouch images in real time, before the shot has even been taken.

When New Atlas reviewed the new Google Pixel phone last year we were absolutely astonished at how good the photographs it produced were. This came down to Google's obsession with computational photography: specifically, the algorithms and software the company has developed to dramatically improve the images its cameras capture.

Google's complex high-dynamic range (HDR) algorithms are at the head of the pack, but they are limited by the processing power a smartphone has at its disposal. This is where researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) come into the picture.

In 2015 MIT graduate student Michael Gharbi developed a clever way to streamline server-based image processing that significantly reduced both bandwidth and power consumption. This system allowed a smartphone to undertake complex image processing in short periods of time without draining its battery.

"Google heard about the work I'd done on the transform recipe," says Gharbi. "They themselves did a follow-up on that, so we met and merged the two approaches. The idea was to do everything we were doing before but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up."

To achieve this, the team ran the heavy processing on a low-resolution version of a given image, but the big challenge was finding a way to upsample the result back to a full-resolution image without losing detail.
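To get a feel for why that is hard, here is a minimal sketch of the naive approach in Python/NumPy: shrink the photo, retouch the small copy, then stretch the pixels back up. The "retouch" step is an invented stand-in, not anything from the team's actual code.

```python
import numpy as np

# A deliberately naive pipeline: retouch a small copy of the photo, then
# blow the *pixels* back up. The expensive step only touches 1/64 of the
# pixels, but the fine detail of the original never reaches the output.
def naive_low_res_pipeline(image, factor=8):
    low_res = image[::factor, ::factor]                   # cheap downsample
    processed = np.clip(low_res ** 0.8 * 1.1, 0.0, 1.0)   # stand-in "retouch"
    # Upsampling the pixels simply repeats coarse blocks of colour.
    return processed.repeat(factor, axis=0).repeat(factor, axis=1)

photo = np.random.rand(1024, 1536, 3)   # stand-in for a real photograph
result = naive_low_res_pipeline(photo)
print(result.shape)                      # (1024, 1536, 3), but blocky
```

The processing gets dramatically cheaper, but the output is a blocky approximation of the original scene, which is exactly what the MIT/Google approach avoids.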

Using machine learning, the system was trained on a dataset of 5,000 images, each paired with five retouched variants. Instead of outputting a complete image, the system outputs a formula for modifying the colors of the image's pixels. Over the course of training it improves its own processing by judging how well its output approximates the retouched examples.
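The published system is more sophisticated than this, but one rough, hypothetical way to picture that "formula" is as a coarse grid of affine color transforms: the grid is cheap to predict from a low-resolution copy and cheap to upsample, and it recolors the full-resolution pixels rather than replacing them. The function, shapes and random inputs below are illustrative assumptions, not the actual architecture.

```python
import numpy as np

def apply_coefficient_grid(full_res, coeff_grid):
    """Apply a coarse grid of per-pixel affine colour transforms to a
    full-resolution image. `coeff_grid` plays the role of the "formula"
    predicted from a low-resolution copy; this is a simplified sketch,
    not the published method.

    full_res   : (H, W, 3) float array with values in [0, 1]
    coeff_grid : (h, w, 3, 4) float array of affine matrices, h << H
    """
    H, W, _ = full_res.shape
    h, w = coeff_grid.shape[:2]

    # Upsample the *transforms* (nearest neighbour here) so every
    # full-resolution pixel gets its own 3x4 affine colour matrix.
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    coeffs = coeff_grid[ys][:, xs]                       # (H, W, 3, 4)

    # Append a constant 1 so the transform includes an offset term,
    # then compute out[y, x] = A[y, x] @ [r, g, b, 1] at every pixel.
    homog = np.concatenate([full_res, np.ones((H, W, 1))], axis=-1)
    return np.einsum('hwij,hwj->hwi', coeffs, homog)

photo = np.random.rand(512, 768, 3)        # stand-in for a real photo
grid = np.random.rand(16, 24, 3, 4) * 0.5  # stand-in for a prediction
retouched = apply_coefficient_grid(photo, grid)
print(retouched.shape)                     # (512, 768, 3)
```

Because the transform only recolors the original full-resolution pixels, edges and texture survive in a way they cannot when the pixels themselves are upsampled.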

Before processing is switched on (Credit: MIT)

After processing is switched on (Credit: MIT)

Ultimately the system was able to reproduce a high-resolution HDR image 100 times faster than the original HDR algorithm. This allowed real-time HDR-retouched images to appear in a smartphone's viewfinder while using very little processing or battery power.

"Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones," says Jon Barron from Google Research. "This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience."

With its commitment to software over hardware, Google has already produced one of the best smartphone cameras on the market. Don't be surprised if you find this new work with MIT incorporated into a future Pixel phone.

The CSAIL/Google team presented the system at the SIGGRAPH computer graphics conference this week. Take a look at the system in more detail in the video below.

Source: MIT News
