MIT and Google join forces to retouch smartphone photos in real time
Even casual photographers nowadays have some knowledge of the benefits of a little image retouching here and there. Adding a filter, tweaking the contrast or heightening the colors before slinging the photo into the digital world is almost part of everyone's process now. A research collaboration between MIT and Google has taken the idea of computational photography to a new level by creating a system that can automatically retouch images in real time, before the shot has even been taken.
When New Atlas reviewed the new Google Pixel phone last year we were absolutely astonished at how good its photographs were. This came down to Google's obsession with computational photography: specifically, the algorithms and software the company has developed that dramatically improve the images its camera captures.
Google's complex high-dynamic-range (HDR) algorithms are at the head of the pack, but they are limited by the processing power a smartphone has at its disposal. This is where researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) come into the picture.
In 2015 MIT graduate student Michael Gharbi developed a clever way to streamline server-based image processing that significantly reduced both bandwidth and power consumption. This system allowed a smartphone to undertake complex image processing in short periods of time without draining its battery.
"Google heard about the work I'd done on the transform recipe," says Gharbi. "They themselves did a follow-up on that, so we met and merged the two approaches. The idea was to do everything we were doing before but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up."
To achieve this, the team used low-resolution versions of a given image for processing, but the big challenge was finding a way to upsample the image back to a high-resolution copy.
Using machine learning, a system was trained on a data set of 5,000 images, each with five different retouched variants. Instead of outputting a complete image, the system outputs a formula for modifying the colors of the image's pixels. Across the learning process it improves its own algorithm by judging how well its output approximates the original retouched image.
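A rough sense of that idea can be sketched in code: rather than generating full-resolution pixels directly, the model predicts a coarse grid of color-transform coefficients, which are then upsampled and applied to the original high-resolution image. The per-region affine matrices, the grid size, and the nearest-neighbor upsampling below are illustrative assumptions for the sketch; the actual system learns its coefficients and uses a more sophisticated, edge-aware upsampling.

```python
import numpy as np

def upsample_coeffs(coarse, H, W):
    """Nearest-neighbor upsample a coarse grid of per-region affine
    color matrices, shape (gh, gw, 3, 4), to full resolution (H, W, 3, 4).
    Nearest neighbor keeps the sketch simple; it is not what the real
    system does."""
    gh, gw = coarse.shape[:2]
    rows = np.arange(H) * gh // H   # map each output row to a grid row
    cols = np.arange(W) * gw // W   # map each output column to a grid column
    return coarse[rows][:, cols]

def apply_coeffs(image, coeffs):
    """Apply a per-pixel affine color transform.

    image  : (H, W, 3) RGB floats in [0, 1]
    coeffs : (H, W, 3, 4), a 3x4 matrix per pixel mapping [r, g, b, 1]
             to a new [r, g, b] (the last column is a constant offset)
    """
    H, W, _ = image.shape
    homogeneous = np.concatenate([image, np.ones((H, W, 1))], axis=-1)
    # For each pixel: out[c] = sum_k coeffs[c, k] * homogeneous[k]
    return np.einsum('hwck,hwk->hwc', coeffs, homogeneous)

# Toy demo: a 2x2 coefficient grid encoding an identity color mix
# plus a constant brightness lift of 0.1, applied to a 4x4 gray image.
H, W = 4, 4
img = np.full((H, W, 3), 0.5)
coarse = np.zeros((2, 2, 3, 4))
for c in range(3):
    coarse[..., c, c] = 1.0      # identity mixing of R, G, B
coarse[..., :, 3] = 0.1          # constant offset channel
out = apply_coeffs(img, upsample_coeffs(coarse, H, W))
# Every pixel becomes 0.5 + 0.1 = 0.6
```

Predicting a small grid of coefficients is far cheaper than predicting millions of pixels, which is what lets the heavy lifting happen at low resolution while the final image keeps its full detail.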
Ultimately the system was able to replicate a high-resolution HDR image 100 times faster than the original HDR algorithm. This allowed for real-time HDR-retouched images to appear in the view screen of a smartphone using very little processing or battery power.
"Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones," says Jon Barron from Google Research. "This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience."
With its commitment to focusing on software over hardware, Google has already produced one of the best smartphone cameras on the market. Don't be surprised if you find this new work with MIT incorporated into a Pixel phone in the future.
The CSAIL/Google team presented the system at the SIGGRAPH digital graphics conference this week. Take a look at the system in more detail in the video below.
Source: MIT News