Photography

Google's RAISR sharpens low-resolution images using machine learning


From a jagged, low-res JPEG to a sharper, larger image file: Google researchers have found a way to use machine learning to upscale images to higher resolutions at lightning speed, so fast it could one day be built into your smartphone.

Google is using machine learning to upscale JPEG images much, much faster, and often more accurately, than current processor-intensive upsampling methods.

Its RAISR program (Rapid and Accurate Image Super Resolution) is still at the experimental stage, but it already runs 10 to 100 times as fast as existing upscaling technology and gets better results in many cases.

A comparison of different upscaling technologies, with Google's super-quick RAISR at the bottom right
Google

The system learns by taking in thousands of pairs of images – one at full resolution, the other downsampled to a jagged, low-res image. It pores over these pairs to work out which filters it can apply to the low-res image's pixels to get them closest to what's in the full-res file, taking context into account.

Within about an hour, it's gone through some 10,000 image pairs and built a pretty decent little knowledge base that it can then apply to any low-res image.
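
To make that training step concrete, here's a toy Python sketch of the idea, not Google's actual RAISR code: the real system hashes patches by local gradient direction, strength and coherence and learns a separate filter for each bucket, while this simplification learns just one. It downsamples each training image, re-upscales it with bicubic interpolation, then solves a least-squares problem for a small filter that maps each patch of the cheap upscale to the true full-resolution pixel at its centre. The patch size and file handling below are assumptions for illustration.

import numpy as np
from PIL import Image

PATCH = 7  # small odd-sized filter; the exact size here is an assumption

def lr_hr_pair(path, scale=2):
    """Build a training pair: a bicubically re-upscaled low-res copy and the original."""
    hr = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0
    h, w = (hr.shape[0] // scale) * scale, (hr.shape[1] // scale) * scale
    hr = hr[:h, :w]
    lr = Image.fromarray((hr * 255).astype(np.uint8)).resize((w // scale, h // scale), Image.BICUBIC)
    cheap = np.asarray(lr.resize((w, h), Image.BICUBIC), dtype=np.float64) / 255.0
    return cheap, hr

def learn_filter(image_paths):
    """Accumulate least-squares statistics over every patch, then solve for one filter."""
    k = PATCH * PATCH
    ata, atb = np.zeros((k, k)), np.zeros(k)
    r = PATCH // 2
    for path in image_paths:
        cheap, hr = lr_hr_pair(path)
        for y in range(r, cheap.shape[0] - r):
            for x in range(r, cheap.shape[1] - r):
                patch = cheap[y - r:y + r + 1, x - r:x + r + 1].ravel()
                ata += np.outer(patch, patch)
                atb += patch * hr[y, x]
    # A tiny ridge term keeps the solve stable with limited training data
    return np.linalg.solve(ata + 1e-6 * np.eye(k), atb).reshape(PATCH, PATCH)

Run over thousands of image pairs, those accumulated statistics are essentially the "knowledge base": applying the result later is just one cheap upscale plus one small filter pass per pixel, which is why it's so fast.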

Here's an example, going from the original low-res file, to the kind of result you could expect from a bicubic upscaler (in Photoshop, for instance), to the result of the RAISR system.

Left: original jagged JPEG image. Middle: typical bicubic upscaling. Right: RAISR's effort.
Google
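
To try a comparison like the one above yourself, the sketch below upscales an image two ways: plain bicubic (roughly what a Photoshop-style upscaler does) and bicubic followed by the learned filter from the previous snippet. The file names are placeholders, and a single global filter will sharpen far less convincingly than RAISR's per-patch filters.

import numpy as np
from PIL import Image
from scipy.ndimage import correlate

def bicubic_2x(img):
    """Baseline: plain bicubic upscaling to twice the resolution."""
    return img.resize((img.width * 2, img.height * 2), Image.BICUBIC)

def filtered_2x(img, filt):
    """Bicubic upscale, then run the learned filter over it to restore some sharpness."""
    cheap = np.asarray(bicubic_2x(img).convert("L"), dtype=np.float64) / 255.0
    out = correlate(cheap, filt, mode="reflect")
    return Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8))

# filt = learn_filter(["train_01.png", "train_02.png"])  # from the earlier sketch
# low = Image.open("lowres.jpg")
# bicubic_2x(low).save("bicubic_2x.png")
# filtered_2x(low, filt).save("filtered_2x.png")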

Because it's a learning algorithm, it has also learned to recognize and reduce some of the aliasing issues you can get with tightly spaced patterns in low-res images. The horizontal striping in the low-res image beneath the numeral 5 below is greatly reduced in the RAISR upscaling.

RAISR can learn to iron out common aliasing artifacts like moire and jaggies
Google

Because it works so fast, RAISR-type technology could easily be adapted to run on a smartphone in real time. Google is examining whether it could enhance a pinch-to-zoom operation to let you zoom in beyond the native resolution of the photos you've taken.

Google is also looking at whether it can get good and fast enough to be used as a speed booster and data saver when sending images. If you were to crunch an image down to a low-res copy and send it, then unpack it and RAISR it back to a great approximation of the full-resolution image, you could save a ton of data transfer, not to mention cloud storage for your images.
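
That pipeline is easy to picture in code. The sketch below is a back-of-the-envelope illustration only: the sender shrinks a photo and sends a smaller JPEG, and the receiver blows it back up. Since RAISR itself isn't publicly available, plain bicubic stands in for the final upscale, so this shows the size saving but not the quality RAISR would recover; the file names and quality setting are assumptions.

import io
from PIL import Image

def shrink_for_transfer(path, scale=2, quality=85):
    """Sender side: downsample and JPEG-encode, so far fewer bytes go over the wire."""
    img = Image.open(path).convert("RGB")
    small = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def restore_after_transfer(payload, scale=2):
    """Receiver side: decode and upscale. RAISR's learned filters would go here."""
    small = Image.open(io.BytesIO(payload))
    return small.resize((small.width * scale, small.height * scale), Image.BICUBIC)

# payload = shrink_for_transfer("photo.jpg")
# print(len(payload), "bytes sent")
# restore_after_transfer(payload).save("restored.png")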

Let's not forget how important this kind of work is when it comes to preserving memories; a lot of early digital cameras and phone cameras – not to mention, for example, security cameras – simply didn't have the resolution that today's big screen displays require. Wouldn't it be great to be able to enhance those images back to a much sharper picture?

Source: Google

4 comments
Nairda
"for example, security cameras – simply didn't have the resolution " As long as people realize this is art imitating as realism. Detail reconstruction in this case is a machine interpretation of reality where as bicubic sharpening is a blind algorithm. Translation: May not me admissible in court as image evidence. The face of the crook in CCTV only mimics what the machine has seen before.
MarylandUSA
There was an episode of Galactica where the security chief needed a few hours to tease out detail from a low-res surveillance image. I thought, Seriously? You guys can travel at warp speed, but it takes you hours to massage a photo?
RasielSuarez
It would be wonderful if the article included mention of where one could try out Google's project with an image of their own. If the tech is not publicly available then at least make mention of that.
McDesign
"Enhance!"
From Blade Runner . . .