
Google and MIT’s new machine learning algorithms retouch your photos before you take them

It’s getting harder and harder to squeeze more performance out of your phone’s camera hardware. That’s why companies like Google are turning to computational photography: using algorithms and machine learning to improve your snaps. The latest research from the search giant, conducted with scientists from MIT, takes this work to a new level, producing algorithms that are capable of retouching your photos like a professional photographer in real time, before you take them.

The researchers used machine learning to create their software, training neural networks on a dataset of 5,000 images created by Adobe and MIT. Each image in this collection has been retouched by five different photographers, and Google and MIT’s algorithms used this data to learn what sort of improvements to make to different photos. This might mean increasing the brightness here, reducing the saturation there, and so on.
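For readers curious what that supervised setup looks like in practice, here is a minimal, purely illustrative sketch in PyTorch. It is not the researchers' actual model or code: the tiny network, the random stand-in data, and every hyperparameter below are placeholders. The point is just the structure of the task the article describes: the network sees an original photo and is trained to reproduce a photographer's retouched version of it.

```python
# Minimal sketch (not the authors' model): learn to map an original photo to an
# expert-retouched version of the same photo.
import torch
import torch.nn as nn

# Stand-in for (original, expert-retouched) image pairs, e.g. one photographer's
# edits from the 5,000-image Adobe/MIT collection. Shapes: N x 3 x H x W in [0, 1].
originals = torch.rand(8, 3, 64, 64)
retouched = torch.rand(8, 3, 64, 64)

# A deliberately small network that predicts the retouched image from the original.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
    nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    prediction = model(originals)
    loss = loss_fn(prediction, retouched)  # how far is the output from the expert's edit?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```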

Using machine learning to improve photos has been done before, but the real advance with this research is slimming down the algorithms so that they are small and efficient enough to run on a user’s device without any lag. The software itself is no bigger than a single digital image, and, according to a blog post from MIT, could be equipped “to process images in a range of styles.”

A composition created by MIT showing the original 12-megapixel image (left) and the retouched version produced by the new algorithm (right).

This means the neural networks could be trained on new sets of images, and could even learn to reproduce an individual photographer’s particular look, in the same way companies like Facebook and Prisma have created artistic filters that mimic famous painters. Of course, it’s worth pointing out that smartphones and cameras already process imaging data in real time, but these new techniques are more subtle and reactive, responding to the needs of individual images, rather than applying general rules.

To slim down the algorithms, the researchers used a few different techniques, including turning the changes made to each photo into formulae and mapping out the pictures with grid-like coordinates. The upshot is that the instructions for retouching a photo can be expressed mathematically, rather than stored as a full-resolution retouched image.
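As a rough illustration of that idea (a simplification under assumed details, not the paper's exact recipe), the sketch below stores a coarse grid of small affine color transforms instead of a full-resolution retouched photo, then applies each region's transform to the pixels in that region. The image size, grid size, and the transforms themselves are made-up placeholders.

```python
# Illustrative sketch of the "formulae on a grid" idea: store one small affine
# colour transform per grid cell rather than a fully retouched photo, then apply
# it per pixel at full resolution. All values here are placeholders.
import numpy as np

H, W = 768, 1024                     # stand-in for a full-resolution photo
image = np.random.rand(H, W, 3)      # RGB values in [0, 1]

# A coarse 16x16 grid; each cell holds a 3x4 affine transform (matrix + offset)
# describing how to adjust colours in that region of the picture.
GRID = 16
transforms = np.tile(np.hstack([np.eye(3), np.zeros((3, 1))]), (GRID, GRID, 1, 1))
transforms[:, :GRID // 2, :, 3] += 0.05   # e.g. brighten the left half slightly

# Map every pixel to its grid cell and apply that cell's transform.
rows = np.minimum(np.arange(H) * GRID // H, GRID - 1)
cols = np.minimum(np.arange(W) * GRID // W, GRID - 1)
cell = transforms[rows[:, None], cols[None, :]]                       # H x W x 3 x 4
homogeneous = np.concatenate([image, np.ones((H, W, 1))], axis=-1)    # H x W x 4
retouched = np.einsum('hwij,hwj->hwi', cell, homogeneous).clip(0, 1)
```

The grid of coefficients here amounts to a few thousand numbers, versus millions of pixels for the photo itself, which is the kind of saving that makes on-device, real-time processing plausible.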

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” Google researcher Jon Barron told MIT. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

Will we be seeing these algorithms pop up in one of Google’s future Pixel phones? It’s not unlikely. The company has used its HDR+ algorithms to bring out more detail in light and shadow on its mobile devices since the Nexus 6. And speaking to The Verge last year, Google’s computational photography lead, Marc Levoy, said that we’re “only beginning to scratch the surface” with this work.