Last year, Google unleashed its DeepDream software on the world, and a wave of images filled with bright lines, eyeballs and creepy dog heads flooded the internet. Now a team at the University of Freiburg in Germany has given neural networks a better sense of style, developing a method that takes an existing artistic style and smoothly applies it to video. Artists, including New York-based Danil Krivoruchko, have already put the system to work with some beautiful results.

The researchers build on previous work, particularly a paper describing a neural algorithm that can overlay the style of one image onto another still image, with adjustable parameters to balance the artistic style against the content of the original. This approach can be applied to video by processing each frame as an individual image and then stitching the frames back together, but the results aren't ideal: because the wider context of each frame within the video isn't accounted for, the output is a jarring, flickering mess.
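The balancing act between style and content in that earlier algorithm can be sketched as a weighted sum of two penalty terms, one comparing network activations to the content image and one comparing texture statistics (Gram matrices) to the style image. The sketch below is illustrative only; the function and weight names are assumptions, not taken from the researchers' code:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one network layer;
    # the Gram matrix captures which channels fire together, i.e. texture.
    return features @ features.T

def style_transfer_loss(content_feats, style_feats, generated_feats,
                        alpha=1.0, beta=1e-3):
    """Weighted content/style objective (hedged sketch of the Gatys-style loss).

    alpha and beta are the adjustable parameters that trade content
    fidelity against artistic style; these names are hypothetical.
    """
    # How far the generated image drifts from the content image's features.
    content_loss = np.mean((generated_feats - content_feats) ** 2)
    # How far its texture statistics drift from the style image's.
    style_loss = np.mean((gram_matrix(generated_feats)
                          - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss
```

Raising `beta` relative to `alpha` pushes the result toward pure texture; lowering it keeps the image closer to the original photo.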

To smooth things out, the team introduced a few constraints on how the images are rebuilt in the desired style. A temporal constraint encourages the system to change as little as possible from one frame to the next, and a multi-pass algorithm cleans up artifacts that form around the edges of the shot. When a character runs across a scene, for example, objects that pass behind them are tracked, so that when they reappear the system doesn't rebuild them from scratch but remembers how they looked before they were blocked from view.

The resulting video is consistent, stable, and far easier on the eye than versions produced without the constraints. It's hard to communicate the difference in words, but the reel above shows just how much of a difference it makes.

NYC Flow, which can be seen below, is another beautiful example of the system at work. Krivoruchko's project paints the Big Apple's skyline like a 3D watercolor and splashes vibrant colors over the drab greys and browns of the subway.

The research was published online on arXiv, and if you want to try it out for yourself, the team has uploaded the algorithm to GitHub, along with detailed instructions, requirements and examples.