Transferring Image “Style” via Deep Learning

So, what you want to do is apply a specific style to a photo. Oh, I'm not referring to the Instagram/filter thing, I mean more in the sense of "I want to make this picture, taken in the summer, look like it was taken in the depths of winter, with snow and clouds and whatnot", pretty much like the example below.
There are existing techniques to do stuff like this — e.g. the "Van Gogh" approach by Gatys et al., or the "Deep Photo" approach by Luan et al. — but they all tend to either be too stylized (so yeah, it looks like "The Starry Night". Yay.), or they have obvious artifacts. What's more, they are all quite compute intensive, requiring upwards of minutes, on serious horsepower mind you, to do a single image, a far cry from Instagram-level abilities.
Li et al. from UC Merced and NVIDIA have a nifty new paper out (and code! and docs!) where they deal, quite successfully, with the above issues. In short, their approach applies the stylization as normal, but then proceeds to apply a "smoothing" step, which makes the image more spatially consistent, thus removing artifacts.
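To get a feel for the two-stage structure (stylize, then smooth), here's a toy sketch in numpy. To be clear, these are stand-in functions, not the paper's method: the actual paper uses a whitening-and-coloring transform for stylization and a closed-form, content-guided smoother, while this sketch substitutes a simple per-channel color-statistics transfer and a plain box filter just to show the pipeline shape:

```python
import numpy as np

def stylize(content, style):
    # Toy stand-in for the stylization step: match each channel's mean
    # and standard deviation to the style image (NOT the paper's
    # whitening-and-coloring transform).
    c_mean, c_std = content.mean((0, 1)), content.std((0, 1)) + 1e-8
    s_mean, s_std = style.mean((0, 1)), style.std((0, 1))
    return (content - c_mean) / c_std * s_std + s_mean

def smooth(stylized, k=3):
    # Toy stand-in for the smoothing step: a k-by-k box filter that pulls
    # each pixel toward its neighborhood average, making the result more
    # spatially consistent (the paper instead uses a closed-form smoother
    # guided by the original content image).
    pad = k // 2
    padded = np.pad(stylized, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w = stylized.shape[:2]
    out = np.zeros_like(stylized)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def photo_style_transfer(content, style):
    # The two-stage pipeline described in the post: stylize, then smooth.
    return smooth(stylize(content, style))

content = np.random.rand(32, 32, 3)  # stand-in for the "summer" photo
style = np.random.rand(32, 32, 3)    # stand-in for the "winter" reference
result = photo_style_transfer(content, style)
print(result.shape)  # (32, 32, 3)
```

The point of the second stage is exactly what the post says: stylization alone tends to produce spatially inconsistent colors, and the smoothing pass trades a little sharpness for artifact-free, photorealistic output.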
Their tests show that the results are far more pleasing to the eye than the current state-of-the-art, and what’s more, take only seconds to apply, as compared to minutes!
The examples below show the differences between the new algorithm ("Ours") and the current leading contenders (Pitié et al., and Luan et al.).
Quite the difference eh? Go read the whole paper for details, and then check out the examples!
