Neural style transfer

I've recently been playing with Keras and I decided to try my hand at one of the many interesting applications of deep learning: neural style transfer.

This technique consists of taking the colors, patterns and general style of one picture and the content of another, creating a third picture that combines the style of the first with the content of the second. This enables us to transfer the style of famous paintings to any other picture.

Here's a competition between various artists, each composing a picture of the Alcázar of Segovia in their own style.

The algorithm works by comparing the generated image against both the original content and style images, computing a virtual "distance" to each. These distances are then added together into a single number that gets reduced over a number of iterations, in theory making the generated image similar to both sources and thus "transferring" the style onto the content.
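In code, those "distances" are loss functions computed over the feature maps of a pre-trained convolutional network (more on that below). Following the approach in François Chollet's Deep Learning with Python, the content distance compares raw activations at one layer, while the style distance compares Gram matrices, i.e. the correlations between a layer's channels. Roughly:

```python
from keras import backend as K

def content_loss(base, combination):
    # Squared error between feature activations at one layer
    return K.sum(K.square(combination - base))

def gram_matrix(x):
    # Flatten each channel's feature map and compute channel-wise
    # correlations (the Gram matrix), which capture texture, not layout
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    return K.dot(features, K.transpose(features))

def style_loss(style, combination, img_height, img_width):
    # Squared error between Gram matrices, normalized by layer size
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_height * img_width
    return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
```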

However, since both values are combined into a single number, a large decrease in one term can yield a lower total than a balanced decrease in both. This can make the algorithm focus on a single aspect of the transfer: we might get an image almost identical to the original with just a splash of style, or on the other hand a very colorful painting in which the original subject can no longer be discerned. To avoid this and get a satisfying result, the content weight, the number of iterations, and the number of gradient descent steps per iteration should be tweaked to control how much content or style ends up in the generated image.
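To make the trade-off concrete, here's a sketch of how the two losses might be weighted and summed. The weight values and variable names below are illustrative placeholders, not the defaults from the repository:

```python
# Illustrative weights; the right balance depends on the image pair.
# Raising content_weight preserves the subject, raising style_weight
# pushes the result towards the painting's texture and colors.
content_weight = 0.025
style_weight = 1.0

# Total loss: a weighted sum of the content term (one layer) and the
# style terms (averaged over several layers)
loss = content_weight * content_loss(content_feats, combo_feats)
for style_feats, combo_feats_l in style_layers:
    loss = loss + (style_weight / len(style_layers)) * \
        style_loss(style_feats, combo_feats_l, height, width)
```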

Beer in Oostende, Vincent van Gogh,
Color on pixel, 2019

The code uses the pre-trained VGG16 neural network and follows François Chollet's example in Deep Learning with Python. I modified the program to split the code into separate files and gather all the important parameters in a single place.
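Loading the network in Keras is a one-liner; the classification head is dropped since only the convolutional feature maps are needed:

```python
from keras.applications import VGG16

# Pre-trained ImageNet weights, no dense classifier on top
model = VGG16(weights="imagenet", include_top=False)
```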

The program creates a new folder named after the content and style images and saves the generated image at the end of each iteration. When the process finishes, the original content image is copied over and a gif showing the generation process is produced.
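One simple way to assemble such a gif (not necessarily how the repository does it) is with imageio; the folder and file names here are made up for illustration:

```python
import glob
import imageio

# Collect the per-iteration frames in order and write them as a gif
frames = [imageio.imread(f) for f in sorted(glob.glob("content_style/*.png"))]
imageio.mimsave("content_style/progress.gif", frames, duration=0.2)
```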

Check out the code on GitHub; all the parameters to be modified are set in the params.yml file. The weights and iterations will need to be tuned depending on the images, so give it a try!

https://github.com/alvaroferran/style_transfer
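Reading the parameters back in Python would look something like this; the key names here are assumptions for illustration, the actual ones are defined in params.yml:

```python
import yaml

# Key names are illustrative; check params.yml for the real ones
with open("params.yml") as f:
    params = yaml.safe_load(f)

content_weight = params["content_weight"]
style_weight = params["style_weight"]
iterations = params["iterations"]
```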
