In fine art, especially painting, humans have mastered the skill to create unique
visual experiences through composing a complex interplay between the content
and style of an image. Thus far the algorithmic basis of this process is
unknown and there exists no artificial system with similar capabilities. However,
in other key areas of visual perception such as object and face recognition
near-human performance was recently demonstrated by a class of biologically
inspired vision models called Deep Neural Networks [1, 2]. Here we introduce an
artificial system based on a Deep Neural Network that creates artistic images
of high perceptual quality. The system uses neural representations to separate
and recombine content and style of arbitrary images, providing a neural
algorithm for the creation of artistic images. Moreover, in light of the striking
similarities between performance-optimised artificial neural networks and
biological vision [3–7], our work offers a path forward to an algorithmic understanding
of how humans create and perceive artistic imagery.
Last Wednesday, “A Neural Algorithm of Artistic Style” was posted to arXiv, featuring some of the most compelling imagery generated by deep convolutional neural networks since Google Research’s “DeepDream” post.
On Sunday, Kai Sheng Tai posted the first public implementation. I immediately stopped working on my own implementation and started playing with his. Unfortunately, his results don’t quite match the paper’s, and it’s unclear why. I’m just getting started with this topic, so as I learn I want to share my understanding of the algorithm here, along with some results from testing his code.
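The paper's core idea is to define two losses over CNN feature maps: a content loss that matches the raw feature responses of a generated image to those of a content image, and a style loss that matches the Gram matrices (feature correlations) of the generated image to those of a style image, summed over layers. Here is a minimal NumPy sketch of those two losses; the random arrays stand in for VGG activations, and the 1000.0 style weight and toy shapes are illustrative choices, not the paper's settings:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height * width) activations from one CNN layer;
    # the Gram matrix captures which channels co-activate (the "style")
    return features @ features.T

def content_loss(f_gen, f_content):
    # squared error between generated and content feature maps
    return 0.5 * np.sum((f_gen - f_content) ** 2)

def style_loss(f_gen, f_style):
    # squared error between Gram matrices, normalised by layer size
    n, m = f_gen.shape  # channels, spatial positions
    g_gen, g_style = gram_matrix(f_gen), gram_matrix(f_style)
    return np.sum((g_gen - g_style) ** 2) / (4 * n ** 2 * m ** 2)

# toy feature maps standing in for real network activations
rng = np.random.default_rng(0)
f_c = rng.standard_normal((8, 16))   # "content image" features
f_s = rng.standard_normal((8, 16))   # "style image" features
f_g = f_c.copy()                     # generated image, initialised at content

# total objective the algorithm minimises over the generated image's pixels
total = content_loss(f_g, f_c) + 1000.0 * style_loss(f_g, f_s)
```

In the actual system these losses are computed on activations from several layers of a pretrained network (VGG in the paper), and the generated image itself is optimised by gradient descent on this total loss.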
A Neural Algorithm of Artistic Style
Comparing Artificial Artists
I’d like to see blind tests where people have to guess which painting was made by a human and which by an algorithm.