R&D Turns Into Art - Take a Deep Look at How These Styles Were Created
[caption id="attachment_2283" align="alignright" width="120"] An artist learns of his obsolescence.[/caption] Art. Wow, what a concept. What used to be considered a purely human endeavor can now be done as well, if not better, by computers. And I'm not talking about some abstract performance piece where a person interacts with a computer loosely involved in some atmospheric modulation. I'm talking about real, aesthetic, actual picture-on-canvas artwork. With the ongoing development of machine learning through artificial neural networks (ANNs) and deep learning, computers are beginning to become more creative. It seems that ditching Von Neumann and, once again, copying nature may propel computers to a more human place. We now have computers that are faster than ever and possibly more human than ever.

[caption id="attachment_2282" align="alignleft" width="264"] Simple![/caption] While perceptrons (simple learning algorithms) and ANNs are nothing new, today's processing power lets us explore vast networks that can truly learn on their own. Believe it or not, you're already using them in your day-to-day life. ANNs show up at the Post Office for optical character recognition, in your smartphone for speech-to-text, in Facebook's facial recognition, in Google Translate, and in Google's image search.

How can computers learn? Don't they just do what the programmer tells them to do? Well, ostensibly, yes. However, we can build software to create a virtual brain and say, "Hey computer, make a network with 1,000 neurons, 16 inputs, and one output. Here is our dataset, here is what we expect to see, now figure it out." And it will, using a process called backpropagation. Through repeated training, the network compares its output with the expected results and slowly converges to a solution. By utilizing this process, we can train a computer to read handwriting.
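If you're curious what that "compare errors and converge" loop looks like in practice, here is a minimal toy sketch of backpropagation. It trains a tiny two-layer network on the classic XOR problem rather than the 1,000-neuron handwriting network described above, and every size and learning rate here is an illustrative choice, not anything from the original project:

```python
import numpy as np

# Toy backpropagation sketch: a tiny network learns XOR.
# All sizes and the learning rate are illustrative choices,
# not the large network described in the post.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # "here is what we expect to see"

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current guess.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: compare the guess with the expected results
    # and compute how each weight contributed to the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Nudge every weight a little to reduce the error.
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

# After training, the outputs drift toward the targets [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

Repeat that loop over millions of handwritten letters instead of four XOR rows and you have the character-recognition idea in miniature.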
Give it millions of hand-written letters and it will learn to classify them as separate entities. After the network has sorted itself out, you can say "this is an ‘A,' this is a ‘B,'" etc.

[caption id="attachment_2281" align="alignright" width="245"] Even more terrifying than the original. ;)[/caption] What does any of this have to do with creativity? Once you have a trained network, rather than feeding it the kind of dataset it expects (numbers, letters, faces, etc.), what would happen if you fed it noise or something entirely different? In this figure, we see what a network trained to recognize dogs and birds "dreams" when shown a picture of Mr. Trump. That is, the network tries to find familiar images in the provided picture and transforms it into things it can recognize. The often-cited resemblance to LSD- or psilocybin-induced hallucinations hints at a functional resemblance to the visual cortex. (Click his image for the full video.)

We now have algorithms that can detect features, recognize objects and shapes, and manipulate images. Rather than training these things on discrete objects and concrete forms, what if we could teach them to understand something deeper or more abstract, like stylization, mood, or personality? Someone tried it and came up with a program called "deep style," named after its algorithmic roots in deep learning. This figure shows what a network can do when given a target image to transform into a particular artist's style. Figure A shows the original image, B through F show the output, and the inlays show the training image. The resemblance to the style, feeling, and mood of the training image is uncanny.

[caption id="attachment_2284" align="alignleft" width="347"] Seen on the fridge in a Google datacenter.[/caption] I decided to take this to the extreme and feed in multiple styles using one image to celebrate sight and highlight the many things we may take for granted.
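For the curious, the heart of the "deep style" idea can be sketched in a few lines. In neural style transfer, an image's "style" is commonly captured by Gram matrices (correlations between a layer's feature maps), and the target image's pixels are pushed to match them. The feature maps below are random stand-ins purely for illustration; in the real algorithm they would come from a trained convolutional network, and nothing here is the post's actual code:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature maps: features is (channels, height*width)."""
    return features @ features.T / features.shape[1]

# Random stand-ins for convolutional feature maps (64 channels, 32x32 layer).
# In the real algorithm these come from a trained network, not a RNG.
rng = np.random.default_rng(1)
style_feats = rng.normal(size=(64, 32 * 32))   # from the style painting
target_feats = rng.normal(size=(64, 32 * 32))  # from the image being repainted

# Style loss: how far apart are the two images' feature correlations?
style_loss = np.mean((gram_matrix(target_feats) - gram_matrix(style_feats)) ** 2)
print(style_loss)
```

In the full algorithm, this style loss (plus a content loss that keeps the original scene recognizable) is backpropagated to the pixels of the target image over and over until the photo takes on the painting's textures and mood.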
Sure, we need to see to drive to work and find our lunch in the fridge, but it's easy to forget that enjoying a baseball game, reading a comic book, or spending a weekend camping all rely on sight. I started with the plain frame in the bottom right corner of the canvas pictured above, then trained the network on Superman, newspaper, a map, The Simpsons, a view of nightlife, a cheetah, water, and more. Take a closer look: which other trained images do you recognize?

If computers can do math faster than us, beat us at chess and Go, recognize faces, create artwork, and run 24/7 without getting tired, is there anything left that we are better at? Where do we go from here? Perhaps that's not a meaningful question, and we should instead ask ourselves how we can harness their power and ‘talents' to enrich the lives of those around us. I, for one, welcome our new computational neural networking overlords.

For a larger version of the canvas, click here!

- Erik Sikich - Cherry Optical, Inc Research & Development