Artificial Creativity

TensorFlow Dev Summit 2017 cont...

Matt and I had a long day listening to some of the latest breakthroughs in deep learning, specifically those related to TensorFlow. Some standouts included a Stanford student who had created a neural net to detect skin cancer. Also liked Doug Eck's talk about artificial creativity. Jeff Dean gave a cool keynote, and I got to learn about TensorBoard from Dandelion Mane. One of my favorite parts of the summit was getting shout-outs from, and later talking to, both Jeff Dean and Doug Eck. The shout-outs to cloudpainter during Jeff's keynote and Doug's session, along with lots of pics, can be seen below. This is mostly for my memories.

TensorFlow Dev Summit 2017

About two months ago I applied to go to Google's first annual TensorFlow Dev Summit. I sent in the application and forgot about it. After a month I figured that I had not gotten an invite. Then about a week ago, the invite came in. It turns out only one in ten applicants was invited to the conference. I have no idea what criteria they used to select me, but I am currently on a plane to Mountain View, excited to talk with the TensorFlow team and see what other developers are doing with it.

The summit will be broadcast live around the world.  Here is a link.  Look for me in the crowd. I will have a grey pullover on.

 

 

Our First Truly Abstract Painting

Have had lots of success with Style Transfer recently.  With the addition of Style Transfer to some of our other artificially creative algorithms, I am wondering if cloudpainter has finally produced something that I feel comfortable calling a true abstract painting.  It is a portrait of Hunter.

In one sense, abstract art is easy for a computer. A program can just generate random marks and call the finished product abstract. But that's not really an abstraction of an actual image, it's the random generation of shapes and colors. I am after true abstraction, and with Style Transfer, this might just be possible.

More details to come as we refine the process, but in short the image above was created from three source images, seen in the top row below: an image of Hunter, his painting, and Picasso's Les Demoiselles d'Avignon.

Style Transfer was applied to the photo of Hunter to produce the first image in the second row. The algorithm tried to paint the photo in the style of Hunter's painting. The second image in the second row is a reproduction of Picasso's painting made and recorded by one of my robots using many of its traditional algorithms, with brush strokes by me.

The final painting in the final row was created by cloudpainter's attempt to paint the Style Transfer Image with the brush strokes and colors of the Picasso reproduction.


While this appears like just another pre-determined algorithm that lacks true creativity, the creation of paintings by human artists follows a remarkably similar process. They draw upon multiple sources of inspiration to create new imagery.

The further along we get with our painting robot, the less sure I am whether we are less creative than we think, or computers are much more creative than we imagined.

Hunter's Portrait

Inspired by our trip to the National Portrait Gallery, we started thinking to ourselves: what's so impressive about making our robots paint like a famous artist? Sure, famous artists are inspirational and a lot can be learned from them, but when you think about it, people are more interested in the art of their loved ones.

So this morning, Hunter and I decided to do quick portraits of each other and then run the portraits through deep neural nets to see how well they applied to a photo we took of each other. As soon as we started, Corinne joined in, so here is the obligatory photo of her helping out.

Also in the above photo you can see my abstract portrait in progress.

Below you can see the finished paintings and how they were applied to the photos we took. If you have been following this blog recently, you will know that the images along the top are the source images from which style is taken and applied to the photos on the left. This is all being done via Style Transfer and TensorFlow. Also, I should note that the painting on the left is mine, while Hunter's is on the right.

The most interesting thing about all this is that the creative agents remain Hunter and me, but still, something is going on here. For example, even though we were the creative agents, we drew some of our stylistic inspiration from other artists' paintings that we saw at the National Portrait Gallery yesterday. Couldn't a robot do something similar?

More work to be done.

Inspiration from the National Portrait Gallery

One of the best things about Washington D.C. is its public art museums. There are about a dozen or so world class galleries where you are allowed to take photos and use the work in your own art, because after all, we the people own the paintings. Excited by the possibilities of deep learning and how well style transfer was working, the kids and I went to the National Portrait Gallery for some inspiration.

One of the first ideas that occurred to us was a little Inception-like: what would happen if we applied style transfer to a portrait using itself as the source image? It didn't turn out that well, but here are a couple of those anyway.

While this idea was a dead end, the next idea that came to us was a little more promising. Looking at the successes and failures of the style transfers we had already performed, we started noticing that when the context and composition of the paintings matched, the algorithm was a lot more successful artistically. This is of course obvious in hindsight, but we are still working to understand what is happening in the deep neural networks, and anything that can reveal anything about that is interesting to us.

So the idea we had, which was fun to test out, was to try to apply the style of a painting to a photo that matched the painting's composition. We selected two famous paintings from the National Portrait Gallery to attempt this: de Kooning's JFK and Degas's Portrait of Miss Cassatt. We used JFK on a photo of Dante with a tie on. We also had my mother pose as best she could to resemble how Cassatt was seated in her portrait. We then let the deep neural net do its work. The following are the results. Photos courtesy of the National Portrait Gallery.


Farideh likes how her portrait came out, as do we, but it's interesting that this only goes to further demonstrate that there is so much more to a painting than just its style, texture, and color. So what did we learn? Well, we knew it already, but we need to figure out how to deal with texture and context better.

Applying Style Transfer to Portraits

Hunter and I have been focusing on reverse engineering the three most famous paintings according to Google, as well as a hand-selected piece from the National Gallery. These artworks are the Mona Lisa, The Starry Night, The Scream, and Woman With A Parasol.

We also just recently got Style Transfer working on our own TensorFlow system. So naturally we decided to take a moment to see how a neural net would paint using the four paintings we selected, plus a second work by Van Gogh, his Self-Portrait (1889).

Below is a grid of the results.  Across the top are the images from which style was transferred, and down the side are the images the styles were applied to. (Once again a special thanks to deepdreamgenerator.com for letting us borrow some of their processing power to get all these done.)

It is interesting to see where the algorithm did well and where it did little more than transfer the color and texture. A good example of where it did well can be seen in the last column. Notice how the composition of the source style and the portrait it is being applied to line up almost perfectly. As could be expected, this resulted in a good transfer of style.

As far as failures go, it is easy to notice lots of limitations. Foremost, I noticed that the photo being transferred needs to be high quality for the transfer to work well. Another problem is that the algorithm has no idea what it is doing with regard to composition. For example, in The Scream style transfers, it paints a sunset across just about everyone's forehead.

We are still in the process of creating a step-by-step animation that will show one of the portraits having the style applied to it. It will be a little while though, because I am running it on a computer that can only generate one frame every 30 minutes. This is super processor-intensive stuff.

While the processor is working on that, we are going to go see if we can't find a way to improve upon this algorithm.

 

 

 

 

Channeling Picasso with Style Transfer and Google's TensorFlow

We are always jumping back and forth between hardware and software upgrades to our painting robot. This week it's the software. Pleased to report that we now have our own implementation of Dumoulin, Shlens, and Kudlur's Style Transfer. This of course is the Deep Learning algorithm that allows you to recreate content in the style of a source painting.

The first image that we successfully created was made by transferring the style of Picasso's Guernica into a portrait of me in my studio.  

So here are the two images we started with. 

And the following is the image that the neural networks came up with.

I was able to get this neural net working thanks in large part to the step-by-step tutorial in this amazing blog post by LO at http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style. A cool thing about the Deep Learning community is that I found half a dozen good tutorials. So if this one doesn't work out for you, just search out another.
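For anyone following along with one of those tutorials, the heart of the style loss is the Gram matrix of feature activations. Here is a minimal NumPy sketch of the idea; the function names and shapes are my own for illustration, not from any particular tutorial:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height*width, channels) feature map.

    The Gram matrix captures which feature channels tend to
    co-activate, which is what the style loss compares.
    """
    return features.T @ features / features.shape[0]

def style_loss(style_features, generated_features):
    """Mean squared difference between the two Gram matrices."""
    g_style = gram_matrix(style_features)
    g_gen = gram_matrix(generated_features)
    return np.mean((g_style - g_gen) ** 2)
```

In a full implementation this loss is computed over several convolutional layers of a pretrained network and minimized alongside a content loss, which is what the tutorials walk you through.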

Even cooler though, is that you don't even need to set up your own implementation. If you want to do your own Style Transfers, all you have to do is head on over to the Deep Dream Generator at deepdreamgenerator.com. On this site you can upload pictures and have their implementation generate your own custom Style Transfers.  There is even a way to upload your own source images and play with the settings.  

Below is a grid of images I created on the Deep Dream Generator site using the same content and source image that I used in my own implementation.  In them, I played around with the Style Scale and Style Weight settings. Top row has Scale set to 1.6, while second row is 1, and third is 0.4.  First column has the Weight set to 1, while second is at 5 and third is at 10.

So while I suggest you go through the pains of setting up your own implementation of Style Transfer, you don't have to. Deep Dream Generator lets you perform 10 style transfers an hour.

For us on the other hand, we need our own generator as this technology will be closely tied into all robot paintings going forward.

 

 

 

Capturing Monet's Style with a Robot

As we gather data in an attempt to recreate the style and brushstroke of old masters with Deep Learning, we thought we would show you one of the ways we are collecting data. And it is pretty simple actually. We are hand painting brushstrokes with a 7BOT robotic arm and recording the kinematics behind the strokes. It is a simple trace-and-record operation where the robotic arm's position is recorded 30 times a second and saved to a file.
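For anyone who wants to build a similar trace-and-record rig, the loop is conceptually simple. This is a rough sketch of the idea in Python; the arm object and its `read_joint_angles()` call are hypothetical stand-ins for whatever API your arm exposes, not the actual 7BOT interface:

```python
import json
import time

def record_strokes(arm, outfile, rate_hz=30, duration_s=10):
    """Poll the arm's joint positions at a fixed rate and save them
    as one JSON object per line, with a timestamp for playback."""
    interval = 1.0 / rate_hz
    start = time.time()
    with open(outfile, "w") as f:
        while time.time() - start < duration_s:
            sample = {
                "t": time.time() - start,           # seconds since recording began
                "angles": arm.read_joint_angles(),  # hypothetical API call
            }
            f.write(json.dumps(sample) + "\n")
            time.sleep(interval)
```

Playback is then just the reverse: read the file line by line and command the arm to each recorded position at the recorded time.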

As can be seen in the picture above, all Hunter had to do was trace the brush strokes he saw in the painting.  He did this for a number of colors and when he was done, we were able to play the strokes back to see how well the robot understood our instructions.  As can be seen in the following video, the playback was a disaster.  But that doesn't matter to us that much.  We are not interested in the particular strokes as much as we are in analyzing them for use in the Deep Learning algorithm we are working on.

Woman With A Parasol is the fourth Masterpiece we have begun collecting data for.  As this is an open source project, we will be making all the data we collect public.  For example, if you have a 7Bot, or similar robotic arm with 7 actuators, here are the files that we used to record the strokes and make the horrible reproduction.

 

 

Selecting Masterpieces to Recreate with Our Robot

Spent the afternoon with Hunter exploring the National Gallery of Art to decide on the next masterpiece we are going to recreate and analyze with our robot. We saw the da Vinci, many Van Goghs, and lots of other paintings before being drawn to Monet. And as we looked around at several Monets, it became obvious that he had a special artistic style that would lend itself well to replication by our robot and Deep Learning algorithms. In the end Hunter and I decided on the artwork on the left below - Monet's "Woman with a Parasol."

Now if we could somehow program our robots to capture and paint the wind across her face like Monet did. Wow, that would be amazing.

Brushstroke Maps for Three Famous Paintings

When you Google "Famous Artwork" a list of paintings is revealed, and at the top of that list are da Vinci, Van Gogh, and Munch. Here is a picture of the top ten actually...

Now that we have a stroke map of the Mona Lisa and The Scream, we decided to round up the top three by creating a mapping of The Starry Night.  Interestingly, The Starry Night is probably one of the best examples of the importance of brushstrokes in a painting.  There is nothing but flow in it.  And a major part of the composition is the movement made by the direction of the strokes. 

If we could somehow capture how Van Gogh used his strokes... well, that's impossible, but could we at least learn something from them? We will never know until we try.

As of 7:00 PM January 8, 2017, cloudpainter has just barely begun to explore the strokes of Van Gogh's The Starry Night. We realize these first strokes are rudimentary, but it's just laying down a background. Over the next several days we will attempt to copy as many of the strokes as possible, with as much detail as possible. These will be stored in an Elasticsearch database and shared for anyone to use in attempts to deconstruct Van Gogh's brushstrokes.

Work Continues Mapping the Brushstrokes of Famous Masterpieces

Once I created a brushstroke map of Edvard Munch's The Scream, I thought it would be cool to have brushstroke mappings for more iconic artworks. So I googled "famous paintings" and was presented with a rather long list. Interestingly, The Scream was in the top three along with da Vinci's Mona Lisa and Van Gogh's The Starry Night. Well, why not do the top three? So work has begun on creating a stroke map for the Mona Lisa. In the following image, the AI has taken care of laying down an underpainting, or what would have been called a cartoon in da Vinci's time.

 

I am now going into it by hand and finger-swiping my best guess as to how da Vinci would have applied his brushstrokes.  Will post the final results as well as provide access to the Elasticsearch database with all the strokes as soon as it is finished. My hope is that the creation of the brushstroke mappings can be used to better understand these artists, and how artists create art in general.

A Deeper Learning Analysis of The Scream

Am a big fan of what the Google Brain Team, specifically scientists Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur, have accomplished with Style Transfer. In short, they have developed a way to take any photo and paint it in the style of a famous painting. The results are remarkable, as can be seen in the following grid of original photos painted in the style of historical masterpieces.

However, as can be seen in the following pastiches of Munch's The Scream, there are a couple of systematic failures with the approach. The Deep Learning algorithm struggles to capture the flow of the brushstrokes or "match a human level understanding of painting abstraction." Notice how the only thing truly transferred is color and texture.

Seeing this limitation, I am currently attempting to improve upon Google's work by modeling both the brushstrokes and abstraction. In the same way that the color and texture is being successfully transferred, I want the actual brushstrokes and abstractions to resemble the original artwork.

So how would this be possible? While I am not sure how to achieve artistic abstraction, modeling the brushstrokes is definitely doable. So let's start there.

To model brushstrokes, Deep Learning would need brushstroke data, lots of brushstroke data. Simply put, Deep Learning needs accurate data to work. In the case of Google's successful pastiches (a pastiche being an image made in the style of an artwork), the data was found in the images of the masterpieces themselves. Deep Neural Nets would examine and re-examine the famous paintings on a micro and macro level to build a model that can be used to convert a provided photo into the painting's style. As mentioned previously, this works great for color and texture, but fails with the brushstrokes because the algorithm doesn't really have any data on how the artist applied the paint. While strokes can be seen on the canvas, there isn't a mapping of brushstrokes that could be studied and understood by the Deep Learning algorithms.

As I pondered this limitation, I realized that I had this exact data, and lots of it.  I have been recording detailed brushstroke data for almost a decade. For many of my paintings each and every brushstroke has been recorded in a variety of formats including time-lapse videos, stroke maps, and most importantly, a massive database of the actual geometric paths. And even better, many of the brushstrokes were crowd sourced from internet users around the world - where thousands of people took control of my robots to apply millions of brushstrokes to hundreds of paintings. In short, I have all the data behind each of these strokes, all just waiting to be analyzed and modeled with Deep Learning.

This was when I looked at the systematic failures of pastiches made from Edvard Munch's The Scream, and realized that I could capture Munch's brushstrokes and as a result make a better pastiche. The approach to achieve this is pretty straightforward, though labor intensive.

This process all begins with the image and a palette.  I have no idea what Munch's original palette was, but the following is an approximate representation made by running his painting through k-means clustering and some of my own deduction.
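If you want to try extracting a palette yourself, here is roughly what the k-means step looks like, sketched with scikit-learn. The helper name and cluster count are just placeholders for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(image_rgb, n_colors=8):
    """Cluster an image's pixels and return the dominant colors.

    image_rgb: array of shape (height, width, 3), values 0-255.
    Returns an (n_colors, 3) array of RGB cluster centers, i.e.
    an approximate palette for the painting.
    """
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_.astype(int)
```

Running this on a photo of the painting gives a handful of representative colors, which I then adjust by hand - that is the "deduction" part mentioned above.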

With the painting and palette in hand, I then set cloudpainter up to paint in manual mode. To paint a replica, all I did was trace brushstrokes over the image on a touch screen display. The challenging part is painting the brushstrokes in the manner and order that I think Edvard Munch may have done them.  It is sort of an historical reenactment.

As I paint with my finger, these strokes are executed by the robot.

More importantly, each brushstroke is saved in an Elasticsearch database with detailed information on its exact geometry and color.
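The exact schema evolves as the project does, but each stroke document looks roughly like the sketch below. The field names here are my own shorthand, not the project's actual mapping, and the commented-out client call assumes a running Elasticsearch cluster:

```python
def make_stroke_doc(path_points, color_rgb, brush_width_in):
    """Package one brushstroke as a document ready for indexing.

    path_points: list of (x, y) canvas coordinates in inches,
    recorded to the thousandth of an inch.
    """
    return {
        "path": [{"x": round(x, 3), "y": round(y, 3)} for x, y in path_points],
        "color": {"r": color_rgb[0], "g": color_rgb[1], "b": color_rgb[2]},
        "brush_width_in": brush_width_in,
        "num_points": len(path_points),
    }

# With a cluster available, indexing would look something like:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("http://localhost:9200")
#   es.index(index="strokes", document=make_stroke_doc(points, (30, 30, 60), 0.25))
```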

 

At the conclusion of this replica, detailed data exists for each and every brushstroke to the thousandth of an inch. This data can then be used as the basis for an even deeper Deep Learning analysis of Edvard Munch's The Scream. An analysis beyond color and texture, where his actual brushstrokes are modeled and understood.

So this brings us to whether or not abstraction can be captured. And while I am not sure that it can, I think I have an approach that will work at least some of the time. To this end, I will be adding a second set of data that labels the context of The Scream. This will include geometric bounds around the various areas of the painting and will be used to consider the subject matter in the image. So while the Google Brain Team used only an image of the painting for its pastiches, the process that I am trying to perfect will consider the original artwork, the brushstrokes, and how each brushstroke was applied to different parts of the painting.
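To give a concrete sense of what this second data set might look like, here is a minimal sketch where the context labels are just named bounding boxes in normalized canvas coordinates, with a lookup that tells the algorithm which region a given brushstroke falls in. The region names and coordinates are invented for illustration:

```python
def label_for_point(regions, x, y):
    """Return the label of the first region containing (x, y).

    regions: list of dicts, each with a "label" and an axis-aligned
    bounding box (x0, y0, x1, y1) in normalized 0-1 coordinates.
    """
    for r in regions:
        x0, y0, x1, y1 = r["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return r["label"]
    return "background"

# Hypothetical context labels for The Scream:
scream_regions = [
    {"label": "sky", "bounds": (0.0, 0.0, 1.0, 0.35)},
    {"label": "figure", "bounds": (0.35, 0.45, 0.7, 1.0)},
    {"label": "bridge", "bounds": (0.0, 0.6, 1.0, 1.0)},
]
```

In practice the bounds would probably need to be polygons rather than boxes, but even this coarse labeling would let the model treat a stroke in the sky differently from a stroke on the figure's face.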

 

Ultimately, I believe that by considering all three of these data points, a pastiche made from The Scream will more accurately replicate the style of Edvard Munch.

So yes, these are lofty goals and I am getting ahead of myself. First I need to collect as much brushstroke data as possible and I leave you now to return to that pursuit.

Full Visibility's Machine Learning Sponsorship

Wanted to take a moment to publicly thank cloudpainter's most recent sponsor, Full Visibility.

Full Visibility is a Washington D.C. based software consulting boutique that I have been lucky enough to become closely associated with. Their sponsorship arose from a conversation I had with one of their partners. Was telling him how I thought that Machine Learning, which has long been an annoying buzzword, was finally showing evidence of being mature. Next thing I knew, Full Visibility had bought a pair of mini-supercomputers for the partner and me to experiment with. One of the two boxes can be seen in the picture of my home-based lab below. It's the box with the cool white skull on it. While nothing too fancy, it has about 2,500 more cores than any other machine I have ever been fortunate enough to work with. The fact that private individuals such as myself can now run ML labs in their own homes might be the biggest indicator that a massive change is on the horizon.

Full Visibility joins the growing list of cloudpainter sponsors, which now includes Google, 7Bot, RobotArt.org, 50+ Kickstarter backers, and hundreds of painting patrons. I am always grateful for any help with this project that I can get from industry and individuals. All these fancy machines are expensive, and I couldn't do it without your help.

Pindar Van Arman

Lost in Abstraction - Style Transfer and the State of the Art in Generative Imaging

Seeing lots of really cool filters on my friends' photos recently, especially from people using the prisma app. Below is an example of such a photo, and one of my favorites that I have seen.

The filters being applied to these photos are obviously doing a lot more than adjusting levels and contrast, but what exactly are they? While I cannot say for sure what prisma is using, a recently released research paper by Google scientists gives a lot of insight into what may be happening.

The paper, titled A Learned Representation for Artistic Style and written by Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of Google Brain, details the work of multiple research teams in the quest to achieve the perfect pastiche. No worries if you don't know what a pastiche is; I didn't either until I read the paper. Well, I knew what one was, I just didn't know what it was called. A pastiche is an image in which an artist tries to represent something in the style of another artist. Here are several examples that the researchers provide.

In the above image you can see how the researchers have attempted to render three photographs in the styles of Lichtenstein, Rouault, Monet, Munch, and Van Gogh. The effects are pretty dramatic. Each pastiche looks remarkably like both source images. One of the coolest things about the research paper is that it contains a detailed, replicable process so that you too can create your own pastiche-producing software. While photo editing apps like prisma seem to be doing a little more than just a single pastiche, my gut tells me that this process, or something similar, is behind much of what they are doing and how they are doing it so well.

So looking at the artificial creativity behind these pastiches, I like to ponder the bigger question. How close are we to a digital artist? I always ask this 'cause that is what I am trying to create.

Well, as cool and cutting edge as these pastiches are, they are still just filters of photos. And even though this is the state of the art coming out of Google Brain, they are not even true pastiches yet. While they do a good job of transferring color and texture, they don't really capture the style of the artist. You wouldn't look at any of the pastiches in the second column above and think that Lichtenstein actually did them. They share color, contrast, and texture, but that's about it. Or look more closely at these pastiches made from Edvard Munch's The Scream (top left).

While the colors and textures of the imitative works are spot on, the Golden Gate Bridge looks nothing like the abstracted bridge in The Scream. Furthermore, the two portraits have none of the distortion found in the face of the central figure of the painting. These will only be true pastiches when these abstract elements are captured alongside the color, texture, and contrast. The style and process behind producing these pastiches seem to be getting lost in the abstraction layer.

How do we imitate abstraction? No one knows yet, but there are a lot of us working on the problem, and as of November 1, 2016, this is some of the best work being done on it.

My first robot project, a self driving car in 2005

I was just reading that all new Tesla vehicles will be completely self driving, and it made me think about my very first robot project. For the 2005 DARPA Grand Challenge II, I was a member of Team ENSCO, and we built a self driving car that drove 86 glorious miles before careening into a tree. The robot and our work on it can be seen in the video below.

You can also see some cool things on our test vehicle, my 2005 Honda Element. That weird looking thing on top is an experimental Velodyne LIDAR system. Whenever you see a self driving Google car, it usually has the modern version of this contraption spinning around on top. For two weeks we experimented with the very first prototype in existence. I was actually pulled over by the Capitol Police as we drove this vehicle around Capitol Hill on a test run. The officers nearly arrested me after asking what the spinning thing on top of my car was and I foolishly responded, "It's an array of 64 lasers, um wait, they aren't harmful lasers, let me explain..."

Among the many interesting lessons in the project was one about marketing. Over the course of the project we would always spend lots of time explaining what an autonomous car was. No one understood the word autonomous, yet everyone in the industry insisted on calling the cars autonomous. Well, in the ten years since, it would appear that the marketers finally got involved and had the wisdom to just call them "self driving." Which just shows you how clueless we engineers are.

TEDx Talk Now Live

Thanks to everyone backing this Kickstarter, things got bigger than I imagined they ever would.

After this project's success, things went semi-viral: my art was featured on multiple television programs and in dozens of print and online pieces, and earned second place and over $20,000 in an international Robot Art Competition. And oh yeah, the coolest thing was my recent TEDx Talk, which you can check out here.

 

When this started out, I had a goal of two exhibitions. I consider the TEDx Talk to be the first. I have another exhibition in the works that may be even bigger, but it's far from a sure thing right now, so stay tuned for news on that one if I can pull it off.

Until then you can enjoy the TEDx Talk that you made possible!

Pindar

TEDx Talk

So the TEDx Talk went great. Below is a picture taken during my talk by the very first backer of this project, Jessie.

 

Oh yeah, a couple of other local backers also showed up for the talk, so big thanks to them! And a big thanks to all of you, because I am pretty certain I wouldn't have gotten this far without the success of this Kickstarter and all the press it has gotten. Things have snowballed since this all started, and it is pretty much thanks to your backing.

The TEDx Talk is still a little surreal. I will send you all a link to the video as soon as it's public. I haven't seen it yet, but I didn't trip or mumble, so I think it went well.

Am continuing down the list of paintings I owe to backers. I have contacted you if you are in the queue for the next couple of weeks. As always, if you need a portrait rushed for a special event, or just because, contact me and I will bump you to the front of the queue.

Thanks for making all this possible,

TEDx Talk this Weekend

As you all probably know, I have a TEDx Talk on Saturday. It has sort of taken over all my free time, so I have fallen behind on portraits the past couple of weeks. Will have a new schedule early next week. But if you are in the DC area and interested in the talk, or in seeing who else is talking, visit http://www.tedxfoggybottom.net. If you cannot make it, no worries; I will post the video for you all as soon as it comes out.