cloudpainter

Lost in Abstraction - Style Transfer and the State of the Art in Generative Imaging

I've been seeing lots of really cool filters on my friends' photos recently, especially from people using the Prisma app. Below is an example of such a photo, one of my favorites so far.

The filters being applied to these photos are obviously doing a lot more than adjusting levels and contrast, but what exactly are they? While I cannot say for sure what Prisma is using, a recently released research paper by Google scientists gives a lot of insight into what may be happening.

The paper, titled A Learned Representation for Artistic Style and written by Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of Google Brain, details the work of multiple research teams in the quest to achieve the perfect pastiche.  No worries if you don't know what a pastiche is; I didn't either until I read the paper.  Well, I knew what one was, I just didn't know what it was called.  A pastiche is an image in which an artist tries to represent something in the style of another artist.  Here are several examples that the researchers provide.

In the above image you can see how the researchers have attempted to render three photographs in the styles of Lichtenstein, Rouault, Monet, Munch, and Van Gogh.  The effects are pretty dramatic. Each pastiche looks remarkably like both of its source images. One of the coolest things about the research paper is that it details a replicable process, so you too can create your own pastiche-producing software.  While photo editing apps like Prisma seem to be doing a little more than a single pastiche, my gut tells me that this process, or something similar, is behind much of what they are doing and how they are doing it so well.
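The core trick in the paper is what the authors call conditional instance normalization: a single network learns many styles at once, and each style is reduced to just a pair of learned scale and shift vectors applied to normalized feature maps. Here is a minimal NumPy sketch of that one operation; the function name, shapes, and toy parameters are my own illustration, not the paper's actual code.

```python
import numpy as np

def conditional_instance_norm(features, gamma, beta, eps=1e-5):
    """Normalize each channel of a feature map across its spatial
    dimensions, then scale and shift with style-specific parameters.
    Switching styles means swapping in a different (gamma, beta) pair
    while the rest of the network stays the same."""
    # features: (channels, height, width) activations
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    # gamma, beta: one value per channel for this particular style
    return gamma[:, None, None] * normalized + beta[:, None, None]

# Two "styles" are just two parameter sets applied to the same features
feats = np.random.rand(3, 4, 4)  # toy feature map
style_a = conditional_instance_norm(feats, np.full(3, 2.0), np.zeros(3))
style_b = conditional_instance_norm(feats, np.full(3, 0.5), np.ones(3))
```

In the real model these gamma and beta vectors are learned per style during training, which is why the same network can produce a Monet pastiche and a Munch pastiche of the same photograph.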

Looking at the artificial creativity behind these pastiches, I like to ponder the bigger question: how close are we to a digital artist? I always ask this because that is what I am trying to create.

Well, as cool and cutting edge as these pastiches are, they are still just filters of photos. And even though this is the state of the art coming out of Google Brain, they are not even true pastiches yet. While they do a good job of transferring color and texture, they don't really capture the style of the artist.  You wouldn't look at any of the pastiches in the second column above and think that Lichtenstein actually did them.  They share color, contrast, and texture, but that's about it.  Or look more closely at these pastiches made from Edvard Munch's The Scream (top left).

While the colors and textures of the imitative works are spot on, the Golden Gate Bridge looks nothing like the abstracted bridge in The Scream.  Furthermore, the two portraits have none of the distortion found in the face of the painting's central figure.  These will only be true pastiches when those abstract elements are captured alongside the color, texture, and contrast. The style and process behind these pastiches seem to be getting lost in the abstraction layer.

How do we imitate abstraction?  No one knows yet, but a lot of us are working on the problem, and as of November 1, 2016, this is some of the best work being done on it.

My first robot project, a self driving car in 2005

I just read that all new Tesla vehicles will be completely self-driving, and it made me think about my very first robot project. For the 2005 DARPA Grand Challenge, I was a member of Team ENSCO, and we built a self-driving car that drove 86 glorious miles before careening into a tree.  The robot and our work on it can be seen in the video below.

You can also see some cool things on our test vehicle, my 2005 Honda Element.  That weird-looking thing on top is an experimental Velodyne LIDAR system.  Whenever you see a self-driving Google car, it usually has a modern version of this contraption spinning around on top.  For two weeks we experimented with the very first prototype in existence.  I was actually pulled over by the Capitol Police as we drove this vehicle around Capitol Hill on a test run.  The officers nearly arrested me after asking what the spinning thing on top of my car was and I foolishly responded "It's an array of 64 lasers, um wait, they aren't harmful lasers, let me explain..."

Among the many interesting lessons of the project was one about marketing. Over the course of the project we always spent lots of time explaining what an autonomous car was.  No one understood the word autonomous, yet everyone in the industry insisted on using it. In the ten years since, it would appear that the marketers finally got involved and had the wisdom to just call them "self-driving."  Which just shows you how clueless we engineers are.

The Early Robots

I went searching for and found these images of the early robots.

The first robot started with a lot of wood pieces. 

The very first paint head was a disaster, but this second one worked really well.

The second robot was built to be more mobile.  It just barely fit through doors.

The third, of which I have lost all the photos except this one, was built to be massive.  It filled a room.  Also important, it had an open face so I could photograph the painting more easily and use feedback loops in the painting logic.

After these unnamed machines came Crowd Painter, BitPaintr, Neural Jet, and cloudpainter.

Painting Robot's Ultimate Goal is to Eliminate All Humans

If you want a quick synopsis of the current state of my painting robot, this Thrillist feature captures it perfectly.  They somehow made the otherwise dry subject of artificial creativity entertaining, and sometimes funny.  I really appreciate all the work of the film crew that worked on this with me and brought out some of the best aspects of this project.  Big thanks to Vin, Joshua, Mary, Peter, Mat, and Paul.


Robotic Arms, and The Beginning of cloudpainter

We have long realized that for this painting robot to truly be able to express itself, it needs arms. Now we finally have a plan: we are going to add a pair, if not four.  A conceptual 3D model showing where we would put the first two can be seen below.

The way we see this whole robot coming together is to add a pair of robotic arms above the painting area.  They would hold brushes, or maybe a brush and a camera; we are still deciding.  But as currently envisioned, the XY table will control the airbrushes, and the arms will control the traditional artist brushes.  There are lots of reasons for this, not least of which is that we think it will look cool for them to dance around each other.

We expect to have one of the robot arms, a 7bot, here in a couple of days.  Can't wait to see what we can do with it. 

Another thing we are realizing is that this is beyond the scope of the Neural Jet.  This new robot, a machine with a modular paint head on an XY table and two robotic arms, is really a new project.  So from here on out, Neural Jet will refer to the modular paint head, while the project in its entirety will be referred to as cloudpainter, encompassing all the tech and algorithms from all of my previous robots.