Lost in Abstraction - Style Transfer and the State of the Art in Generative Imaging
I've been seeing lots of really cool filters on my friends' photos recently, especially from people using the Prisma app. Below is an example of such a photo, and one of my favorites so far.
The filters being applied to these photos are obviously doing a lot more than adjusting levels and contrast, but what exactly are they? While I cannot say for sure what Prisma is using, a recently released research paper by Google scientists gives a lot of insight into what may be happening.
The paper, titled A Learned Representation for Artistic Style and written by Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of Google Brain, details the work of multiple research teams in the quest to achieve the perfect pastiche. No worries if you don't know what a pastiche is; I didn't either until I read the paper. Well, I knew what one was, I just didn't know what they were called. A pastiche is an image where an artist tries to represent something in the style of another artist. Here are several examples that the researchers provide.
In the above image you can see how the researchers have attempted to render three photographs in the styles of Lichtenstein, Rouault, Monet, Munch, and Van Gogh. The effects are pretty dramatic. Each pastiche looks remarkably like both source images. One of the coolest things about the research paper is that it contains a detailed, replicable process, so you too can create your own pastiche-producing software. While photo editing apps like Prisma seem to be doing a little more than a single pastiche, my gut tells me that this process, or something similar, is behind much of what they are doing and how they are doing it so well.
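At the heart of the Google Brain approach is a trick called conditional instance normalization: a single style-transfer network learns many styles at once, and switching styles means swapping only a tiny set of per-style scale and shift parameters. Here is a minimal sketch of that mechanism in PyTorch; the class and layer sizes are my own illustration, not the authors' exact architecture.

```python
# A minimal sketch of conditional instance normalization, the core idea
# of the Dumoulin et al. paper. One network learns N styles; only the
# per-style scale (gamma) and shift (beta) vectors differ between them.
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    def __init__(self, num_styles: int, num_channels: int):
        super().__init__()
        # One (gamma, beta) row per style. This small table is all that
        # distinguishes, say, a Monet pastiche from a Van Gogh one.
        self.gamma = nn.Parameter(torch.ones(num_styles, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, num_channels))
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)

    def forward(self, x: torch.Tensor, style_id: int) -> torch.Tensor:
        # Normalize each feature map, then re-scale and re-shift it
        # with the parameters learned for the requested style.
        g = self.gamma[style_id].view(1, -1, 1, 1)
        b = self.beta[style_id].view(1, -1, 1, 1)
        return self.norm(x) * g + b

# The same layer, the same input, two different styles.
layer = ConditionalInstanceNorm2d(num_styles=10, num_channels=32)
features = torch.randn(1, 32, 64, 64)
style_a = layer(features, style_id=0)
style_b = layer(features, style_id=1)
```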
So looking at the artificial creativity behind these pastiches, I like to ponder the bigger question: how close are we to a digital artist? I always ask this 'cause that is what I am trying to create.
Well, as cool and cutting edge as these pastiches are, they are still just filters on photos. And even though this is the state of the art coming out of Google Brain, they are not even true pastiches yet. While they do a good job of transferring color and texture, they don't really capture the style of the artist. You wouldn't look at any of the pastiches in the second column above and think that Lichtenstein actually painted them. They share color, contrast, and texture, but that's about it. Or look more closely at these pastiches made from Edvard Munch's The Scream (top left).
While the colors and textures of the imitative works are spot on, the Golden Gate Bridge looks nothing like the abstracted bridge in The Scream. Furthermore, the two portraits have none of the distortion found in the face of the painting's central figure. These will only be true pastiches when those abstract elements are captured alongside the color, texture, and contrast. The style and process behind these pastiches seem to be getting lost in the abstraction layer.
How do we imitate abstraction? No one knows yet, but a lot of us are working on the problem, and as of November 1, 2016, this is some of the best work being done on it.
Integrating 7Bot with cloudpainter
Could not be happier with the 7Bot that we are integrating into cloudpainter.
The start-up robotic arm manufacturer that makes the 7Bot sent us one for evaluation, and we have been messing around with it for the past week. The robot turned out to be perfect for our application, and it was also just plain fun. We have experimented with multiple configurations inside cloudpainter and think the final one will look something like the Photoshop mockup above.
At this point, here is how Hunter and I are thinking it will create paintings.
Our Neural Jet will sit on an XY table and airbrush a quick background. The 7Bots, each equipped with a camera and an artist's brush, will then take care of painting in the details. The 7Bots will use AI, feedback loops, and machine learning to execute and evaluate each and every brush stroke; a rough sketch of that loop follows below. They will also be able to look out into the world and paint things they find interesting, particularly portraits.
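To make the feedback-loop idea concrete, here is a minimal, hypothetical sketch of the cycle we have in mind: paint a stroke, photograph the canvas, compare it to the target image, and judge the stroke by whether the error dropped. The names capture_canvas, plan_stroke, and paint_stroke are placeholders for the camera and 7Bot interfaces, not a real API.

```python
# A hypothetical sketch of the paint-photograph-evaluate loop.
# capture_canvas, plan_stroke, and paint_stroke are stand-ins for
# hardware interfaces we haven't written yet.
import numpy as np

def image_error(canvas: np.ndarray, target: np.ndarray) -> float:
    # Mean squared pixel difference between the photographed canvas
    # and the image the robot is trying to reproduce.
    return float(np.mean((canvas.astype(float) - target.astype(float)) ** 2))

def paint_with_feedback(target, capture_canvas, plan_stroke, paint_stroke,
                        max_strokes=500):
    error = image_error(capture_canvas(), target)
    for _ in range(max_strokes):
        # Plan the next stroke from what the canvas currently looks like.
        stroke = plan_stroke(capture_canvas(), target)
        paint_stroke(stroke)  # the 7Bot executes it with a real brush
        # Re-photograph and evaluate: did the stroke help?
        new_error = image_error(capture_canvas(), target)
        if new_error >= error:
            # The stroke didn't help; a real system might undo it or
            # retrain the planner here. For now we just note it.
            pass
        error = new_error
    return error
```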
The most amazing thing about all this is that until recently, doing it would have been prohibitively expensive. When I started ten years ago, a setup like this would have cost $40,000-$50,000, maybe even more. Now you can buy and construct just about all the components that cloudpainter needs for under $5,000, and if you wanted a scaled-down version, you could probably build most of its functionality for under $1,000. The most expensive tool required is actually the 3D printer that we bought to print the components for the Neural Jet, seen in the bottom left-hand corner of the picture. Even the 7Bots cost less than the printer.
Will leave you with this video of us messing around with the 7Bot. It's a super fun machine.
Also, if you are wondering just what this robot is capable of, check out their video. We are really excited to be integrating it into cloudpainter.
It's an amazing machine.
My First Robot Project, a Self-Driving Car in 2005
I was just reading that all new Tesla vehicles will be completely self-driving, and it made me think about my very first robot project. For the 2005 DARPA Grand Challenge II, I was a member of Team ENSCO, and we built a self-driving car that drove 86 glorious miles before careening into a tree. The robot and our work on it can be seen in the video below.
You can also see some cool things on our test vehicle, my 2005 Honda Element. That weird-looking thing on top is an experimental Velodyne LIDAR system. Whenever you see a self-driving Google car, it usually has the modern version of this contraption spinning around on top. For two weeks we experimented with the very first prototype in existence. I was actually pulled over by the Capitol Police as we drove this vehicle around Capitol Hill on a test run. The officers nearly arrested me after they asked what the spinning thing on top of my car was and I foolishly responded, "It's an array of 64 lasers, um wait, they aren't harmful lasers, let me explain..."
Among the many interesting lessons of the project was one about marketing. Over the course of the project we would always spend lots of time explaining what an autonomous car was. No one understood the word autonomous, yet everyone in the industry insisted on calling them that. Well, in the ten years since, it would appear the marketers finally got involved and had the wisdom to just call them "self-driving," which just shows you how clueless we engineers are.
The Early Robots
Went searching for and found these images of the early robots.
The first robot started with a lot of wood pieces.
The very first paint head was a disaster, but this second one worked really well.
The second robot was built to be more mobile. It just barely fit through doors.
The third, of which I have lost all the photos except this one, was built to be massive. It filled a room. Also important was that it had an open face so I could photograph the painting more easily in order to use feedback loops in the painting logic.
After these unnamed machines came Crowd Painter, BitPaintr, Neural Jet, and cloudpainter.
Painting Robot's Ultimate Goal is to Eliminate All Humans
If you want a quick synopsis of the current state of my painting robot, this Thrillist feature captures it perfectly. They somehow made the otherwise dry subject of artificial creativity entertaining, and sometimes funny. I really appreciate all the work of the film crew that worked on this with me and brought out some of the best aspects of this project. Big thanks to Vin, Joshua, Mary, Peter, Mat, and Paul.
Robotic Arms, and The Beginning of cloudpainter
We have long realized that for this painting robot to truly be able to express itself, it needs arms, and now we finally have a plan. We are planning on adding a pair, if not four. A conceptual 3D model showing where we would put the first two can be seen below.
The way we are thinking about this whole robot coming together is to add a pair of robotic arms above the painting area. They would hold brushes, or maybe a brush and a camera; we are still deciding. But as currently envisioned, the XY table will control the airbrushes, and the arms will control the traditional artist's brushes. There are lots of reasons for this, not least of which is that we think it will look cool for them to dance around each other.
We expect to have one of the robot arms, a 7Bot, here in a couple of days. Can't wait to see what we can do with it.
Another thing we are realizing is that this is beyond the scope of the Neural Jet. This new robot, a machine with a modular paint head on an XY table and two robotic arms, is really a new project. So from here on out, the Neural Jet will refer to the modular paint head, while the project in its entirety will be referred to as cloudpainter, encompassing all the tech and algorithms from all of my previous robots.
Bonnie Helps Out With Logo
Not sure how I got distracted by logos today, but I did. Bonnie and Corinne played a big part in the process. The first inspiration came from Corinne, who keeps calling the paint head a flower. Then Bonnie mentioned that if it had a logo, it should both look like a flower and incorporate the shape of the paint head.
Based on their input, I started playing with ideas, and the number 9 became a big part of them. Besides the obvious fact that the paint head has 9 modules, I also based a lot of the proportions on multiples of 9. For example, the outer radius is 36mm, the inner circles have radii of 27mm and 18mm, and the width of the internal line is 9mm, along with all sorts of other stuff. As the design progressed, some other cool ideas emerged that I left in for people to discover in the future.
If you are looking at the picture above, you can see where it began on the left and where I ended up on the right. The final image is the logo that I think I am going to go with. Gonna think about it for a couple of days.
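For the curious, the multiple-of-9 geometry is simple enough to sketch in a few lines of code. This is just a toy rendering of the proportions described above, not the actual logo artwork.

```python
# Toy SVG sketch of the logo's multiple-of-9 proportions: concentric
# circles with radii 36, 27, and 18, drawn with a 9-unit line width.
radii = [36, 27, 18]  # outer radius and the two inner circles
line_width = 9        # width of the internal line

circles = "".join(
    f'<circle cx="50" cy="50" r="{r}" fill="none" '
    f'stroke="black" stroke-width="{line_width}"/>'
    for r in radii
)
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" '
       f'viewBox="0 0 100 100">{circles}</svg>')

with open("logo_sketch.svg", "w") as f:
    f.write(svg)
```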
Talked Portraiture With Chuck Close at the White House
As you know, I was invited to the White House for SXSL. I have many good pictures of what was a really fun day, but I wanted to share this one in particular 'cause I am still processing how awesome it was to meet Chuck Close.
When I saw Chuck Close on the South Lawn, I realized that I had to introduce myself. As an artist he has always been a favorite of mine, if not the favorite, though it's sort of impossible to rank something like that. I have long imitated many of his concepts in my own art. I even designed my fourth robot to fit large canvases so it could paint portraits on the scale of his work.
While I wasn't expecting to be starstruck when I went up to introduce myself, I was. I didn't know what to say, so I just told him I was a portrait artist, to which he replied that he was sorry to hear that, as it was a horrible line of work to be in. We then chatted briefly about portraiture, and he made a couple of other jokes before asking to see some of my work. I wasn't expecting that, either his interest in my work or his compassionate humor. I showed him some portraits of my family on my phone, thanked him for taking the time to talk with me, then went on my way.
I wanted to talk to him longer, but at the same time I didn't want to pester him.
Thrillist Hero Shot
Thrillist Video just sent me this "hero shot" of me with the most recent robot. Looking forward to their video piece.
Awesome Video Shoot With Thrillist Crew
Had such a cool day. A film crew from Thrillist showed up to interview me about my painting robots. Thanks to Vin, Josh, Mary, Peter, Paul, and Matt. It was so fun, with lots of good footage, including this shot that makes me look really important.