Work Continues Mapping the Brushstrokes of Famous Masterpieces
After I created a brushstroke map of Edvard Munch's The Scream, I thought it would be cool to have brushstroke mappings for more iconic artworks. So I googled "famous paintings" and was presented with a rather long list. Interestingly, The Scream was in the top three along with da Vinci's Mona Lisa and Van Gogh's Starry Night. Well, why not do the top three. So work has begun on creating a stroke map for the Mona Lisa. In the following image, the AI has taken care of laying down an underpainting, or what would have been called a cartoon in da Vinci's time.
I am now going into it by hand and finger-swiping my best guess as to how da Vinci would have applied his brushstrokes. Will post the final results as well as provide access to the Elasticsearch database with all the strokes as soon as it is finished. My hope is that the creation of the brushstroke mappings can be used to better understand these artists, and how artists create art in general.
A Deeper Learning Analysis of The Scream
Am a big fan of what the Google Brain Team, specifically scientists Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur, have accomplished with Style Transfer. In short, they have developed a way to take any photo and paint it in the style of a famous painting. The results are remarkable, as can be seen in the following grid of original photos painted in the style of historical masterpieces.
However, as can be seen in the following pastiches of Munch's The Scream, there are a couple of systematic failures with the approach. The Deep Learning algorithm struggles to capture the flow of the brushstrokes or "match a human level understanding of painting abstraction." Notice how the only things truly transferred are color and texture.
Seeing this limitation, I am currently attempting to improve upon Google's work by modeling both the brushstrokes and the abstraction. In the same way that color and texture are being successfully transferred, I want the actual brushstrokes and abstractions to resemble the original artwork.
So how would this be possible? While I am not sure how to achieve artistic abstraction, modeling the brushstrokes is definitely doable. So let's start there.
To model brushstrokes, Deep Learning would need brushstroke data, lots of brushstroke data. Simply put, Deep Learning needs accurate data to work. In the case of Google's successful pastiches (a pastiche is an image made in the style of an artwork), the data was found in the images of the masterpieces themselves. Deep Neural Nets would examine and re-examine the famous paintings on a micro and macro level to build a model that can be used to convert a provided photo into the painting's style. As mentioned previously, this works great for color and texture, but fails with the brushstrokes because the algorithm doesn't really have any data on how the artist applied the paint. While strokes can be seen on the canvas, there isn't a mapping of brushstrokes that could be studied and understood by the Deep Learning algorithms.
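To make that limitation concrete, here is a minimal numpy sketch of the Gram-matrix style loss that this family of style transfer work is built on. The feature maps here are random placeholders standing in for the activations a convolutional network would produce. It shows why color and texture statistics transfer so well: the loss only compares correlations between feature channels, and it never sees an individual brushstroke.

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (channels, height, width) feature map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(style_features, generated_features):
    """Mean squared difference between the two Gram matrices."""
    g_style = gram_matrix(style_features)
    g_generated = gram_matrix(generated_features)
    return np.mean((g_style - g_generated) ** 2)

# Placeholder feature maps; a real system would pull these from a trained CNN.
style = np.random.rand(64, 32, 32)
generated = np.random.rand(64, 32, 32)
print(style_loss(style, generated))
```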
As I pondered this limitation, I realized that I had this exact data, and lots of it. I have been recording detailed brushstroke data for almost a decade. For many of my paintings each and every brushstroke has been recorded in a variety of formats including time-lapse videos, stroke maps, and most importantly, a massive database of the actual geometric paths. Even better, many of the brushstrokes were crowdsourced from internet users around the world: thousands of people took control of my robots to apply millions of brushstrokes to hundreds of paintings. In short, I have all the data behind each of these strokes, all just waiting to be analyzed and modeled with Deep Learning.
This was when I looked at the systematic failures of pastiches made from Edvard Munch's The Scream, and realized that I could capture Munch's brushstrokes and as a result make a better pastiche. The approach to achieve this is pretty straightforward, though labor intensive.
This process all begins with the image and a palette. I have no idea what Munch's original palette was, but the following is an approximate representation made by running his painting through k-means clustering and some of my own deduction.
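For anyone curious, here is roughly how a palette like that can be pulled out of an image with k-means. This is a generic sklearn sketch rather than the exact script I used, and the filename is just a placeholder.

```python
from PIL import Image
import numpy as np
from sklearn.cluster import KMeans

# Load the painting and flatten it into a list of RGB pixels.
image = np.asarray(Image.open("the_scream.jpg").convert("RGB"))
pixels = image.reshape(-1, 3)
pixels = pixels[::50]  # subsample for speed; the palette barely changes

# Cluster the pixels; each cluster center is one palette color.
kmeans = KMeans(n_clusters=8, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.round().astype(int)

for r, g, b in palette:
    print(f"#{int(r):02x}{int(g):02x}{int(b):02x}")
```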
With the painting and palette in hand, I then set cloudpainter up to paint in manual mode. To paint a replica, all I did was trace brushstrokes over the image on a touch screen display. The challenging part is painting the brushstrokes in the manner and order that I think Edvard Munch may have done them. It is sort of an historical reenactment.
As I paint with my finger, these strokes are executed by the robot.
More importantly, each brushstroke is saved in an Elasticsearch database with detailed information on its exact geometry and color.
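For a rough idea of what gets stored, each stroke ends up as a document something like the one below. The index name and field names are just my shorthand for this post, not the actual schema, and the values shown are made up.

```python
from elasticsearch import Elasticsearch

# Connect to a local node; adjust the URL for your own cluster.
es = Elasticsearch("http://localhost:9200")

stroke = {
    "painting": "The Scream (replica)",
    "stroke_number": 482,
    "color_hex": "#c4401f",
    "brush_width_in": 0.25,
    # Ordered x/y points, in inches from the canvas origin.
    "path": [
        {"x": 3.141, "y": 7.250},
        {"x": 3.298, "y": 7.312},
        {"x": 3.455, "y": 7.401},
    ],
    "duration_ms": 1840,
}

# Newer client versions prefer document=stroke in place of body=stroke.
es.index(index="brushstrokes", body=stroke)
```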
At the conclusion of this replica, detailed data exists for each and every brushstroke, down to the thousandth of an inch. This data can then be used as the basis for an even deeper Deep Learning analysis of Edvard Munch's The Scream, an analysis beyond color and texture, in which his actual brushstrokes are modeled and understood.
So this brings us to whether or not abstraction can be captured. And while I am not sure that it can, I think I have an approach that will work at least some of the time. To this end, I will be adding a second set of data that labels the context of The Scream. This will include geometric bounds around the various areas of the painting and will be used to consider the subject matter in the image. So while the Google Brain Team used only an image of the painting for its pastiches, the process that I am trying to perfect will consider the original artwork, the brushstrokes, and how each brushstroke was applied to different parts of the painting.
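To give a sense of what I mean by context labels, a record might look something like the sketch below. The region names and pixel coordinates are purely illustrative, not the actual annotation data.

```python
# Purely illustrative: a guess at what a context-label record for The Scream
# might look like. The region names and coordinates are made up.
scream_context = {
    "painting": "The Scream",
    "image_size": {"width": 910, "height": 1130},
    "regions": [
        {"label": "sky",            "polygon": [[0, 0], [910, 0], [910, 320], [0, 280]]},
        {"label": "bridge_railing", "polygon": [[0, 760], [910, 540], [910, 600], [0, 820]]},
        {"label": "central_figure", "polygon": [[340, 430], [570, 430], [570, 920], [340, 920]]},
    ],
}
```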
Ultimately, I believe that by considering all three of these data sources, a pastiche made from The Scream will more accurately replicate the style of Edvard Munch.
So yes, these are lofty goals and I am getting ahead of myself. First I need to collect as much brushstroke data as possible and I leave you now to return to that pursuit.
Full Visibility's Machine Learning Sponsorship
Wanted to take a moment to publicly thank cloudpainter's most recent sponsor, Full Visibility.
Full Visibility is a Washington, D.C.-based software consulting boutique that I have been lucky enough to become closely associated with. Their sponsorship arose from a conversation I had with one of their partners. Was telling him how I thought that Machine Learning, which has long been an annoying buzzword, was finally showing evidence of being mature. Next thing I knew, Full Visibility bought a pair of mini-supercomputers for the partner and me to experiment with. One of the two boxes can be seen in the picture of my home-based lab below. It's the box with the cool white skull on it. While nothing too fancy, it has about 2,500 more cores than any other machine I have ever been fortunate enough to work with. The fact that private individuals such as myself can now run ML labs in their own homes might be the biggest indicator that a massive change is on the horizon.
Full Visibility joins the growing list of cloudpainter sponsors which now includes Google, 7Bot, RobotArt.org, 50+ Kickstarter Backers, and hundreds of painting patrons. I am always grateful for any help with this project that I can get from industry and individuals. All these fancy machines are expensive, and I couldn't do it without your help.
Pindar Van Arman
Some Final Thoughts on bitPaintr
Hi again,
It's been a year since this project was successfully launched. As such, here is a recap of how the project went, insight into what I have learned about my own art, as well as a preview of where I am taking things next. This might be a long post, so sit tight.
Some quick practical matters first though. If you are a backer still awaiting your 14"x18" portrait, it should be in the image and time lapse below. If there has been a mix-up and your portrait somehow got overlooked, just send me a message and I will straighten it out. Also look for any other backer rewards, such as postcards and line art portraits, in the coming weeks.
A Year of bitPaintr
I can start by saying that I did not imagine the bitPaintr project doing as well as it did. And I have no problem thanking all the original backers once again - even though you are all probably tired of hearing it. But as a direct result of your support so many good things happened for me over the past year. I could tell you about all of them but that would make this post too long and too boring - so I will just concentrate on the two most significant things that resulted from this campaign.
The first is that I finally found my audience. Slowly at first, then more rapidly once the NPR piece aired, people started hearing about and reacting to my art. And the more people would hear about it, the more the media would cover it, and then even more people would hear about it. And while not completely viral, it did snowball, and I found myself in dozens of news articles, features, and video pieces. Here is a list of some of my favorites. This time last year I was struggling to find an audience and would have settled for any venue to showcase my art. Today, I am able to pick and choose from multiple opportunities.
The second most significant part of all this is that I found my voice. Not sure I fully understood my own art before, well, not as much as I do now. I had the opportunity to speak to, hang out with, and get feedback from you all, other artists, critics, and various members of the artificial intelligence community. All this interaction has led me to realize that the paintings my robots produce are just artifacts of my artistic process. I once focused on making these artifacts as beautiful as possible, and while that is still important to me, I have come to realize that the paintings are the most boring part of this whole thing.
The interaction, artificial creativity, processes, and time lapse videos are where all the action is. In the past year I have learned that my art is an interactive performance piece that explores creativity and asks the sometimes trite questions of "What is Art?" and "What makes me, Pindar, an Artist?" - or anyone an artist. This is usually a cliche theme, and as such a difficult topic to address without coming off as pretentious. But I think the way my robots address it is novel and interesting. Well, at least I hope so.
Next Steps
As I close up bitPaintr, I am looking forward to the next robot project, called cloudpainter. Will begin by telling you the coolest part about the project, which is that I have a new partner, my son Hunter. He is helping me focus on new angles that I had not considered before. Furthermore, our weekend forays into Machine Learning, 3D printing, and experimental AI concepts have really rejuvenated my energy. Already his enthusiasm, input, and assistance have resulted in multiple hardware upgrades. While the machine in the following photo may look like your average run-of-the-mill painting robot, it has two major hardware upgrades that we have been working on.
The first can be seen in the bottom left-hand corner of the robot. It is the completely custom, 3D-printed Neural Jet paint head. Hunter, Dante, and I have been designing and building this device for the last 4 months. It holds and operates five airbrushes and four paintbrushes for maximum painting carnage. The second major hardware improvement can be seen near the top of the canvas. You will notice not one, but two fully articulated 7Bot robotic arms. So while the Neural Jet will be used for the brute application of paint and expressive marks, the two 7Bot robotic arms will handle the more delicate details. Furthermore, each robotic arm will have a camera for looking out into its environment and tracking its own progress on the paintings.
Our software is currently receiving a similar overhaul. I would go into detail, but Hunter and I are still not sure where it's going. We are taking all of the artificial creativity concepts that have gotten us this far and adding to them. While bitPaintr was a remarkably independent artist, it did have multiple limitations. In this next iteration we are going to see how many of those limitations we can remove. We are not positive what exactly that will look like, but we have given ourselves a year to figure it out.
If you would like to continue following our progress, check out our blog at cloudpainter.com. Things are just getting started on our sixth painting robot and we are pretty excited about it.
Thanks for everything,
Pindar Van Arman
cloudpainter Hardware Complete - 2 Robotic Arms and 5 Airbrushes
cloudpainter, as we currently imagine it, will have two 7Bot robotic arms and five airbrushes on our Neural Jet paint head. The canvas will be on a track and will move up and down between the painting tools.
We are thinking that when a painting begins, the Neural Jet will use its airbrushes to paint a quick background.
Then the canvas will be moved up to an area where the robotic arms can use artist brushes to touch up the painting. If it needs more airbrushing, it will move back down to the airbrush area, shuttling back and forth as needed.
There is still a lot of fine-tuning we need to do to the hardware to make all of this possible. But at least we now know the direction we are heading in, and we can begin to write the software, starting with something like the control loop sketched below.
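This is only a rough sketch of the alternating workflow described above; the station names and all of the callables are placeholders for hardware and vision code we have not written yet.

```python
# move_canvas_to, airbrush_background, detail_with_arms, and needs_more_airbrushing
# are placeholders for hardware and vision code that does not exist yet.
def paint(canvas, move_canvas_to, airbrush_background, detail_with_arms,
          needs_more_airbrushing, max_passes=10):
    move_canvas_to("airbrush_station")
    airbrush_background(canvas)                 # quick airbrushed background first

    for _ in range(max_passes):
        move_canvas_to("arm_station")
        detail_with_arms(canvas)                # delicate brushwork from the arms
        if not needs_more_airbrushing(canvas):
            break
        move_canvas_to("airbrush_station")      # shuttle back down for more airbrushing
        airbrush_background(canvas)
```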
Deussen & Lindemeier's eDavid
A couple of years ago a video started spreading that showed an articulated robotic arm painting intricate portraits and landscapes. This robot was named eDavid, and it was the work of Oliver Deussen and David Lindemeier from the University of Konstanz. While many painting robots had preceded eDavid, none painted with its delicacy or captured the imagination of such a wide audience.
While the robot had remarkable precision it also seemed to have an artistic, almost impressionistic sensibility. So how did it go about creating its art?
When speaking of eDavid, Deussen and Lindemeier see its paintings as more of a science than an art. Their hypothesis is that "painting can be seen as an optimization process in which color is manually distributed on a canvas until one is able to recognize the content, regardless if it is a representational painting or not." While humans handle this intuitively with a variety of processes that depend on the medium and its limitations, eDavid uses an "optimization process to find out to what extent human processes can be formulated using algorithms."
One of the processes they have nearly perfected is the feedback loop, a concept I use with my own robots and first heard about from painter Paul Klee. You make a couple of strokes, take a step back and look at them, adjust your approach depending on how well those strokes accomplished your intent, then make more strokes based on that adjustment. You do this over and over again until you finish the painting. Simple concept, right? Almost mechanical, even, but it is how many artists paint.
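For the programmers out there, the idea boils down to something like the loop below. It is only a sketch: the capture, stroke-planning, and painting steps are left as placeholder callables, and the stopping threshold is arbitrary.

```python
import numpy as np

def feedback_loop(target, capture_canvas, plan_strokes, paint_stroke,
                  max_iterations=100, good_enough=0.05):
    """Paint, step back and look, adjust, and repeat until close enough.

    capture_canvas, plan_strokes, and paint_stroke stand in for the camera,
    planning, and robot code; target and the captured canvas are arrays of
    the same shape with values between 0 and 1.
    """
    for _ in range(max_iterations):
        canvas = capture_canvas()            # take a step back and look
        error = np.abs(target - canvas)      # how far off is the intent?
        if error.mean() < good_enough:       # close enough, call it done
            break
        for stroke in plan_strokes(error):   # adjust the approach
            paint_stroke(stroke)             # make more strokes
```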
So to emphasize how good the robot has become at painting with feedback loops, I leave you with my favorite eDavid creation. Not sure what its title is, but how can you deny that the painting below looks and feels like it was painted by a skilled artist?
Lost in Abstraction - Style Transfer and the State of the Art in Generative Imaging
Seeing lots of really cool filters on my friends' photos recently, especially from people using the prisma app. Below is an example of such a photo, and one of my favorites that I have seen.
The filters being applied to these photos are obviously doing a lot more than adjusting levels and contrast, but what exactly are they? While I cannot say for sure what prisma is using, a recently released research paper by Google scientists gives a lot of insight into what may be happening.
The paper, titled A Learned Representation for Artistic Style and written by Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur of Google Brain, details the work of multiple research teams in the quest to achieve the perfect pastiche. No worries if you don't know what a pastiche is, I didn't either until I read the paper. Well, I knew what one was, I just didn't know what it was called. So a pastiche is an image in which an artist tries to represent something in the style of another artist. Here are several examples that the researchers provide.
In the above image you can see how the researchers have attempted to render three photographs in the style of Lichtenstein, Rouault, Monet, Munch, and Van Gogh. The effects are pretty dramatic. Each pastiche looks remarkably like both source images. One of the coolest things about the research paper is that it contains a detailed, replicable process, so you too can create your own pastiche-producing software. While photo editing apps like prisma seem to be doing a little more than just a single pastiche, my gut tells me that this process, or something similar, is behind much of what they are doing and how they are doing it so well.
So looking at the artificial creativity behind these pastiches, I like to ponder the bigger question. How close are we to a digital artist? I always ask this 'cause that is what I am trying to create.
Well, as cool and cutting edge as these pastiches are, they are still just filters of photos. And even though this is the state of the art coming out of Google Brain, they are not even true pastiches yet. While they do a good job of transferring color and texture, they don't really capture the style of the artist. You wouldn't look at any of the pastiches in the second column above and think that Lichtenstein actually did them. They share color, contrast, and texture, but that's about it. Or look more closely at these pastiches made from Edvard Munch's The Scream (top left).
While the colors and textures of the imitative works are spot on, the Golden Gate Bridge looks nothing like the abstracted bridge in The Scream. Furthermore, the two portraits have none of the distortion found in the face of the painting's central figure. These will only be true pastiches when those abstract elements are captured alongside the color, texture, and contrast. The style and process behind these pastiches seem to be getting lost in the abstraction layer.
How do we imitate abstraction? No one knows yet, but a lot of us are working on the problem, and as of November 1, 2016, this is some of the best work being done on it.
Mathew Stein's PumaPaint
I recently spoke with Mathew Stein about his painting robot PumaPaint. Way back in 1998 he equipped a Puma robotic arm with a brush, aimed a webcam at it, and then invited the internet to crowdsource paintings with it. And he did all this before crowdsourcing was even a word. In the first two years of the project alone, over 25,000 unique users created 500 paintings. The robot continued creating crowdsourced paintings for about 10 years.
I asked Mathew if he realized how ahead of its time his PumaPaint Project was. He laughed and said he had not realized it until the New York Times wrote an article about him.
Oddly enough though, Mathew Stein does not seem to consider himself an artist, or even to realize that his project was an interactive performance art piece. For him it was about the technology and the interaction with people around the world. Successful exhibitions in today's art scene are all about audience interaction and experimentation with new media. Without even setting out to do so, Mathew Stein's PumaPaint achieved both on a global scale. People from around the world were able to use the newly emerging internet to control a teleoperated robotic arm and paint with each other. This would be a cool interactive exhibit by today's standards, and it was done 20 years ago.
Below are some examples of the crowdsourced art produced by PumaPaint. Mathew Stein considers the painting on the right from 2005 to be the single "most interesting piece from PumaPaint."
Whether or not Mathew Stein realizes he is an artist, I do. And much of my own robotic art has been inspired by his early work.
Integrating 7Bot with cloudpainter
Could not be happier with the 7Bot that we are integrating into cloudpainter.
The start-up robotic arm manufacturer that makes the 7Bot sent us one for evaluation, and we have been messing around with it for the past week. The robot turned out to be perfect for our application, and it is also just plain fun. We have experimented with multiple configurations inside of cloudpainter and think the final one will look something like the Photoshop mockup above.
At this point here is how Hunter and I are thinking it will create paintings.
Our Neural Jet will be on an XY table and will airbrush a quick background. The 7Bots, each equipped with a camera and an artist's brush, will then take care of painting in the details. The 7Bots will use AI, feedback loops, and Machine Learning to execute and evaluate each and every brushstroke. They will also be able to look out into the world and paint things they find interesting, particularly portraits.
The most amazing thing about all this is that until recently, doing all of this would have been prohibitively expensive. A setup similar to this would have cost $40,000-$50,000 when I started 10 years ago, maybe even more. Now you can buy and construct just about all the components that cloudpainter would need for under $5,000. If you wanted to go with a scaled-down version, you could probably build most of its functionality for under $1,000. The most expensive tool required is actually the 3D printer that we bought to print the components for the Neural Jet, seen in the bottom left-hand corner of the picture. Even the 7Bots cost less than the printer.
Will leave you with this video of us messing around with the 7Bot. It's a super fun machine.
Also if you are wondering just what this robot is capable of, check out their video. We are really excited to be integrating this into cloudpainter.
It's an amazing machine.
My First Robot Project: A Self-Driving Car in 2005
Just reading that all new Tesla vehicles will be completely self-driving, and it made me think about my very first robot project. For the second DARPA Grand Challenge in 2005, I was a member of Team ENSCO, and we built a self-driving car that drove 86 glorious miles before careening into a tree. The robot and our work on it can be seen in the video below.
You can also see some cool things on our test vehicle, my 2005 Honda Element. That weird-looking thing on top is an experimental Velodyne LIDAR system. Whenever you see a self-driving Google car, it usually has the modern version of this contraption spinning around on top. For two weeks we experimented with the very first prototype in existence. I was actually pulled over by the Capitol Police as we drove this vehicle around Capitol Hill on a test run. The officers nearly arrested me after asking what the spinning thing on top of my car was and I foolishly responded, "It's an array of 64 lasers, um wait, they aren't harmful lasers, let me explain..."
Among the many interesting lessons of the project was one about marketing. Over the course of the project we would always spend lots of time explaining what an autonomous car was. No one understood the word autonomous, yet everyone in the industry insisted on calling them autonomous. Well, in the ten years since, it would appear that the marketers finally got involved and had the wisdom to just call them "self-driving." Which just shows you how clueless we engineers are.
The Early Robots
Went searching for and found these images of the early robots.
The first robot started with a lot of wood pieces.
The very first paint head was a disaster, but this second one worked really well.
The second robot was built to be more mobile. It just barely fit through doors.
The third, of which I have lost all the photos except this one, was built to be massive. It filled a room. Also important was that it had an open face so I could photograph the painting more easily in order to use feedback loops in the painting logic.
After these unnamed machines came Crowd Painter, BitPaintr, Neural Jet, and cloudpainter.
Harold Cohen's AARON
Earlier this year, I received an email from Harold Cohen's assistant apologizing for not getting back in touch with me sooner; the reason was that Cohen had passed away earlier in the month.
We had been talking at length about artificial creativity, and I had been wondering why Cohen stopped talking with me all of a sudden. At the time he was helping me prep for my TEDx Talk on artificial creativity and was not shy in his critique of both my talk and how he thought I might be exaggerating my robot's capabilities. As we talked, I found that our conversations on the subject often lasted far longer than it seemed either of us had planned for. The email his assistant was responding to was actually a draft of my TEDx Talk that I had sent him for review. I never heard back from him and figured that maybe he was no longer interested in my views on the subject. I had no idea his health was failing at the time.
In our talks I found his views on painting robots to be remarkably insightful and a little cantankerous. They were what you would expect from a man 40 years ahead of his time. His first painting robot, AARON, was built in the 70s, when no one else was even considering some of the concepts he was exploring. One thing that stood out in our talks was his belief that a painting robot's primary shortcoming was that it did not create its own imagery. He was obsessed with the idea that most were merely printers executing a filter on an image. Perhaps a filter more complex than something you find on Instagram or Snapchat, but a filter nonetheless. Though I cannot find the quote, I do remember reading something by him to the effect of "There are two kinds of painting robots. Those painting from photographs, and those lying about it."
I wish we had had longer to talk, because even though we disagreed on a lot, he was absolutely right about one critical aspect of robotic art. The ultimate goal is to break free from filters. I don't know exactly what that means, but whenever I create a new approach to artificial creativity, I ask myself how much of a filter it is, and try to make it less so.
Painting Robot's Ultimate Goal is to Eliminate All Humans
If you want a quick synopsis of the current state of my painting robot, this Thrillist feature captures it perfectly. They somehow made the otherwise dry subject of artificial creativity entertaining, and at times funny. I really appreciate all the work of the film crew that worked on this with me and brought out some of the best aspects of this project. Big thanks to Vin, Joshua, Mary, Peter, Mat, and Paul.
Robotic Arms, and The Beginning of cloudpainter
We have long realized that for this painting robot to truly be able to express itself, it needs arms. Now we finally have a plan. We are planning on adding a pair, if not four. A conceptual 3D model showing where we would put the first two can be seen below.
The way we are thinking about this whole robot coming together is to add a pair of robotic arms above the painting area. They would hold brushes, or maybe a brush and a camera. Still deciding on this. But as currently envisioned, the XY table will control the airbrushes, and the arms will control the traditional artist brushes. There are lots of reasons for this, not least of which is that we think it will look cool for them to dance around each other.
We expect to have one of the robot arms, a 7Bot, here in a couple of days. Can't wait to see what we can do with it.
Another thing we are realizing is that this is beyond the scope of the Neural Jet. This new robot, a machine with a modular paint head on an XY table and two robotic arms, is sort of a new project. So from here on out, while the Neural Jet will refer to the modular paint head, the project in its entirety will be referred to as cloudpainter, and it will encompass all the tech and algorithms from all of my previous robots.
Bonnie Helps Out With Logo
Not sure how I got distracted by logos today, but I did. Bonnie and Corinne played a big part in the process. The first inspiration came from Corinne, who keeps calling the paint head a flower. Then Bonnie mentioned that if it had a logo, it should both look like a flower and incorporate the shape of the paint head.
Based on their input, I started playing with ideas, and the number 9 became a big part of it. Besides the obvious fact that the paint head has 9 modules, I also based a lot of the proportions on multiples of 9. For example, the outer radius was measured at 36mm, while the inner circles had radii of 27mm and 18mm. The width of the internal line is 9mm, and so on. As it progressed, some other cool ideas emerged that I left in for people to discover in the future.
So if you are looking at the picture above, you can see where it began on the left, and where I ended up on the right. The final image is the logo that I think I am going to go with. Gonna think about it for a couple of days.
Talked Portraiture With Chuck Close at the White House
As you know, I was invited to the White House for SXSL. I have many good pictures of what was a really fun day. But I wanted to share this one in particular 'cause I am still processing how awesome meeting Chuck Close was.
So when I saw Chuck Close on the South Lawn, I realized that I had to introduce myself. As an artist he has always been a favorite, if not the favorite, though it's sort of impossible to rank something like that. I have long imitated many of his concepts in my own art. I even designed my fourth robot to fit large canvases so it could paint portraits on the scale of his work.
While I wasn't expecting to be starstruck when I went up to introduce myself, I was. I didn't know what to say, so I just told him I was a portrait artist, to which he replied that he was sorry to hear that, as it was a horrible line of work to be in. Then we chatted briefly about portraiture, and he made a couple of other jokes before asking to see some of my work. I wasn't expecting that, either his interest in my work or his compassionate humor. I showed him some portraits of my family on my phone, thanked him for taking the time to talk with me, then went on my way.
I wanted to talk to him longer, but at the same time I didn't want to pester him.
Thrillist Hero Shot
Thrillist Video just sent me this "hero shot" of me with the most recent robot. Looking forward to their video piece.
Awesome Video Shoot With Thrillist Crew
Had such a cool day. A film crew from Thrillist showed up to interview me about my painting robots. Thanks to Vin, Josh, Mary, Peter, Paul, and Matt. Was so fun. Lots of good footage including this shot that makes me look real important.
Ten Years of Progress on Painting Robots
These pics show my very first painting robot head and the most recent one. The first, which could hold only one brush, was made from parts found lying around my house, including old pieces of wood, a handmade electromagnet, tape, and deck parts. The most recent can hold and operate nine different kinds of brushes and is almost completely 3D printed. Some of the plastic even glows in the dark.
Airbrush Actuator Complete
It is amazing how much an invitation from the White House can speed up development. Long hours this weekend went into getting a working airbrush prototype. While paint brushes will remain the primary mark making device in the Neural Jet, it will be cool to have them backed up by five airbrushes with the ability to quickly paint backgrounds.
Also cool that we went with servos instead of something like a solenoid to control the airflow. With our servos we can actuate the air coming out at 16 different pressures. So mixing becomes possible, and since we have 5 airbrushes on the paint head, the Neural Jet will be able to paint over 1,000,000 color combinations (16^5 = 1,048,576). Yeah, this part of the project really is just re-inventing the printer, but coupled with the other mark-making tools that are coming, it will be on the next level. You can see the prototype in action below.
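For the curious, the control idea is roughly what's sketched below. The 1000-2000 microsecond pulse-width range is a typical hobby-servo assumption, not a measured value from our hardware, and the function is just an illustration of the 16-level quantization.

```python
PRESSURE_LEVELS = 16   # each servo quantizes airflow into 16 steps
NUM_AIRBRUSHES = 5

def flow_to_pulse_us(flow, min_us=1000, max_us=2000):
    """Map a 0.0-1.0 airflow request to one of 16 servo pulse widths.

    The 1000-2000 microsecond range is a common hobby-servo convention,
    used here only as an assumption for the sketch.
    """
    level = int(round(flow * (PRESSURE_LEVELS - 1)))
    level = max(0, min(PRESSURE_LEVELS - 1, level))
    return round(min_us + level * (max_us - min_us) / (PRESSURE_LEVELS - 1))

# 16 pressure levels on each of 5 airbrushes gives 16**5 = 1,048,576 combinations.
print(PRESSURE_LEVELS ** NUM_AIRBRUSHES)
print(flow_to_pulse_us(0.5))   # a mid-range airflow request
```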