Recently I encountered some interesting bits of news that made me think about the future of art-as-we-know-it – and the question of whether it has one.
So far, humanity has been quite sure that the one thing computers can’t do is art. Machines just follow their programming and do not have what it takes to be original, creative, playful, associative, imaginative, intuitive – all of these were attributed to us humans alone and elevated into something close to sacred.
I think this will be pretty much history shortly, like the flat earth and the universe turning around us as its centre.
For a long time we had “Art Filters” in, for instance, Photoshop. They created certain structures reminiscent of natural media. Sometimes the results were interesting, but often one could clearly see that the filter did not really understand the flow of lines or the content, so the abstraction was pretty random and often not very convincing.
Over the years we saw better filters that did some analysing, and the results looked quite interesting in more cases, but overall even elaborate tools like Filter Forge or some of the better mobile apps only went so far.
Now enter the age of deep learning and AI. The filters are no longer just fixed algorithms that a human programmer thought produced something interesting; with deep learning, an algorithm can be trained on certain looks and then re-interpret a photo using the same visual language as the original. The (at the time of this writing) free app “Prisma” lets you do just that on your phone or tablet: you take a photograph, select one of the available works of art as a style guide, the image is uploaded to the app provider’s server (who grants himself quite a lot of license to do with the uploads what he wants, so be conscious about what you upload), and you get back a result that in many cases looks quite stunning (see the header of this article for one of my favourites).
A simple sales slip as an extreme low-input test produces something that I can totally see as a record sleeve or poster:
A not-so-sharp-image of a bunch of flowers leads to something quite still-lifey:
Of course we are still talking about relatively early software: some results don’t look very good, there is still some artificiality about the paintings, the range of options is very limited, and it’s still a kind of mimicking rather than original work. But as a first impression of what will be possible, it’s quite impressive. Not the end of art by any means yet, but one step on a general path I see emerging.
Google Image Research
A while ago I saw a TED video about Google’s research on image recognition. The search engine specialists wanted to be able to catalogue images not only by their names or descriptions but directly by their content. So they worked on a neural network that can be trained on certain things to look for. You basically feed a lot of images (and Google has them all ;-) ) of, for instance, birds into the algorithm. Then you let the algorithm try to find birds in mixed collections of images, and a human tells the machine where it was right or wrong, for training. And then you can refine this into different kinds of birds, the direction the bird is seen from, partial images of birds etc.
In the end you have a system that, presented with an image, can tell you whether there is a bird in it, what kind etc.
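The train-and-correct loop described above can be sketched with a toy perceptron – a deliberately tiny stand-in for a deep network. Everything here (the features, the labels, the `is_bird` helper) is invented for illustration; a real system learns from raw pixels:

```python
# Toy sketch of the training loop: a single perceptron learns to
# separate "bird" feature vectors from "not bird" ones.

def train(examples, labels, epochs=20, lr=0.1):
    """examples: list of feature vectors; labels: 1 = bird, 0 = not bird."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            # the "human teacher" step: compare the guess with the true label
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - guess          # wrong guess -> nudge the weights
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def is_bird(x, w, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# invented features: (has_beak, has_wings, has_wheels)
data   = [(1, 1, 0), (1, 1, 0), (0, 0, 1), (0, 1, 1)]
labels = [1, 1, 0, 0]
w, b = train(data, labels)
print(is_bird((1, 1, 0), w, b))  # a beaked, winged thing -> True
```

A deep network is this same idea stacked many layers high, with millions of weights instead of three – but the correct/incorrect feedback loop is the same.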
Now the developers reversed the process: what if we “tell” the system the result (“bird”) instead, give it some seed image (maybe of something totally different, like clouds), and let it come up with an image that converges towards “bird” as the system sees it?
As it happens, the results are not so far off from people staring into clouds and seeing all kinds of things…
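The reversal can be sketched in the same toy setting: keep the trained model fixed and optimise the *input* instead of the weights. The weights and the `dream`/`bird_score` functions below are invented; the real technique (DeepDream) does gradient ascent on image pixels through a deep network, but the principle is the same:

```python
# Toy version of the "reversed" process: nudge the input so the
# model's "bird score" goes up, while the model itself stays fixed.

W = [0.9, 0.8, -0.7]          # fixed, already-trained weights (invented)

def bird_score(x):
    return sum(wi * xi for wi, xi in zip(W, x))

def dream(x, steps=50, lr=0.05):
    for _ in range(steps):
        # the gradient of a linear score w.r.t. the input is just W,
        # so each step pushes the "cloud" a bit more towards "bird"
        x = [xi + lr * wi for xi, wi in zip(x, W)]
    return x

cloud = [0.1, -0.2, 0.3]      # a seed input that is nothing like a bird
before, after = bird_score(cloud), bird_score(dream(cloud))
print(before, "->", after)    # the score strictly increases
```

With a deep network instead of a linear score, those input nudges turn clouds into dog faces and birds – the hallucinated images from the TED talk.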
Here is the video for you with the face of the presenter in the clouds as seen by the machine:
Not too bad for a mere computer…
But more interestingly, the presenter proposes that art, abstraction, caricatures etc. are basically a byproduct or “reverse” of – in this case – our visual sense, our perception. That our knowledge of many different birds allows us to come up with new, non-existing birds, or cubist birds, or extremely abstracted birds like just a wavy line in the sky…
And that this ability is not as special or human-only as we are so proud to think.
But what about the reverse?
Now check out this video (best in full screen):
You focus on the crosshair in the centre of the screen, and the faces that you see out of the corner of your eye turn into rather distorted images of people. Or monsters or caricatures or…
Doesn’t that feel extremely close to what happens in Google’s image research?
Are we maybe much less different from what we start to build and program?
Is a lot of our highly valued imagination and creativity a mere side-product – or even a bug in our visual or other systems?
I never was into drugs, but from other people’s descriptions I can imagine them as substances that confuse our systems (even more than usual), intensify that kind of “noise”, make all things look distorted and screw up some or all senses more or less completely…
Another video on TED asks whether machines can create poetry and, if yes, how that would make you feel:
And I think this is a very important question – does it make a difference?
What exactly is the value that we attribute to art?
The answer will probably be pretty different for the artist him/herself and the recipient.
If an artist writes poetry as a way to express him/herself, cope with life etc., it does not matter much if a machine can do the same. But if the artist is trying to sell his/her work, the market may change if machines become able to create really good poetry (more on that later).
For the recipient it may boil down to the implied worth of sharing emotions with other humans. If the writer is not human, what exactly are you relating to? Is it the same if you are deeply moved by a poem about terror and war you think was written by a fellow human experiencing them, only to find out that the writer is an AI in a datacenter somewhere?
What if, after some tragic event in the real world, Google’s start page showed a poem about that event that an AI had created from ALL reactions worldwide, condensing them into a couple of lines, making you cry?
And again to take a step back, we are looking at the earliest results here so far.
First we had pretty much random poetry generators, like for instance “Lyrics 2.0” from Xoxos.net (scroll down a bit on the page to find it).
This can sometimes come up with rather interesting results already.
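A generator of this random, template-based kind might look roughly like the sketch below. This is not the actual “Lyrics 2.0” algorithm (which I don’t know); the word lists and templates are invented for illustration:

```python
# Minimal sketch of a template-based random poem generator:
# pick a line template, fill its slots with random words.
import random

NOUNS      = ["moon", "river", "machine", "silence", "bird"]
ADJECTIVES = ["silver", "broken", "endless", "electric", "quiet"]
VERBS      = ["dreams", "burns", "whispers", "falls", "remembers"]

TEMPLATES = [
    "the {adj} {noun} {verb}",
    "{noun} of {adj} {noun}",
    "and the {noun} {verb} again",
]

def random_poem(lines=4, seed=None):
    rng = random.Random(seed)   # a fixed seed makes results reproducible
    out = []
    for _ in range(lines):
        template = rng.choice(TEMPLATES)
        out.append(template.format(adj=rng.choice(ADJECTIVES),
                                   noun=rng.choice(NOUNS),
                                   verb=rng.choice(VERBS)))
    return "\n".join(out)

print(random_poem(seed=42))
```

No understanding anywhere – just structure plus chance – and yet every now and then a line lands surprisingly well, which is exactly the charm of these early generators.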
I created the lyrics of the following piece with it, then they were “spoken” by “Karen” (one of the OSX system voices) and finally transformed into a kind of swarm-reading with the “Granite” grain-sampler:
As a next step, we can now train systems on existing poetry with deep learning and have them come up with something similar.
But think further and imagine something much more sophisticated: a deep learning algorithm with several kinds of inputs to train on – all poems ever written and published to analyse, all metres and rhymes and language structures to build on – and several humans connected to a brain-activity scanner.
Now let the algorithm optimise its results towards different emotional responses…
I think it would probably come up with VERY touching results.
Today I was sitting in the park at a little pond, the sun shining warmly, cute ducklings on the water, a soft breeze in the trees, the perfect summer Sunday.
On the grass around me there were many people, some alone, couples, families, crowds of friends – and about 80% of them were looking at their smartphones, trying to catch Pokémon.
Now what does that have to do with art you may ask?
Nothing as such, indeed, but what I found totally fascinating was how simple it is to catch human attention with a couple of cute little monsters.
So far we see thousands of companies creating games and every now and then, something goes totally viral for a while.
Millions of musicians work all their life and every now and then, somebody creates a song that the whole world can’t get out of their heads.
Often those things that humans find so captivating are actually rather simplistic in nature, not exactly the “highest art”. But imagine for a moment a future where ad agencies, game companies, music labels and poets no longer work via trial and error, but let their campaigns, games, songs and poems be optimised by neural networks and AIs for maximum penetration of the human senses, for the strongest grab on attention and for the longest possible staying power and earwormishness.
Can you imagine such things that humans are totally unable to resist?
Imagine a future where we know so absolutely, so perfectly, so surely how our mind works and how a human will respond to a certain stimulus that no ordinary human will be able to resist or stay away from something that one of the big players trains its AI on.
But will it be art?
We will probably need some new definitions there…
In this video we learn how computers can translate Chinese, see examples of image recognition, and hear how a computer made a medical discovery as a byproduct of finding cancer cells.
Google got access to British hospital records to look for indicators of different kinds of diseases. The system may find things we have so far thought insignificant, or that we are so used to that we simply do not see them at all.
The development of self-driving cars raises two questions: first, how long it will take until they are really good – and second, how long it will take until humans are no longer allowed to drive, since we have known all along that they are not really that good at it. The recent death of one Tesla Model S customer raised a huge outcry, mainly because it was the first, while the huge number of deaths caused by human drivers is already so normal that nobody even cares anymore.
So what will happen when computers get smarter than we are?
I personally think we are in for some major surprises on many levels and probably much sooner than expected.
We will probably lose many more of the illusions we hang on to about our own superiority.
A lot of what we thought makes us special may just fall away.
Who needs drivers when the car, the train, the bus, the airplane can drive or fly itself more safely?
Who needs workers if machines can do all the menial work much quicker, cheaper, and without overtime pay or complaints?
Who needs psychologists if an AI can examine your brain in realtime and give it exactly the right stimulus to get rid of that trauma?
And who needs artists if AIs can create experiences that no human creator can match?
The future of humanity
Many religions talk about getting rid of outer importance and just being what one really is.
The state of achieving that is often called “enlightenment” although the term is controversial and misleading.
But it is in essence a state of great freedom.
It may happen that we get to that state on a completely different route than expected.
What if we are no longer “needed” for work, thought or even art?
What if AI “takes over” the place we thought we had?
What kind of development do we have to make to be able to stand that, to not go crazy?
I will not try to answer the question I asked at the beginning; I find questions and food for thought more interesting than answers.
But we ARE living in interesting times, aren’t we? ;-)