If you have been anywhere near the internet recently, you may have heard of DALL-E and MidJourney. The kinds of art that neural networks can generate, along with a deepening understanding of the technology’s strengths and weaknesses, mean we’re facing a whole new world of pain. Artists are often the butt of crude jokes (How do you get a waiter’s attention? Yell “Hey, artist!”), and AI-generated art is another key chapter in the “they took our jobs” narrative of human versus machine.
To me, the interesting part of this is that robots and machines doing certain jobs have been grudgingly accepted, because the jobs are repetitive, boring, dangerous, or generally horrible. Machines that weld car frames do a much better, faster, and safer job than humans. Art, however, is something else.
In the recent movie “Elvis,” Baz Luhrmann puts a quote into Colonel Tom Parker’s mouth, saying that a great act “gives the audience feelings they weren’t sure they should enjoy.” To me, that’s one of the best quotes I’ve heard about art in a long time.
Commercial art is nothing new; whether your mind goes to Pixar movies, music, or the prints that come with frames at Ikea, art has been selling on a massive scale for a long time. But what it generally has in common is that it was created by humans with some sort of creative vision.
The image at the top of this article was generated using MidJourney, by feeding the algorithm a slightly ridiculous prompt: A man dances like Prozac is a cloud of laughter. As someone who has dealt with mental health issues for a lifetime, including fairly severe depression and anxiety, I was curious what a machine would come up with. And my God; none of these generated images is something that would have occurred to me conceptually. But, I’m not going to lie, they did something to me. I feel more graphically represented by these machine-generated works of art than by almost anything else I’ve seen. And how wild is that? These illustrations were not drawn or conceptualized by me. All I did was type a weird prompt into Discord, yet these images wouldn’t have existed without my crazy idea. Not only did MidJourney produce the image at the top of this article, it spat out four completely different, and weirdly perfect, illustrations of a concept that is difficult to grasp:
It’s hard to put into words exactly what that means for concept illustrators everywhere. When someone can, with the click of a button, generate artwork from any prompt, emulate any style, and create just about anything you can think of in minutes, what does it mean to be an artist?
Over the last week or so, I may have gone a little overboard, generating hundreds and hundreds of images of Batman. Why Batman? I have no idea, but I wanted a topic that would help me compare the various styles that MidJourney can create. If you really want to go deeper down the rabbit hole, check out AI Dark Knight Rises on Twitter, where I share some of the best generated pieces I’ve found. There are hundreds and hundreds of candidates, but here is a selection that shows the variety of styles available:
Generating all of the above, and hundreds more, had only three bottlenecks: the amount of money I was willing to spend on my MidJourney subscription, the depth of creativity I could muster for prompts, and the fact that I could only run 10 concurrent generations.
Now, I have a visual mind, but there is not an artistic bone in my body. It turns out I don’t need one. I give the algorithm a prompt, for example, Batman and Dwight Schrute are in a fist fight, and it spits out four versions of something. From there, I can re-roll (i.e., generate four new images from the same prompt), generate a high-resolution version of one of the images, or iterate based on one of the versions.

Batman and Dwight Schrute are in a fist fight. Because… well, why not. Image Credits: Haje Jan Kamps / MidJourney
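For the curious, that loop is simple enough to model in a few lines of code. MidJourney has no public API, so the client below is a purely hypothetical sketch of the generate/re-roll/upscale/iterate cycle I just described, not a real interface:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        prompt: str
        seed: int        # candidates from one prompt differ only by random seed
        image_url: str

    class ImageClient:
        """Hypothetical text-to-image client; MidJourney exposes no such API."""

        def generate(self, prompt: str, n: int = 4) -> list[Candidate]:
            """Return n low-res candidates for a prompt (the initial 2x2 grid)."""
            ...

        def reroll(self, prompt: str) -> list[Candidate]:
            """Same prompt, fresh seeds: four brand-new images."""
            return self.generate(prompt)

        def upscale(self, candidate: Candidate) -> str:
            """Render one chosen candidate at high resolution; returns a URL."""
            ...

        def variations(self, candidate: Candidate, n: int = 4) -> list[Candidate]:
            """Iterate: n new images seeded from a chosen candidate."""
            ...

The striking part is how little of the process is left to the human: you choose a prompt and pick favorites, and the machine does everything else.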
The only real shortcoming of the algorithm is that it’s very much a “take what you get” approach. Of course, you can add a lot more detail to your prompts to gain more control over the final image, in terms of what goes into it, the style, and other parameters. If, like me, you have a specific creative vision, the algorithm is often frustrating, because that vision is hard to capture in words, and even harder for an AI to interpret and render. But what’s scary (for artists) and exciting (for non-artists) is that we’re in the infancy of this technology, and we’re only going to get more control over how images are generated.
For example, I tried the following prompt: Batman (on the left) and Dwight Schrute (on the right) get into a fist fight in a parking lot in Scranton, Pennsylvania. Dramatic lighting. Realistic photo. Monochrome. High detail. If I had given that prompt to a human, I hope they’d tell me to fuck off for talking to them like they were a machine, but if they had to create a drawing, I suspect they could interpret it in a way that makes conceptual sense. I gave the AI a bunch of tries, but there weren’t many illustrations that made me think “yeah, this is what I was looking for.”
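If you find yourself iterating on prompts like this, it can help to treat the prompt as structured data rather than one long string. Below is a minimal Python sketch (my own framing, not any official MidJourney syntax) that assembles the prompt above from its parts:

    from dataclasses import dataclass, field

    @dataclass
    class Prompt:
        subject: str                   # who or what is in the frame
        setting: str = ""              # where the scene takes place
        modifiers: list[str] = field(default_factory=list)  # style, lighting, etc.

        def render(self) -> str:
            scene = f"{self.subject} {self.setting}".strip()
            return ". ".join([scene, *self.modifiers]) + "."

    prompt = Prompt(
        subject="Batman (on the left) and Dwight Schrute (on the right) get into a fist fight",
        setting="in a parking lot in Scranton, Pennsylvania",
        modifiers=["Dramatic lighting", "Realistic photo", "Monochrome", "High detail"],
    )
    print(prompt.render())

Swapping one modifier at a time makes it far easier to see which words actually move the output.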
What about copyright?
There is another interesting quirk here; many of the styles are recognizable, and some of the faces are, too. Take this one, for example, where I asked the AI to imagine Batman as played by Hugh Laurie. I don’t know about you, but I’m very impressed; it has the style of Batman, and Laurie is recognizable in the drawing. What I have no way of knowing, though, is whether the AI ripped off another artist wholesale, and I wouldn’t love to be MidJourney (or TechCrunch) in court trying to explain how it all went wrong.

Hugh Laurie as Batman. Image Credits: MidJourney, from a prompt by Haje Jan Kamps, licensed under CC BY-NC 4.0.
This kind of problem comes up in the art world more often than you might think. One example is the Shepard Fairey case, in which the artist allegedly based his famous Barack Obama “Hope” poster on a photograph by AP freelance photographer Mannie Garcia. It all turned into a fantastic mess, especially when a bunch of other artists started creating art in the same style. Now we have a multi-layered plagiarism sandwich, where Fairey is allegedly plagiarizing someone else and being plagiarized in turn. And, of course, it’s possible to generate AI art in Fairey’s style, which makes things infinitely more complicated. I couldn’t resist giving it a spin: Shepard Fairey-style Batman with HOPE text at the bottom.

HE WAITS. A great example of how AI can get close, but not quite, to the specific vision I had for this image. And yet the style is close enough to Fairey’s to be recognizable. Image Credits: Haje Jan Kamps / MidJourney
Kyle has many more ideas about where the legal future of this technology lies:
So where does that leave artists?
I think the scariest thing about this development is how quickly we’ve gone from a world where creative feats like photography, painting, and writing were safe from machines to a world where that’s no longer so true. And, as with all technology, there will soon come a time when you can no longer trust your own eyes or ears; machines are going to learn and evolve at breakneck speed.
Of course, it’s not all doom and gloom; if I were a graphic artist, I would start using these tools for inspiration. Countless times I’ve been amazed at how well something turned out, only to think, “but I wish it was a little more [insert creative vision here].” If I had the graphic design skills, I could take what the AI gave me and turn it into something closer to my vision.
That may not be as common in the art world, but in product design, these technologies have been around for a long time. For PCBs, machines have been creating early versions of trace layouts for many years, often for engineers to tweak, of course. The same is true for industrial design; already five years ago, Autodesk was showing off its prowess in generative design:
It’s a brave new world for every job (including mine; I had an AI write most of a TechCrunch story last year) as neural networks become ever more capable and get ever-richer data sets to work with.
Let me close with this extremely disturbing image, in which several of the people the AI placed in the frame are recognizable to me and other TechCrunch staff:

“A group photo of TechCrunch Disrupt staff with confetti.” Image Credits: MidJourney, from a prompt by Haje Jan Kamps, licensed under CC BY-NC 4.0.
The MidJourney images used in this post are licensed under a Creative Commons Attribution-NonCommercial license, and are used with the explicit permission of the MidJourney team.