There is some anxiety in my industry surrounding the rapid advancement of Artificial Intelligence and how it may someday put designers out of a job. And while I don't think most designers have anything to worry about in the immediate future due to some of the limitations of AI, I feel it's time for designers to sit up and pay attention to what's happening in the world of AI, how it's going to impact our industry, and how to think of AI as a potentially powerful and reliable creative partner.
Humans have always worked towards creating machines and technology that improve the ways in which we get things done. From the wheel to indoor plumbing to smartphones, we have evolved to create technology that makes our work and life a bit easier — and we’ve evolved not just ourselves but also our jobs as a result of that.
Design is not immune to progress and is already being impacted by AI. But by learning more about AI, experimenting, and embracing the technology now, you as a designer will have the advantage of a design partner and tool that you can use to meet ever-evolving workplace demands.
Limitations of AI
Why do I keep saying that AI makes a good creative partner? Shouldn't a design-bot using AI whip out a perfect annual report or generate 20 different logos in a matter of seconds? Shouldn't it anticipate a user's needs and adjust UI design on the fly based on user behavior or tech or mood? Yes. It should. And it will, eventually. But in the meantime, it's important to know some of the limitations of AI before jumping in and getting all frustrated.
It's not great with emotion
I've always been very in tune with my audience's emotional state at various points in a sales funnel or marketing campaign. This skill has proved to be a phenomenal advantage over the course of my career as a Creative Director and especially when working in pitch or presentation situations. But really, I'm not that special. We all do it. It's hard-wired into our brains as humans to detect and decode other people's emotions based on body language, tone of voice, context and social cues – all of which are based on cultural and learned norms.
You can look at someone whom you've never met before, walking towards you in the grocery store, and recognize instantly whether they are mad, sad, scared, or happy. AI, on the other hand, needs help decoding emotional cues; it doesn't pick up on things like social cues automatically.
It Needs Significant Training Before Creating Original Content
When I work with AI, I need to spend a significant amount of time establishing rules and 'training' it. For example, when I compose a piece of music with AI, I need to teach the AI what music itself should sound like.
Consider a standard-sized piano keyboard. It has 88 keys, each of which can be pressed quickly or held to sustain; velocity affects volume; pedals can be involved; and, of course, pianos are polyphonic. With so many variables, how would AI have any idea what "music" should sound like? How do we not end up with a bunch of disorganized, random notes? The answer is simple: I teach it. I develop a framework for the AI to work within, establish rules based on music theory, and expose it to the music of Bach, Mozart, and other great composers as examples, or patterns, for how to apply those rules to the available 88 keys.
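To make the idea of a rule framework concrete, here is a minimal Python sketch (my own illustration, not the system I actually used): random notes across the 88 keys are only accepted if they satisfy two simple music-theory rules, staying in a chosen scale and avoiding melodic leaps larger than an octave.

```python
import random

# MIDI note numbers for an 88-key piano run from 21 (A0) to 108 (C8).
# Rule 1: restrict notes to the C major scale (by pitch class).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def in_scale(note):
    return note % 12 in C_MAJOR

def generate_melody(length=16, start=60, max_leap=12, seed=None):
    """Draw random notes from the full keyboard, but only accept those
    that follow the rules: in scale, and no leap wider than max_leap
    semitones (Rule 2). This is the 'framework' the randomness lives in."""
    rng = random.Random(seed)
    melody = [start]  # start on middle C (MIDI 60)
    while len(melody) < length:
        candidate = rng.randint(21, 108)
        if in_scale(candidate) and abs(candidate - melody[-1]) <= max_leap:
            melody.append(candidate)
    return melody

print(generate_melody(seed=42))
```

Without the two `if` conditions, this is exactly the "bunch of disorganized, random notes" problem; with them, the same random process stays inside a musical framework.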
Here is an example of a piece I created called "Tables". With training in place, I introduced "seeds" that were ideas I had for the basic melody and structure of the piece. My AI partner took my original ideas and built on them. I rejected some of the ideas, kept others in their entirety, and modified suggestions from my AI partner that were in the right direction but not quite there yet. In the end, I created this:
AI is Amazing at Recognizing Patterns
As designers, we are trained to look for patterns in our world. The best designers (IMHO) are able to identify very subtle visual cues that correlate with a person's emotional state. For example, I designed a piece once upon a time for a client in the midwest targeted towards blue-collar men in their 30s. The piece itself needed to convey 'strength' and 'power'. My design research focused on visual cues that our target audience would instantly decode as "strength".
I settled on a "V" shape that would become the design theme for the entire piece. The "V" is a classic symbol for a 'strong' male (broad shoulders, thin waist), as well as the shape of a Harley Davidson engine that our audience instantly associated with 'power' and 'strength'.
If at the time I had an AI partner, the "V" concept wouldn't have been the final idea. It would have been the seed. It would have been the foundation for training AI to generate countless, additional iterations based on a design direction agreed to by stakeholders.
How will the Role of the Designer Change with AI?
If you have been in the industry for the past 15 years or so, you've already seen shifting expectations of what defines a designer. Specialized designers are becoming a thing of the past. Not too long ago, being a web designer was a specialty. So was being a print designer, for that matter. But not only have those lines blurred over the years, there is now a need (and an expectation) for designers to be T-shaped, with breadth across product strategy, visual design, and interaction design.
As our industry evolves and continues to redefine what a designer's role is, I believe that AI will begin to turn designers into curators before creators. As a Creative Director, I may take a 'seed' of a design idea, along with an over-arching brand strategy, and put it in the hands of a design team to run with. I would expect jaw-dropping iterations. I would expect the discovery of exceptions. Now, if a designer were using AI and could crank out hundreds (or 7 million) iterations based on the provided creative brief, I guarantee you a large percentage of them would be unusable. That is where the trained designer of the future will come in: look at the output, look at the business objectives, look at the emotional state we understand our audience to be in, look at the brand as a whole, and start making design decisions. Tweak, fix, and present.
I believe that designers, working with their AI partners, will become visual-system designers rather than individual-artifact designers. Designing a system that implements generative design based on human ideation and input (seeds), and that results in artifacts that are on strategy, will, I believe, be the future of design.
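The generate-then-curate workflow can be sketched in a few lines of Python (a toy illustration of the idea, not a real design tool): generate hundreds of foreground/background gray pairs at random, then keep only the ones that clear an objective constraint, in this case the WCAG 4.5:1 contrast ratio.

```python
import random

def luminance(gray):
    """Relative luminance of an sRGB gray value in [0, 255] (WCAG formula)."""
    c = gray / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(l1, l2):
    """WCAG contrast ratio from two relative luminances in [0, 1]."""
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

def generate_pairs(n, seed=None):
    """The 'generate' half: n random foreground/background gray pairs."""
    rng = random.Random(seed)
    return [(rng.randint(0, 255), rng.randint(0, 255)) for _ in range(n)]

# The 'curate' half: most candidates are unusable; keep only those that
# pass a hard accessibility constraint (at least 4.5:1 contrast).
candidates = generate_pairs(500, seed=7)
usable = [(fg, bg) for fg, bg in candidates
          if contrast_ratio(luminance(fg), luminance(bg)) >= 4.5]
print(f"{len(usable)} of {len(candidates)} candidates survive curation")
```

The automated filter only handles the objective rule; judging which survivors are on brand and on strategy is exactly the curation work that stays with the designer.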
Creating Original Content with AI
As I mentioned before, one of the hurdles to using AI is training your AI. The second big hurdle is training yourself.
I created the piece above, "Seed_7001", using AI. And it wasn't easy. First, I had to learn the R programming language. Then, I used a formula as the basis for generating the art that looked something like this:
# Two quoted R expressions define how each point (x_i, y_i) is moved;
# runif(1, -1, 1) injects a fresh random coefficient on every evaluation.
my_formula <- list(
  x = quote(runif(1, -1, 1) * x_i^2 - sin(y_i^2)),
  y = quote(runif(1, -1, 1) * y_i^3 - cos(x_i^2))
)
By then applying another layer of variables, including colors, polar coordinates, and size, I can begin to see results. If what is produced is unexpected or not what I had in mind, it's a matter of tweaking the code to get what I'm looking for.
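To make that second layer of variables concrete, here is a rough Python transcription of the workflow (the formula mirrors the R snippet above; the grid size, the polar mapping, and all function names are my own illustration, not the code I actually used):

```python
import math
import random

def transform(x_i, y_i, rng):
    """Python version of the R formula: each input point gets a fresh
    random coefficient in [-1, 1] on every evaluation."""
    x = rng.uniform(-1, 1) * x_i**2 - math.sin(y_i**2)
    y = rng.uniform(-1, 1) * y_i**3 - math.cos(x_i**2)
    return x, y

def generate_points(n=50, seed=None):
    """Run the formula over an n x n grid in [0, 1], then apply a second
    layer of variables: a polar mapping (x as angle, y as radius) and a
    size channel a plotting library could map to dot radius."""
    rng = random.Random(seed)
    grid = [i / (n - 1) for i in range(n)]
    points = []
    for x_i in grid:
        for y_i in grid:
            x, y = transform(x_i, y_i, rng)
            px = y * math.cos(2 * math.pi * x)
            py = y * math.sin(2 * math.pi * x)
            points.append((px, py, abs(y)))  # (plot x, plot y, size)
    return points

points = generate_points(seed=1)
print(len(points))  # one (x, y, size) triple per grid cell
```

Tweaking here means changing the exponents, the random range, or the polar mapping and regenerating; a color scale over the size channel would be one more layer in the same spirit.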
Is this Ethical?
Of course, this raises a very old question in a very new way. From the examples above, did I design "Seed_7001"? Did I compose "Tables"? Can I take credit for it? Do I have legal rights to the end artifact? Can I sell it?
I found the #nofilter hashtag fascinating when it came out. While the subtext of the hashtag is "This is what I actually look like," it was one of the early ways consumers of digitally created artifacts (selfies) grappled with crediting (or not crediting) AI.
If you apply a filter to a photograph, and that filter is the difference between a good shot and a great shot, are you an amazing photographer with an impeccable eye, or did you just use AI to help you generate an artifact very quickly? Does it matter that you didn't spend two days on a shoot, spending thousands on the perfect lighting, to get the effect you were looking for?
I don't have answers to these questions yet, but I'll certainly keep asking them.
Yes. This article was written with a GPT-3 AI partner. :)