
Imagination, language and the future: further thoughts around AI in design
In our previous article, we wrote about how we were starting to explore the possibilities that text-to-image AI tools can offer us. That article, and our accompanying talk at Motion North, led to a series of webinars we hosted to educate partners and foster discussion about where this technology might take us.

Throughout October, we brought together figures from across the creative industries – agencies, brands and academia – to demonstrate and discuss the capabilities and implications of AI tools in design.
One of the first things raised whenever AI is discussed is that it will be “the end of this” or “the death of that”, and it was certainly a talking point across our webinars. Fear of AI as a technology that will replace humans in industry is nothing new, and it applies just as much to image-based tools in the creative industries.
The speed at which this technology is developing is breathtaking. It will be fascinating to see where and how it develops and what current tasks (and, by association, roles) it might take over, automate or supersede in time. However, we’re finding that it only does what it does with humans at the wheel and with existing visual content as its source – this is stating the obvious, but it’s a significant point to make.
We choose what to write in a prompt, direct the tool, change the outcome with different wording, alter the output size and pick the colour we want it rendered in. This process is similar to the choices and decisions we make daily as designers, albeit via a very different interface, a very different tool and a surprising and fast outcome. This makes the whole process new, hard to understand and unusual – all of which can make us feel uneasy.

Who is the creator in this future?
There is a great sense of unease for many people around the ethics of AI-generated art, and two big concerns came up in our discussions: where does the source material come from, and how does copyright apply to both the sources and the art the tools create?
The issues around source material are myriad. Key questions include whether the tools are sourcing from copyrighted images (and, if so, what that means for their use), how narrow (or broad) the sources are in their representation of race and gender, and whether new images created in the tools are feeding back into the sources. These questions lead naturally to concerns about what kinds of biases come through in the resulting images – concerns rooted in how the tools are built and the source materials they are trained on.
Regarding copyright, there are concerns about how much a generated piece borrows through its text prompt (when a prompt names a particular artist, for example) and who owns the copyright of the resulting image – and it’s here that the challenge of how we should use text-to-image AI lies.
Let us step outside of AI for a minute into a traditional concept-generation process. Is the copyright concern around AI any different from us building trend-based mood boards of other artists’ work, using the same plug-ins and software to achieve similar results, and drawing on the same cultural reference points? Even if we’re not consciously doing this, it can and does happen – so are we not always borrowing, updating and remixing somehow?
It’s our view that the responsibility lies, as it always has in commercial creativity, with us as the makers: to ensure that what we create is original by pushing beyond the pastiche and the derivative into something genuinely new.

Limitless or limited?
These are early days, but we can already see the limitations of where the tools pull from. It will be interesting to see how they evolve as they gain broader traction – will we see a world of new images or a homogenisation of the world around us? We also see limitations and differences in how and what different tools create – for example, the stylistic differences between DALL·E 2 and Midjourney imagery.
What is important right now is to understand that these creations aren’t necessarily the final answer to every brief or creative endeavour. We can explain what we want a visual result to look like with good references and research, but shaping what starts in our imagination into a clear visual direction can be tricky – AI can help us with this, but it can also hinder or divert us by going off on tangents.
Diversions and frustration can arise when AI tools don’t quite achieve what you have in mind – similar to the feeling of trying to explain or sell an idea with reference imagery. It’s a symptom of any new tool that we can overuse or burn out on. With AI text-to-image tools, the ability to make endless rapid iterations of a result means that there’s an intrinsic hope attached to every button press, which isn’t always a good thing.
This experience points back to how vital our imagination is in the process – how we combine thoughts, ideas and references is what will make something new. We see enormous value in these tools offering new combinations, ideas and accidents that we might not have encountered before. However, human input – the role of editor or director – is still the final piece of the puzzle.

The Power of the Prompt
Writing is a vital part of the creative process, whether in drafting a brief or in forming an essay as a finished work. AI prompts tap into this intrinsic relationship between words and images. When prompting an AI tool, our choice of words and writing style (and even where we leave gaps) affects the visible results. The importance of the prompt elevates language, as it’s how we direct the AI system. Explicit requests or suggestive keywords engage the AI at different levels.
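To make this concrete, here is a minimal sketch of how two wordings of the same subject might be sent to a text-to-image model. It assumes the OpenAI Python SDK and an API key in the environment; the prompts are our own illustrative examples – one explicit and directive, one looser and more suggestive.

```python
# Minimal sketch: two wordings of the same subject sent to a
# text-to-image model. Assumes the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# One explicit, directive prompt and one looser, suggestive one.
# The wording itself is the design decision.
prompts = [
    "A lighthouse on a cliff at dusk, flat vector illustration, "
    "two-colour palette of deep navy and warm orange",
    "A lonely lighthouse, stormy, painterly, melancholic",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-2",   # the DALL-E model discussed in this article
        prompt=prompt,
        size="1024x1024",   # output size is another explicit choice
        n=1,                # one image per prompt
    )
    print(prompt, "->", result.data[0].url)
```

Running both and comparing the results is a quick way to feel how much weight each word carries.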
So we come back to one of humanity’s oldest creative outputs – written language, driven by human imagination. The challenge is how we can make that language work hard: fuel it, feed it, and give it new and different sources and ideas to think about. We can all imagine and write anything, but we can’t all draw it, make it or paint it, so these tools may fill that gap.

So how do we deal with this?
Our approach so far has been to embrace the tools and learn how to use them so we can understand them better. By having a mindset based on curiosity rather than fear, we’re figuring out their benefits whilst learning about their limitations and pitfalls.
We’re also trying to understand how they might fit into our working process, and we’re excited about how they might start to help us in other ways. For example, what would a CG designer’s day look like when AI is automating more mechanical tasks and speeding up notoriously slow processes like rendering?
Ultimately, as these technologies move from toy to tool, we are starting to see how and where they can fit into our creative process and where they can begin to deliver real value by giving us more time and space to think.
Want to keep reading?
Explore our most recent Perspectives articles below:
The possibilities are(n’t) endless: exploring AI’s place in design
What makes a portrait a portrait?