The possibilities are(n’t) endless: exploring AI’s place in design

AI image generators like Midjourney and DALL-E have recently taken the creative industries by storm, and are one of the more tangible ways AI is beginning to affect our work. In this article, we explore the exciting possibilities of these tools and further examine the practical, legal and ethical issues surrounding them.

You might mistake the above image for one of our latest renders, but its origins are slightly more mysterious: a text prompt from our Creative Director, relayed to an AI image generator called DALL-E, and transformed into a unique image.

In April this year, the creative world reacted with equal measures of excitement and controversy to a new kid on the block: an AI image generator called Midjourney. Everyone from game designers to creative directors began experimenting with it, turning words into weird and wonderful custom images. Since then, several other AI image-generating systems (such as DALL-E) have risen in popularity as this approach garners more mainstream interest.

We sat down with our Creative Director, Clayton Welham, to explore how he’s been using them in our work, the legalities and potential of these tools, and why constraining their input might be a good thing.

Firstly, tell us a bit about how AI image-generating tools work.

In the simplest terms, these systems take a user’s text ‘prompt’ and turn it into a set of image-based responses. The systems collate visual references from undisclosed sources around the internet and weave them into unique compositions. The systems themselves are constantly evolving, offering both richer visual outputs and more features (such as higher-resolution images and the ability to iterate on existing creations).

Activated environment and architecture study

How are you using these tools with the design team here at Found?

We started using Midjourney around May 2022, ahead of the open beta launching in mid-July. We introduced DALL-E into the studio a few months later. Once I got past the initial excitement of being able to make an image of absolutely anything, I started thinking about how to apply it to our project workflow. 

We’re lucky as we regularly work with clients in the science and technology spaces, which means we’re often asked to think about how to visualise something that either doesn’t exist yet, is impossible to photograph, or is an abstract idea. This part of a project is where we have the best opportunity to take control of the creative process and think about what we want to achieve with our team and the tools at our disposal.

Exploring the application of traditional carpentry joints at molecular level

Every project, if researched well enough, has many different factors and thoughts that can be brought into the concepting process. It’s the combination of these unique inputs that will set us on the right development track to create a distinctive and relevant result. I’m finding these AI systems are perfect for this ‘combining’ process, allowing me to test factors that might not be an easy fit on paper, or might take a very long time to combine through traditional methods.

I can see how the use of these systems can really benefit me and our design process at Found. They have the potential to become a really efficient way to form unique starting points for us to lift off from. As an example, we’re currently working with Franklin Till on a project that is focused on materiality and circularity. In the early stages of this, we did a lot of thinking around the notion of products, materials and bonds.

Using Midjourney, I was able to explore ideas that pushed how we think about fabrication, specifically bonding and fixing materials with the product’s end of life in mind. The freedom this process gave me allowed for more ambitious thinking, which is starting, in part, to inform the wider project’s development.

Combining the terms ‘paper archive’ and ‘voice pattern audio waveforms’

What are the limitations and possibilities of these tools as part of the creative process? 

One thing I learnt very quickly is that no two image results are the same, even if the same text prompt is used. This makes using these tools feel more like an organic imagination process than a mechanical image-making tool (which I like), but it can also be quite frustrating when trying to introduce some level of control into the process (which I don’t like!).

Because of this, it can take multiple attempts to get close to what I have in my head. I find that once I’m revising an image result multiple times (and trying to get the system to hit the nail on the head), what I want is better achieved with a concept artist or designer I can talk to and iterate with more accurately.

Longer term, I think there could also be an issue with how the systems evolve (or erode) as they keep ingesting what they and others create. The question here is whether evolution means the system creates more original outputs, or whether it just cannibalises itself. If the latter happens, then the excitement is over and we’re back to the system leading the way creatively.

Using compacted materials for colour and texture research

Despite these limitations and concerns, I do think there are some amazing possibilities for how these systems might integrate into our workflow.

One possibility is for these tools to be integrated into design software (over and above any AI that already exists within them). If and when that happens, I think it will open up possibilities for efficiencies to be made in our early idea generation, concept and even previs stages.

Another possibility is seeing how outputs could become more customisable. Midjourney is already improving its image quality based on collective user image selection, but these tools are yet to have customised learning. If AI can begin to learn from individual users and build on results that those individuals have selected before, like how data ‘cookies’ from browsing tailor the ads we’re shown, then there are possibilities for it to become very personalised, like a more honed extension of someone’s imagination.

Visualising the design and fabrication process of furniture

This technology isn’t without its ethical issues, though. What’s your view on these?

Some of the criticism around these tools cites examples of them being used to create commercial pieces of art and design, with Midjourney’s recent decision to lift its non-commercial use ban only adding to these concerns. Journalists and artists are rightly debating whether AI-generated art breaches copyright law, because it relies upon imagery that already exists somewhere online.

There’s also the question of ‘Will I be out of a job?’ and ‘Why will they need me in a year’s time?’, which is particularly pressing for professionals such as concept artists. My belief is that we can all try to protect our positions by researching, thinking and developing well-thought-out concepts, then using AI tools to take these to further far-out places. I hope it will remind creatives of the value of an idea and really shine a light on derivative works.

Building impossible optical devices and lens arrays

With all this considered, how do you see AI and human creative processes working together in the future? 

We’re already living in a world where we can’t produce the kind of work we make without the use of very intelligent machines, and where the end product in our line of work, CG, is something that only ever exists digitally.

So, I don’t see the use of AI image-generating tools as much different to humans using computers or software to create things. As Midjourney’s founder, David Holz, recently said: “What does it mean when computers are better at visual imagination than 99 per cent of humans? That doesn’t mean we will stop imagining. Cars are faster than humans, but that doesn’t mean we stopped walking.” 

The creative future with these tools might look quite similar to now, but with increased efficiency and customisable learning. By constraining their use to visualisation in the conception stage of a project, the next generations of AI tools could become invaluable to us in accelerating and amplifying the creative processes we all already use.
