What artists want from AI tools: with Eoghan O’Keeffe

Credit: Image provided by the artist

At The New Real, we want to reflect and represent those at the forefront of art and AI – and to work with them to develop actionable strategies and signposts for practitioners.

Here Eoghan O’Keeffe, an artist and toolmaker, shares with us the key features and capabilities artists are looking for in AI tools as these technologies move forward.

We wanted to know: Looking beyond the text prompt, what do artists want to see from a new generation of tools? How can we give human artists greater agency in co-creation with AI? How can artists create works based on a rich understanding of the models? And how important is it that tools are legible, interpretable, or configurable?

Here is what Eoghan told us.

 —

We want tools which are multimodal.

We want tools that go beyond the text prompt: tools that allow us to engage multimodally; where we can mix audio, text and video, to connect with all our senses; where we can use our own natural embodied forms, motions, senses; where there’s a real richness applied to the input that better reflects how we as artists – and as humans – engage with the world. This is so that an AI’s interface with the world can behave similarly to our own, and so that its intelligence is relatable to and fitted with ours; after all, intelligence is informed by that context and contact.

We want tools which don’t require us to tailor our inputs to what the AI wants.

We want to be able to play with concepts, not with the perfect text prompt – and move from narrow forms of communication suited to computers to rich ones suited to humans. We want to be able to input ideas, themes, feelings, patterns – higher-level concepts which are far more in tune with how we as artists explore the world. We want to converse, and express ourselves, as humans. Contemporary AI is already incredibly good at interpreting this kind of rich natural communication, but that ability could be drawn on far more as the interface and the mode of interaction.

We want to be able to see – and truly interact with – the AI’s conceptual latent space.

We want to be able to play in the AI’s inner domain, not just get one ‘final’ result. We want a ‘fruit fly’s eye’ view of the range of thought patterns and lines of exploration the AI system undertakes before it reaches its endpoint – so we can see the AI’s main line of thought in the centre, with variations spreading out into the peripheries (a fruit fly has what’s called a ‘compound eye’, which sees many angles all at once). We want to be more in the loop, truly playing within the AI, not simply using it.

We want more interoperability between tools.

We want to be able to create work which works – combining different platforms so that generalist tools can mediate between more specialist ones. We want to be able to mix tools both old and new, and work effectively in the chaos at the forefront of AI innovation and development.

We want tools which are more interpretable, so we can better interrogate the different levels of the neural networks.

We know that interpreting an AI’s inner workings is like doing brain surgery on a neural network, and that it’s difficult to intuit exactly how and why we get certain results. But we want to be able to explore those insights more fully, and to have more of an understanding of how the systems have been built so we can better disentangle them. Ultimately, by better understanding how AI systems work, we can learn more about our own experience as humans – the beings these systems were inspired by in the first place.

We want to be able to converse with our AI tools.

We want to be able to have dynamic, real-time conversations with our AI tools. And this is more than to-and-fro chat logs: it is about being able to see how the AI system thinks as it’s doing its thinking – witnessing the real-time branching, seeing the different threads of processing – so that we can step in and pivot the processing, and the conversation, as we go.

Eoghan O'Keeffe makes art and creative work through technology as epok.tech. In his approach he adapts and learns across disciplines: coming from a fine-art background and moving into creative technology, he is now developing an artistic practice that creatively combines both fields. He pursues creative and conceptual challenges and explores creative applications of emerging technology: experimenting with tech, physics, maths and art; developing real-time interactive graphics, web, apps, games, AI and XR; and exploring new spaces to create striking experiences and utility.

These strategies were articulated in an interview with Eoghan and have been edited for clarity.
