When they told us that AI is coming for people’s jobs, most of us didn’t think that they were talking about artists. Our popular imaginings of artificially intelligent futures often seem to bracket the work of artists as somehow beyond the cold capacities of clever machines. Could AI handle the manual, administrative, and even strategic aspects of human endeavour? Perhaps. Creativity and aesthetic sensitivity, however, were presumed by many to be unprogrammable, too reliant upon emotion and the subtleties of lived experience.
This popular tendency to view art and those who make it as exceptionally human is likely a kind of cultural hangover from the aesthetic theories of the nineteenth century, in which art, especially poetry and painting, was widely proposed to be the self-expression of an extraordinary individual: a genius with a uniquely profound or sensitive subjectivity. Many of our culture’s paradigmatic symbols of artistic psychology, from Vincent van Gogh to Jim Morrison, have relied heavily on this trope of spiritual, cultural, and, frequently, tragic heroism.
All of these assumptions have been put to the test over the course of the past two years, as innovations in AI have become increasingly accessible to the masses and an integral facet of public discourse, especially with respect to education and the ethics of things like celebrity ‘deep fakes’.
At the centre of all this is, of course, a particular sub-category of artificial intelligence known as generative AI: the kind of technology famously responsible for everything from uncanny portraits with extra fingers to your favourite popstar’s robot-sounding cover of a song from the 1950s and, lest we forget, eerily corporate-coded essays from undergrads who haven’t done their reading.
Well-known interfaces like ChatGPT, DALL-E, Bard, and Amper all fall under this umbrella. Trained on large data sets of text, images, and audio, these systems are capable of generating original images, bodies of text, and sonic configurations from preexisting materials. While the interfaces most readily available for public use, like ChatGPT and DALL-E, very rarely produce anything of a quality high enough to raise the eyebrows of human artists, more sophisticated systems have produced works of considerable aesthetic merit.
The early alarm bells blared in September of 2022, when artist Jason M. Allen took home first place and a $300 cash prize in the ‘digital arts/digitally manipulated photography’ division at the Colorado State Fair Fine Arts Competition for his piece ‘Théâtre D’Opéra Spatial’. The image depicts an epic scene from a galactic royal court in a style somewhat evocative of nineteenth-century academicism, looking almost as though Sir Lawrence Alma-Tadema had painted a scene from ‘Dune’.
However, in the days following Allen’s big win, news broke that the image had been generated using Midjourney, a generative AI system that produces incredibly detailed and often hyper-realistic images from written prompts in a manner similar to DALL-E. The program might be best known for its viral 2023 image of Pope Francis sporting an exaggerated, rapper-style puffer jacket. Prominent voices from the art world and mainstream media alike sounded off about Allen’s win, triggering an initial flurry of quasi-philosophical questioning about the nature of art in the age of AI.
So much of this public discussion around AI and the arts has revolved around questions of authorship and representation. Can Jason Allen truly claim artistic responsibility for ‘Théâtre D’Opéra Spatial’? What about AI art generators trained on works by other human artists? Could this be considered a form of plagiarism? On the question of representation, how should we handle situations in which systems seem to have problematic biases in how they depict certain people or particular groups, as in the case of Megan Fox’s complaints about AI-generated images of her being excessively sexualised? While these are all legitimate and incredibly important questions to answer, one critical dilemma that seems consistently absent from this vibrant public discourse is that of artistic practice and how it might be altered or even endangered by the increasingly sophisticated abilities of generative AI.
Even the famously antiquated William Morris, who spent so much of his career trying to reclaim the dignity of artistic labour by reviving mediaeval design methods, made discerning use of new technologies in his printing and textile practices, granting that such technologies could be implemented without threatening the integrity of the art’s quality or the labour involved in making it. The question, then, remains: how might recent developments in generative AI and the demands of art coexist?
I spoke with Maggie Mustaklem, a doctoral researcher at the Oxford Internet Institute. Her current project, entitled ‘Design Interrupted’, examines the role that AI is increasingly playing in the artistic brainstorming process, particularly with respect to how designers and architects draw inspiration from things they find in the AI-curated feeds of Pinterest and Instagram. Unlike many of the tech-savvy intellectuals who have tended to chime in on this issue, Mustaklem has actually worked in the arts as a knitwear designer, and is well aware of the expertise such work demands.
She is resistant to the alarmism that pervades much of the popular discussion mentioned above. ‘I think that the scale and reach of generative AI in creative industries is often overblown’, Mustaklem notes. ‘My research focuses on the concept stage of the design process, where designers often pull images from the web for inspiration to present concepts to clients. Gen AI is well suited to assist with this task, and many are starting to experiment with it. However, during my research I conducted workshops with 15 design studios in London and Berlin. All of them were experimenting with gen AI, but none were using gen AI images to present concepts to clients. It is becoming a tool in the tool kit, but not one that has yet demonstrably altered the design process’.
This relatively modest impact of generative AI on the concrete practice of the arts cuts against one of the most common misconceptions floating around this issue at the moment. Like nearly every other sector of work, the creative industries have undoubtedly experienced increasing interest in the new possibilities presented by generative AI. However, ‘Statistics on job replacement and efficiency’, she notes, ‘often fail to consider points like how much of designing knitwear, or any product, is tangible and embodied, requiring localised skills and experience’.
‘I think new media and technology needs to be considered within the ecosystems it will disrupt,’ Mustaklem goes on to note. ‘A few years ago we thought 3D printers would replace overseas knitwear factories. Even though there’s some really exciting things happening with 3D printing, most knitwear is still produced overseas. Photography didn’t replace painting, but it did change painting. Gen AI will transform creative industries but it is unlikely to reshape them into something entirely different’.
Some artists have already begun to hint at what this transformed future for the arts might look like. While Mustaklem has design in mind, her prediction about the reconfiguration (rather than elimination) of traditional artistic practice also seems to hold for the so-called ‘fine arts’, like painting and creative writing.
An especially exciting example of this reconfiguration in the world of literature is the magazine Heavy Traffic. A partial product of the pandemic-spawned ‘Dimes Square’ art and intellectual scene in New York City, the magazine has become a burgeoning touchstone of the American literary avant-garde. Distinct from Mustaklem’s vision of AI as a kind of collaborative design or conceptualising tool, writers publishing with Heavy Traffic present a more apophatic path for grappling with AI’s ability to mimic human creativity.
In an interview with ‘Dazed’, editor Patrick McGraw describes the magazine’s signature style as ‘shizzed out gibberish’, citing our culture’s AI-instigated shifting relationship to language as a prompt for taking art somewhere computers trained on patterns might have a difficult time following: towards poetic disruption and instability. As implied by McGraw’s colourful description, the writing in Heavy Traffic is characterised by a jarring, aggressively chaotic tone and even borderline incomprehensibility.
In some respects, a move like this is akin to how painters reacted in the wake of photography. With painting no longer needed as a medium for capturing visual reality, the impressionists through to the cubists and abstract expressionists sought to capture what photography could not: subjective sensation, perspective, and pure form.
Whether any of the above methods of grappling with the intersection of art and artificial intelligence can or should sustain our artistic needs into what we can fairly say will be a tech-driven future is by no means evident. However, they are a reminder that ‘human art’ and practice are by no means under existential threat. While the great nineteenth-century myth of singular artistic genius might well wither away in the wake of generative AI, the concrete work of the artist seems entirely capable of adapting for the time being.