Hard Code | When AI wields the brush, do we call it the artist or the tool?
The boom in generative AI may change our relationship with art, but how we receive it is a question we cannot answer yet. Exposure will determine acceptability
Roughly 30,000 years ago, someone — perhaps more than one person — created lifelike portraits of animals in caves in present-day France. The Chauvet-Pont-d'Arc cave, in southeastern France, features paintings of horses, mammoths, leopards and cave hyenas. Carbon dating places these creations in the last of the palaeolithic eras. With the rock face as canvas, prehistoric humans fashioned painting instruments out of charcoal, stones and ochre, a natural earth pigment. The exquisite detail speaks to imagination, creativity and artistry. From ochre smeared on walls and etchings made with sharp stones to the electronic styluses and thousands of shades of paint artists use today, art remains the expression of humanity.
What happens when machines create art too (or compose music, or pen fiction)?
That machines can match our imagination has become clear in the last couple of years, with artificial intelligence (AI) tools like Dall-E and Midjourney creating visual renderings in the styles of artists, including those who lived long before our time. Today, these tools can portray Mickey Mouse in the style of Leonardo da Vinci with the same ease as they can place the cartoon character on the set of a Wes Anderson movie. In the past year alone, some examples of AI creativity have been truly staggering.
Take, for instance, the fake Balenciaga advertisement someone created with Harry Potter characters — every aspect of it, the visuals, the script and the voices, was AI-generated. Of course, it is clear as day that the ad is artificial: there is little doubt that the people in it do not exist and that the bionic-sounding words they say were never spoken.
Then there was the artificially generated voice of singer Grimes on a track called Cold Touch by a producer named Kito. The voice sounds eerily like Grimes, with her typically thin and hazy tones, but she never sang it. Grimes had released a “voice print” — an AI model trained on her voice — which made it possible for artists to artificially generate her vocals on songs.
And what started it all was a track called Heart on My Sleeve, produced by an artist called Ghostwriter. The AI creation here is the music and the singing, which make the song sound like a collaboration between rapper Drake and singer The Weeknd. The video of the track is no longer available on YouTube due to a copyright claim by Universal Music Group.
A matter of conflict
The above examples, it is important to remember, were the products of people who understood filmmaking, songwriting and music production, and who gave specific instructions to AI models to generate portions of the work. Even so, it is undeniable that the use of AI in any (or all) of this process has transformative implications for creative fields.
The conflict played out when the Writers Guild of America (WGA) went on Hollywood’s second-longest strike, demanding, in addition to better pay and working conditions, contractual guardrails to prevent AI from writing or rewriting the literary material its members create and earn their living from.
Then there is the ethics of how AI is taught. In December, The New York Times sued OpenAI, the developer of ChatGPT, accusing it and its investor Microsoft of unlawfully using copyrighted NYT content to train its AI.
The NYT, in its suit, attached examples of ChatGPT reproducing NYT content verbatim. Around the same time, an AI image tool was churning out renderings of the comic-book character Joker in scenes that bore an eerie resemblance to actual stills from the movie Joker.
AI tools, to be sure, do not keep a copy of the material they were trained on (their developers argue that such learning falls under the fair use of copyrighted material in American law). But the ability of machines to learn something so well that they can reproduce what is effectively a replica pushes up against a host of ethical, commercial and legal principles.
Isn’t the tool separate from the creator?
Can AI be called the creator, especially when AI-generated art becomes too sophisticated to trace back to its training content? Researchers argue that AI must consciously be treated as a tool — a continuation of how humanity has in the past used tools to create art.
“With aesthetics and culture, we’re considering how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was ‘the end of art.’ But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement,” said Ziv Epstein, a doctoral candidate at the Massachusetts Institute of Technology (MIT) and a co-author of the article “Art and the science of generative AI”, published in the journal Science last year.
Epstein goes on to argue that AI (generative AI, to be precise), is a “medium”. “The nature of art will evolve with that. How will artists and creators express their intent and style through this new medium?”
AI has no ability to think up abstract concepts or feel emotions; it has no agency or free will; its objective is hardcoded; and its ability to create is limited by all that has existed in the past. Nonetheless, University of Pennsylvania cognitive sciences professor Anjan Chatterjee argues, in a separate article in the journal Frontiers in Psychology, that the “continuing development of aesthetically sensitive machines will challenge our notions of beauty, creativity, and the nature of art”.
The eyes (and ears) of the beholder
Perhaps the most fundamental question can be determined by a simple test: how does a work of art created by a machine make you feel?
Early research suggests a lot may lie in knowing the provenance of a piece of artistic work. Researchers from Finland surveyed the attitudes of people towards AI art and published their findings in Poetics: Journal of Empirical Research on Culture, the Media and the Arts.
The research found that people were less open to the use of AI in arts and culture in general than to its use in fields like biology. As they dug deeper, asking respondents how they felt about the use of AI in creating, or detecting, forged art, they found that not everyone reacts the same way to the idea of AI in art. For instance, those who could relate to the use of technology, or felt a sense of autonomy when using such tools, were more open to the use of AI in art.
While this may seem intuitive — technologists tend to embrace new technologies — it does highlight that how machine-created art will be received is not a question that can be answered fully today. As capabilities and exposure change, so too will acceptability, or the lack of it.
But one thing has been established: AI can dramatically increase how much, and how quickly, content can be created. It can also automate associated tasks, such as scriptwriting, translation and image editing.
In this way, these tools could help set the scene on a canvas. But one question remains: is starting with a blank canvas always the best way?
Hard Code is a column in which Binayak looks at some of the emerging challenges from technology and what society, laws and technology itself can do about it.