In 2022, when ChatGPT arrived, I was part of the first wave of users. Delighted but also a little unsure what to do with it, I asked the system to generate all sorts of random things. A song about George Floyd in the style of Bob Dylan. A menu for a vegetarian dinner. A briefing document on alternative shipping technologies.
The quality of what it produced was variable, but it made clear something that is even more evident now than it was then: this technology was not just a toy. Rather, its arrival marks a turning point in human history. In the coming years and decades, AI will transform every aspect of our lives.
But we are also at a turning point for those of us who make a living with words and, indeed, for anyone in the creative arts. Whether you’re a writer, actor, singer, filmmaker, painter or photographer, a machine can now do what you do, instantly and for a fraction of the cost. Maybe it still can’t do it as well as you, but, like the Tyrannosaurus rex in the rearview mirror in the original Jurassic Park, it is gaining on you, and quickly.
Faced with the idea of machines that can do everything humans can do, some have simply given up. Lee Sedol, the Go grandmaster who was defeated by DeepMind’s AlphaGo system in 2016, later retired, declaring that AlphaGo was “an entity that cannot be defeated” and that his “entire world was collapsing”.
Others have asserted the innate superiority of human-made art, returning to the idea that there is something about the things we make that cannot be replicated by technology. In the words of Nick Cave:
Songs arise out of suffering… from the complex, internal human struggle of creation… (but) algorithms don’t feel. Data doesn’t suffer… What makes a great song great is not its close resemblance to a recognisable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past.
It’s an attractive position, and I would like to believe it, but unfortunately I can’t. Not only does it commit us to a hopelessly simplistic – and, frankly, reactionary – binary in which the human is intrinsically good and the artificial intrinsically bad, it also means the category of creation we are defending is vanishingly small. Do we really want to limit the work we value to those stunning works of art created out of deep feeling? What about costume design, illustration, book reviews and all the other things people make? Don’t they matter?
Perhaps a better place to begin the defense of human creativity is with the process of creation itself. When we make something, the final product is not the only thing that matters. In fact, it may not even be the most important thing. There is also value in the act of making, in the craft and care of it. That value does not inhere in the things we make, but in the creative work of making them. The interplay between our minds, our bodies and the thing we are making is what brings something new (some understanding or presence) into the world. But the act of making also changes us. That can be joyful; at other times it is frustrating or even painful. Either way, it enriches us in ways that simply prompting a machine to generate something for us never will.
What is happening here is not about giving free rein to our imagination, but about outsourcing it. Generative AI takes a part of what makes us human and hands it to a corporation so it can sell us a product that claims to do the same thing. In other words, the true purpose of these systems is not liberation but profit. Forget the simplistic marketing slogans about boosting productivity or unleashing our potential. These systems are not designed to benefit us as individuals or as a society. They are designed to maximize the ability of technology corporations to extract value by strip-mining the industries they disrupt.
This reality is particularly stark in the creative industries, because the ability of AI systems to create stories, images and videos did not come out of nowhere. To do these things, AIs must be trained on massive amounts of data. These data sets are assembled from publicly available material: books, articles, Wikipedia entries and the like in the case of text; videos and images in the case of visual data.
Exactly what those works are is already highly contentious. Some, like Wikipedia and out-of-copyright books, are in the public domain. But much of it (and possibly most of it) is not. How could ChatGPT write a song about George Floyd in the style of Bob Dylan without access to Dylan’s songs? The answer is that it couldn’t. It could imitate Dylan only because Dylan’s lyrics were part of the data set used to train it.
Between the secrecy of these companies and the fact that the systems themselves are effectively black boxes, their internal processes opaque even to their creators, it is difficult to know exactly what any individual AI has ingested. What we do know for certain is that large amounts of copyrighted material have already been fed into these systems, and are still being fed into them today, all without permission or payment.
But AI does not merely erode the rights of authors and other creators; these technologies are designed to replace creative workers altogether. The writer and artist James Bridle has compared this process to the enclosure of the commons, but whichever way you look at it, what we are witnessing is not just “systematic theft on a massive scale” but the willful and deliberate destruction of entire industries, and the transfer of their value to Silicon Valley shareholders.
This unbridled rapacity is not new. Despite advertising campaigns promising attention and connection, the entire business model of the tech industry depends on extraction and exploitation. From publishing to transportation, tech companies have inserted themselves into traditional industries and “disrupted” them by evading regulation, trampling hard-won rights, or simply fencing off things that were once part of the public sphere. Just as Google leveraged creative works to build its libraries, file-sharing technologies devastated the music industry, and Uber’s model depends on paying its drivers less than taxi companies pay theirs, AI maximizes its profits by refusing to pay the creators of the material on which it depends.
Meanwhile, the human, environmental and social costs of these technologies are carefully kept out of sight.
Interestingly, the sense of helplessness and paralysis that many of us feel in the face of the social and cultural transformation unleashed by AI resembles our inability to respond to climate change. I don’t think that’s a coincidence. In both cases there is a profound mismatch between the scale of what is happening and our ability to conceptualize it. We find it difficult to imagine fundamental change, and when faced with it, we tend to panic or simply shut down.
But it is also because, as with climate change, we have been fooled into thinking that there are no alternatives and that the economic systems we inhabit are natural, and arguing with them makes as much sense as arguing with the wind.
In fact, the opposite is true. Companies like Meta and Alphabet, and more recently OpenAI, have achieved their extraordinary wealth and power only because of very specific regulatory and economic conditions. Those arrangements can be changed. That is within the power of governments, and we should insist on it. There are currently cases before courts in several jurisdictions seeking to establish that the mass expropriation of the work of artists and writers by AI companies is a violation of copyright. The outcome of these cases is still unclear, but even if the creators lose, the fight is not over: the use of our work to train AI must be brought under the protection of the copyright system.
And we shouldn’t stop there. We should insist on payment for work that has already been used, payment for all future use, and an end to the tech industry’s practice of taking first and saying sorry later. Its use of copyrighted material without permission was not accidental. These companies did it deliberately because they thought they could get away with it. The time has come for them to stop getting their way.
For that to happen, we need regulatory structures that ensure transparency about what data sets are used to train these systems and what those data sets contain. And audit systems to ensure that copyright and other forms of intellectual property are not violated, and to impose significant sanctions if they are. And we must insist on international agreements that protect the rights of artists and other creators rather than facilitating corporate profits.
But above all, we need to think carefully about why what we do as human beings, and as creators and artists in particular, matters. It is not enough to mourn what is being lost, or to fight a rearguard action against these technologies. We need to start making positive arguments about the value of what we do, and of creativity more generally, and to think about what form that might take in a world where AI is ubiquitous.
This is an edited version of the Australian Society of Authors’ 2024 Colin Simpson Memorial Keynote, titled ‘Creative Futures: Imagining a Place for Creativity in a World of Artificial Intelligence’.