Cate Blanchett, beloved actress, film star and refugee advocate, stands at a lectern, addressing the European parliament. “The future is now,” she says authoritatively. So far, so normal, until she says: “But where the hell are the sex robots?”
The footage is from a speech Blanchett gave in 2023, but the rest is made up.
Her voice was generated by Australian artist Xanthe Dobbie using the text-to-speech platform PlayHT, for Dobbie’s 2024 video work Future Sex/Love Sounds: an imagining of a sex robot-induced feminist utopia, featuring cloned celebrity voices.
Much has been written about the world-changing potential of generative AI models such as Midjourney and OpenAI’s GPT-4, which are trained on vast amounts of data to create everything from academic essays, fake news and “revenge porn” to music, images and software code.
Advocates praise the technology for speeding up scientific research and eliminating routine administrative tasks, while a wide range of workers – from accountants, lawyers and teachers to graphic designers, actors, writers and musicians – face an existential crisis.
As the debate continues, artists like Dobbie are turning to those same tools to explore the possibilities and precariousness of technology itself.
“There’s this whole ethical grey area because legal systems can’t keep up with anywhere near the speed at which we’re proliferating technology,” says Dobbie, whose work draws on internet celebrity culture to interrogate technology and power.
“We see celebrity replicas all the time, but our own data – we, the little people of the world – is collected at exactly the same rate… It’s not really the capability of the technology (that’s wrong), it’s the way flawed, dumb, evil people choose to use it.”
Choreographer Alisdair Macindoe is another artist working at the nexus of technology and art. His new work Plagiary, which premieres this week as part of Melbourne’s Now or Never festival before a season at the Sydney Opera House, uses custom algorithms to generate new choreography, which the dancers receive for the first time each night.
While the instructions generated by the AI are specific, each dancer can interpret them in their own way, making the resulting performance more of a collaboration between human and machine.
“Often the questions (from dancers) at the beginning are along the lines of, ‘I’ve been told to turn my left elbow repeatedly, to go to the back corner, to imagine I’m a cow that’s just been born. Do I still turn my left elbow at that moment?’” Macindoe says. “Pretty soon it becomes a really interesting discussion about meaning, interpretation and what is truth.”
Not all artists are fans of the technology. In January 2023, Nick Cave published a scathing response to a song generated by ChatGPT in imitation of his own work, calling it “nonsense” and “a grotesque mockery of what it is to be human.”
“The songs come from suffering,” he said, “by which I mean they are based on the complex internal human struggle of creation and, well, as far as I know, algorithms don’t feel.”
Painter Sam Leach disagrees with Cave’s idea that “creative genius” is exclusive to humans, but says he often encounters this kind of “general rejection of technology and everything to do with it”.
“I have never been particularly interested in anything related to the purity of the soul. I see my practice as a way of investigating and understanding the world around me… I just don’t see that we can build a boundary between ourselves and the rest of the world that would allow us to define ourselves as unique individuals.”
Leach sees AI as a valuable artistic tool, one that allows him to engage with and interpret a wide range of creative output. He has customized a series of open-source models, trained on his own paintings as well as reference photographs and historical artworks, to produce dozens of compositions, some of which he turns into surreal oil paintings, such as his portrait of a polar bear standing on a bunch of chrome bananas.
He justifies his use of source material by pointing to the hours of “editing” he does with his brush to refine the software’s suggestions. He even uses art critique chatbots to interrogate his ideas.
For Leach, the biggest concern about AI is not the technology itself or how it is used, but who owns it: “We have this small handful of mega-companies that own the biggest models and they have incredible power.”
One of the most common concerns around AI is copyright – a particularly fraught issue for those working in the arts, whose intellectual property is used to train multi-million-dollar models, often without consent or compensation. Last year, for example, it was revealed that the Books3 dataset included 18,000 Australian titles, used without permission or remuneration, in what Booker Prize-winning novelist Richard Flanagan described as “the biggest act of copyright theft in history”.
And last week, Australian music rights management organisation APRA AMCOS published the results of a survey which found that 82% of its members were concerned that AI could reduce their ability to make a living from music.
In the European Union, the Artificial Intelligence Act came into force on August 1 to mitigate these kinds of risks. In Australia, however, while eight voluntary ethical principles for AI have existed since 2019, there are still no specific laws or statutes regulating AI technologies.
This legislative vacuum is pushing some artists to create their own custom frameworks – and models – to protect their work and culture. Sound artist Rowan Savage, a Kombumerri man who performs as salllvage, developed the Koup Music AI model with musician Alexis Weaver as a tool to transform his voice into digital renderings of the field recordings he makes on Country, a process he will present at the Now or Never festival.
Savage’s abstract dance music sounds like dense flocks of electronic birds: hybrid lifeforms of animal and code, uncanny and strange yet familiar.
“Sometimes when people think of Aboriginal Australia, they think we’re associated with the natural world… there’s something infantilising about that, and we can use technology to respond to that,” says Savage. “We often think there’s a rigid divide between what we call natural and what we call technological. I don’t believe in that. I want to break down that divide and allow the natural world to infect the technological world.”
Savage designed Koup Music to give him complete control over the data it’s trained on, to prevent it from appropriating other artists’ work without their consent. In turn, the model protects Savage’s recordings from being fed into the larger networks Koup relies on — recordings he considers the property of his community.
“I think it’s fine for me personally to use the recordings I make from my country, but I wouldn’t necessarily put them out into the world (for anyone or anything to use),” says Savage. “(I wouldn’t feel comfortable) without talking to important people in my community. As Aboriginal people, we’re always community-minded, there’s no individual ownership of sources in the same way the English-speaking world might think about it.”
For Savage, AI offers great creative potential, but also “many dangers.” “My concern as an artist is: how can we use AI in an ethical way, while still allowing ourselves to do different and exciting things?”