For nearly two decades, Karla Ortiz has worked as a concept artist, bringing to life an entire universe of characters in projects such as Black Panther, Avengers: Infinity War and Thor: Ragnarok. She is credited with the main character design for Doctor Strange, rendered in her own blend of Impressionist and Magic Realist styles honed through years of practice.
She was shocked to learn last year that her work at the forefront of multibillion-dollar franchises was being used to train generative artificial intelligence systems without her knowledge or consent. Imitations of her work now float all over the internet. Her name has been entered more than 2,500 times into Midjourney, an AI art generator, to create art similar to hers. She was paid nothing.
“You work your whole life doing what you do as a creative, and if a company can take advantage of that — literally using your work to train a model trying to replicate you — it makes me sick,” Ortiz tells The Hollywood Reporter.
Ortiz is one of three artists suing AI art generators Stability AI, Midjourney and DeviantArt for using their work to train generative AI systems. The first court case of its kind, it will test the boundaries of copyright law and could be among the first to decide the legality of how generative AI models are trained.
Thanks to OpenAI’s GPT, Meta’s Llama and Google’s LaMDA, generative AI has had a remarkable year. It has proven to be a rallying point on Wall Street, as media executives touted AI-themed announcements to court investors. Endeavor CEO Ari Emanuel opened his company’s earnings call in February with remarks generated by an AI company called Speechify. The leaders of YouTube, Spotify and BuzzFeed similarly announced plans to deploy the technology. But behind closed doors, companies are warning that the way most AI systems are built may be illegal. “We may not prevail in any pending or future litigation,” said a securities filing issued by Adobe in June. It cites intellectual property disputes that “could subject us to significant liabilities, require us to enter into royalty and licensing agreements on unfavorable terms,” and potentially result in orders “restricting our sales of products or services.” In March, Adobe unveiled the AI image and text generator Firefly. While the first model was trained only on stock photos, the company says future versions will “utilize a variety of resources, technology, and training data from Adobe and others.”
Engineers build AI art generators by feeding their models massive databases of unlicensed images downloaded from the internet. The artists’ lawsuit revolves around the argument that feeding copyrighted works into these systems constitutes intellectual property theft. Absent regulations putting guardrails around the industry, a finding of infringement in the case could upend the way most AI systems are built. If the AI companies are found to have infringed copyrights, they could be forced to destroy datasets trained on copyrighted works. They also risk statutory damages of up to $150,000 per infringed work.
AI companies claim that their conduct is protected by fair use, a doctrine that permits the unlicensed use of copyrighted works under limited circumstances. The factors courts weigh include the purpose and character of the use, the amount of the work used, and the impact of the new work on the market for the original. Central to the artists’ case is defeating the argument that the AI systems make “transformative use” of their works — use in which the purpose of the copyrighted work is changed to create something with a new meaning or message.
Responding to the proposed artist class action, Stability AI said in a statement that “anyone who believes this is not fair use misunderstands the technology and misunderstands the law.”
Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University School of Law, agrees that AI companies likely meet fair use criteria. “Nobody today complaining about their work being stolen has gotten there without stepping on the shoulders of others,” Goldman says. “People learn from each other.”
He points to precedent green-lighting the copying of works to produce non-infringing products. The Authors Guild sued Google in 2005 for digitizing tens of millions of books to create a search function, in a case closely watched by the Motion Picture Association and virtually every guild representing writers. A federal judge ultimately dismissed the copyright infringement claims, ruling that Google’s use of the authors’ copyrighted works constituted fair use. Central to the ruling was that Google allowed users to view only fragments of text rather than the full works.
But compared to 2005, when there was more optimism that emerging technology could be harnessed to help industries rather than dismantle them, artists and media executives today take a much grimmer view of big tech and its plans for these tools. There could be a seismic shift in how people consume news if, for example, Google stopped driving traffic to publications and instead answered questions with a chatbot, without attribution. Some media companies, such as the Associated Press, have already signed licensing deals, while others, such as News Corp, are in talks with AI companies.
Matthew Butterick, an attorney representing the artists, stresses that AI companies are “creating entirely new material to supplant the training data” they exploit, whereas Google was merely creating an index that referred users back to the original books. “That has been the common thread in fair use case law for a long time,” he says.
The artists’ argument that AI companies are actively harming their economic interests by creating competing works based on their art could tip the matter in their favor. For guidance, they could look to the recent Supreme Court decision rejecting a fair use defense in Andy Warhol Foundation for the Visual Arts v. Goldsmith. The 7-2 majority in that case stressed that the analysis of whether a secondary work has been sufficiently transformed to protect against copyright infringement must also take into account the commercial nature of the use. Fair use is unlikely to exist when an original work and a derivative work share “the same or highly similar purposes” and the secondary use is commercial, the justices found. Associate Justice Sonia Sotomayor noted that ruling the other way would essentially allow artists to make minor changes to an original photograph and sell it by claiming transformative use.
“The framework set out in the Warhol majority decision arguably supports the artists’ claims and weighs against fair use,” said Scott Sholder, an attorney specializing in intellectual property litigation. “Copying and using copyrighted works for AI training purposes is a non-transformative commercial practice that allows third parties to create, effectively on command, potential market substitutes for those works.”
In particular, Midjourney and other AI art generators allow users to create works “in the style of” other artists, making them potential competitors of the very artists whose work they were trained on. Horrified by AI companies indiscriminately crawling the internet to collect art, books and personal data, Ortiz hid her portfolio behind a password-protected page on her personal website. She says the reduced visibility of her work is a price worth paying for its protection.
But the judge overseeing the case may not even have to rule on fair use, which is typically analyzed on a pretrial motion for summary judgment. U.S. District Judge William Orrick said in July that he is “inclined to dismiss almost everything” (with the option of refiling the claims) because the artists have not yet pointed to specific works that were infringed or to AI-made products that infringe existing copyrights, as is necessary to claim infringement. The AI companies named in the lawsuit argue the artists will never be able to meet this requirement because, they say, it is impossible for their systems to produce exact or near-exact replicas of copyrighted works.
However, some companies have acknowledged that their models can spontaneously reproduce works from their training sets verbatim, without compensation or attribution — among them GitHub with Copilot, which is the subject of another copyright case. Midjourney decided to block “Afghan Girl” as a prompt after it was found that the art generator was producing copies, with slight variations, of the 1984 Steve McCurry photograph. A prompt for Dorothea Lange’s “Migrant Mother” likewise turns up nearly identical works, though that photo is in the public domain.
The artists’ case will be decided as courts increasingly veer toward enforcing intellectual property rights and away from dismissing copyright cases prematurely. The 9th U.S. Circuit Court of Appeals last year revived a lawsuit accusing M. Night Shyamalan’s series Servant of infringing a 2013 independent film, concluding that expert testimony and discovery are needed to judge whether the works are really similar. The decision was at least the third by a federal appeals court since 2020 to reverse a lower court’s dismissal of a copyright lawsuit; the others involved the first Pirates of the Caribbean film and The Shape of Water.
Against this backdrop, some AI companies have turned to licensing data to avoid legal trouble, and some artists favor such licensing as a way to be compensated for their work. Ortiz isn’t so sure. “This is not just about automation, but automation decimating entire industries with your own work,” she says. “That’s why SAG-AFTRA and the WGA are fighting.”
This story first appeared in the Aug. 16 issue of The Hollywood Reporter magazine.