Charlie Puth had a problem: he had written a lyric, “a little tune,” but didn’t know what to do with those lines. The “Left and Right” singer-songwriter had joined a Google AI incubator program, so he entered the new lyrics into the AI-powered tool, “like I would if I was collaborating with anyone else,” Puth said, speaking to a room of journalists and YouTube creators at the tech giant’s New York office on September 21. “It was really profound,” Puth recalled of what the system spit back, noting that it sang the lyrics back in his own voice, suggested styles and recommended the song be performed in A-flat minor.
Generative AI is about to go mainstream. While systems like DALL-E and GPT are still used largely by early adopters, companies like YouTube and Meta are preparing to roll out AI-powered tools to the masses. At the same event, YouTube announced an AI tool that will recommend video ideas to creators, as well as “Dream Screen,” which will let creators type a short prompt and have the AI turn it into a video background for them to use.
Meta CEO Mark Zuckerberg announced a slew of AI features at an event on September 27, including tools for creating and editing AI images, and AI “characters” with the likenesses and voices of real people that users can chat with (Tom Brady plays “Bru,” a “funny sports debater,” and MrBeast plays “Zach,” a “big brother who will roast you,” per Meta).
And Getty Images unveiled a tool on September 25 that aims to bring generative AI to its platform with a twist: it is designed to be safe for commercial use, with indemnification for users and compensation for photographers whose work was used for training.
But as AI tools go mainstream, concern and consternation continue to grow as artists wonder how their work will be used and whether they will be compensated or protected. Look no further than the WGA strike, where the use of AI was one of the final sticking points in negotiations. “We really base ourselves on three core principles,” YouTube CEO Neal Mohan said when asked by The Hollywood Reporter how the company is approaching the technology. “The first is: AI is here. It’s up to all of us to leverage it in a way that really builds the creative community. The second is that we want to do it in a way where creatives retain control and the monetization opportunities come to them. And finally, perhaps most importantly, we want to do this in a bold but also responsible way.”
It’s not just about compensation and copyright; there are also fears of manipulation and abuse. “We’re talking about rights issues and creator tools,” Mohan said. “But we also know that these tools can make it easier for bad actors to do things on platforms like YouTube and basically everywhere. There, too, we have a track record of prioritizing responsibility over everything else, and in the area of generative AI we plan to do the same.”
But the dichotomy of generative AI – of possibility and risk – is everywhere. Puth can use AI to help him write a song, but other artists may want their voices as far away from the AI training sets as possible. “It is our job – the platforms and the music industry – to ensure that artists like Charlie who lean into it can benefit; it is also our shared responsibility to ensure that artists who do not want to participate are protected,” said Robert Kyncl, CEO of Warner Music Group.
YouTube executives say they are taking a “more cautious path” with some AI technologies, such as voice cloning, while remaining acutely aware of the potential for abuse. As Big Tech moves into Hollywood, major players are pushing for regulation. Michael Nash, chief digital officer of Universal Music Group, tells THR that music publishers are calling for a federal right-of-publicity law to combat voice impersonation in AI tracks and “ensure that creatives are entitled to and able to leverage the brands they built.” Copyright law doesn’t cover the faces of actors or the voices of singers, but laws in some states, such as California, New York and Florida, protect against unauthorized commercial use of a person’s name, likeness and persona. These laws are intended to give people the exclusive right to license their identity.
Kyncl compared the current moment to the first years after the invention of the printing press. “Since then, technologies have transformed industries, largely for the better and with much greater prosperity,” he said. “But when that happens, there is a period of uncertainty, because a profound change is taking place and the change is disruptive. And we are in that period of change.”
Between the lines of Mohan’s nod to “rights issues and creator tools” was an acknowledgment that some generative AI technology may not rest on entirely solid legal ground. OpenAI, Meta and Stability AI are facing a slew of lawsuits alleging massive copyright infringement over their unlicensed use of copyrighted works as training data. While YouTube hasn’t revealed which AI system powers Dream Screen, it’s possible that it was trained on data culled from the internet and outside the public domain.
Courts are grappling with whether this practice violates IP laws. AI companies say it’s fair use, the doctrine that protects the use of copyrighted works to create a secondary work, as long as the result is “transformative.” OpenAI moved in August to dismiss a proposed class action filed by Sarah Silverman and others on this basis, claiming the plaintiffs “misjudge the scope of copyright law.”
The company may run afoul of the Supreme Court’s recent ruling in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively limited the scope of fair use. In that case, the majority emphasized that an analysis of whether an allegedly infringing work is sufficiently transformative must be balanced against the “commercial nature of the use.” If authors can establish that OpenAI’s ingestion of their novels undermines their economic prospects of profiting from their works, for example by interfering with potential licensing deals the company could have pursued instead, then there will likely be no finding of fair use, according to legal experts consulted by THR.
“OpenAI doesn’t even have to be in direct competition with the artists; it just needs to have used their art for commercial purposes,” said Shyam Balganesh, a professor at Columbia Law School. “Similarly for YouTube, the artists could argue that they could have had a vibrant licensing market.”
But before a court even reaches fair use, plaintiffs must establish that their works were used to train the AI systems. One of their frustrations is that the training datasets are a black box. It’s a feature, not a bug. If artists and authors cannot prove that their creations were used, that can be an obstacle to a lawsuit. OpenAI and Meta no longer release information about the sources of their datasets. Sam Altman-led OpenAI attributed the reversal to the “competitive landscape and safety implications of large-scale models like GPT-4,” though the decision came in March after it was sued over its datasets.
Liability may also rest with users. YouTube, unlike Getty, does not offer indemnification for use of its AI tools; users could be on the hook if their work is found to infringe copyright.
Getty has more reason to be confident in the legal standing of its AI system, since it licenses the underlying images on which the model is trained. It also pays contributors for the use of their content, with the company touting its technology as a “commercially safe generative AI tool.”
The Authors Guild, led by prominent authors such as George R.R. Martin, Jonathan Franzen and John Grisham, entered the legal battle against OpenAI on September 19. With more than 13,000 members, the trade group may be the most formidable opponent yet to challenge the company. A finding of infringement could lead to hundreds of millions of dollars in damages and an order to destroy systems trained on copyrighted works.
Behind closed doors, companies commercializing AI tools are already warning investors of potential liability. Adobe said in a securities filing issued in June that intellectual property disputes “could subject us to significant liabilities, require us to enter into royalty and licensing agreements on unfavorable terms” and potentially “impose injunctions that could affect our sales of products or restrict services.” In March, Adobe unveiled Firefly, its AI image and text generator. While the first model was trained only on stock photos, Adobe says future versions will “leverage a variety of assets, technology, and training data from Adobe and others.”
That hasn’t stopped major companies from embracing the technology. Take Fox Corp.: the Murdoch-controlled media company announced on September 26 that its free streaming service Tubi had teamed up with ChatGPT maker OpenAI to launch “RabbitAI,” which lets users ask for movie or TV show recommendations (try “shark movies that are funny,” for example). And Fox’s local TV stations announced a partnership with generative AI company Waymark on September 28 to enable local businesses to use AI technology to create ads that can run on their local stations.
In the long term, Mohan sees a future in which AI tools are used to find and flag cases where other AI tools have been misused. Cottage industries are already springing up to capitalize on the fears surrounding generative AI. Technology company Metaphysic on September 14 unveiled a new product and service that it says could help actors and other consumers “manage” unauthorized third-party use of their facial, voice and performance data in AI tools. Early adopters include Anne Hathaway, Tom Hanks, Octavia Spencer, Rita Wilson and Paris Hilton.
And Meta says it is using AI technology to help police its own AI features: “It’s important to know that we train and tune our generative AI models to limit the possibility that private information you share with generative AI features appears in responses to other people,” the company wrote in an FAQ about its new offerings. “We use automated technology and humans to assess interactions with our AI so that we can, among other things, reduce the likelihood that model outputs contain someone’s personal information and improve model performance.”
Mohan added, “AI will, I think, be a great tool in terms of actually enforcing the guidelines that keep the entire ecosystem safe.”
A version of this story first appeared in the September 27 issue of The Hollywood Reporter magazine.