The estate of comedian George Carlin settled a lawsuit on Tuesday against the owners of a comedy podcast who claimed to have used artificial intelligence to recreate the late stand-up's voice. The lawsuit was one of the first in the US to test the legality of deepfakes that imitate a celebrity's likeness.
The Dudesy podcast and its creators – former Mad TV comedian Will Sasso and writer Chad Kultgen – agreed to remove all versions of the podcast from the internet and to permanently refrain from using Carlin's voice, likeness or image in any content. Danielle Del, spokesperson for Sasso, declined to comment.
Carlin’s family and an attorney for his estate both praised the settlement. Neither party has disclosed the terms of the deal.
“I am pleased that this matter was resolved quickly and amicably, and I am grateful that the defendants acted responsibly by quickly removing the video they made,” Kelly Carlin, the comedian’s daughter, said in a statement.
Carlin’s estate filed the lawsuit in January after the Dudesy podcast, which touts itself as incorporating AI into its comedy routines, posted an hour-long special on YouTube titled George Carlin: I’m Glad I’m Dead. The estate’s lawsuit alleged that the podcast violated both Carlin’s right of publicity and his copyrights, calling the special “a casual theft of the work of a major American artist.”
The special was introduced by the podcast’s eponymous AI character, “Dudesy”, who claimed to have viewed Carlin’s work and then created a stand-up set in the comedian’s style. After the lawsuit was filed, Sasso’s spokesperson Del told the New York Times that the fictional Dudesy character was not AI-generated and that Kultgen had written the entire fake Carlin special himself, rather than generating it with an AI trained on Carlin’s previous work. Because the case never reached the discovery stage, it is unclear exactly which parts of the fake Carlin set were AI-generated.
“While it is a shame that this has happened at all, I hope this case serves as a warning of the dangers posed by AI technologies and the need for appropriate safeguards, not just for artists and creatives, but for every human being on earth,” said Kelly Carlin.
Even if the podcast didn’t use Carlin’s comedy to train an AI, an attorney for the estate said that using the technology to create an impersonation of him was still a violation of Carlin’s rights and that the disclaimer before the special was insufficient. Clips from the special could have been taken out of context and spread across the internet with claims that they were genuine recordings of Carlin, who died in 2008.
“These types of fake videos create a real potential for harm because, for example, someone could just take a clip of it and send it around or post it on Twitter,” said Josh Schiller, a partner at Boies Schiller Flexner and an attorney for Carlin’s estate. “Someone might believe they’re listening to the real George Carlin because they’ve never heard him before and don’t know he’s dead.”
The settlement comes at a sensitive time for the entertainment industry’s relationship with artificial intelligence. The emergence of publicly available generative AI tools over the past year and a half has heightened creators’ concerns about unauthorized imitations of both living and dead artists. Recent deepfakes of celebrities like Taylor Swift have also put pressure on lawmakers and AI companies to limit malicious or non-consensual use of the technology.
Earlier this week, more than 200 musicians signed an open letter calling on developers and tech companies to stop producing AI tools that could replace artists, undermine their rights or steal their likenesses. Meanwhile, a number of states have passed legislation around the use of deepfake technology – including Tennessee, which last month passed a law blocking the replication of an artist’s voice without their consent.
While the case was settled quickly, it highlights the possibility of future lawsuits over whether AI-generated imitations can be considered parodies permitted under fair use. Shows like Saturday Night Live have long been permitted to impersonate public figures on those grounds, but there has yet to be a major legal test of similar impressions created with generative AI tools – a situation that Schiller said is fundamentally different from a human performing the impression.
“There’s a big difference between using an AI tool to impersonate someone and make it look authentic, versus someone putting on a gray wig and a black leather jacket,” said Schiller. “You know that person isn’t George Carlin.”