On January 29th, in testimony before the Georgia Senate Judiciary Committee, Hunt-Blackwell urged lawmakers to remove criminal penalties from the bill and add exceptions for news outlets that want to republish deepfakes as part of their reporting. Georgia’s legislative session ended before the bill could advance.
Federal deepfake legislation is also expected to meet resistance. In January, lawmakers in Congress introduced the No AI FRAUD Act, which would grant ownership rights over a person’s likeness and voice. This would allow anyone portrayed in any type of deepfake, as well as their heirs, to sue those involved in creating or spreading the fake. These rules aim to protect people from both pornographic deepfakes and artistic imitation. Weeks later, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology filed written opposition.
Along with other groups, they argued that the laws could be used to suppress far more than illegal speech. The mere prospect of facing a lawsuit, the letter argues, could deter people from using the technology for constitutionally protected acts such as satire, parody, or opinion.
In a statement to WIRED, bill sponsor Rep. Maria Elvira Salazar noted that “the No AI FRAUD Act contains explicit recognition of First Amendment protections for free speech and speech in the public interest.” Rep. Yvette Clarke, who has sponsored a companion bill requiring labeling of deepfakes that portray real people, told WIRED that it has been amended to include exceptions for satire and parody.
In interviews with WIRED, ACLU policy advocates and litigators said they don’t oppose narrowly tailored regulations targeting nonconsensual deepfake porn, but argued that existing anti-harassment laws already provide a solid framework for addressing the problem. “Of course, there could be issues that can’t be regulated under existing laws,” Jenna Leventoff, senior policy counsel at the ACLU, told me. “But I think the general rule is that existing law is sufficient to address a lot of these issues.”
However, this isn’t a consensus view among legal scholars. As Mary Anne Franks, a law professor at George Washington University and a leading advocate for strict rules against deepfakes, told WIRED in an email: “The obvious flaw with the ‘we already have laws to deal with this’ argument is that, if it were true, we wouldn’t be seeing an explosion of this abuse without a corresponding increase in criminal charges being filed.” Generally, Franks said, prosecutors in a stalking case must prove beyond a reasonable doubt that the alleged perpetrator intended to harm a specific victim — a high bar to meet when that perpetrator may not even know the victim.
Franks added: “One of the recurring themes among victims of this abuse is that there are no obvious legal recourses for them, and they are the ones who should know that.”
The ACLU has not yet sued any government over generative AI regulations. Representatives of the organization did not say whether they are preparing a case, but both the national office and several affiliates said they are closely following the legislative process. Leventoff assured me: “We tend to act quickly when something comes up.”