Creating deepfake images is becoming easier: controlling their use is becoming almost impossible

“Very creepy,” was April’s first thought when she saw her face on a generative AI website.

April is one half of the Maddison twins. She and her sister Amelia create content for OnlyFans, Instagram and other platforms, but they also existed as a custom generative AI model, created without their consent.

“It was really strange to see our faces, but not really our faces,” she says. “It’s really disturbing.”

Deepfakes – the creation of realistic but fake images, videos and audio using artificial intelligence – are on the political agenda after the federal government announced last week that it would introduce legislation to ban the creation and sharing of deepfake pornography as part of measures to combat violence against women.

“Sharing sexually explicit material using technology such as artificial intelligence will be subject to serious criminal penalties,” Prime Minister Anthony Albanese said at a press conference.

April and Amelia’s AI model was hosted on a website called CivitAI, which allows users to upload open source AI image models built on a generator known as Stable Diffusion. The model based on the twins was clearly labeled with their names and indicated it had been trained on more than 900 of their images. Anyone could then download the model and generate images in April and Amelia’s likeness.
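To illustrate how low the barrier has become, here is a minimal sketch, assuming the open source diffusers library, of how a downloaded custom model of this kind can be run; the base checkpoint ID and the LoRA filename are hypothetical placeholders, not the actual model described in this article.

```python
# A minimal sketch of generating images from a downloaded custom
# Stable Diffusion model using Hugging Face's `diffusers` library.
# The checkpoint ID and LoRA file below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("cuda")

# Apply a downloaded custom fine-tune (a LoRA of the kind shared on
# model-hosting sites) on top of the base checkpoint.
pipe.load_lora_weights("./downloads", weight_name="custom_person.safetensors")

# Generate an image from a plain-text prompt and save it.
image = pipe("a photo of the person, outdoors, smiling").images[0]
image.save("output.png")
```

The whole workflow amounts to a few lines of code and a consumer GPU, which is part of why removal after the fact, rather than prevention, has become the practical battleground.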

Guardian Australia found creators responding to user requests for custom models of other Australian influencers and celebrities. The use of the platform to create sexual images, even of non-celebrities, is well documented.

“Assets trained to generate the likeness of a specific individual must be non-commercial and non-sexual, and may be removed at the request of the person depicted,” a CivitAI spokesperson said.

While the government moves forward with new rules, deepfakes are already being fought under existing laws, but those laws are not always easy to enforce.

Test cases involving prominent Australians

Distributing deepfake pornography without the consent of the person it depicts would likely already be a crime in most states and territories, according to Dr Nicole Shackleton, a law professor at RMIT University. However, she says federal legislation could fill the gaps.

There are already test cases under way. In October last year, Anthony Rotondo was arrested and charged over allegedly sending deepfake images to Brisbane schools and sporting associations.

The eSafety commissioner separately launched proceedings against Rotondo last year for failing to remove “intimate images” of several prominent Australians from a deepfake pornography website.

He initially refused to comply with the order while in the Philippines, the court heard, but the commissioner was able to pursue the case once Rotondo returned to Australia.

In December, he was fined for contempt of court after admitting he breached court orders by failing to remove the images. He later shared his password so the commissioner’s officials could remove them. The eSafety case returns to court next week, while the state case is adjourned until June 13.

Image-based abuse, including deepfakes, can be reported to the eSafety commissioner, which says it has “a 90% success rate in removing this distressing material.”

Attorney General Mark Dreyfus acknowledged last week that laws existed but said the new legislation sought to make it a clear crime.

“There are existing offenses that criminalize the sharing of private sexual material online to threaten, harass or offend others,” he said.

“These reforms will create a new offense to make clear that those who seek to abuse or degrade women by creating and sharing sexually explicit material without consent, using technology such as artificial intelligence, will be subject to serious criminal penalties.”

It is understood the legislation will be added to the existing criminal code, which already contains an offense for distributing private sexual material without consent, carrying a maximum penalty of six years’ imprisonment.

Electronic Frontiers Australia board member Amy Patterson says the organization is wary of “whack-a-mole” legislation to tackle new technologies without addressing the underlying issues about why people might try to create or distribute deepfakes.

“What we don’t need are rushed new powers for authorities that are not making full use of the powers they already have, as a lazy substitute for the harder work of tackling the systemic and recurring problems that need more than these ad hoc symptomatic patches,” she says.

Patterson says more resources should also be dedicated to digital literacy, so people can learn to spot deepfakes. “These are real security issues that surely deserve a more considered, comprehensive and less reactive approach,” she says.

“We need to get better at removing these things”

For the Maddison twins, there is also concern that an AI model producing their likeness could affect their livelihood. When their images are stolen and redistributed without consent, they can use copyright takedown requests to have them removed, but the law is less clear when it comes to the creation of deepfakes.

Professor Kimberlee Weatherall, who specializes in intellectual property law at the University of Sydney, says copyright issues around deepfakes can be divided into two stages: the training stage and the output stage.

Amelia and April Maddison, pictured on the Gold Coast this month, fear people will make images of them doing things they wouldn’t do in reality. Photograph: Paul Harris/The Guardian

AI models require large numbers of images to train on, and if those images are taken without consent, the training itself may constitute copyright infringement because it involves making copies, at least where the training happens in Australia.

On the other hand, the existence of the model itself is unlikely to constitute copyright infringement.

In scenarios where a deepfake is created by attaching someone’s face to an existing pornographic video, for example, the copyright owner of the original clip may have a copyright claim. But if a model creates new images of someone, they may not be covered by existing law.

“If you put in a text prompt and you generate images of a celebrity doing things that celebrity wouldn’t have done… it’s actually quite difficult to attack that under copyright law,” Weatherall says. “That’s because I have no copyright in my appearance.”

Amelia Maddison describes the CivitAI model as “violating.” Both women fear that people will create images of them doing things they would not do in reality.

“People might ask for things that could be really disturbing,” April says. “If they can create an image of us that is harmless, then they can (also) do other things.”

The CivitAI model page warned: “Due to training data, this could result in nudity if not prompted appropriately.”

In addition to new laws addressing non-consensual deepfakes, Shackleton says tech and AI companies must ensure their platforms are safe by design.

In a statement, a CivitAI spokesperson directed Guardian Australia to its “Real People Policy”. “While Civitai allows the generation and publication of AI-generated images of real people, its terms, conditions and moderation filters do not allow these images to be suggestive,” the spokesperson said.

However, the platform’s filters preventing explicit images of public figures must be created individually for each person. “Since CivitAI is a US company operated by US staff and moderators, integrating filters for non-Western public figures may be delayed if the team is not previously aware of them,” the spokesperson said.

The company also “actively encourages and relies” on its users to report any models or images of real people that are suggestive or sexually explicit.

People can contact the company and “revoke permissions” for the use of their likeness, which the twins’ management team did this week. The model has since been removed from the CivitAI website.

But the responsibility for policing their own image remains with the people these models depict. “We need to get better at removing these things,” Amelia says. “It’s really unfair.”

Shackleton says the new legislation may not have much impact if it does not also address the underlying reasons why people choose to create deepfake sexual images and distribute them online.

“It is vital that the government pays attention to the bigger picture when it comes to preventing the creation and distribution of deepfake sexual images and their use to silence and intimidate women and girls online,” she says.
