Sex offender banned from using AI tools in landmark UK case


A sex offender convicted of creating more than 1,000 indecent images of children has been banned from using any “AI creation tools” for the next five years in the first known case of its kind.

A UK court ordered Anthony Dover, 48, “not to use, visit or access” artificial intelligence generation tools without prior police permission as a condition of a sexual harm prevention order imposed in February.

The ban prohibits him from using tools such as text-to-image generators, which can create realistic images from a written prompt, and “nudifying” websites used to create explicit “deepfakes”.

Dover, who was given a community order and a £200 fine, was also explicitly ordered not to use the Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records of a sentencing hearing at Poole magistrates’ court.

The case is the latest in a series of prosecutions where AI generation has emerged as an issue and follows months of warnings from charities about the proliferation of AI-generated sexual abuse images.

Last week, the government announced the creation of a new offence that makes it illegal to create sexually explicit deepfakes of people over 18 without their consent. Those convicted face prosecution and an unlimited fine. If the image is then shared more widely, offenders could be sent to jail.

Creating, possessing and sharing artificial child sexual abuse material was already illegal under laws in place since the 1990s, which prohibit both real and “pseudo” photographs of under-18s. In the past, the law has been used to prosecute people for offences involving realistic images such as those made with Photoshop.

Recent cases suggest it is increasingly being used to address the threat posed by sophisticated artificial content. In one case at a court in England, a defendant who had pleaded guilty to making and distributing indecent “pseudo photographs” of under-18s was released on bail with conditions that included not accessing a Japanese photo-sharing platform where he had allegedly sold and distributed artificial abuse images, according to court records.

In another case, a 17-year-old from Denbighshire, north-east Wales, was convicted in February of making hundreds of indecent “pseudo photographs”, including 93 images and 42 videos in the most extreme category, category A. At least six other people have appeared in court in the past year accused of possessing, making or sharing pseudo photographs, a term that covers AI-generated images.

The Internet Watch Foundation (IWF) said the prosecutions were a “landmark” moment that “should sound the alarm that criminals producing AI-generated images of child sexual abuse are like one-man factories, capable of churning out some of the most appalling imagery”.

Susie Hargreaves, the charity’s chief executive, said that while AI-generated sexual abuse imagery currently makes up “a relatively low” proportion of reports, the charity was seeing a “slow but continual creep upwards” in cases. “We hope the prosecutions send a clear message to those making and distributing this content that it is illegal,” she said.

It’s unclear exactly how many cases there have been involving AI-generated images because they are not counted separately in official data, and fake images can be difficult to distinguish from real ones.

Last year, an IWF team infiltrated a dark web child abuse forum and found 2,562 artificial images that were so realistic the law would treat them as real.

The Lucy Faithfull Foundation (LFF), which runs the confidential Stop It Now helpline for people worried about their thoughts or behaviour, said it had received multiple calls about AI imagery, which it described as a worrying and growing trend.

The charity is also concerned about the use of “nudifying” tools to create deepfake images. In one case, the father of a 12-year-old boy said he had found his son using an AI app to make topless images of friends.


In another case, a caller to the NSPCC’s Childline helpline said a “stranger online” had made “fake nudes” of her. “It looks so real, it’s my face and my room in the background. They must have taken the photos from my Instagram and edited them,” the 15-year-old girl said.

The charities said that as well as targeting offenders, tech companies needed to stop image generators from being able to produce this content in the first place. “This is not tomorrow’s problem,” said Deborah Denis, chief executive of the LFF.

The decision to ban an adult sex offender from using artificial intelligence generation tools could set a precedent for the future monitoring of people convicted of indecent image offences.

Sex offenders have long faced restrictions on internet use, such as bans on browsing in “incognito” mode, accessing encrypted messaging apps or deleting their internet history. But there are no previously known cases in which restrictions have been placed on the use of AI tools.

In Dover’s case, it is unclear whether the ban was imposed because his offending involved AI-generated content or because of concerns about future offending. Such conditions are usually requested by prosecutors on the basis of intelligence held by police. By law, they must be specific, proportionate to the threat posed and “necessary for protecting the public”.

A Crown Prosecution Service spokesperson said: “Where we perceive there is an ongoing risk to children’s safety, we will ask the court to impose conditions, which may include a ban on the use of certain technology.”

Stability AI, the company behind Stable Diffusion, said the concerns about child abuse material related to an earlier version of the software, which was released to the public by one of its partners. It said that since taking over the exclusive licence in 2022 it had invested in features to prevent misuse, including “filters to intercept unsafe prompts and outputs”, and that it banned any use of its services for unlawful activity.
