American man used artificial intelligence to generate 13,000 images of child sexual abuse, FBI alleges

The FBI has charged an American man with creating more than 10,000 sexually explicit and abusive images of children, which he allegedly generated using a popular artificial intelligence tool. Authorities also charged the man, Steven Anderegg, 42, with sending AI-created pornographic images to a 15-year-old boy via Instagram.

Anderegg allegedly created around 13,000 “hyper-realistic images of naked and semi-naked prepubescent children”, prosecutors said in an indictment unsealed on Monday, with the images often showing children touching their genitals or being sexually abused by adult men. Evidence from the Wisconsin man’s laptop allegedly showed that he used the popular Stable Diffusion AI model, which converts text descriptions into images.

Anderegg’s charges came after the National Center for Missing and Exploited Children (NCMEC) received two reports last year flagging his Instagram account, leading law enforcement to monitor his activity on the social network, obtain information from Instagram and eventually secure a search warrant. Authorities seized his laptop and found thousands of AI-generated images, according to the indictment against him, as well as a history of using “extremely specific and explicit prompts” to create abusive material.


Anderegg faces four counts of creating, distributing and possessing child sexual abuse material and sending explicit material to a child under 16. If convicted, he faces a maximum sentence of about 70 years in prison, with 404 Media reporting that the case is one of the first times the FBI has charged someone with generating AI child sexual abuse material. Last month, a man in Florida was arrested for allegedly taking a photograph of his neighbor’s son and using artificial intelligence to create sexually explicit images from the photo.

Child safety advocates and artificial intelligence researchers have long warned that malicious use of generative AI could lead to a rise in child sexual abuse material. Reports of online child abuse to NCMEC increased approximately 12% in 2023 from the previous year, in part due to a sharp increase in AI-made material, which has threatened to overwhelm the organization’s tip line for flagging potential child sexual abuse material (CSAM).

“NCMEC is deeply concerned about this rapidly growing trend, as bad actors may use artificial intelligence to create falsified sexually explicit images or videos based on any photograph of a real child or generate CSAM depicting computer-generated children engaging in graphic sexual acts,” an NCMEC report said.

The rise of generative AI has led to the widespread creation of non-consensual deepfake pornography, which has targeted everyone from A-list celebrities to ordinary citizens. AI-generated images and deepfakes of minors have also circulated in schools, in one case leading to the arrest of two high school boys in Florida who created nude images of their classmates. Several states have passed laws against the non-consensual generation of explicit images, and the Department of Justice has said that generating sexual images of children with AI is illegal.

“The Department of Justice will aggressively pursue those who produce and distribute child sexual abuse material – or CSAM – regardless of how that material was created,” Deputy Attorney General Lisa Monaco said in a statement after the arrest. “Simply put, AI-generated CSAM is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

Stable Diffusion, an open-source artificial intelligence model, has previously been used to generate sexually abusive images and has been modified by users to produce explicit material. A report last year from the Stanford Internet Observatory also found child sexual abuse material in its training data. Stability AI, which develops Stable Diffusion, has said it prohibits the use of its model to create illegal content.


Stability AI, the British company behind the widespread release of Stable Diffusion, said it believed the AI model used in this case was an older version originally created by the startup RunwayML. Stability AI said that since taking over development of the Stable Diffusion models in 2022, it has built more safeguards into the tool. The Guardian has contacted RunwayML for comment.

“Stability AI is committed to preventing the misuse of AI and prohibiting the use of our models and imaging services for illegal activities, including attempts to edit or create CSAM,” the company said in a statement.

  • In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit their website for more resources and to report child abuse or DM for help. For adult survivors of child abuse, help is available at ascasupport.org. In the United Kingdom, the NSPCC offers support to children on 0800 1111 and adults concerned about a child on 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support to adult survivors on 0808 801 0331. In Australia, children, young adults, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831, and adult survivors can contact the Blue Knot Foundation on 1300 657 380. Other sources of help can be found at Child Helpline International
