
Artificial intelligence is outpacing efforts to catch child predators, experts warn

Child predators are using AI to create sexual images of their favourite 'stars': 'My body will never be mine again'

The volume of sexually explicit images of children generated by predators using artificial intelligence is overwhelming law enforcement’s ability to identify and rescue real-life victims, child safety experts warn.

Prosecutors and child protection groups working to combat crimes against children say AI-generated images have become so realistic that in some cases it is difficult to determine whether real children were actually harmed in their production. A single AI model can generate tens of thousands of new images in a short period of time, and this content has begun to flood both the dark web and the open web.

“We’re starting to see reports of images of real children that have been generated by artificial intelligence, where the child wasn’t sexually abused. But now their face is on a child who was abused,” said Kristina Korobov, senior staff attorney at the Zero Abuse Project, a child safety nonprofit based in Minnesota. “Sometimes, we recognize the bedding or the background in a video or image, the perpetrator or the series it came from, but now there’s another child’s face on it.”

There are already tens of millions of reports each year of actual child sexual abuse material (CSAM) created and shared online, which law enforcement and security groups struggle to investigate.

“We’re already drowning in this kind of stuff,” said one Justice Department prosecutor, who spoke on condition of anonymity because he was not authorized to speak publicly. “From a law enforcement perspective, crimes against children are one of the most resource-constrained areas, and there’s going to be an explosion of AI content.”


Last year, the National Center for Missing and Exploited Children (NCMEC) received reports of predators using artificial intelligence in a number of different ways, including entering text prompts to generate child abuse imagery, altering previously uploaded files to make them sexually explicit and abusive, and uploading known child sexual abuse material and generating new images from it. In some reports, offenders turned to chatbots for instructions on how to find children to have sex with or harm.

Experts and prosecutors are concerned about criminals attempting to evade detection by using generative artificial intelligence to alter images of a child who has been sexually abused.

“When charging in the federal system, AI doesn’t change what we can prosecute, but there are many states where you have to be able to prove that this is a real child. Arguing about the legitimacy of the images will cause problems at trial. If I were a defense attorney, that’s exactly what I would argue,” the Justice Department prosecutor said.

Possession of depictions of child sexual abuse is a crime under US federal law, and there have been several arrests this year of suspected creators of AI-generated child sexual abuse material in the United States. However, most states do not have laws prohibiting the possession of AI-generated sexually explicit material depicting minors, and the act of creating the images in the first place is not covered by existing laws.

However, in March, the Washington state legislature passed a bill prohibiting the possession of AI-generated child sexual abuse material and the knowing disclosure of AI-generated intimate images of other people. In April, a bipartisan bill aimed at criminalizing the production of AI-generated child sexual abuse material was introduced in Congress; it has been endorsed by the National Association of Attorneys General (NAAG).


Child safety experts warn that the influx of AI content will strain the resources of NCMEC’s CyberTipline, which acts as a clearinghouse for child abuse reports from around the world. The organization sends these reports to law enforcement for investigation, after determining their geographic location, priority status and whether the victims are already known.

“Police are now dealing with a greater volume of content. And how do they know if it is a real child who needs to be rescued? They don’t know. It’s a huge problem,” said Jacques Marcoux, director of research and analysis at the Canadian Centre for Child Protection.

Known images of child sexual abuse can be identified by their digital fingerprints, known as hash values. NCMEC maintains a database of more than 5 million hash values against which images can be compared, a crucial tool for law enforcement.

When a known child sexual abuse image is uploaded, technology companies running software to monitor this activity can intercept and block it based on its hash value and report the user to the authorities.

Material that does not have a known hash value, such as newly created content, is unrecognizable to this type of scanning software. Any editing or alteration of an image using AI also changes its hash value.

“Hash comparison is the first line of defense,” Marcoux said. “With AI, each generated image is considered a completely new image and has a different hash value. This erodes the efficiency of the existing first line of defense and could cause the hash comparison system to collapse.”
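To make that weakness concrete, below is a minimal Python sketch of exact hash matching as described above. The set of known hashes, the file paths and the choice of SHA-256 are illustrative assumptions only; the article does not specify the hashing scheme, and real-world systems often rely on perceptual hashes such as PhotoDNA rather than cryptographic digests.

```python
# Minimal sketch of hash-based detection. The "database" is modelled as a
# plain set of hex digests and SHA-256 stands in for whatever hashing scheme
# is actually used; both are assumptions for illustration.
import hashlib

KNOWN_HASHES: set[str] = set()  # hypothetical database of known-image hashes


def file_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_image(path: str) -> bool:
    """True if the file's digest matches an entry in the known-hash database."""
    return file_hash(path) in KNOWN_HASHES
```

Because altering even a single byte of a file, or generating a new image from scratch, produces an entirely different digest, exact matching of this kind can only flag previously catalogued material – the weakness Marcoux describes.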


Child safety experts trace the escalation of AI-generated child sexual abuse material to late 2022, coinciding with OpenAI’s launch of ChatGPT and the introduction of generative AI to the public. Earlier that year, the LAION-5B database was released, an open-source catalogue of more than 5 billion images that anyone can use to train AI models.


Previously detected child sexual abuse images are included in that database, meaning AI models trained on it could produce CSAM, Stanford researchers found in late 2023. Child safety experts stress that real children were harmed in the creation of most, if not all, AI-generated child sexual abuse material.

“Every time you feed an image of child sexual abuse into an AI machine, it learns a new skill,” said Korobov of the Zero Abuse Project.

When users upload known CSAM to its image tools, OpenAI reviews it and reports it to NCMEC, a company spokesperson said.

“We have made significant efforts to minimize the potential for our models to generate content that is harmful to children,” the spokesperson said.


In 2023, NCMEC received 36.2 million reports of online child abuse, up 12% from the previous year. The majority of the reports received were related to the circulation of actual photos and videos of children being sexually abused. However, it also received 4,700 reports of sexually exploitative images or videos of children made using generative AI.

NCMEC has accused AI companies of failing to actively try to prevent or detect the production of CSAM. Only five generative AI platforms submitted reports to the organization last year. More than 70% of reports of AI-generated CSAM came from social media platforms that had been used to share the material, rather than from the AI companies themselves.

“There are numerous sites and applications that can be accessed to create this type of content, including open source models, that do not interact with CyberTipline and do not employ other security measures, to our knowledge,” said Fallon McNulty, director of NCMEC’s CyberTipline.

With AI allowing predators to create thousands of new child sexual abuse images with little time or effort, child safety experts anticipate that the resources available to combat child exploitation will be stretched ever thinner. NCMEC said it expects AI to drive an increase in reports to its CyberTipline.

This expected increase in reports will impact the identification and rescue of victims, threatening an area of law enforcement that is already under-resourced and overwhelmed, child safety experts said.

Predators routinely share CSAM with their communities on peer-to-peer platforms, using encrypted messaging applications to evade detection.

Meta’s decision to encrypt Facebook Messenger in December and plans to encrypt messages on Instagram have faced backlash from child safety groups, who fear that many of the millions of cases that occur on its platforms each year are now going undetected.

Meta has also introduced a number of generative AI features to its social media platforms over the past year, and AI-generated images have become some of the most popular content on its networks.

In a statement to the Guardian, a Meta spokesperson said: “We have detailed and robust policies against child nudity, abuse and exploitation, including child sexual abuse material (CSAM) and child sexualisation, and those created using GenAI. We report all apparent cases of CSAM to NCMEC, in accordance with our legal obligations.”


Child safety experts said companies developing AI platforms and lawmakers should largely be responsible for stopping the proliferation of AI-generated CSAM.

“It is imperative to design tools securely before they are released to ensure they cannot be used to generate child sexual abuse material,” McNulty said. “Unfortunately, as we have seen with some of the open source generative AI models, when companies do not follow security by design, there can be huge downstream effects that cannot be reversed.”

Additionally, Korobov said, platforms that can be used to exchange AI-generated CSAM need to allocate more resources to detection and reporting.

“There will need to be more human moderators reviewing images or going into chat rooms and other servers where people are exchanging this material and seeing what is available, rather than relying on automated systems to do so,” she said. “We will need to look at it and recognise that it is child sexual abuse material as well – it’s just something new.”

Meanwhile, major social media companies have slashed resources for scanning and reporting child exploitation by eliminating jobs in their child moderation and safety teams.

“If big companies aren’t willing to take basic CSAM screening measures, why would we think they would take all these extra steps in this unregulated AI world?” said Sarah Gardner, executive director of Heat Initiative, a child safety group based in Los Angeles. “We’ve seen that purely voluntary doesn’t work.”
