Images of child sexual abuse generated by artificial intelligence tools are increasingly common on the open web and are reaching a “tipping point”, according to a safety watchdog.
The Internet Watch Foundation said the amount of illegal AI-created content it had seen online over the past six months had already surpassed the previous year's total.
The organisation, which runs a hotline in the UK but also has a global remit, said almost all of the content was found on publicly available areas of the internet rather than on the dark web, which must be accessed using specialist browsers.
IWF acting chief executive Derek Ray-Hill said the level of sophistication of the images indicated that the artificial intelligence tools used had been trained on images and videos of real victims. “The last few months show that this problem is not going away and, in fact, it is getting worse,” he said.
According to an IWF analyst, the situation with AI-generated content was reaching a “tipping point” where law enforcement and authorities did not know whether an image involved a real child in need of help.
The IWF took action against 74 reports of AI-generated child sexual abuse material (CSAM) realistic enough to breach UK law in the six months to September this year, compared with 70 in the 12 months to March. A single report can refer to a web page containing multiple images.
In addition to AI images depicting real-life abuse victims, the types of material seen by the IWF included “deepfake” videos in which adult pornography had been manipulated to look like CSAM. In previous reports, the IWF has said that AI was being used to create images of celebrities who had been “de-aged” and then depicted as children in sexual abuse scenarios. Other examples of CSAM seen have included material where artificial intelligence tools have been used to “strip” images of clothed children found online.
More than half of the AI-generated content flagged by the IWF over the past six months was hosted on servers in Russia and the United States, with Japan and the Netherlands also hosting significant amounts. The addresses of the web pages containing the images are added to an IWF URL list that is shared with the tech industry so the material can be blocked and made inaccessible.
The IWF said eight out of 10 reports of illegal AI-created images came from members of the public who had found them on public sites such as AI forums or galleries.
Meanwhile, Instagram has announced new measures to counter sextortion, a scam in which users are tricked into sending intimate images to criminals, who typically pose as young women, and are then threatened with blackmail.
The platform will roll out a feature that blurs nude images sent to users in direct messages and urges users to take care when sending any direct message (DM) containing a nude image. When a blurred image is received, the recipient can choose whether to view it, and will also see a message reminding them that they can block the sender and report the chat to Instagram.
The feature will be enabled by default for teen accounts worldwide starting this week and can be used in encrypted messages, although images flagged by the “on-device detection” feature will not be automatically reported to the platform or to the authorities.
The feature will be optional for adults. Instagram will also hide follower and following lists from potential sextortion scammers, who are known to threaten to send intimate images to those accounts.