UK watchdog accuses Apple of failing to report sexual images of children

Apple is failing to effectively monitor its platforms for child sexual abuse images and videos, child safety experts say, raising concerns about the company’s ability to handle the growing volume of such material linked to artificial intelligence.

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) is accusing Apple of failing to accurately record how often child sexual abuse material (CSAM) appears on its products. In one year, child predators used Apple’s iCloud, iMessage and FaceTime to store and exchange CSAM in more cases in England and Wales alone than the company reported in all other countries combined, according to police data obtained by the NSPCC.

Using data collected through freedom of information requests and shared exclusively with the Guardian, the children’s charity found that Apple was implicated in 337 recorded child abuse image offences between April 2022 and March 2023 in England and Wales. In 2023, Apple submitted just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC). That is in stark contrast to its big tech peers: Google reported more than 1.47 million cases and Meta more than 30.6 million, according to NCMEC’s annual report.

All US-based tech companies are required to report all cases of child sexual abuse they detect on their platforms to NCMEC. The Virginia-based organization acts as a clearinghouse for child abuse reports from around the world, reviewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple cannot see the content of users’ messages, but neither can Meta’s WhatsApp, which submitted approximately 1.4 million reports of suspected cases of child sexual abuse to NCMEC in 2023.

“There is a worrying discrepancy between the number of child abuse image offences occurring on Apple services in the UK and the almost negligible number of global reports of abusive content to law enforcement,” said Richard Collard, NSPCC policy director for child online safety. “Apple is clearly behind many of its peers in tackling child sexual abuse, when all tech companies should be investing in security and preparing for the implementation of the UK Online Safety Act.”

Apple declined to comment for this article. Instead, the company referred the Guardian to statements it made last August, in which it said it had decided not to move forward with a program that would scan iCloud photos for child sexual abuse material because it had instead chosen a path that “prioritizes the safety and privacy of [its] users.”

In late 2022, Apple dropped plans to launch its iCloud photo-scanning tool. The tool, called neuralMatch, would have scanned images before they were uploaded to iCloud’s online photo storage and compared them with a database of known child abuse images using mathematical fingerprints known as hash values.
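To make that matching step concrete, here is a minimal sketch of hash-based detection. It is not Apple’s implementation: neuralMatch relied on a perceptual hash designed to survive resizing and re-encoding, whereas the SHA-256 digest, the KNOWN_HASHES set and the pending_uploads folder below are hypothetical stand-ins used only to illustrate comparing an image’s fingerprint against a database of known fingerprints.

```python
# Schematic sketch of hash-based matching -- NOT Apple's neuralMatch.
# Real systems use perceptual hashes that tolerate re-encoding; SHA-256,
# KNOWN_HASHES and the pending_uploads folder are hypothetical stand-ins.
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints for known abuse imagery,
# of the kind a clearinghouse would supply.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_path: Path) -> str:
    """Compute the image's 'mathematical fingerprint' (here, a SHA-256 digest)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def should_flag(image_path: Path) -> bool:
    """Return True if the image's fingerprint matches a known hash."""
    return fingerprint(image_path) in KNOWN_HASHES


if __name__ == "__main__":
    uploads = Path("pending_uploads")
    if uploads.is_dir():
        for image in sorted(uploads.glob("*.jpg")):
            if should_flag(image):
                print(f"match: {image} -- would be queued for review")
```

In a real pipeline, a match would typically trigger human review before any report reached a clearinghouse such as NCMEC.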

However, the software came under fire from digital rights groups, who expressed concern that it could eventually be used to compromise the privacy and security of all iCloud users. Child safety advocates criticized the removal of the feature.

“Apple is not detecting child sexual abuse in most of their large-scale environments — at all,” said Sarah Gardner, executive director of Heat Initiative, a Los Angeles nonprofit focused on child protection. “They are clearly underreporting and have not invested in trust and safety teams to be able to handle this.”

Apple’s announcement in June that it would launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.

“The race to implement Apple’s AI is concerning as AI-generated child abuse material puts children at risk and impacts law enforcement’s ability to protect young victims, especially now that Apple has delayed incorporating technology to protect children,” Collard said. Apple says the AI system, which was built in partnership with OpenAI, will personalize user experiences, automate tasks and increase user privacy.

In 2023, NCMEC received more than 4,700 reports of AI-generated child sexual abuse material and has said it expects the number of reports to increase in the future. Since the AI models capable of creating such material have been trained on “real-life” images of child abuse, AI-generated images are also implicated in the victimization of children. The Guardian reported in June that child predators are using AI to create new images of their favorite victims, further exacerbating the trauma of survivors of child abuse imagery.

“The company is treading into territory that we know could be incredibly harmful and dangerous to children, without having a track record of being able to handle it,” Gardner said. “Apple is a black hole.”
