Pedophiles, fraudsters, hackers and criminals of all kinds are increasingly exploiting artificial intelligence (AI) to target victims in new and harmful ways, a senior police chief has warned.
Alex Murray, the national police lead for artificial intelligence, said use of the technology was growing rapidly because of its increasing accessibility, and that police had to “act quickly” to stay on top of the threat.
“We know from the history of policing that criminals are inventive and will use anything they can to commit crimes. They are certainly now using AI to commit crimes,” he said.
“It can happen on a serious, international scale of organized crime, and it can happen in someone’s bedroom… You can think about any type of crime and put it through an artificial intelligence lens and say, ‘What’s the opportunity here?’”
Speaking at the National Police Chiefs’ Council conference in London last week, Murray raised concerns about emerging AI “heists”, in which fraudsters use deepfake technology to impersonate company executives and deceive their colleagues into transferring large sums of money.
This year, a finance worker at a multinational company was tricked into paying HK$200m (£20.5m) to criminals after a video conference in which fraudsters convincingly impersonated the company’s financial director.
Similar cases have been reported in several countries, while the first such heist is believed to have targeted a British energy company in 2019.
Murray, who is also director of threat leadership at the National Crime Agency, said the phenomenon was a “high-cost, low-prevalence crime”, and that he was personally aware of dozens of cases in the UK.
He said the highest volume of criminal AI use was by pedophiles, who have been using generative AI to create images and videos depicting child sexual abuse.
“We’re talking thousands and thousands and thousands of images,” Murray said. “All images, whether synthetic or not, are illegal, and people are using generative AI to create images of children doing the most horrible things.”
Last month, Hugh Nelson, 27, from Bolton, was jailed for 18 years after offering a paid service to online pedophile rings in which he used artificial intelligence to generate requested images of children being abused.
The same technology is also used for sextortion, a type of online blackmail in which criminals threaten to publish indecent images of victims unless they pay money or comply with their demands.
Previously, such blackmail relied on photographs the victims had shared of themselves, often with ex-partners or with abusers who used false identities to gain their trust, but AI can now be used to “undress” and manipulate photos taken from social media.
Murray said hackers were also using AI to look for weaknesses in specific code or software and to provide “focus areas” for cyber attacks. “Most criminal use of AI at the moment relates to child abuse images and fraud, but there are many potential threats,” he added.
There is growing concern that seemingly benign chatbots could incite people to crime and terrorism, after revelations that a man who attempted to attack Queen Elizabeth II with a crossbow in 2021 had received support from an “AI friend”.
Jonathan Hall, the government’s independent reviewer of terrorism legislation, has been investigating the potential uses of AI by terrorist groups, and has highlighted “chatbot radicalization” as a threat, alongside propaganda generation and the facilitation and planning of attacks.
He discovered that he could create an Osama bin Laden chatbot using a popular commercially available platform and that it was “very easy to do.”
In a speech at Lancaster House last month, Hall warned: “Even if we don’t know precisely how terrorists are going to exploit generative AI, we need a common understanding of generative AI and the confidence to act, and certainly not a reaction that says: ‘This is too difficult.’
“We need to avoid the mistakes of the early internet era, when a free-for-all reigned.”
Murray said that as artificial intelligence technology became more advanced, and as more generative text and image software came to market and into widespread use, its exploitation by criminals of all types was expected to increase.
“Sometimes you can detect whether something is an AI image, but that ability will disappear very quickly,” he warned. “People using this type of software right now are still niche, but it will become very easy to use.
“Ease of entry, realism and availability are the three vectors that are likely to increase… We, as police, have to act quickly in this space to stay on top of things.
“I think it’s a reasonable assumption that between now and 2029 we will see significant increases in all of these types of crimes, and we want to prevent that,” he said.