Bank of England officials are so concerned about fraudsters using AI technology for scams in the UK that they held an urgent summit for big banks and tech companies last week, Money Mail can reveal.
Major banks and fraud prevention experts fear that advanced technology could open the door to a wave of new scams.
Insiders say Bank of England Governor Andrew Bailey opened the meeting by warning of the growing risks that high technology poses to the financial future of British households and called for an urgent response to counter the threat.
Executives from Google, OpenAI (the company behind ChatGPT, co-founded by Elon Musk) and social media giant Meta (owner of Facebook, Instagram and WhatsApp) were asked about the security measures they have in place to prevent their software from falling into the wrong hands.
Tech companies have been warned that scams “will become much more sophisticated” but Britain “still has a window to strengthen its defences”.
The UK is grappling with a tsunami of fraud, which cost consumers more than £1.2 billion last year, according to trade body UK Finance.
But the number of attacks could soon get out of control as criminals begin to use AI to make their operations far more sophisticated.
Anti-fraud “champion” and MP Anthony Browne, who was at the summit, says AI scams are on the way.
He says: ‘The Government is taking the risks associated with AI very seriously. It is clearly a threat we must prepare for.’
Banks are starting to roll out a series of AI-powered chatbots to combat scammers in a trial led by Stop Scams UK, the cross-industry group behind last Friday’s summit.
A group made up of banks, technology and telecommunications companies is piloting a project to collect information on scammers.
Members of the Stop Scams UK group, which include Britain’s largest High Street and challenger banks, telecoms firms BT, TalkTalk and Three, and technology companies Meta and Google, are estimated to operate around 300 phone numbers and 100 email addresses between them.
These numbers and accounts will be assigned ‘chatbots’, computer programs that simulate human conversation. They are often used for customer service inquiries as they can answer questions with great accuracy.
Because they can imitate human messages, criminals are known to use chatbots to manipulate victims and impersonate trusted authorities or companies.
Chatbots can send messages on social media platforms that appear genuine and trick victims into handing over money or personal data by posing as someone they know and trust.
Paul Davis, director of fraud prevention at TSB, says these chatbots allow criminals to expand their operations, with a large number of bots sending messages to multiple victims simultaneously.
However, the banks’ chatbots will fight fire with fire. ‘There is an opportunity to use chatbots to slow criminals down and thwart them,’ says Davis.
‘We could end up with two AI chatbots talking to each other: one controlled by banks and the other by criminals. They can waste each other’s time.
‘It’s a numbers game. Scammers can automate large parts of the process where previously they would have had to use a human. That means the number of attacks will increase.’

Alex West, director of banking fraud and investigations at PwC, which presented research at the AI summit at the Bank of England, says the technology also provides a powerful set of tools for detecting scams and tracking down fraudsters. However, consumers should be wary of just how convincing AI-driven scams can be.
West says: ‘Scammers can use AI tools to create very convincing scams. AI-generated messages, cloned voices and deepfake videos are increasingly being used to scam people out of money.’
Natalie Kelly, chief risk officer at Visa Europe, which is also a member of Stop Scams UK, says: ‘For every AI tool available on the world wide web, there is a criminal version. Criminals have been quick to adopt AI.
‘It’s a constant battle and we are focused on our AI always being better than the scammers.’
Scammers can use AI in a variety of ways to trick their target, and they typically use social media platforms to do so.
Money Mail’s Stop the Social Media Scammers campaign calls on technology companies to do more to protect users and prepare for new threats.
One of the fastest growing techniques employed by scammers is to use cheap online software to clone a voice.
By doing this, they can pose as a loved one or family friend and lure unsuspecting victims into handing over their money.
For example, criminals can make a phone call using a cloned voice to make it appear as if a loved one is in trouble and needs financial help.
Just three seconds of audio from a voice recording or video is all that is needed to replicate someone’s voice, according to research from antivirus firm McAfee.

Anyone can clone a voice and type the words they want it to say in an audio clip. Money Mail put this to the test on Play.Ht, an AI speech-generating startup, using its free trial, which allows up to 5,000 words a month.
The website allows you to clone someone’s voice from a 30-second audio clip, or use an existing AI voice. The voices on offer are AI-generated, but they sound realistic and can be set to speak almost any language in the world.
Once you’ve uploaded an audio clip, you type in a text box the words you want the cloned voice to say. After a short wait, you can play and download a new audio clip of the voice saying it.
The technology is becoming so advanced that it is possible to give the cloned voice emotion: for example, making it sound happy, sad or worried.
In a high-profile case in the US, a mother was left distraught when scammers used AI to fake the kidnapping of her daughter.
The woman received a call from an unknown number, but when she answered she believed it was her 15-year-old daughter, who was training for a ski race. The scammers had used the software to generate a clip of the teenager’s voice pleading for help, before demanding a $1 million ransom.
The scam has already reached the United Kingdom, according to a McAfee investigation.
Oliver Devane, security researcher at McAfee, says: “In most cases we know of, the scammer says they have been in a serious accident, such as a car accident, and urgently need money.”
Chris Ainsley, head of fraud strategy at Santander, warns that criminals often impersonate bank staff, police and other trusted authorities as part of phishing scams. Action Fraud, the UK’s national fraud reporting centre, says it does not have reliable data on AI voice-cloning scams, as most people do not realise that AI has been used.
However, 82 AI-related scams were reported between May 2019 and August 2023.
NatWest tells Money Mail that AI could be used in a number of ways to target customers, and that it is increasingly important for consumers to stop and think before making any payment that seems unusual.
Liz Edwards of comparison site Finder.com says it’s crucial to focus on what’s being said every time you watch a video or receive a phone call.
Ms Edwards says: “Although these videos look realistic, much of the content will have been written using AI technology, so listen carefully and try to work out if it actually sounds like something a human would say.”
- Have you been a victim of an AI scam? Write to j.beard@dailymail.co.uk