Fraudsters have turned to artificial intelligence to beat identity checks, as research reveals a third of Britons have shared sensitive documents through unprotected channels online.
According to identity verification platform IDnow, people aged 18 to 24 are most at risk: a whopping 48 percent of young people have shared their ID documents through risky channels such as email, social media or messaging applications.
This compares to only 21 percent of people over 55 who reported doing the same.
However, while 45 percent of Britons said they knew that images or scans of ID documents sent through online platforms could be obtained by criminals and used in fraudulent activities, 33 percent shared them anyway.
Fraudsters use deepfake technology and stolen ID documents to bypass identity verification systems
In fact, ignorance could be the biggest danger to the public: the survey indicates that fewer than a third of the British public know what deepfakes are or the risks they pose.
Deepfakes are videos, photographs or audio clips that imitate a real person and can be very convincing.
IDnow fraud and documents director Lovro Persen said: “It is worrying that this research suggests that the UK public is not as concerned or aware as it should be of the risks associated with these types of digitally generated images or videos.”
“Extraordinary advances in AI technology mean that it is now almost too easy for a fraudster to commit financial crimes. However, consumers should not make it even easier for scammers.
“Our advice is to always think twice before sending a scan or photo of your driving licence or passport into the digital ether via unencrypted channels, such as social media or email.”
With the rise of AI and deepfake technology in recent years, criminals are increasingly finding ways to take advantage of the tools at their disposal.
This technology means that traditional methods of authenticating documents, usually through visual inspection, are no longer sufficient: high-tech fraudsters can now produce fake documents good enough to pass as authentic.
Fraudsters are now using AI throughout their scams, according to IDnow co-founder and chief technology and security officer Armin Bauer.
Armin Bauer warns that images shared online could be used by scammers
He added that phishing attacks are becoming increasingly difficult to detect as the rise of AI results in fewer spelling and grammatical errors that previously plagued these scams.
Scammers are using AI to create realistic forged documents, as well as realistic videos of people, which can then be used to bypass identification verification processes, such as when opening a bank account or submitting a credit card application.
“Deepfakes are used to break into systems that require you to identify yourself,” Bauer told This is Money.
“Scammers typically try to generate a completely new persona that doesn’t actually exist, or they use a stolen ID card and generate (a deepfake) of the person it belongs to.”
Scammers then use these simulated IDs to gain access to whatever platform they target.
Scammers also use these techniques to carry out romance scams.
Dating app Tinder is introducing better identity checks, including matching video selfies to ID cards, after the app was continually targeted by romance scammers.
Alex Laurie, senior vice president at identity management firm Ping Identity, said: “Online dating platforms, such as Tinder, are particularly susceptible to catfishing and deception. Users want authenticity in their connections, but the risk of encountering fraudulent people remains unless identities are rigorously verified.”
How to stay safe from identity scammers
While many organisations are turning to more advanced biometric security systems, such as those that detect facial movements like blinking, there are steps Brits can take to better protect their sensitive documents.
“Make sure your devices are up to date and that you don’t share too much information openly…any time you share images widely, scammers can grab those images and try to use them,” Bauer said.
“The real way to fight this fraud is to use technology providers who are capable of combating it. It is becoming a growing problem, and any process that is not equipped to combat that fraud will have problems.”
However, he said the use of AI by scammers is an issue that needs to be addressed at a broader level.
He added: “I think it is an issue that should be addressed by all parties. The government should be involved and there should be regulation… The industry also has to play its part in protecting its services for something like opening bank accounts, so that it is not possible to use deepfakes to open fraudulent bank accounts.
“Of course, it also helps for users to follow security best practices and ensure their information is safe and cannot be abused.”
Some links in this article may be affiliate links. If you click on them, we may earn a small commission. That helps us fund This Is Money and keep it free to use. We do not write articles to promote products. We do not allow any commercial relationship to affect our editorial independence.