
How criminals could use AI to scam Brits and what you can do to protect yourself


A leading financial crime expert says Britain is a major target for fraudsters, who are at the forefront of adopting artificial intelligence technology. We reveal what they are doing and how you can protect yourself.

Dozens of people around the world were arrested in April, accused of using the Labhost platform to commit scams.

For a monthly subscription fee, Labhost granted criminals access to malware to help them commit attacks against people and organizations, a practice often called “cybercrime as a service.”

One tool, LabRat, allowed fraudsters to monitor and control phishing attacks in real time and to capture credentials and two-factor authentication codes, bypassing enhanced security measures.

Subscribers could also create web pages mirroring those of major brands, from banks to healthcare providers, designed to trick people into handing over sensitive information.

Bad tools: For a monthly subscription fee, Labhost gave criminals access to malware to help them commit attacks against individuals and organizations.

Police said the platform enabled criminals to steal 480,000 credit card numbers, 64,000 PINs and more than one million passwords.

An estimated 70,000 Britons fell victim to Labhost’s tricks, and many others have been scammed by similar platforms.

These numbers could continue to rise as artificial intelligence develops and provides more sophisticated methods for digital thieves.

Why UK consumers are vulnerable to AI-powered fraud

Phil Rolfe, financial crime expert at consultancy Valcon, says the UK is particularly vulnerable to AI-related fraud for two key reasons.

Firstly, English is widely spoken around the world. Secondly, Britain is a wealthy country full of people with significant savings and investments.

Statistics on AI fraud are difficult to come by, but fraud itself has skyrocketed in recent years, partly because the Covid-19 pandemic pushed people to spend more time on their computers.

Overall fraud offences in England and Wales rose 46 per cent to 465,894 in the year ending June 2023, according to banking trade body UK Finance.

Below are some ways criminals use AI for nefarious financial purposes.

Rising threat: Overall fraud offences in England and Wales rose 46 per cent to 465,894 in the year ending June 2023, according to banking trade body UK Finance.


Phishing

Phishing, a term originally coined by hackers, involves scammers sending emails or text messages with links to malicious websites that, once clicked, download a computer virus or encourage people to reveal their personal information.

It is the most common form of AI-driven financial crime: almost as old as the World Wide Web, and increasingly advanced.

Rolfe says the old style of phishing was basically “a sweatshop, for lack of a better phrase”, in which criminals used lists of email addresses and common email phrases and “just cut, pasted and sent”.

If someone took the bait, they would be targeted by a person higher up the criminal food chain.

But, with AI, criminals can now run their phishing scams from one powerful machine.

They can also write text free of the spelling and punctuation errors that have traditionally made phishing attempts easy to spot.

Voice cloning: McAfee researchers found that just three seconds of a person speaking was enough to create a copy with an 85 per cent match to their original voice.


Voice cloning

Imagine you are a British CEO. You receive a call from someone you think is the boss of your company’s German parent company asking you to transfer 220,000 euros to a Hungarian supplier within an hour.

Given the urgency of the request, you transfer the sum, and later that day you receive a call informing you that the UK company has been reimbursed. But the money never arrives.

That happened to a UK energy chief in 2019, according to the Wall Street Journal, which said criminals may have used speech-generating software to pull off the daring robbery.

Although not as prevalent as phishing scams, voice cloning certainly grabs a lot of headlines. And, like phishing, it’s becoming more sophisticated.

McAfee researchers found that just three seconds of a person speaking was enough to create a copy with an 85 per cent match to their original voice, or 95 per cent with a handful of audio files.

Because so many people’s voices are online, whether on social media, podcasts or movies, scammers can replicate virtually anyone, especially politicians, celebrities and high-profile executives.

Fake videos

When you mix a near-perfect voice clone with “deep learning” to make a real person appear to say something they never actually said, you have a cutting-edge tool that tricksters can exploit.

Deepfake videos are cheap and easy to produce and are increasingly popular among scammers. A survey last year by Regula, an IT services company, found that 29 per cent of companies had fallen victim to them.


A multinational in Hong Kong lost $25.6 million after a digitally recreated version of its chief financial officer asked an employee on a video conference to transfer some money.

The staff member was reportedly asked to introduce himself and told to execute the transfer before the call ended abruptly.

The employee then continued to communicate with the scammers through messaging platforms, emails and phone calls.

Only after the cash was transferred did the employee and the unnamed company realize they had been scammed.

If a large company could lose such a large sum, imagine how vulnerable people are to losing their life savings due to a manipulated video.

Forged documents

Unlike cloning and phishing, the history of document forgery goes back thousands of years. The Romans even had laws prohibiting the falsification of records that transferred land to heirs.

With AI, algorithms can replicate fine details of documents, including images, watermarks, holograms, microprint and signatures, making it far simpler for criminals to create false but credible forms of identification.

Rolfe believes that “any teenager studying computer science” could probably forge a gas bill, a document subject to far fewer identity checks than those required to open a bank account.

Their ability to perpetrate document fraud is helped by the extent to which their digital-native peers share information online.


Expert: Phil Rolfe’s advice to avoid falling victim to AI-related fraud is not to rush and make sure you do the necessary checks

A survey conducted in February by identity verification platform IDnow found that almost half of 18- to 24-year-olds had submitted ID documents through less secure channels, such as email, social media or messaging apps.

Most worryingly, 45 per cent knew that scans of their documents sent through these channels could be used by criminals to commit fraud, but a third did it anyway.

How can you protect yourself?

Fraud is likely to remain endemic, no matter how far fraud-fighting technology goes in catching up with criminals.

Even the most technically savvy people are susceptible to AI scams today due to the high volume of AI crimes being committed.

Rolfe admits that he was recently caught out by a fake DocuSign email, although he realised quickly enough to change his computer password before anything terrible happened.

His advice to avoid falling victim to AI-related fraud is not to rush, and to make sure you do the necessary checks.

So, if you receive a random call, text message, or email asking you to transfer a large sum of money in a short space of time or hand over your personal information, be immediately suspicious.

Check the number or email address that sent the message to confirm its legitimacy. Look at the branding to see if it looks like a real organization.

If you receive a peculiar message from a family member or friend, contact them by another means or ask them to call you back.

If it’s a family member on a phone call, many safety specialists suggest agreeing on a secret “safe word” that can be repeated in an emergency, or asking very personal questions that only they would know the answer to.

And as the IDnow survey attests, financial details should not be provided via text message, email or phone until the recipient’s legitimacy is verified and security measures such as strong passwords and two-factor authentication are in place.

As Rolfe says: “The only thing you can do is try to be alert to these things, and if you have any questions or you’re not quite sure, I would rather you spent five more minutes asking if it was the right thing to do instead of moving forward and getting caught.”

