Cybercriminals are turning to ChatGPT to generate highly convincing phishing emails, researchers warn – so how can internet users spot the scam?
Cybersecurity firm Norton warned that criminals are turning to AI tools like ChatGPT to create convincing “lures” to rob victims.
AI tools like ChatGPT make it much harder to spot scams (Alamy)
A report in New Scientist suggested that using ChatGPT to generate emails could reduce costs for cybercriminal gangs by up to 96 percent.
ChatGPT also completely removes the language barrier for cybercriminal gangs around the world, warns Julia O’Toole, CEO of MyCena Security Solutions.
O’Toole said there are still ways to spot scam emails generated by AI tools, but the technology is making them much harder to detect.
She said: “Phishing has increased significantly since email scams first hit inboxes, but a lack of language and cultural proficiency has remained a major barrier for scammers, who have struggled to make their emails look realistic.”
While such emails still defrauded some victims, many internet users were able to identify the spoofs and delete them.
But those days are over, she said.
ChatGPT is currently the “hottest topic” on the dark web, according to O’Toole, as cybercriminals figure out how to use it to scam victims.
There are protections built into ChatGPT intended to prevent it from being used in scams, but criminals are working on how to get around them.
She said, “ChatGPT’s quality and speed of execution make it a powerful productivity hack.
“It now allows criminals to scale up sophisticated phishing campaigns, generating emails faster and with a higher chance of success.”
O’Toole warns that ChatGPT’s ability to generate accurate content means it can effectively impersonate anyone – and that AI tools with access to internet content could become a “weapon of cyber mass destruction.”
She said: “Hackers can use ChatGPT to trick people into giving up the usernames and passwords for their online accounts, or into sending money or disclosing personal information to criminals, all while believing it is for a legitimate purpose.”
Cybercriminals can use sophisticated prompts to gather the information needed to launch a “tailor-made” cyberattack, she warned.
“When criminals use ChatGPT, there are no cultural barriers. When the target receives an email from their “apparent” bank or CEO, there are no language tell-tales that indicate the email is fake.
“The tone, the context and the stated reason for making the wire transfer give nothing away that would mark the email as a scam.”
Since its launch in November 2022, ChatGPT has fascinated the cybercriminal community.
Posters on notorious cybercrime forums discuss using the bot to create malware and even to build new dark web marketplaces for selling stolen credit cards and other illegal goods.
There are multiple fake ChatGPT apps that collect user data – and cybersecurity vendor Bitdefender spotted a phishing scam in which users were redirected to a fake ChatGPT that harvested banking information.
Cybersecurity vendor Norton warned that phishing emails are the tip of the iceberg – and that cybercriminals could use ChatGPT or similar software to create completely fake chatbots to scam internet users out of their money.
ChatGPT averaged 13 million unique daily users in January, making it the fastest-growing consumer internet app of all time, according to analytics firm Similarweb.
It took TikTok about nine months after its global launch to reach 100 million users, and Instagram more than two years.
OpenAI, a private company backed by Microsoft Corp., made ChatGPT available to the public for free at the end of November.
The five ways to spot AI-generated phishing emails
Spotting phishing emails generated by ChatGPT is much harder than spotting those written by humans, says Julia O’Toole, CEO of MyCena Security Solutions.
Here are five ways to recognize an email is a scam:
Hover over the email address to verify it
On a PC, you can hover your mouse over a link or sender address to see where it really leads, says O’Toole.
For any suspicious email, hover over the sender’s address and verify that it really comes from the domain (i.e. the website address) you would expect.
O’Toole says: “Despite ChatGPT’s sophistication, the email addresses used by phishers remain the same, so if an address looks suspicious, it probably is.”
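The manual check above – comparing the sender’s domain against the one you expect – can be sketched as a short script. This is a minimal illustration only, and the bank addresses and domains below are hypothetical examples, not real institutions:

```python
# Minimal sketch: check whether a sender address really belongs to the
# domain you expect. The addresses used here are hypothetical examples.
from email.utils import parseaddr


def sender_domain(from_header: str) -> str:
    """Extract the domain part of an email From: header, lowercased."""
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""


def looks_legitimate(from_header: str, expected_domain: str) -> bool:
    """True only if the sender's domain matches the expected one exactly."""
    return sender_domain(from_header) == expected_domain.lower()


# A genuine-looking address from the expected domain passes the check...
print(looks_legitimate('"Your Bank" <alerts@examplebank.com>', "examplebank.com"))
# ...while a look-alike domain (note the digit "1") fails it.
print(looks_legitimate('"Your Bank" <alerts@examp1ebank-secure.com>', "examplebank.com"))
```

Note that an exact domain comparison deliberately rejects look-alike tricks such as swapped letters or extra words bolted onto the real name – exactly the details a hurried reader hovering over an address is likely to miss.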
Consider the context
If your bank or another institution contacts you urgently to request information, that should put you on alert immediately.
Think about the context – why do they need this information? Why now?
O’Toole says, “Banks and security-conscious institutions avoid putting their clients in positions where confidential information is immediately requested.”
Clicking a banking link embedded in an email may seem like the easy option, but a legitimate bank will always let you phone instead.
O’Toole says, “If an email comes in asking for personal information, never click the link. Check its authenticity first.
“For example, if your bank contacts you via email and asks for personal information, ignore the message and call the bank on the phone number listed on its website.”
Pay attention to the artwork
ChatGPT may be able to generate clean copy, but criminal gangs do not have access to a company’s genuine digital assets.
That means everything from page headers to the links you need to click can look wrong.
O’Toole says: “Attackers often cut and paste a company’s images directly from the internet, but this distorts them and makes them look washed out or out of focus. If images or graphics in an email look poor quality, this could also indicate a phishing scam.”
Check every email against the legitimate website
While ChatGPT is great at generating text, it’s not so great at finer details, which could indicate an email is malicious, O’Toole warns.
She says: “If you receive an email that concerns you, go directly to the website of the apparent sender. Are there phrases or branding they commonly use in their communications? Is that information in the email?”
If something looks suspicious, it probably is.