Scammers generate fake dating profiles and social media accounts to extort people.
AI-generated OnlyFans models have sprung up on the X-rated service, while Tinder ‘coaches’ use ChatGPT to help clients get dates. Apps that offer users an AI chatbot ‘companion’ are also on the rise.
These fake accounts are used for ‘romance scams,’ in which a fraudster persuades a victim to become their partner and uses the relationship to extract money from them.
The Federal Trade Commission said romance scams cost Americans $1.3 billion in 2022. Other actors have also used AI-generated images to target companies.
DailyMail.com spoke to cybersecurity experts for tips on recognizing an AI profile. These include looking at the hands, teeth, and shadows.
This image was generated in Midjourney in seconds in response to a simple prompt
Martin Cheek, fraud and cybersecurity expert at SmartSearch, told DailyMail.com, “ChatGPT makes it very easy for cybercriminals to impersonate different people without having tremendous literary skills.”
“(They can) use the language model to generate more personalized messages and trick people into falling victim to cyberattacks.”
AI-generated faces are not perfect. There are still flaws in the technology, and recognizing them can help you tell whether your Tinder match is real.
Romance scams work by using social media or dating apps to lure a person into an online relationship. They usually use photos of a very attractive person – such as a model or actor – that they found online.
They then start asking their target for money using everyday issues such as car breakdowns, medical bills, or other expenses.
“If an online love interest asks you for money, walk away — no matter how compelling the story,” the FTC says.
Look at the hands
Can you see what is wrong with this picture?
Like many human artists, AI technology struggles to draw hands.
“With AI, there are quite a few things that it replicates quite badly – especially hands,” said Cyril Noel-Tagoe, principal security researcher at UK cybersecurity firm Netacea.
‘Sometimes you see distortion in them, sometimes they are too big or they have an extra finger.’
This happens because AI uses pattern matching to generate images.
It can learn the pattern that people have hands and that hands have fingers, but it doesn’t know that a hand must have a certain number of fingers.
Hands drawn by AI often also have bizarre-looking fingers. Often they are too long or disjointed.
This is because the platform doesn’t understand what hands are and what their function is, only what they might look like.
Look for telltale signs in the background
Shadows can be a giveaway that an image is AI-generated (this one is)
Images created by AI will also be rife with odd lighting and textures.
Vonny Gamot, a senior leader at McAfee, told DailyMail.com: “AI-generated images often have clear signs that they are indeed fake, so it’s important to pay close attention.
“If the outline of the person is blurry, for example, or if small details like shadows don’t seem right, then you know the image isn’t real.”
Because AI-generated images have no real lighting, the shadows in them will not be accurate.
Instead, AI simply uses a fusion of different images to create fake shadows, which are often inconsistent with how natural light works.
Sometimes the outlines of people in an AI-generated photo also appear blurry, as the subject of the piece blends into the background.
Look at tone, hair, eyes, face and teeth (THEFT)
Gamot told DailyMail.com that McAfee, a security software company based in San Jose, California, uses a checklist to determine whether images are fake.
Called THEFT, it has McAfee employees check subjects’ tone, hair, eyes, face, and teeth for signs of AI generation.
The AI technology struggles to render all these features correctly.
Skin blemishes, uneven skin tones, or flickering on the edges of the face are all signs of fake images and videos.
This is because the images the generator draws from show people with minor imperfections and different skin tones.
The generator creates an ‘average’ of them all when producing an image, and these small differences can lead to odd coloring.
Is the hair a little TOO perfect?
No one has perfect hair, except for fake people in AI-generated images.
Real people almost always have a few wispy strands or flyaway hairs that refuse to stay in place.
But in AI images, many of these small imperfections will not be there, making the models look almost too good to be true.
Other glitches can come through the eyes.
Glasses sometimes look asymmetrical or non-functional.
Eyes may appear expressionless or may be directed in two different directions.
Likewise, the light reflected from their irises can look odd in a way that doesn’t match the environment.
Image generators often create irregular faces that have all the features of a human being, but are slightly misaligned in a way that makes them look creepy.
Similar to hands, the AI can understand the features of a face after scanning thousands of photos of it.
But the machine just generates images and doesn’t fully understand why things look a certain way.
This means that although the AI knows that a human has two eyes, a nose and a mouth, it cannot see the relationship between them.
For example, if a person looks sideways, the program may not turn the nose sideways properly either, because it doesn’t detect that they’re in the same plane.
This can lead to odd looking faces that have the right feature but are slightly skewed.
Teeth don’t always render well in deepfakes; sometimes they look more like white bars than the irregular smiles we usually see in real people.
The pearly whites are often too perfect – unnaturally white and straight – or oddly irregular.
Teeth may be too long or too short, or a person may have too many in their mouth.
Many scammers use ChatGPT to message their targets, letting the AI chatbot write their replies.
This is a favorite of foreign scammers who may not speak English well themselves but can use the platform to produce human-like text.
But because ChatGPT is trained on text spanning decades, some of the words and phrases it uses can feel like a blast from the past.
“If you’re worried about being targeted by a criminal using ChatGPT, you should watch out for the AI’s limitations,” says Thomas Platt – Bot Specialist at Netacea.
“Ask it about really hot, current things, like what was on TV last night.”
“These AIs are trained on historical data, so if you ask about yesterday’s episode of Love Island, it’s going to be very difficult.”
Short sentences and repeated words
While AI can produce very compelling sentences, giveaways are still common, says Vonny Gamot, head of EMEA at online security firm McAfee.
Gamot says: “There are a few telltale signs of an AI-written message. AI often uses short sentences and reuses the same words.
“Stay alert and review any texts, emails, or direct messages you receive from strangers.”
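The heuristic Gamot describes – short sentences and heavy word repetition – can be sketched in a few lines of Python. This is only an illustration of the idea; the thresholds below are arbitrary assumptions, not McAfee’s actual method:

```python
import re
from collections import Counter

def suspicion_signals(text):
    """Rough heuristic for AI-written text: unusually short sentences
    plus one word repeated a lot. Thresholds are illustrative only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    avg_sentence_len = len(words) / max(len(sentences), 1)
    # Share of the text taken up by the single most repeated word
    most_common = Counter(words).most_common(1)
    repeat_ratio = most_common[0][1] / max(len(words), 1) if most_common else 0.0
    return {
        "avg_sentence_len": avg_sentence_len,
        "repeat_ratio": repeat_ratio,
        # Arbitrary cutoffs chosen for this sketch
        "suspicious": avg_sentence_len < 8 and repeat_ratio > 0.1,
    }

print(suspicion_signals("You are great. You are kind. You are fun."))
```

A message of three-word sentences built around the same repeated word trips both checks; real security tools weigh far more signals than these two.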
Ask them to prove who they are
A classic sign of a scammer is not showing their face on camera or verifying their identity, says Cheek.
Cheek says, “Verifying the identity of whoever you interact with online should be the first step. If they’re reluctant to show their face, ask yourself why.”
Check when they activated their account
A warning sign that you may be dealing with a scammer with AI is the date on their social media or dating account.
Think about how long you’ve had yours – and why someone might have just joined last week.
David Emm of cybersecurity firm Kaspersky says: “If you see an account that has just been activated, it could be a fake account trying to blend in. If you can, check when they joined the platform – it’s pretty easy to do – as this is often a dead giveaway that the account was created for the sole purpose of scamming you.”
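The check Emm suggests is simple date arithmetic. As a sketch, assuming the platform shows a join date (the one-week cutoff is an arbitrary assumption, not Kaspersky’s guidance):

```python
from datetime import date

def account_age_days(joined: date, today: date) -> int:
    """Days since the profile was created."""
    return (today - joined).days

def looks_freshly_made(joined: date, today: date, threshold_days: int = 7) -> bool:
    """Flag accounts created within the last `threshold_days` days.
    The default threshold is illustrative only."""
    return account_age_days(joined, today) <= threshold_days

# A profile that joined three days ago would be flagged
print(looks_freshly_made(date(2023, 6, 12), date(2023, 6, 15)))  # True
```

An account that is years old can still be a scammer’s, of course; a brand-new one is simply an extra reason for caution.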