Four of these faces were produced entirely by AI… can YOU tell who’s real? Nearly 40% of people got it wrong in new study
Researchers asked 260 participants to identify whether an image was real or fake, but nearly 40 percent of people guessed wrong.

Recognizing the difference between a real photo and an AI-generated image is becoming increasingly difficult as deepfake technology becomes more realistic.

Researchers at the University of Waterloo in Canada set out to determine whether people can distinguish AI images from real ones.

They asked 260 participants to label 10 images collected through a Google search and 10 images generated by Stable Diffusion or DALL-E (two artificial intelligence programs used to create deepfake images) as real or fake.

The researchers noted that they expected 85 percent of the participants to be able to accurately identify the images, but only 61 percent of people guessed correctly.

Scroll to the end of this article for the answers.

The study, published by Springer, found that participants most often judged images real or fake by examining details such as eyes and hair, while other, more general reasons were that the image simply “looked strange”.

Participants could examine the images for an unlimited amount of time and focus on small details, something they probably wouldn’t do if they were simply browsing, or ‘doomscrolling’, online.

However, the survey asked participants not to overthink their answers and recommended paying “similar attention as you would to a news headline photo.”

“People are not as adept at making distinctions as they think,” said Andreea Pocol, a doctoral candidate in computer science at the University of Waterloo and lead author of the study.

Researchers chose 10 FAKE images generated by AI

The researchers said they were motivated to conduct the study because little research had been done on the topic, so they distributed a survey asking people to identify real versus AI-generated images on Twitter, Reddit and Instagram, among other platforms.

Along with each image, participants could explain why they believed it was real or fake before submitting their answers.

The study said nearly 40 percent of participants classified the images incorrectly, demonstrating “that people are not good at separating real images from fake ones, easily allowing false and potentially dangerous narratives to spread.”

They also grouped participants by gender (male, female, or other) and found that female participants performed better, guessing with approximately 55 to 70 percent accuracy, while male participants guessed with 50 to 65 percent accuracy.

The researchers chose 10 REAL images

Meanwhile, those who identified as “other” showed a narrower range, guessing fake versus real images with 55 to 65 percent accuracy.

Participants were then grouped by age: those aged 18 to 24 had an accuracy rate of 62 percent, and accuracy declined as participants grew older, dropping to just 53 percent for those aged 60 to 64.

The study said this research is important because “deepfakes have become more sophisticated and easier to create” in recent years, “raising concerns about their potential impact on society.”

The study comes as AI-generated images, or deepfakes, are becoming more prevalent and realistic, affecting not only celebrities but also everyday people, including teenagers.

For years, celebrities have been targets of deepfakes: fake Scarlett Johansson sex videos appeared online in 2018, and two years later actor Tom Hanks was the target of AI-generated images.

Then, in January of this year, pop star Taylor Swift was the target of fake pornographic images that went viral online and garnered 47 million views on X before they were removed.

Deepfakes also emerged at a New Jersey high school when a teenage boy shared fake pornographic photos of his female classmates.

“Disinformation is not new, but its tools have been constantly changing and evolving,” Pocol said.

“It can reach a point where people, no matter how trained they are, still have difficulty differentiating real images from fake ones.

“That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”

ANSWERS:
