
‘Time is running out’: can a future of undetectable deepfakes be avoided?

by Elijah

With more than 4,000 shares, 20,000 comments and 100,000 reactions on Facebook, the photo of the older woman sitting behind her homemade 122nd birthday cake has undoubtedly gone viral. “I started decorating cakes when I was five years old,” the caption reads, “and I can’t wait to continue my baking journey.”

The photo is also undoubtedly fake. If the strange candles – which seem to float in mid-air, attached to nothing – or the odd amorphous blobs on the cake in the foreground don’t give it away, then the fact that the celebrant would be almost five years older than the oldest person in the world should.

Fortunately, the stakes for viral super-centenarian cake decorators are low. That’s good, because as generative AI continues to improve, the days of looking for tell-tale signs to spot a fake are almost over. And that has created a race against time: can we think of other ways to detect fakes before the fakes become indistinguishable from reality?

“We are running out of time because we can still do manual detection,” said Mike Speirs of the AI consultancy Faculty, where he leads the company’s work on combating disinformation. “The models are developing at a speed and pace that is, well, technically incredible, and quite alarming.

“There are a variety of manual techniques to spot fake images, from misspelled words to irregularly smooth or wrinkled skin. Hands are a classic, and eyes are also a good sign. But even today it’s time-consuming: it’s not something you can really scale. And time is running out – the models are getting better and better.”

Since 2021, OpenAI has released three versions of its image generator, Dall-E, each radically more capable than the last. Indie competitor Midjourney has released six in the same period, while the free and open source Stable Diffusion model has reached its third version and Google’s Gemini has joined the fray. As the technology has become more powerful, it has also become easier to use: the latest version of Dall-E is built into ChatGPT and Bing, while Google offers its own tools free to users.

Technology companies have begun to respond to the oncoming flood of generated media. The Coalition for Content Provenance and Authenticity, whose members include the BBC, Google, Microsoft and Sony, has set standards for watermarks and labels, and in February OpenAI announced it would adopt them for Dall-E 3. Images generated by the tool now carry a visible label and a machine-readable watermark. At the distribution end, Meta has started adding its own labels to AI-generated content and says it will remove posts that aren’t labeled.
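The machine-readable side of this is concrete enough to sketch. C2PA content credentials are embedded in image files as metadata boxes labelled “c2pa”, so simply scanning a file’s raw bytes for that label hints at whether a manifest is present. The sketch below is a naive illustration, not a real verifier (genuine verification means validating cryptographic signatures with a dedicated C2PA library), and the file name is hypothetical.

```python
from pathlib import Path

def might_have_c2pa_manifest(path: str) -> bool:
    """Naive heuristic: True if the file's bytes contain a 'c2pa' label.

    Presence of the label says nothing about whether the manifest is
    valid or untampered; that requires signature verification.
    """
    data = Path(path).read_bytes()
    return b"c2pa" in data

# Example (hypothetical file name):
# print(might_have_c2pa_manifest("dalle_output.jpg"))
```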

These policies can help tackle some of the most viral forms of disinformation, such as jokes or satire that spread beyond their original context. But they can also create a false sense of security, says Speirs. “As audiences become accustomed to seeing AI-generated images with a watermark on them, does that mean they will implicitly trust images without a watermark?”

That’s a problem, as labeling is by no means universal – and probably won’t be. Big companies like OpenAI might agree to label their creations, but startups like Midjourney don’t have the capacity to dedicate additional engineering time to the problem. And for open source projects such as Stable Diffusion, it is impossible to enforce the watermark, as it is always an option to simply fork the technology and build your own.

And seeing a watermark doesn’t necessarily have the effect you’d like, says Henry Parker, head of government affairs at fact-checking group Logically. The company uses both manual and automatic methods to check content, Parker says, but labeling can only go so far. “If you tell someone they’re watching a deepfake before they even watch it, the social psychology of watching that video is so powerful that they will still refer to it as if it were fact. So the only thing you can do is ask: how can we shorten the time this content is in circulation?”

Ultimately, this will require automatically finding and removing AI-generated content. But that’s difficult, says Parker. “We’ve been trying to do this for five years, and we’re very honest about the fact that we’ve gotten to about 70% in terms of the accuracy we can achieve.” In the short term, it’s an arms race between detection and creation: even image generators with no malicious intent will try to beat the detectors, since the ultimate goal is to create something as true to life as a photograph.


At Logically, Parker thinks the answer is to look around the picture: “How do you actually try to look at the way disinformation actors behave?” That means monitoring conversations across the internet to spot bad actors in the planning stages on sites like 4chan and Reddit, and keeping an eye on the swarming behavior of suspicious accounts co-opted by a state actor. Even then, the problem of false positives is difficult. “Am I looking at a campaign that Russia is running? Or am I looking at a bunch of Taylor Swift fans sharing information about concert tickets?”
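What that swarm-watching might look like in code is easy to sketch, although real systems weigh far more signals (account age, network structure, content similarity). A minimal illustration, with an invented Post structure and made-up thresholds: flag any identical message pushed by many distinct accounts inside a short time window.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def flag_swarms(posts, window_s=600.0, min_accounts=20):
    """Flag texts posted by >= min_accounts distinct accounts within window_s."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[post.text].append(post)
    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        for end in range(len(group)):
            # Slide the window start forward until it spans at most window_s.
            while group[end].timestamp - group[start].timestamp > window_s:
                start += 1
            accounts = {p.account for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

The Taylor Swift problem is visible right in the thresholds: thousands of fans genuinely do post near-identical ticket information within minutes of each other, so a rule like this can only ever be a first-pass filter, not a verdict.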

Others are more optimistic. Ben Colman, CEO of image detection startup Reality Defender, thinks there will always be the possibility of detection, even if the conclusion simply marks something as probably fake rather than ever reaching a definitive verdict. Those signals can be anything from “a filter at higher frequencies indicating too much smoothness” to, for video content, a failure to render the invisible but detectable flush that crosses everyone’s face each time their heart pumps fresh blood around the body.
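The first of those signals can be measured in a few lines of numpy. Generated images often lack the high-frequency detail of real photographs, so one coarse check is the share of an image’s spectral energy above a given frequency radius: an unusually low share, compared with real photos, suggests over-smoothing. The cutoff below is illustrative, not a value from Reality Defender, and in practice this is one weak signal among many.

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of FFT energy above a normalised spatial-frequency radius."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial frequency of each bin, normalised so the spectrum centre is 0.
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# Demo on synthetic noise (a real pipeline would pass a greyscale photo):
rng = np.random.default_rng(0)
print(high_freq_energy_ratio(rng.random((256, 256))))
```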

“Things will continue to evolve on the fake side, but the real side won’t change,” Colman concludes. “We believe we will get closer and closer to a single model that is evergreen.”

Technology is, of course, only part of the solution. If people really believe that a photo of a 122-year-old woman holding a cake she baked herself is real, then it doesn’t take a state-of-the-art image generator to trick them into believing other, more damaging things. But it’s a start.

Join Alex Hern for a Guardian Live online event on AI, deepfakes and elections, on Wednesday 24 April at 8pm BST. Book tickets here
