
You may have seen news reports last week about researchers developing tools that can detect deepfakes with more than 90 percent accuracy. It is reassuring to think that research like this will limit the damage caused by AI-generated fakes: simply run suspect content through a deepfake detector and, bang, the misinformation is gone!


But software that can recognize AI-manipulated videos will only ever be a partial solution to this problem, experts say. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature of the landscape. And while it is debatable whether deepfakes pose a huge danger from a political perspective, they are certainly damaging the lives of women here and now through the spread of fake nudes and pornography.

Hao Li, an expert in computer vision and associate professor at the University of Southern California, tells The Verge that any deepfake detector will only work for a short time. In fact, he says, "at some point it is likely that it will not be possible to detect [AI fakes] at all. So a different kind of approach will need to be put in place to resolve this."

Li should know: he is part of the team that helped design one of the latest deepfake detectors. He and his colleagues built an algorithm capable of detecting AI-edited videos of famous politicians such as Donald Trump and Elizabeth Warren by tracking small facial movements that are unique to each individual.

These markers are known as "soft biometrics," and they are too subtle for AI to imitate, at least for now. They include things like the way Trump pauses before answering a question, or how Warren raises her eyebrows to emphasize a point. The algorithm learns to recognize these movements by studying earlier footage of each individual, and the result is a tool that is at least 92 percent accurate at recognizing a range of deepfakes.
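To make the idea concrete, here is a rough numpy sketch of a soft-biometric check: summarize a clip by the pairwise correlations of its facial-movement signals, then compare that signature against one built from genuine footage of the same person. The input signals, the cosine-similarity test, and the 0.9 cutoff are illustrative assumptions; the actual system trains a classifier on features like these.

import numpy as np

def movement_signature(tracks):
    # tracks: (n_signals, n_frames) array of facial-movement intensities
    # over a clip, e.g. eyebrow raise, lip press, head rotation.
    corr = np.corrcoef(tracks)               # pairwise Pearson correlations
    rows, cols = np.triu_indices_from(corr, k=1)
    return corr[rows, cols]                  # flattened upper triangle

def matches_identity(clip_tracks, reference_signature, cutoff=0.9):
    # Compare the clip's movement signature to one computed from
    # verified footage of the same person.
    sig = movement_signature(clip_tracks)
    cos = sig @ reference_signature / (
        np.linalg.norm(sig) * np.linalg.norm(reference_signature))
    return cos >= cutoff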

Li says, however, that it will not be long before this work is obsolete. As he and his colleagues outline in their paper, deepfake technology is developing with a virus/antivirus dynamic.


One deepfake detection algorithm works by tracking subtle movements in the target's face.

Take blinking. Back in June 2018, researchers found that because deepfake systems were not trained on images of people with their eyes closed, the videos they produced showed unnatural blinking patterns. AI clones did not blink often enough, or sometimes did not blink at all, a tell that could be spotted with a simple algorithm.
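A blink detector along those lines can indeed be only a few lines of code. The sketch below counts blinks using the eye-aspect-ratio trick on per-frame facial landmarks; the 0.2 closed-eye threshold and the five-blinks-per-minute floor are assumed values for illustration, not figures from the 2018 study.

import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks around one eye; the ratio collapses
    # toward zero as the eyelid closes.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps, closed_thresh=0.2):
    # Count closed-to-open transitions in the per-frame ratio series.
    closed = np.asarray(ear_series) < closed_thresh
    reopenings = np.sum(closed[:-1] & ~closed[1:])
    minutes = len(ear_series) / (fps * 60.0)
    return reopenings / minutes

def looks_synthetic(ear_series, fps, min_rate=5.0):
    # People typically blink roughly 15 to 20 times a minute; a clip
    # far below that is suspicious under this heuristic.
    return blinks_per_minute(ear_series, fps) < min_rate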


But what happened next was somewhat predictable. "Shortly after this forensic technique was made public, the next generation of synthesis techniques incorporated blinking into their systems," wrote Li and his colleagues. In other words: goodbye, blink detector.

Ironically, this back-and-forth mimics the technology at the heart of deepfakes themselves: the generative adversarial network, or GAN. This is a type of machine-learning system made up of two neural networks pitted against each other. One network generates the fakes and the other tries to detect them, with the output bouncing back and forth and improving with every salvo. The same dynamic plays out across the wider research landscape, where every new deepfake detection paper hands deepfake makers a new challenge to overcome.
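A toy GAN makes that dynamic concrete. The generic PyTorch sketch below, assumed for illustration rather than drawn from any actual deepfake system, trains a generator to mimic a one-dimensional Gaussian while a discriminator learns to call out its samples; each network's loss drives the other's next improvement.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 3        # "real" data drawn from N(3, 2)
    fake = G(torch.randn(64, 8))             # generated samples

    # Discriminator step: push real toward label 1, fake toward label 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()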

Delip Rao, VP of research at the AI Foundation, agrees that the challenge is much bigger than simple detection, and says these papers should be put into perspective.

One deepfake detection algorithm revealed last week boasted 97 percent accuracy, for example, but as Rao points out, the remaining 3 percent could still be harmful at the scale of internet platforms. "Suppose Facebook implements that [algorithm], and Facebook receives around 350 million images a day; that's a LOT of misidentified images," says Rao. "With every false positive of the model, you compromise the trust of the users."
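Rao's arithmetic is easy to check. Assuming the reported figures, a 97-percent-accurate filter applied to 350 million daily uploads mislabels on the order of ten million images every day:

daily_uploads = 350_000_000
error_rate = 1.0 - 0.97                  # the 3 percent the model gets wrong
misidentified = daily_uploads * error_rate
print(f"{misidentified:,.0f} misidentified images per day")   # 10,500,000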

It is incredibly important that we develop technology that can spot fakes, Rao says, but the bigger challenge is making these methods useful. Social platforms still have not clearly defined their policies on deepfakes, as Facebook's recent struggle with a faked video of Mark Zuckerberg demonstrated, and an outright ban would be unwise.

"At least the video & # 39; s should be labeled if something is being manipulated based on automated systems," says Li. He says, however, it is only a matter of time before the counterfeits cannot be detected. "Video & # 39; s are just pixels in the end."

Rao and his colleagues at the AI Foundation are working on approaches that incorporate human judgment, but others argue that verifying real videos and images should be the starting point, rather than spotting fakes. To that end, some have developed programs that automatically watermark and identify photos taken on cameras, while others have suggested using blockchain technology to verify content from trusted sources.
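Here is a minimal sketch of that provenance idea, with an assumed per-device secret and an HMAC standing in for what a real system would do with public-key signatures anchored in trusted hardware or a ledger: sign the image bytes at capture time, then verify the signature before display.

import hashlib
import hmac

CAMERA_KEY = b"per-device-secret"        # assumed key provisioned at manufacture

def sign_capture(image_bytes):
    # Record this digest alongside the photo at capture time.
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def is_authentic(image_bytes, signature):
    # Any edit to the pixels changes the digest, so a tampered file
    # no longer verifies against the capture-time signature.
    return hmac.compare_digest(sign_capture(image_bytes), signature)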


None of these techniques will "solve" the problem of deepfakes, not while the internet exists in its current form. As we have seen with fake news, just because a piece of content can be easily debunked does not mean it will not be clicked, read, and shared online.

More than anything, the dynamics that govern the web (frictionless sharing and revenue generation) mean that deepfakes will always find an audience.

Take the recent news of a developer who created an app that allows anyone to generate fake nude photos of clothed women. The resulting images are obvious fabrications: look closely and the skin and flesh appear blurred and indistinct. But they are convincing enough at a glance, and sometimes that is all that is needed.