The fight against deepfakes is heating up: researchers unveil a new tool that identifies unnatural movements revealing whether a video has been manipulated
- A new tool could help combat disinformation spread by 'deepfaked' videos
- The algorithm was able to identify a fake video with 92 percent accuracy
- The AI builds a 'soft biometric' profile from a person's unique face and body movements
- Attention on deepfakes has exploded amid fears of misinformation
The fight against videos altered using artificial intelligence just got a new ally.
According to researchers at UC Berkeley and the University of Southern California, a new algorithm can help determine whether a video has been manipulated through a process known as 'deepfaking'.
Counterintuitively, the tool scientists say will aid their crusade against falsified videos is the very technology that creates those videos in the first place: artificial intelligence.
The fight against videos altered using artificial intelligence just got a new ally. Pictured is a still from a deepfake video in which the face of Steve Buscemi is superimposed on the body of Jennifer Lawrence
WHAT IS A DEEPFAKE VIDEO?
Deepfakes are so called because they use deep learning, a form of artificial intelligence, to make fake videos.
They are made by feeding a computer an algorithm, or set of instructions, along with many images and audio recordings of the target person.
The computer program then learns to imitate the person's facial expressions, mannerisms, voice and inflections.
With enough video and audio of someone, a faked video of the person can be combined with faked audio to make them appear to say whatever you want.
Scientists say they've trained their own AI to scrutinise a subject and identify traits and mannerisms, including the subtle way someone tilts their head or moves their mouth, in order to develop a 'soft biometric' profile.
According to the researchers, their AI has shown initial success toward its goal, identifying deepfake videos with an accuracy of 92 percent.
The algorithm was even able to identify fake videos whose image quality had been degraded by heavy compression.
In future, the experts say, they plan to train the algorithm to assess a subject's speech signature as an additional layer of protection.
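The general idea behind a 'soft biometric' check can be illustrated with a minimal sketch: build a profile of a person's characteristic movement statistics from genuine footage, then flag clips whose statistics deviate from it. The feature names (head tilt, mouth openness), the cosine-similarity comparison and the threshold below are all illustrative assumptions, not details from the researchers' actual system.

```python
# Illustrative sketch of a 'soft biometric' deepfake check.
# Assumption: each video frame has already been reduced to a few
# numeric movement features (e.g. by a facial-landmark tracker).
from math import sqrt

def movement_profile(frames):
    """Average each movement feature across the frames of a clip."""
    n = len(frames)
    return {k: sum(f[k] for f in frames) / n for k in frames[0]}

def similarity(profile_a, profile_b):
    """Cosine similarity between two feature profiles."""
    dot = sum(profile_a[k] * profile_b[k] for k in profile_a)
    norm_a = sqrt(sum(v * v for v in profile_a.values()))
    norm_b = sqrt(sum(v * v for v in profile_b.values()))
    return dot / (norm_a * norm_b)

def looks_genuine(reference, clip_frames, threshold=0.9):
    """Flag a clip as suspect if its movement statistics stray
    too far from the person's reference profile."""
    return similarity(reference, movement_profile(clip_frames)) >= threshold
```

A real detector would use many more features and a learned classifier rather than a fixed threshold, but the principle is the same: a forger can copy a face far more easily than a person's habitual movements.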
The use of algorithms to fake and manipulate video has received increasing attention as skeptics warn that the technology could be used by bad actors to spread misinformation and even influence political campaigns.
In June, members of the US House of Representatives held an unprecedented hearing on the technology amid fears it could threaten national security.
Deepfake AI has already been used to create digitally altered videos of world leaders, including former President Barack Obama and Russian President Vladimir Putin.
While regulators and technologists try to get to grips with the problem, the algorithms capable of faking videos appear to be growing ever more sophisticated.
In a recent example, researchers from the Samsung AI Center in Moscow demonstrated an algorithm that can produce videos from just one image, as opposed to the large datasets that AI of its kind usually requires.
Deepfakes have drawn particular attention ahead of the 2020 US presidential election, as legislators worry the technology could be used to influence its outcome.
Their system brought famous faces to life, including those of surrealist painter Salvador Dali and actress Marilyn Monroe, from a single photo.
Similarly, a startup named Dessa, staffed by former IBM and Microsoft employees, recently released audio clips demonstrating machine-learning software that mimics the voice of popular podcaster Joe Rogan to an extent that is almost indistinguishable from the real thing.
Researchers in California are not the only ones working to combat the misuse of deepfake technology.
Adobe scientists also unveiled a tool last month that they believe is capable of detecting edits to images that might otherwise go unnoticed by the naked eye.