The technology required to doctor images and videos is developing rapidly and becoming more user-friendly, experts warn.
Government agencies and academics are racing to fight so-called deepfakes, amid the growing threat they pose to societies.
Advances in artificial intelligence could soon make creating convincing fake audio and video relatively easy, which the Pentagon fears will be used to sow discord ahead of the US presidential election.
Deepfakes combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network.
The video that triggered concern last month was a clip of Nancy Pelosi, the Speaker of the US House of Representatives. It was simply slowed to about 75 percent of its original speed to make her appear drunk or to be slurring her words
HOW DOES DEEPNUDE WORK?
It is a downloadable offline app that works on Windows and Linux.
The software is believed to be based on pix2pix, an open-source algorithm developed by researchers from the University of California, Berkeley in 2017.
Pix2pix uses generative adversarial networks (GANs), which work by training an algorithm on a huge dataset of images.
An image is entered into the software and then a nude version is generated at the touch of a button.
They are used to produce or modify video content so that it depicts something that never actually happened.
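The adversarial setup described above pits two models against each other: a generator that produces fakes and a discriminator that tries to tell real from fake. As a minimal, hypothetical illustration (not DeepNude's or pix2pix's actual code), the standard GAN objectives can be sketched in a few lines of NumPy:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: the discriminator is rewarded for scoring
    real images close to 1 and generated images close to 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """The generator wins when the discriminator scores its fakes as real."""
    return -np.mean(np.log(d_fake))

# Discriminator outputs are probabilities that an image is real.
d_real = np.array([0.9, 0.8])   # scores on genuine photos
d_fake = np.array([0.2, 0.3])   # scores on generated photos

print(discriminator_loss(d_real, d_fake))  # low: discriminator is winning
print(generator_loss(d_fake))              # high: generator must improve
```

Training alternates between minimizing these two losses; as the generator improves, the discriminator's scores on fakes rise toward those on real images, which is why the resulting forgeries become hard to detect.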
They started in porn – there is a thriving online market for clips in which celebrities' faces are superimposed onto the bodies of porn actors – but so-called revenge porn, the malicious sharing of explicit photos or videos of a person, is also a huge problem.
The video that triggered concern last month was a clip of Nancy Pelosi, the Speaker of the US House of Representatives.

It was simply slowed to about 75 percent of its original speed to make her appear drunk or to be slurring her words.
The footage was shared millions of times across every platform, including by Rudy Giuliani – Donald Trump's attorney and the former mayor of New York.
The danger is that deepfakes can give the impression that someone said or did something they never did, which has the potential to take the disinformation war to a whole new level.
The threat is spreading because smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters.
This leaves both the companies running these platforms and governments uncertain about how to tackle the problem.
'While synthetically generated videos are still easily detectable by most people, that window is closing rapidly,' Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup developing image-verification technology, told the Wall Street Journal.

'I would predict we will see visually undetectable deepfakes in less than 12 months,' McGregor said.

'Society is going to start distrusting every piece of content they see.'
McGregor's company Truepic is working with Qualcomm Inc. – the biggest supplier of chips for mobile phones – to add its technology to mobile phone hardware.
The technology automatically stamps photos and videos with data such as time and location when they are taken, so they can be verified later.
Truepic also offers a free app that consumers can use to take verified photos on their smartphones.
The goal is to create a system similar to Twitter's method of verifying accounts, but for photos and videos, Roy Azoulay, the founder and CEO of Serelay, a UK-based startup that also develops ways to mark images as authentic when they are taken, told the WSJ.
When a photo or video is taken, Serelay can capture data such as where the camera was positioned relative to nearby cell towers or GPS satellites.
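The capture-time verification that Truepic and Serelay describe can be illustrated in miniature: bind the image bytes to their capture metadata with a keyed signature at the moment the photo is taken, then re-check that signature later. This is a hypothetical sketch using a shared-secret HMAC, not either company's actual scheme; the key name and functions are invented for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical: in practice such a key would live in secure phone hardware.
DEVICE_KEY = b"secret-key-provisioned-in-hardware"

def stamp(image_bytes, timestamp, location):
    """Compute a tag over the image and its capture metadata."""
    meta = json.dumps({"ts": timestamp, "loc": location}, sort_keys=True)
    return hmac.new(DEVICE_KEY, image_bytes + meta.encode(),
                    hashlib.sha256).hexdigest()

def verify(image_bytes, timestamp, location, tag):
    """True only if neither the pixels nor the metadata were altered."""
    return hmac.compare_digest(stamp(image_bytes, timestamp, location), tag)

photo = b"\x89PNG...raw capture bytes..."
tag = stamp(photo, "2019-07-01T12:00:00Z", "32.7157,-117.1611")
print(verify(photo, "2019-07-01T12:00:00Z", "32.7157,-117.1611", tag))          # True
print(verify(photo + b"edit", "2019-07-01T12:00:00Z", "32.7157,-117.1611", tag))  # False
```

Any later edit to the pixels, the timestamp, or the location changes the tag, so a platform holding the original tag can detect tampering.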
In the meantime, the US Department of Defense is investigating forensic technology that can be used to detect whether a photo or video has been manipulated after it has been taken.
The forensic approach looks for inconsistencies in images and videos – such as inconsistent exposure – that can serve as clues to whether the footage has been manipulated.
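As a toy illustration of that forensic idea (not the Defense Department's actual tooling), one can compare brightness statistics across regions of an image and flag any region that deviates sharply from the rest – a crude proxy for the inconsistent exposure a spliced-in element often shows:

```python
import numpy as np

def flag_inconsistent_patches(image, patch=8, z_thresh=3.0):
    """Split a grayscale image into patches and flag those whose mean
    brightness is a statistical outlier versus the rest of the image."""
    h, w = image.shape
    coords, means = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            coords.append((y, x))
            means.append(image[y:y + patch, x:x + patch].mean())
    means = np.array(means)
    z = (means - means.mean()) / (means.std() + 1e-9)
    return [c for c, score in zip(coords, z) if abs(score) > z_thresh]

# Synthetic example: a uniformly lit image with one pasted-in bright region.
img = np.full((64, 64), 100.0)
img[24:32, 24:32] = 250.0  # spliced-in patch with different exposure
print(flag_inconsistent_patches(img))  # [(24, 24)]
```

Real forensic systems use far richer cues (lighting direction, sensor noise patterns, compression artifacts), but the principle is the same: manipulated regions tend to be statistically out of step with the rest of the frame.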
Last month, Facebook was forced to evaluate how it will handle deepfake videos, the hyper-realistic hoax clips made by artificial intelligence and high-tech tools.
CEO Mark Zuckerberg suggested that it might be logical to treat such videos differently from other forms of misinformation, such as fake news.
The fact that these videos are made so easily and then widely shared via social media platforms does not bode well for 2020, said Hany Farid, an expert in digital forensics at the University of California, Berkeley.
His comments about the scourge of deepfakes came as Facebook defended its decision to keep the doctored clip of House Speaker Nancy Pelosi on its site.
Facebook has long maintained that it should not arbitrate between what is and is not true, instead placing such judgments in the hands of external fact-checkers.
The recently doctored video of House Speaker Nancy Pelosi, which made her sound as if she were slurring her words, does not meet the definition of a deepfake and stayed on the site.
Facebook refused to remove the Pelosi video, choosing instead to 'downgrade' it in an effort to minimize its spread.
'The clock is ticking,' Mr Farid said. 'The Nancy Pelosi video was a canary in a coal mine.'
Social media companies do not have clear policies prohibiting fake videos, partly because they do not want to be in the position of deciding whether something is satire or is meant to mislead people – or both.
Doing so could also open them up to accusations of censorship or political bias.