
Real-time deepfake video scams are here. This tool tries to fight them


This announcement is not the first time a technology company has shared plans to help detect deepfakes in real time. In 2022, Intel released its FakeCatcher tool, which is designed to analyze changes in a face’s blood flow to determine whether a video participant is real. Intel’s tool is also not publicly available.
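For a rough sense of how blood-flow analysis of this kind can work, the sketch below tracks the average green-channel intensity of a detected face region over time and checks how much of the signal’s power falls in the human heart-rate band. It is a minimal illustration of the general remote-photoplethysmography idea, not Intel’s FakeCatcher implementation; the face detector, region of interest, and frequency band are simplifying assumptions chosen for demonstration.

```python
# Illustrative sketch only: estimate how "pulse-like" a face video is by
# tracking the average green-channel intensity of the face region over time
# (a crude form of remote photoplethysmography). Not Intel's FakeCatcher code;
# the detector, ROI, and frequency band are simplifying assumptions.
import cv2
import numpy as np
from scipy.signal import periodogram

def pulse_band_ratio(video_path: str) -> float:
    """Fraction of spectral power in the human heart-rate band (0.7-4 Hz)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata is missing
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        greens.append(frame[y:y + h, x:x + w, 1].mean())  # mean green in face ROI
    cap.release()
    if len(greens) < int(5 * fps):            # need at least ~5 seconds of face
        return 0.0
    signal = np.asarray(greens, dtype=float)
    signal -= signal.mean()
    freqs, power = periodogram(signal, fs=fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # roughly 42-240 beats per minute
    return float(power[band].sum() / (power.sum() + 1e-9))
```

A live face would typically show a concentrated spectral peak in that band, while synthesized skin may not; any decision threshold on the returned score would have to be tuned on real data.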

Academic researchers are also studying different approaches to address this specific type of deepfake threat. “These systems are becoming sophisticated enough to create deepfakes. We need even less data now,” says Govind Mittal, a doctoral candidate in computer science at New York University. “If I have 10 photos of myself on Instagram, someone can take them. They can target normal people.”

Real-time deepfakes are no longer limited to billionaires, public figures, or those with a wide online presence. Mittal’s research at New York University, with professors Chinmay Hegde and Nasir Memon, proposes a potential challenge-based approach to blocking AI bots from video calls, where participants would have to pass a sort of video CAPTCHA test before joining.
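To make that idea concrete, here is a minimal sketch of what such a gate could look like: the system issues a random physical challenge and only admits the caller if a verifiable response arrives before a deadline. The challenge list, the `verify_action` checker, and the timeout are hypothetical placeholders for illustration, not the NYU team’s actual protocol.

```python
# Hypothetical sketch of a challenge-based "video CAPTCHA" gate for joining a call.
# The challenges, timeout, and verify_action() checker are illustrative placeholders,
# not the NYU researchers' published protocol.
import random
import time

CHALLENGES = [
    "turn your head slowly to the left",
    "cover half of your face with your hand",
    "hold up three fingers next to your chin",
]

def verify_action(clip: bytes, challenge: str) -> bool:
    """Placeholder: a real system would run a vision model that checks whether
    the clip shows the requested action without the warping or occlusion
    artifacts that real-time deepfake pipelines tend to produce."""
    raise NotImplementedError

def video_captcha_gate(capture_clip, timeout_s: float = 10.0) -> bool:
    """Issue a random challenge; admit the caller only if a valid response
    is captured before the deadline."""
    challenge = random.choice(CHALLENGES)
    print(f"Before joining, please: {challenge}")
    deadline = time.monotonic() + timeout_s
    clip = capture_clip()                 # caller-supplied function that records video
    if time.monotonic() > deadline:
        return False                      # response arrived too late
    return verify_action(clip, challenge)
```

The design choice mirrors what makes text CAPTCHAs work: the challenge is cheap for a live human to perform but hard for current real-time generators, which tend to glitch on occlusions and rapid pose changes.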

As Reality Defender works to improve the detection accuracy of its models, Colman says access to more data is a critical challenge to overcome, a common refrain from the current crop of AI-focused startups. He’s hopeful that more partnerships will fill these gaps and, without offering specifics, hints that multiple new deals are likely next year. After ElevenLabs was tied to a deepfake voice call impersonating US President Joe Biden, the AI audio startup reached an agreement with Reality Defender to mitigate potential misuse.

What can you do now to protect yourself from video call scams? Much like WIRED’s top tip for avoiding AI voice call fraud, not being overconfident in your ability to spot deepfake videos is key to avoiding being scammed. The technology in this space continues to evolve rapidly, and any telltale signs you rely on today to detect AI deepfakes may not be as reliable with the next updates to the underlying models.

“We don’t ask my 80-year-old mother to flag ransomware in an email,” says Colman, “because she’s not a computer expert.” In the future, if AI detection continues to improve and proves to be reliably accurate, real-time video authentication could become something we take for granted, like the malware scanner quietly humming in the background of your email inbox.
