Google cracks down on explicit deepfakes

A few weeks ago, a Google search for “Jennifer Aniston deepfake nudes” yielded at least seven results purportedly showing explicit images of the actress generated by artificial intelligence. They have now disappeared.

Google product manager Emma Higham says new tweaks to how the company ranks results, rolled out this year, have already cut exposure to fake explicit images by more than 70 percent in searches seeking such content about a specific person. Where problematic results might previously have appeared, Google's algorithms now aim to promote news articles and other non-explicit content. A search for Aniston now surfaces articles such as "Why Taylor Swift's AI-powered deepfake porn poses a threat" and links like a warning from Ohio's attorney general about "fake celebrity endorsement scams" targeting consumers.

“With these changes, people can read about the impact deepfakes have on society, rather than seeing pages with actual non-consensual fake images,” Higham wrote in a company blog post on Wednesday.

The ranking change follows a WIRED investigation this month that revealed that Google management in recent years has rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate depictions of people being disseminated online without their permission.

While Google has made it easy to request the removal of unwanted explicit content, victims and their advocates have urged more proactive measures. The company, however, has tried to avoid becoming an over-regulator of the internet or harming access to legitimate pornography. At the time, a Google spokesperson said in response that multiple teams were working diligently to strengthen safeguards against what the company calls non-consensual explicit imagery (NCEI).

According to victim advocates, the increasing availability of AI-powered image generators, including some with few restrictions on their use, has led to a rise in NCEI. The tools have made it easy for almost anyone to create faked explicit images of any individual, whether a high school classmate or a megacelebrity.

In March, a WIRED analysis found that Google had received more than 13,000 requests to remove links to a dozen of the most popular websites hosting explicit deepfakes. Google removed results in about 82 percent of the cases.

As part of Google’s new crackdown, Higham says the company will begin applying three measures, originally built to limit the discoverability of real but unwanted explicit images, to synthetic ones as well. After Google accepts a removal request for a sexual deepfake, it will attempt to keep duplicates of the image out of results. It will also filter explicit images from results for queries similar to those cited in the removal request. And finally, websites subject to “a high volume” of successful removal requests will be demoted in search results.

“These efforts are designed to give people additional peace of mind, especially if they are concerned about similar content appearing about them in the future,” Higham wrote.

Google has acknowledged that the measures don’t work perfectly, and former employees and victim advocates have said they could go much further. The search engine prominently warns people in the United States who search for naked images of children that such content is illegal. The warning’s effectiveness is unclear, but it is a potential deterrent that advocates support. Yet no similar warning appears for searches seeking sexual deepfakes of adults, despite laws in some jurisdictions banning their distribution. A Google spokesperson has confirmed that this will not change.
