
CEO of world’s largest advertising company victim of deepfake scam


The head of the world’s largest advertising group was the target of an elaborate deepfake scam involving an artificial intelligence voice clone. WPP CEO Mark Read detailed the attempted fraud in a recent email to leadership, warning others at the company to be on the lookout for calls purporting to come from senior executives.

The scammers created a WhatsApp account with a publicly available image of Read and used it to schedule a Microsoft Teams meeting that appeared to be between him and another senior WPP executive, according to the email obtained by The Guardian. During the meeting, the impostors used a voice clone of the executive as well as YouTube footage of them, while impersonating Read off-camera through the meeting chat window. The scam, which was unsuccessful, targeted an “agency leader”, asking them to set up a new business in an attempt to solicit money and personal details.


“Fortunately, the attackers were not successful,” Read wrote in the email. “We all need to be on the lookout for techniques that go beyond emails to take advantage of virtual meetings, artificial intelligence and deepfakes.”

A WPP spokesperson confirmed in a statement that the phishing attempt was unsuccessful: “Thanks to the vigilance of our people, including the executive in question, the incident was avoided.” WPP did not respond to questions about when the attack took place or which executives other than Read were involved.

Once a concern primarily related to online harassment, pornography, and political misinformation, deepfake attacks in the business world have increased over the last year. AI voice clones have fooled banks, defrauded financial companies of millions of dollars, and put cybersecurity departments on alert. In one high-profile example, an executive at the defunct digital media company Ozy pleaded guilty to fraud and identity theft after it was reported that he had used voice spoofing software to impersonate a YouTube executive in an attempt to trick Goldman Sachs into investing $40 million in 2021.

The fraud attempt at WPP appeared to use generative AI for voice cloning, but it also relied on simpler techniques, such as taking a publicly available image and using it as a contact display picture. The attack is representative of the many tools fraudsters now have at their disposal to imitate legitimate corporate communications and impersonate executives.

“We have seen increasing sophistication in cyberattacks on our colleagues, and in those targeting senior leaders in particular,” Read said in the email.

Read’s email listed a number of red flags to watch out for, including requests for passports, money transfers and any mention of a “secret acquisition, transaction or payment that no one else knows about.”

“Just because the account has my photo doesn’t mean it’s me,” Read said in the email.

WPP, a publicly traded company with a market capitalization of around $11.3 billion, has also stated on its website that it has been dealing with fake sites using its brand and that it is working with the relevant authorities to stop the fraud.

“Please be aware that the name of WPP and its agencies have been used fraudulently by third parties (often communicating via messaging services) on unofficial websites and applications,” reads a pop-up message on the contact page of the company’s website.

Many companies are grappling with the rise of generative AI, directing resources toward the technology while also confronting its potential harms. WPP announced last year that it was partnering with chipmaker Nvidia to create ads with generative AI, touting it as a game-changer for the industry.

“Generative AI is changing the world of marketing at incredible speed. This new technology will transform the way brands create content for commercial use,” Read said in a statement last May.

In recent years, low-cost deepfake audio technology has become widely available and much more convincing. Some AI models can generate realistic imitations of a person’s voice using just a few minutes of audio, which is easily obtained from public figures, allowing scammers to create manipulated recordings of almost anyone.

The rise of deepfake audio has targeted political candidates around the world, but it has also reached other, less prominent targets. The principal of a school in Baltimore was placed on leave this year over audio recordings that appeared to capture him making racist and antisemitic comments, only for the recordings to turn out to be a deepfake perpetrated by one of his colleagues. Robocalls and chatbots have impersonated Joe Biden and former presidential candidate Dean Phillips.
