
‘It’s not me, it’s just my face’: the models who found their portraits had been used in AI propaganda

‘Have I done something terrible?’ The real actors behind the AI deepfakes that support dictatorships – video

The well-groomed young man, dressed in a crisp blue shirt and speaking with a soft American accent, seems an unlikely supporter of the military junta leader of the West African state of Burkina Faso.

“We must support… President Ibrahim Traoré… Homeland or death we will win!” he says in a video that began circulating in early 2023 on Telegram. It was just a few months after the dictator came to power through a military coup.

Other videos emerged around the same time, featuring different people who looked similarly professional and repeated exactly the same script in front of the Burkina Faso flag.

A few days later, in a verified account on X, the same young man, in the same blue shirt, claimed to be Archie, the CEO of a new cryptocurrency platform.

These videos are fake. They were generated with artificial intelligence (AI) developed by a startup based in east London. The company Synthesia has created a stir in an industry competing to perfect realistic AI videos. Investors have poured cash in, catapulting it to “unicorn” status, a label for a private company valued at more than $1 billion.

Synthesia’s technology is aimed at clients looking to create marketing materials or internal presentations, and any deepfake is a violation of their terms of use. But this means little to the models whose images are behind the digital “puppets” that were used in propaganda videos such as those apparently supporting the dictator of Burkina Faso. The Guardian located five of them.

“I’m in shock. There are no words right now. I’ve been in the [creative] industry for over 20 years and I’ve never felt so violated and vulnerable,” said London-based creative director Mark Torres, who appears in the blue shirt in the deepfake videos.

“I don’t want anyone to see me like that. The mere fact that my image is out there means it could be made to say anything: promoting a military regime in a country I didn’t know existed. People will think I’m involved in the coup,” Torres added after The Guardian showed him the video for the first time.

The shoot

In the summer of 2022, Connor Yeates received a call from his agent offering him the opportunity to be one of the first AI models for a startup.

Yeates had never heard of the company, but he had just moved to London and was sleeping on a friend’s sofa. The offer – almost £4,000 for a day of filming and use of the footage over a three-year period – seemed like a “good deal”.

“I’ve been modeling since college and that’s been my main income since I finished. Then I moved to London to start doing stand-up,” said Yeates, who grew up in Bath.

Filming took place at Synthesia’s studio in east London. First, they took him to get his hair and makeup done. Half an hour later, he entered the recording room where a small team was waiting for him.

Yeates was asked to read lines while looking directly into the camera and wearing a variety of costumes: a lab coat, a high-visibility vest and construction helmet, and a corporate suit.

“There’s a teleprompter in front of you with lines, and you say this so they can pick up the gestures and replicate the movements. They would say be more enthusiastic, smile, frown and get angry,” Yeates said.

The whole thing lasted three hours. Several days later, he received a contract and the link to his AI avatar.

“They paid on time. I don’t have rich parents and I needed the money,” said Yeates, who didn’t think much about it afterwards.

Like Torres’s, Yeates’s image was used in propaganda for Burkina Faso’s current leader.

A Synthesia spokesperson said the company had banned the accounts that created the videos in 2023, strengthened its content review processes and “hired more content moderators and improved our moderation capabilities and automated systems to better detect and prevent the misuse of our technology”.

But neither Torres nor Yeates were aware of the videos until The Guardian contacted them a few months ago.


The ‘unicorn’

Synthesia was founded in 2017 by Victor Riparbelli, Steffen Tjerrild and two academics from London and Munich.

A year later, it launched a dubbing tool that allowed production companies to translate an actor’s speech and lip-sync automatically using artificial intelligence.

It was shown on a BBC program in which a news presenter who only spoke English was made to appear as if he magically spoke Mandarin, Hindi and Spanish.

What earned the company its coveted “unicorn” status was a pivot toward the mass-market digital avatar product available today. This allows a business or individual to create a presenter-led video in minutes for as little as £23 a month. There are dozens of characters to choose from, offering different genders, ages, ethnicities, and appearances. Once selected, digital puppets can be placed in almost any environment and given a script, which they can then read in over 120 languages and accents.

Synthesia now has a dominant market share and counts Ernst & Young (EY), Zoom, Xerox and Microsoft among its clients.

The product’s advances led Time magazine to include Riparbelli among the 100 most influential people in AI in September.

But the technology has also been used to create videos linked to hostile states, including Russia, China and others, to spread misinformation and disinformation. Intelligence sources suggested to The Guardian that there was a high probability that the Burkina Faso videos that circulated in 2023 had also been created by Russian state actors.

The personal impact

Around the same time that the Burkina Faso videos began circulating online, two pro-Venezuela videos featuring fake news segments hosted by Synthesia avatars also appeared on YouTube and Facebook. In one of them, a blonde presenter in a white shirt condemned “Western media claims” about economic instability and poverty, instead presenting a very misleading portrait of the country’s financial situation.

Dan Dewhirst, a London-based actor and Synthesia model whose image was used in the video, told The Guardian: “Countless people contacted me about it… But there were probably other people who saw it and didn’t say anything, or silently judged me for it. I may have lost clients. But it’s not me, it’s just my face. But they’ll think I agree.

“I was furious. It was really damaging to my mental health. [It caused] an overwhelming amount of anxiety,” he added.

Do you have information about this story? Email manisha.ganguly@theguardian.com or (using a non-work phone) use Signal or WhatsApp to send a message to +44 7721 857348.

The Synthesia spokesperson said the company had been in contact with some of the actors whose images had been used. “We sincerely regret any negative personal or professional impact these historic incidents have had on those you have spoken to,” he said.

But once they circulate, the damage of deepfakes is difficult to undo.

Dewhirst said seeing his face used to spread propaganda was the worst-case scenario, adding: “Our brains often catastrophise when we worry. But then to see that worry come true… it was horrible.”

The ‘roller coaster’

Last year, more than 100,000 unionized actors and artists in the United States went on strike to protest the use of AI in the creative arts. The strike was called off last November after the studios agreed to safeguards in contracts, such as informed consent before digital replication and fair compensation for any such use. Video game artists continue to strike over the same issue.

Last month, a bipartisan bill, the NO FAKES Act, was introduced in the United States; it aims to hold companies and individuals liable for damages for infringements related to digital replicas.

However, there are virtually no practical redress mechanisms for the artists themselves, apart from AI-generated sexual content.

“These AI companies are putting people on a really dangerous rollercoaster,” said Kelsey Farish, a London-based media and entertainment lawyer specializing in generative AI and intellectual property. “And guess what? People keep getting on this rollercoaster, and now people are starting to get hurt.”

Under the GDPR, models can technically request that Synthesia delete their data, including their portrait and image. In practice this is very difficult.

A former Synthesia employee, who wanted to remain anonymous for fear of retaliation, explained that the AI could not “unlearn” or erase what it may have learned from a model’s body language. To do so would require replacing the entire AI model.

The Synthesia spokesperson said: “Many of the actors we work with return to collaborate with us for new shoots… At the beginning of our collaboration, we explain our terms of service and how our technology works, so they know what the platform can do and the safeguards we have in place.”

It said the company did not allow “the use of stock avatars for political content, including content that is factually accurate but may create polarization,” and that its policies were designed to prevent its avatars from being used for “manipulation, deceptive practices, impersonations and false associations.”

“While our processes and systems may not be perfect, our founders are committed to continually improving them.”

When The Guardian tested Synthesia’s technology with a variety of disinformation scripts, the platform blocked attempts to use one of its stock avatars, but it was possible to recreate the Burkina Faso propaganda video with a personally created avatar and download it, neither of which should be allowed under Synthesia’s policies. Synthesia said this was not a violation of its terms because it respected the right to express a personal political stance, but it later blocked the account.

The Guardian was also able to create and download clips of an audio-only avatar saying “heil Hitler” in multiple languages, and another audio clip saying “Kamala Harris rigged the election” in an American accent.

Synthesia removed the free AI audio service after being contacted by The Guardian and said the technology behind the product was a third-party service.

The aftermath

The experience of knowing his likeness was used in a propaganda video has left Torres with a deep feeling of betrayal: “Knowing that this company I trusted with my image will get away with it makes me very angry. This could potentially cost lives. It could cost me my life crossing a border, at immigration.”

Torres was invited to another shoot with Synthesia this year, but declined. His contract ends in a few months, when his Synthesia avatar will be removed. But what will happen to his avatar in the Burkina Faso video is not clear, even to him.

“Now I realize why it is so dangerous to give them our faces to use. It’s a shame we were part of this,” he said.

The propaganda video featuring Dewhirst has since been removed by YouTube, but remains available on Facebook.

Torres and Yeates still appear on Synthesia’s website, in a video advertising the company.
