
How China uses AI news anchors to spread its propaganda


The news anchor has an uncanny air as he delivers a partisan, pejorative message in Mandarin: Taiwan’s outgoing president, Tsai Ing-wen, is as effective as limp water spinach, her period in office plagued by poor economic performance, social problems and protests.

“Water spinach looks at water spinach. It turns out that water spinach is not just a name,” says the presenter, in an extended metaphor that dubs Tsai “Hollow Tsai”, a play on words related to the Mandarin term for water spinach.

AI-generated presenter compares Taiwan president to water spinach – video

This is not a conventional television journalist, although the lack of impartiality is no longer a surprise. The anchor is generated by an artificial intelligence program and the segment attempts, however clumsily, to influence Taiwan’s presidential election.

The video’s source and creator are unknown, but the clip is designed to sow doubt among voters about politicians who want Taiwan to keep its distance from China, which claims the self-ruled island as part of its territory. It is the latest example of a subgenre of AI-generated disinformation: the deepfake news anchor or TV host.

These avatars are proliferating on social media, spreading state-backed propaganda. Experts say this type of video will continue to spread as the technology becomes more accessible.

“It doesn’t have to be perfect,” said Tyler Williams, director of research at Graphika, a disinformation research firm. “If a user is just scrolling through X or TikTok, they don’t pick up on little nuances on a smaller screen.”

Beijing has already experimented with AI-generated news anchors. In 2018, the state news agency Xinhua unveiled Qiu Hao, a digital news anchor who promised to bring viewers the news “24 hours a day, 365 days a year”. Although the Chinese public is generally receptive to digital avatars in the media, Qiu Hao failed to catch on.

China is at the forefront of the disinformation element of this trend. Last year, pro-China bot accounts on Facebook and X distributed AI-generated deepfake videos of news anchors representing a fictional broadcaster called Wolf News. In one video, the US government was accused of failing to address gun violence, while another highlighted China’s role at an international summit.

In a report published in April, Microsoft said Chinese state-backed cyber groups had targeted the Taiwanese election with AI-generated disinformation, including the use of fake news anchors and television-style presenters. In one clip cited by Microsoft, the AI-generated anchor made unsubstantiated claims about the private life of the ultimately successful pro-sovereignty candidate, Lai Ching-te, alleging that he had fathered children out of wedlock.

Microsoft said the news anchors were created by the video editing tool CapCut, developed by the Chinese company ByteDance, which owns TikTok.

Clint Watts, general manager of Microsoft’s threat analysis center, points to China’s official use of synthetic news anchors in its domestic media market, which has also allowed the country to refine the format. It has now become a tool for disinformation, although so far there has been little discernible impact.

“The Chinese are much more focused on trying to introduce AI into their systems (propaganda, disinformation), and they did it very quickly. They are trying everything. It’s not particularly effective,” Watts said.

Third-party vendors like CapCut offer the newscaster format as a template, making it easy to adapt and produce in high volume.

There are also clips showing avatars acting like a cross between a professional TV presenter and an influencer speaking directly to the camera. A video produced by a Chinese state-backed group called Storm-1376, also known as Spamouflage, shows a blonde, AI-generated host alleging that the United States and India are secretly selling weapons to the Myanmar military.

AI-generated presenter produced by Chinese state-backed Storm-1376 group – video

The overall effect is far from convincing. Although the presenter appears realistic, the video is undermined by a stiff voice that is clearly computer-generated. Other examples uncovered by NewsGuard, an organization that monitors misinformation and disinformation, show a TikTok account linked to Spamouflage using AI-generated avatars to comment on US news, such as food costs and gas prices. One video shows an avatar with a computer-generated voice discussing Walmart prices under the caption: “Is Walmart lying to you about the weight of its meat?”

NewsGuard said the avatar videos were part of a pro-China network that was “expanding” ahead of the US presidential election. It noted 167 accounts created since last year that were linked to Spamouflage.

Other nations have experimented with deepfake anchors. Iranian state-backed hackers recently disrupted television streaming services in the United Arab Emirates to broadcast a deepfake news anchor delivering a report on the war in Gaza. On Friday, the Washington Post reported that the Islamic State terrorist group is using AI-generated news anchors, complete with helmets and uniforms, to broadcast propaganda.

And one European state is openly testing AI-generated presenters: Ukraine’s foreign ministry has unveiled an AI spokesperson, Victoria Shi, based on the likeness of Rosalie Nombre, a Ukrainian singer and media personality who gave permission for her image to be used. The result is, at first glance at least, impressive.

Last year, Beijing published guidelines on labeling content, stating that AI-generated images and videos must carry a clear watermark. But Jeffrey Ding, an assistant professor at George Washington University who specializes in technology, said it was an “open question” how the labeling requirements would be applied in practice, especially to state propaganda.

And although China’s guidelines require that misinformation in AI-generated content be minimized, the priority for Chinese regulators is to “control information flows and ensure that the content produced is not politically sensitive and does not cause social disruption,” Ding said. That means that when it comes to fake news, “for the Chinese government, what is considered disinformation on the Taiwan front could be very different from what is the proper or most truthful interpretation of disinformation.”

Experts do not think computer-generated news anchors are proving effective yet: Tsai’s pro-sovereignty party won in Taiwan, despite the avatar’s best efforts. Macrina Wang, deputy news verification editor at NewsGuard, said the avatar content she had seen was “pretty crude” but was increasing in volume. To the trained eye these videos are obviously fake, she said, with stilted movements and an absence of shifting light and shadow on the avatar’s figure among the giveaways. Nevertheless, some of the comments under the TikTok videos show that viewers have fallen for them.

“There is a risk that the average person will think this [avatar] is a real person,” she said, adding that AI was making video content “more compelling, more engaging and more viral.”

Microsoft’s Watts said a more likely evolution of the newscaster tactic was manipulated footage of a real-life anchor, rather than a fully AI-generated figure. We could see “any major media anchor being manipulated into saying something they didn’t say,” Watts said. This is “much more likely” than a fully synthetic effort.

In their report last month, Microsoft researchers said they had not found many examples of AI-generated content having an impact in the offline world.

“Rarely have nation-states using AI-enabled generative content achieved much reach on social media, and only in a few cases have we seen actual audience deception from such content,” the report reads.

Instead, the public gravitates toward simple deepfakes, such as fake text stories with fake media logos.

Watts said there was a chance that a fully AI-generated video could affect an election, but the tool needed to create such a clip does not yet exist publicly. “I’m guessing the tool used with that video … isn’t even on the market yet.” The most effective AI video messenger may not turn out to be a news anchor, but the format underscores the importance of video for states trying to sow confusion among voters.

Threat actors are also likely to wait for an example of an AI-created video that captures the audience’s attention, and then replicate it. Both OpenAI and Google have demonstrated AI video generators in recent months, although neither has released its tool to the public.

“Effective use of synthetic characters in videos that people actually watch will happen first in a commercial space. And then you’ll see threat actors move on to that,” Watts said.

Additional research by Chi Hui Lin
