
China’s plan to make AI watermarking a reality


Chinese regulators probably learned from the EU AI Act, says Jeffrey Ding, assistant professor of political science at George Washington University. “Chinese policymakers and academics have said they have looked to EU laws for inspiration in the past.”

At the same time, some of the measures taken by Chinese regulators are not really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI-generated material. “This seems like something very new and might be unique to the China context,” Ding says. “This would never exist in the American context, because the United States is famous for saying that the platform is not responsible for the content.”

But what about freedom of expression online?

The draft regulation on AI content labeling is seeking public comments until October 14, and it may take several months for it to be amended and approved. But there is little reason for Chinese companies to delay preparation for when it comes into effect.

Sima Huapeng, founder and CEO of Silicon Intelligence, a Chinese AIGC company that uses deepfake technologies to generate AI agents and influencers and to replicate living and dead people, says his product currently lets users voluntarily choose whether to label the generated content as AI. But if the law passes, labeling may have to become mandatory.

“If a feature is optional, companies will most likely not add it to their products. But if it becomes mandatory by law, then everyone will have to implement it,” says Sima. Adding watermarks or metadata tags is not technically difficult, but it will increase operating costs for compliant businesses.
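To illustrate why such labeling is technically simple, here is a minimal sketch of one way an implicit label could be embedded in AI-generated text: encoding a provenance tag as invisible zero-width Unicode characters appended to the output. This is an illustrative assumption, not how Silicon Intelligence or any regulator-mandated system actually works; production schemes (such as C2PA metadata or statistical watermarks) are far more robust.

```python
# Illustrative sketch: embed an invisible "implicit label" in text
# using zero-width Unicode characters. Hypothetical, not a real
# compliance scheme; easily stripped by copy-paste normalization.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_label(text: str, label: str) -> str:
    """Append the label, encoded bit-by-bit as zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in label.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return text + payload  # invisible when rendered

def extract_label(text: str) -> str:
    """Recover the hidden label by decoding the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

tagged = embed_label("Hello from a chatbot.", "AI-generated")
print(extract_label(tagged))  # → AI-generated
```

The visible text is unchanged for a human reader, which is exactly the property that makes implicit labels cheap to add for compliant companies and trivial to omit for non-compliant ones.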

Policies like this can prevent AI from being used for scams or invasion of privacy, he says, but they could also trigger the growth of a black market for AI services, where companies evade legal compliance to save costs.

There is also a fine line between holding AI content producers accountable and controlling individual speech through more sophisticated tracking.

“The big underlying human rights challenge is to ensure that these approaches do not further compromise privacy or freedom of expression,” says Gregory. While implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can give platforms and the government stronger control over what users post on the internet. In fact, concerns about how AI tools can go rogue have been one of the main drivers of proactive AI legislation efforts in China.

At the same time, the Chinese AI industry is lobbying the government for more room to experiment and grow, as Chinese companies already lag behind their Western peers. An earlier Chinese law on generative AI was significantly watered down between the first public draft and the final bill, removing identity-verification requirements and reducing the penalties imposed on companies.

“What we’ve seen is that the Chinese government is really trying to walk a fine tightrope between ‘making sure we maintain control of content’ and also ‘allowing these AI labs in a strategic space to have the freedom to innovate,’” says Ding. “This is another attempt to do that.”
