Amid growing concerns that AI could facilitate the spread of misinformation, Microsoft is offering its services, including a digital watermark that identifies AI content, to help combat deepfakes and improve cybersecurity ahead of several global elections.
In a blog post co-authored by Microsoft President Brad Smith and Teresa Hutson, Microsoft Corporate Vice President for Technology for Fundamental Rights, the company said it will offer several services to protect election integrity, including a new tool that leverages the Content Credentials watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The goal of the service is to help candidates protect the use of their content and likeness, and to prevent misleading information from being shared.
Called Content Credentials as a Service, the tool lets users such as election campaigns attach information to the metadata of an image or video. That information can include provenance details: how, when, and by whom the content was created, and whether AI was involved in making it. This information becomes a permanent part of the image or video. C2PA, a group of companies founded in 2019 to develop technical standards for certifying the provenance of content, launched Content Credentials this year. C2PA member Adobe launched a Content Credentials symbol in October that can be attached to photos and videos.
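The provenance metadata described above can be pictured as a small record cryptographically bound to the file's bytes. The sketch below is purely illustrative: the function name and field names are invented for this example, not the actual C2PA manifest format or any Microsoft API, and a SHA-256 content hash stands in for the spec's full claim-signing machinery.

```python
# Illustrative sketch only -- NOT the real C2PA or Azure API.
# It mimics the kind of provenance record Content Credentials
# attaches to a file: who/when/how it was made, and whether AI
# was involved.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(creator: str, tool: str,
                           ai_generated: bool, content: bytes) -> dict:
    """Build a provenance record bound to a piece of content."""
    return {
        "creator": creator,                                    # who
        "created_at": datetime.now(timezone.utc).isoformat(),  # when
        "tool": tool,                                          # how
        "ai_generated": ai_generated,          # AI involvement flag
        # Hashing the content binds the record to these exact bytes,
        # so any later edit to the file breaks the credential.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

record = make_provenance_record(
    creator="Example Campaign",
    tool="camera",
    ai_generated=False,
    content=b"raw image bytes",
)
print(json.dumps(record, indent=2))
```

In the real system, the record is additionally signed by the issuer and embedded in the file's metadata, which is what makes tampering detectable rather than merely inconvenient.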
Content Credentials as a Service will launch in the spring of next year and will be available first to political campaigns. Microsoft's Azure team built the tool. The Verge has contacted Microsoft for more information about the new service.
“Given the technological nature of the threats involved, it is important that governments, technology companies, the business community and civil society adopt new initiatives, including building on each other’s work,” Smith and Hutson said.
Microsoft said it has formed a team that will advise and support campaigns on strengthening cybersecurity protections and working with AI. The company will also establish what it calls an Election Communications Hub, where governments around the world will be able to access Microsoft security teams ahead of elections.
Smith and Hutson said Microsoft will support the Protect Elections from Deceptive AI Act introduced by Senators Amy Klobuchar (D-MN), Chris Coons (D-DE), Josh Hawley (R-MO), and Susan Collins (R-ME). The bill seeks to prohibit the use of AI to create “materially misleading content that falsely represents federal candidates.”
“We will use our voice as a company to support legislative and legal changes that will increase the protection of campaigns and electoral processes against deepfakes and other harmful uses of new technologies,” Smith and Hutson wrote.
Microsoft also plans to work with groups such as the National Association of State Election Directors, Reporters Without Borders and the Spanish news agency EFE to surface reputable sites with electoral information on Bing. The company said this expands on its previous partnerships with NewsGuard and ClaimReview. It also plans to regularly publish reports on foreign influence in key elections and has already published its first report analyzing threats from foreign malign influence.
Some political campaigns have already been criticized for circulating manipulated photos and videos, although not all of them were created with AI. Bloomberg reported that Ron DeSantis’ campaign posted fake images of rival Donald Trump posing with Anthony Fauci in June and that the Republican National Committee promoted a fake video of an apocalyptic America, blaming the Biden administration. Both were relatively benign acts, but were cited as examples of how technology creates opportunities to spread misinformation.
Misinformation and deepfakes have been a problem in every modern election, but the ease with which generative AI tools can create misleading content is fueling concerns that they will be used to mislead voters. The US Federal Election Commission (FEC) is debating whether to ban or limit AI in political campaigns. Rep. Yvette Clarke (D-NY) has also introduced a bill in the House that would require candidates to disclose their use of AI.
However, there are concerns that watermarks such as Content Credentials may not be enough to stop misinformation entirely. Watermarking is a central feature of the Biden administration’s executive order on AI.
Microsoft is not the only big tech company hoping to curb the misuse of AI in elections. Meta is now requiring political advertisers to disclose AI-generated content after it banned them from using its generative AI advertising tools.