Meta, which owns Facebook and Instagram, announced major changes to its policies on digitally created and modified media on Friday, ahead of the election, testing its ability to monitor misleading content generated by artificial intelligence technologies.
The social media giant will begin applying “Made with AI” labels in May to AI-generated videos, images and audio posted to Facebook and Instagram, expanding a policy that previously covered only a narrow slice of doctored videos, Monika Bickert, Meta’s vice president of content policy, said in a blog post.
Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially misleading the public about an important issue,” regardless of whether the content was created using AI or other tools. Meta will immediately begin applying the more prominent “high-risk” labels, a spokesperson said.
The approach will shift how the company handles manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while giving viewers information about how it was made.
Meta previously announced a plan to detect images created with other companies’ generative AI tools using invisible markers built into the files, but did not provide a start date at the time.
A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Different rules apply to other services, including WhatsApp and Quest virtual reality headsets.
The changes come months ahead of the US presidential election in November, which tech researchers warn could be transformed by generative AI technologies. Political campaigns have already started deploying AI tools in countries like Indonesia, pushing the boundaries of guidelines from providers like Meta and generative AI market leader OpenAI.
In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted to Facebook last year that altered real footage to falsely suggest the US president had behaved inappropriately.
The footage was allowed to remain because Meta’s existing “manipulated media” policy bans misleadingly altered videos only if they are produced by artificial intelligence or if they make people appear to say words they never said.
The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and to videos showing people doing things they never actually did.