
Meta to label AI-generated images shared on Facebook and Instagram – but in ‘coming months’ as US presidential race heats up

by Elijah
Meta is launching a tool to identify AI-generated content created on its platform

Meta is introducing a tool to identify AI-generated images shared on its platforms amid a global rise in synthetic content that spreads misinformation.

The Mark Zuckerberg-owned company also aims to extend the labels to images generated by other companies’ systems, such as those from Google, OpenAI, Microsoft and Adobe.

Meta said it will fully roll out the tagging feature in the coming months and plans to add a feature that allows users to flag AI-generated content.

However, with the US presidential race already in full swing, some wonder whether the labels will arrive in time to stop the spread of false content.

The move comes after the Meta Oversight Board urged the company to take steps to label manipulated audio and video that could mislead users.


Meta launched an AI image generator in September last year and says it will label all images created with that tool.


“The Board’s recommendations go further, as it recommended that the company expand the Manipulated Media policy to include audio, clearly state the harms it seeks to reduce, and begin labeling these types of posts more broadly than was announced,” Oversight Board spokesman Dan Chaison told Dailymail.com.

He continued: ‘Labeling allows Meta to leave more content up and protect free expression.

‘However, it is important that the company clearly defines the harms it seeks to address, given that not all altered posts are objectionable absent a direct risk of real-world harm.

“Those harms can include inciting violence or misleading people about their right to vote.”

Meta said Tuesday that it is working with industry partners on technical standards that will make it easier to identify images and, eventually, videos and audio generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it is easier than ever to create and distribute AI-generated images that can cause harm, from election misinformation to non-consensual fake nudes of celebrities.

AI-generated images have become increasingly worrying.

Thousands of internet users are being tricked into sharing fake images, such as that of French President Emmanuel Macron at a protest.


Thousands of internet users are being tricked into sharing fake images, from French President Emmanuel Macron at a protest to Donald Trump being arrested by police in New York City.

Nick Clegg, president of global affairs at Meta, said it’s important to implement these labels now, at a time when elections are taking place around the world that could lead to misleading content.

“As the distinction between human and synthetic content becomes blurred, people want to know where the line is,” Clegg said.

‘People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology.

“That’s why it’s important that we help people know when the photorealistic content they’re viewing was created using AI.”

Clegg also explained that Meta will work to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans to add metadata to images created by their tools.”
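Meta has not published the technical details of how these checks will work, but the approach it describes – generators embedding provenance metadata that platforms then read – can be sketched. The Python snippet below is an illustrative assumption rather than Meta’s actual code: the file name, marker list and helper function are hypothetical, and it simply scans an image for the IPTC ‘trainedAlgorithmicMedia’ signal and a C2PA marker that some generators have said they will embed.

```python
# Illustrative sketch only: Meta has not released its detection code.
# Assumes images carry provenance metadata such as the IPTC
# DigitalSourceType value "trainedAlgorithmicMedia" or a C2PA manifest.
from PIL import Image  # pip install Pillow

AI_MARKERS = (
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
    b"c2pa",                     # C2PA / Content Credentials manifest label
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata contains a known AI marker."""
    with open(path, "rb") as f:
        raw = f.read()
    # Crude scan of the raw bytes for provenance markers; a production system
    # would parse the XMP packet or C2PA manifest properly and verify signatures.
    if any(marker in raw for marker in AI_MARKERS):
        return True
    # Fall back to checking the EXIF Software tag (0x0131) for generator names.
    software = Image.open(path).getexif().get(0x0131, "")
    return "AI" in str(software)

print(looks_ai_generated("example.jpg"))  # hypothetical file name
```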

Several fake images have presented misleading and sometimes dangerous information that could incite violence if left unchecked.

The Oversight Board said Meta’s current Manipulated Media Policy lacks “persuasive justification, is incoherent and confusing to users, and does not clearly specify the harms it seeks to prevent.”

“As it stands, the policy makes little sense,” Michael McConnell, co-chair of the board, told Bloomberg.

‘It bans doctored videos that show people saying things they don’t say, but it doesn’t ban posts that show an individual doing something they didn’t do. It only applies to videos created through AI, but leaves other fake content off the hook.’

A hoax image of Donald Trump's arrest has gone viral, sparking angry outbursts among people who believed the image was real.


Last year, an image appeared to show former President Donald Trump being arrested outside a New York City courthouse, sparking an outburst from people believing the image was real.

The Meta Oversight Board said the move to label AI-generated images is a victory for media literacy and will give users the context they need to identify misleading content.

The board is still in talks with Meta about expanding the labels to cover video and audio, and is asking the company to clearly state the harms associated with misleading media.

Meta has not responded to the Oversight Board’s request for the company to implement additional labels to identify any alterations made to published content.

The idea is that by labeling misleading content, Meta will not have to remove the posts, which in turn can protect people’s right to free speech and their right to express themselves.

However, alterations such as the robocall that imitated President Joe Biden’s voice and told New Hampshire voters not to vote in the primary would, in the board’s view, still warrant removal.

To combat misleading information, Meta is also working to develop technology that automatically detects AI-generated content.

“This work is especially important as it is likely to become an increasingly contentious space in the coming years,” Clegg said.

‘Individuals and organizations who want to actively mislead people with AI-generated content will look for ways to circumvent safeguards put in place to detect it.

“In our industry and in society at large, we will have to continue to look for ways to stay one step ahead.”

Dailymail.com has contacted Meta for comment.
