Facebook reveals it will use AI to fact-check photos and videos

Pictured: a fake video (left) and article (right) posted on Facebook. The fake news claimed that NASA had confirmed the Earth would go dark for several days. Facebook is expanding its fake news detection software to cover photos and videos

Facebook is expanding its fake news detection systems to cover photos and videos, as part of its ongoing battle to stop the spread of misinformation on its service.

After successful trials in France, India and Mexico, the company said it will now roll out the system in 17 countries around the world, in an attempt to stop what it calls 'misinformation in these new visual formats'.

The artificial intelligence (AI) system forwards potentially false content to human fact-checkers, who use visual verification techniques, such as reverse image search and analysis of image metadata, to check the accuracy of photos and videos.

Previously, the company's efforts to tackle misinformation had focused on removing fake articles and links to hoax web pages.

Russian agents and other malicious groups seeking to influence democratic elections in the US and elsewhere have repeatedly used images and videos.

These have greater visual appeal than text or fake articles, and are also harder to detect with fake news detection software, which generally scans text for keywords.


HOW DOES FACEBOOK TRACK FAKE PHOTOS AND VIDEOS?

Facebook uses artificial intelligence to track potentially fake photos and videos.

This machine learning software uses several signals, including feedback from Facebook users, to identify fake content.

The company then sends these photos and videos to human fact-checkers for review, just as its fake news systems do for misleading articles.

Fact-checkers use 'visual verification techniques' to rate whether or not an image is fake.

These include reverse image search and analysis of when and where the photo or video was taken.
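Facebook has not published how its reverse image matching works. A common building block for this kind of check, sketched below purely for illustration, is a perceptual hash: near-identical images produce near-identical hashes, so a re-uploaded or lightly edited photo can be matched against images that fact-checkers have already debunked.

```python
# Illustrative sketch only: Facebook's actual matching method is not public.
# A "difference hash" turns an image into a compact fingerprint that survives
# small edits such as recompression or brightness changes.

def dhash(pixels):
    """Difference hash of a 9x8 grayscale grid (8 rows of 9 values, 0-255).

    Each bit records whether a pixel is brighter than its right neighbour.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits: a small distance means a likely match."""
    return bin(a ^ b).count("1")

# Two synthetic "images": the second is the first with a slight brightness shift.
original = [[(x * 30 + y * 7) % 256 for x in range(9)] for y in range(8)]
edited = [[min(255, p + 2) for p in row] for row in original]

assert hamming(dhash(original), dhash(edited)) <= 5  # near-duplicate detected
```

In a real pipeline the 9x8 grid would come from downscaling the uploaded image, and the hash would be looked up in an index of previously debunked photos.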

Once something has been flagged as fake news, a warning appears on the site marking it as such.

Facebook said it has been testing photo fact-checking since the spring, beginning with a trial with the French news agency AFP.

Now, it will send disputed photographs and videos to 27 fact-checking organisations in 17 countries to verify flagged content.

The company has remained tight-lipped about the criteria it uses to evaluate photos and videos, and about how much an image can be edited before it counts as fake.

Antonia Woodford, product manager at Facebook, said: "People share millions of photos and videos on Facebook every day.

"We know that this kind of sharing is particularly compelling because it is visual.

"It also creates an easy opportunity for manipulation by bad actors.

"We have built a machine learning model that uses various engagement signals, including feedback from people on Facebook, to identify potentially false content.

"We then send those photos and videos to fact-checkers for review, or fact-checkers can surface the content on their own."
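Woodford's description of engagement signals can be made concrete with a deliberately simplified sketch. Everything below is hypothetical: the signal (skeptical words in comments), the word list, and the threshold are invented for illustration, since Facebook has not disclosed its actual model.

```python
# Hypothetical sketch of flagging content for human review based on
# engagement signals. The signal, word list and threshold are invented.

SKEPTICAL_WORDS = {"fake", "false", "hoax", "photoshopped", "misleading"}

def skepticism_score(comments):
    """Fraction of comments containing at least one skeptical word."""
    if not comments:
        return 0.0
    flagged = sum(
        1 for c in comments
        if SKEPTICAL_WORDS & set(c.lower().split())
    )
    return flagged / len(comments)

def needs_review(comments, threshold=0.3):
    """Queue a post for human fact-checkers if enough commenters doubt it."""
    return skepticism_score(comments) >= threshold

comments = ["this is fake", "wow amazing", "clearly photoshopped", "nice"]
assert needs_review(comments)  # 2 of 4 comments express doubt
```

A production model would combine many such signals and learn the weighting from fact-checkers' past verdicts, rather than using a fixed keyword list.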

She said Facebook's fact-checkers and algorithms look for three types of fake news commonly spread through images and videos.

These include content that has been 'manipulated or fabricated', used out of context, or combined with text or audio that makes false claims.

Facebook's algorithms look for three types of fake news commonly spread through images and videos. These include content that has been 'manipulated or fabricated' (left), used out of context (centre) or combined with text or audio that makes false claims (right)


Fact-checkers use visual verification techniques, such as reverse image search and analysis of when and where the photo or video was taken.

The teams combine these checks with research from experts, academics and government agencies, Facebook said.

"As we get more ratings from fact-checkers on photos and videos, we can improve the accuracy of our machine learning model," said Ms Woodford.

"We are also taking advantage of other technologies to better recognize false or misleading content.

Facebook is updating its fake news detection software to scan photos and videos in its fight to stop the spread of misinformation on its service. Pictured: Facebook CEO Mark Zuckerberg at the firm's F8 developer conference in May, where the spread of fake news was a key issue


"For example, we use optical character recognition (OCR) to extract text from photos and compare that text with the headlines of fact-checkers' articles.

"We are also working on new ways to detect if a photo or video has been manipulated."
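The OCR step Woodford describes ends in a text comparison. One simple way such a comparison could work, sketched below under stated assumptions, is fuzzy string matching between the extracted text and known debunked headlines; the headline list and threshold here are invented for illustration, and Facebook's real system is not public.

```python
# Hedged sketch: match OCR-extracted text against debunked headlines using
# fuzzy string similarity. The headlines and threshold are invented examples.
from difflib import SequenceMatcher

def best_match(extracted_text, debunked_headlines, threshold=0.6):
    """Return the closest debunked headline, or None if nothing is similar."""
    best, best_ratio = None, 0.0
    for headline in debunked_headlines:
        ratio = SequenceMatcher(
            None, extracted_text.lower(), headline.lower()
        ).ratio()
        if ratio > best_ratio:
            best, best_ratio = headline, ratio
    return best if best_ratio >= threshold else None

debunked = [
    "NASA confirms Earth will go dark for six days",  # invented example
    "BBC ranks world's most corrupt prime ministers",
]

# Text that OCR might have pulled from a shared image:
ocr_text = "NASA confirms earth will go dark for 6 days"
match = best_match(ocr_text, debunked)
assert match == debunked[0]  # near-match despite the wording change
```

At Facebook's scale the lookup would run against an indexed corpus rather than a linear scan, but the principle of tolerant text matching is the same.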

Russia has been repeatedly accused of using memes and other viral images to influence Western elections.

WHAT TYPES OF FAKE PHOTOS AND VIDEOS IS FACEBOOK LOOKING FOR?

Facebook's fact-checkers and algorithms look for three types of fake news commonly spread through images and videos.

1) Manipulated or fabricated: Content that has been edited or doctored to spread fake news.

Facebook gives an example in which the face of Mexican politician Ricardo Anaya was photoshopped onto a United States Green Card ahead of a key election.

The photo was created to make people believe he was from Atlanta, Georgia, despite him standing in elections in Mexico.

2) Out of context: Facebook posts that take images out of their original context to spread misinformation.

One example given by Facebook shows a user claiming that a Syrian girl seen in several photos is an 'actor' used as part of a Western propaganda campaign.

The post appears to suggest that the wounded child was seen in pictures of three separate 'attacks' carried out by the forces of Bashar al-Assad, backed by Putin.

Facebook's fake news system was able to confirm that the published photos all came from the same attack in the Syrian city of Aleppo.

3) Text or audio claim: A Facebook photo or video overlaid with text or audio that contains false news.

One photo shared with a misleading caption, picked out by Facebook, claimed that Indian Prime Minister Narendra Modi had been ranked by BBC 'researchers' as the world's seventh most corrupt prime minister of 2018.

Russian agents who tried to interfere in the 2016 US presidential election commonly spread edited photos and striking visuals on Facebook.

Facebook is better prepared to defend itself from efforts to manipulate the platform to influence elections, according to CEO Mark Zuckerberg.

The 34-year-old said the platform had recently thwarted foreign influence campaigns targeting several countries.

Pictured: Facebook CEO Mark Zuckerberg delivering the keynote speech at F8, the Facebook developer conference in San Jose, California


Zuckerberg, posting on his Facebook page, outlined a series of steps that the leading social network has taken to protect against misinformation and manipulation campaigns aimed at disrupting elections.

"We have identified and removed fake accounts ahead of elections in France, Germany, Alabama, Mexico and Brazil," Zuckerberg said.

"We have found and removed foreign influence campaigns from Russia and Iran attempting to interfere in the US, the UK, the Middle East and elsewhere, as well as groups in Mexico and Brazil that have been active in their own countries."

Zuckerberg repeated his admission that Facebook was unprepared for the extensive social media influence efforts in the 2016 US election.

But he added that "today, Facebook is better prepared for this type of attack".

The billionaire also warned that the task is difficult because "we face sophisticated, well-funded adversaries. They will not give up and will keep evolving."

WHAT HAS FACEBOOK DONE TO TACKLE FAKE NEWS?

After the November 2016 US election results, Mark Zuckerberg said: "Of all the content on Facebook, more than 99% of what people see is authentic."

He also cautioned that the company should not rush into fact-checking.

But Zuckerberg soon came under fire after it emerged that fake news had helped influence the election results.

In response, the company launched a 'Disputed' flagging system, which it announced in a December 2016 post.

Under the system, users, rather than the company, were responsible for flagging items they believed were false.

In April 2017, Facebook suggested that the system had been a success.

It said that "overall, false news has decreased on Facebook", but provided no evidence.

"It's hard for us to measure because we can't read everything that gets posted," the company said.

But it soon emerged that Facebook was not telling the full story.

In July 2017, Oxford researchers found that "computational propaganda is one of the most powerful tools against democracy", and that Facebook was playing a major role in spreading fake news.

In response, Facebook said in August 2017 that it would ban pages that post hoax stories from advertising.

In September, Facebook finally admitted under questioning by Congress that a Russian propaganda mill had placed adverts on Facebook to sway voters around the 2016 campaign.

In December 2017, Facebook admitted that its fake news flagging system had been a failure.

Since then, it has used third-party fact-checkers to identify hoaxes, and has demoted such stories in the News Feed when people share links to them.

In January, Zuckerberg said Facebook would prioritise 'trustworthy' news, using member surveys to identify high-quality outlets.

Facebook has quietly begun to 'fact-check' photos and videos to reduce fake news. However, the details of how it is doing so remain unclear.
