Facebook is testing a new popup that asks users if they have READ an article before sharing it

Facebook is adding more censorship to its platform with a new popup asking users if they’ve read an article before sharing it.

The prompt, which is currently being tested with selected users, only appears if the person clicks “share” without opening the article.

The goal, the company said, is to help users to be better informed and to stop the spread of misinformation.

For now, the feature is only rolling out to a small percentage of Android users.

Facebook is testing a popup with Android users that asks if they are sure they want to share an article they haven’t opened yet

In a tweet Monday, Facebook said the feature promotes “more informed news article sharing.”

“If you’re going to share a link to a news article that you haven’t opened yet, we’ll show a prompt encouraging you to open and read the link before sharing it with others.”

A sample pop-up shows a fictional article about a voluntary evacuation in an unnamed state due to flooding.

“You are about to share an article without opening it,” reads the overlay message. “Sharing articles without reading them can mean missing important facts.”

Facebook and other platforms have tested various features in the wake of conspiracy theories about the coronavirus pandemic and the 2020 presidential election proliferating on social media

The user has the option to open the article or ‘continue sharing’.

The response on Twitter was mostly positive, with one user suggesting that Facebook add a similar feature for replies.

“Click, read, learn – and share or comment,” they tweeted.

Facebook’s announcement on Monday came a few days after the Oversight Board upheld the ban on former President Donald Trump’s account.

Trump was banned from Facebook, Instagram and Twitter in January in response to the riot at the Capitol, which the social media sites said he fueled.

It was an unprecedented act of censorship against a world leader and sparked a global debate about how much control social media and big tech should have over freedom of speech.

Twitter began testing a similar popup in June 2020 and rolled it out more widely in September.

The company said users opened articles before sharing them 40 percent more often than before.

“It’s easy for articles to go viral on Twitter,” Suzanne Xie, Twitter’s director of product management, told TechCrunch.

“Sometimes this can be great for sharing information, but it can also be detrimental to the discourse, especially if people haven’t read what they tweet.”

Both platforms have been touting various features in the wake of disinformation and conspiracy theories about the coronavirus pandemic and the 2020 presidential election proliferating on social media.

Last year, Facebook launched a popup that warned users if they shared anything more than 90 days old, and another that showed the source and date of COVID-19-related links.

Last week, Twitter announced that it is adding a feature that will prompt users to review “potentially harmful or offensive” replies before posting.

The feature, first tested last year, uses artificial intelligence (AI) to detect harmful or offensive language in a freshly written reply to another user before it is posted.

It sends users a popup notification asking if they want to review their reply before posting.

According to Twitter, the prompt gives users the option to “take a moment” to reconsider the tweet by making changes or deleting it altogether.

Users are also free to ignore the warning message and post their response anyway.

Likewise, Twitter is working on an Undo Send timer for tweets that will give users five seconds to reconsider their post.

However, not every prompt is designed to create a more civil Internet: in February, Facebook began testing a popup for iPhone users informing them of its data collection practices.

The prompt was launched prior to the iOS 14 update, which required developers to request permission to track users ‘across apps and websites’, Bloomberg reported.

Facebook’s version has a more positive tone, offering users ‘a better ad experience’.

“Apple’s new prompt suggests there is a trade-off between personalized ads and privacy, when in fact we can and will provide both,” Facebook wrote in a blog post.

“The Apple prompt also doesn’t provide context about the benefits of personalized ads.”