Reddit analysis reveals 16% of users publish posts that are ‘toxic’

Reddit can be a fun website where users discuss niche topics with others who share their interests.

However, a new analysis of more than two billion posts and comments on the platform has revealed an alarming amount of hate speech, harassment and cyberbullying.

Computer scientists from Hamad Bin Khalifa University in Qatar found that 16 percent of users publish posts, and 13 percent publish comments, that are considered “toxic.”

The study was conducted to find out how the toxicity of Redditors changes depending on the community they participate in.

It found that 82 percent of those who post comments change their toxicity level depending on the community or subreddit they contribute to.

In addition, the more communities a user participates in, the higher the toxicity of their content.

The authors suggest limiting hate speech on the site by limiting the number of subreddits each user can post to.

Computer scientists from Hamad Bin Khalifa University in Qatar found that 16 percent of Reddit users publish posts, and 13 percent publish comments, that are considered “toxic”

WHAT DID THE RESEARCH DISCOVER ABOUT REDDIT USERS?

16.11 percent of Reddit users publish toxic posts.

13.28 percent of Reddit users publish toxic comments.

30.68 percent of those publishing posts, and 81.67 percent of those publishing comments, change their level of toxicity depending on the subreddit they post in.

This indicates that they are adapting their behavior to the norms of the community.

The authors of the article wrote: “Toxic content often contains insults, threats and abusive language, which in turn infects online platforms by preventing users from participating in discussions or inducing them to leave.

“Several online platforms have implemented prevention mechanisms, but these efforts are not scalable enough to stem the rapid growth of toxic content on online platforms.

“These challenges require the development of effective automatic or semi-automatic solutions to detect toxicity from a large stream of content on online platforms.”

The explosion in popularity of social media platforms has been accompanied by an increase in malicious content such as harassment, profanity and cyberbullying.

This can be motivated by various selfish reasons, such as increasing the perpetrator’s popularity, or allowing them to defend their personal or political beliefs by participating in hostile discussions.

Studies have shown that toxic content can influence non-malicious users and cause them to misbehave, negatively impacting the online community.

In their paper, published today in PeerJ Computer Science, the authors outline how they assessed the relationship between a Reddit user’s community and the toxicity of their content.

They first built a dataset of 10,083 Reddit comments, which were labeled as non-toxic, slightly toxic, or highly toxic.

This was according to Perspective API’s definition of a toxic comment: ‘a rude, disrespectful or unreasonable comment that is likely to make you leave a discussion’.
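The paper itself does not publish code, but as an illustration only (the study cites Perspective API’s definition; it is not stated that the API produced the labels), a minimal sketch of scoring a comment against Perspective API’s public commentanalyzer endpoint might look like this. The API key placeholder and the class thresholds are invented for illustration:

```python
import requests

# Hypothetical placeholder; a real Google Cloud API key is required.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0..1) for a comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Assumed cut-offs for the study's three classes; the paper's
# actual labeling thresholds may differ.
def label(score: float) -> str:
    if score >= 0.7:
        return "highly toxic"
    if score >= 0.4:
        return "slightly toxic"
    return "non-toxic"
```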

The dataset was then used to train an artificial neural network — a model that attempts to simulate how a brain works to learn — to categorize comments and messages.
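The article does not specify the network’s architecture, so the following is only a rough sketch of that training step: a small feed-forward network over TF-IDF text features, built with scikit-learn. The example comments and labels are invented stand-ins for the 10,083 labeled comments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented toy examples standing in for the labeled dataset.
comments = [
    "Great explanation, thanks for sharing!",
    "That's a dumb take, honestly.",
    "Get lost, nobody wants you here.",
]
labels = ["non-toxic", "slightly toxic", "highly toxic"]

# TF-IDF features feeding a small multi-layer perceptron.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(comments, labels)

print(model.predict(["You are an idiot and should leave."]))
```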


The researchers built a dataset of 10,083 Reddit comments, which were labeled as non-toxic, mildly toxic or highly toxic. This was according to Perspective API’s definition of “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” Pictured: Sample comments from each class

The dataset was then used to train an artificial neural network — a model that attempts to simulate how a brain works to learn — to categorize comments and messages based on their toxicity. Pictured: A Reddit post from the ‘r/science’ subreddit with its discussion threads

It assessed the toxicity levels of 87,376,912 posts from 577,835 users and 2,205,581,786 comments from 890,913 users on Reddit between 2005 and 2020.

The analysis found that 16 percent of users publish toxic posts and 13 percent of users post toxic comments, with a classification accuracy of 91 percent.

The scientists also used the model to examine changes in the online behavior of users who publish in multiple communities or subreddits.

It found that nearly 31 percent of users who publish posts, and nearly 82 percent of those who publish comments, show changes in their toxicity across various subreddits.

This indicates that they are adapting their behavior to the norms of the community.

A positive correlation was found between the number of communities a Reddit user belonged to and the degree of toxicity of their content.

Thus, the authors suggest that limiting the number of subreddits a user can contribute to could limit the spread of hate speech.
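As an illustration of how such a correlation between community count and toxicity can be measured, here is a hedged sketch assuming a per-user summary table; the column names and sample values are invented, not taken from the paper:

```python
import pandas as pd
from scipy.stats import pearsonr

# Invented per-user summary data; column names are assumptions.
users = pd.DataFrame({
    "n_subreddits":   [1, 3, 5, 8, 12, 20],
    "toxic_fraction": [0.01, 0.02, 0.05, 0.06, 0.09, 0.15],
})

# Pearson's r > 0 would indicate that toxicity rises with the
# number of communities a user participates in.
r, p_value = pearsonr(users["n_subreddits"], users["toxic_fraction"])
print(f"r = {r:.2f}, p = {p_value:.3f}")
```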

They wrote: “Tracking the change in user toxicity can be an early detection method for toxicity in online communities.

“The proposed methodology can identify when users show a change by calculating the toxicity percentage in posts and comments.

“This change, combined with the level of toxicity our system detects in user messages, can be used efficiently to stop the spread of toxicity.”
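What that tracking could look like in practice is sketched below, assuming per-comment toxicity labels are already available from a classifier; the data layout and the flagging threshold are illustrative assumptions, not details from the paper:

```python
import pandas as pd

# Invented per-comment records; in the study these would come from
# the trained classifier's output.
comments = pd.DataFrame({
    "user":      ["a", "a", "a", "b", "b", "b"],
    "subreddit": ["r/science", "r/gaming", "r/gaming",
                  "r/science", "r/science", "r/politics"],
    "toxic":     [0, 1, 1, 0, 0, 1],
})

# Fraction of toxic comments per user within each subreddit.
per_sub = (comments.groupby(["user", "subreddit"])["toxic"]
                   .mean()
                   .rename("toxicity_pct"))

# Flag users whose toxicity varies strongly across subreddits;
# the 0.5 spread threshold is an arbitrary illustrative choice.
spread = per_sub.groupby("user").agg(lambda s: s.max() - s.min())
flagged = spread[spread >= 0.5].index.tolist()
print(per_sub, flagged, sep="\n")
```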

Polite warnings on Twitter can reduce hate speech by up to 20 percent, study shows

Politely warning Twitter users that they may face repercussions if they continue to use hate speech can reduce their use of it, according to new research.

A team at New York University’s Center for Social Media and Politics tested several warnings for users they identified as “candidates for suspension.”

These are people who have been suspended for violating the platform’s hate speech policy.

Users who received these warnings generally decreased their use of racist, sexist, homophobic or otherwise prohibited language by 10 percent after receiving a prompt.

If the warning was worded politely, the drop in foul language was as much as 20 percent.

