Mastodon, the decentralized network viewed as a viable alternative to Twitter, is rife with child sexual abuse material (CSAM), according to a new study from Stanford's Internet Observatory (via The Washington Post). In just two days, the researchers found 112 instances of known CSAM across 325,000 posts on the platform, with the first instance surfacing after only about five minutes of searching.
To conduct its research, the Internet Observatory scanned the 25 most popular Mastodon instances for CSAM. The researchers also used Google's SafeSearch API to identify explicit images, along with PhotoDNA, a tool that helps identify known, flagged CSAM. During their search, the team found 554 pieces of content that matched hashtags or keywords commonly used by online child sexual abuse groups, all of which were identified as explicit with the "highest confidence" by Google SafeSearch.
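For illustration, here is a minimal sketch of how an image can be rated for explicit content with Google Cloud Vision's SafeSearch detection, which is broadly the kind of check described above; the function name and the decision to look only at the "adult" rating are assumptions for this example, and PhotoDNA hash matching is a separate, access-restricted Microsoft service not shown here.

```python
# Minimal sketch: rate an image with Google Cloud Vision's SafeSearch detection.
# Assumes the google-cloud-vision package is installed and credentials are configured.
from google.cloud import vision

def is_explicit_highest_confidence(image_path: str) -> bool:
    """Return True if SafeSearch rates the image's adult likelihood at the top level.

    VERY_LIKELY is the highest of Vision's five likelihood levels, matching the
    study's "highest confidence" wording. The function name and the choice to
    check only the `adult` field are assumptions made for this example.
    """
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return annotation.adult == vision.Likelihood.VERY_LIKELY
```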
Open posting of CSAM is 'disturbingly frequent'
There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse in posts containing media, as well as 1,217 text-only posts pointing to "off-site CSAM trading or grooming of minors." The study notes that the open posting of CSAM is "disturbingly frequent."
One example the study cites is the extended mastodon.xyz server outage we noted earlier this month, an incident caused by CSAM posted to Mastodon. In a post about the incident, the server's sole maintainer stated that they were alerted to content containing child sexual abuse material, but noted that moderation is done in their spare time and can take up to a few days; this isn't a giant operation like Meta with a worldwide team of contractors, it's just one person.
While they said they took action against the content in question, the host of the mastodon.xyz domain had suspended it anyway, making the server inaccessible to users until they could reach someone to restore its listing. After the issue was resolved, the mastodon.xyz administrator says the registrar added the domain to a "false positive" list to prevent future takedowns. However, as the researchers point out, "what triggered the action was not a false positive."
"We got more photoDNA results in a two-day period than we've probably had in our organization's entire history of doing any kind of social media analysis, and it's not even close," David Thiel, one of the report's researchers, told The Washington Post. "A lot of it is just a result of what seems to be a lack of the tooling that centralized social media platforms use to address child safety concerns."
As decentralized networks like Mastodon grow in popularity, so have concerns about safety. Decentralized networks don't use the same moderation approach as mainstream sites like Facebook, Instagram, and Reddit; instead, each instance controls its own moderation, which can create inconsistencies across the Fediverse. That's why the researchers suggest that networks like Mastodon employ more robust tools for moderators, along with PhotoDNA integration and CyberTipline reporting.
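As a rough illustration of what such integration could look like on an individual instance, here is a hypothetical moderation hook; every name in it (the hash-matching stub, the queue format, the hook itself) is a placeholder for this sketch, not a real Mastodon or PhotoDNA API.

```python
# Hypothetical sketch of a per-instance moderation hook; none of these names are
# real Mastodon or PhotoDNA APIs.
from typing import Optional

def match_known_csam_hash(media: bytes) -> Optional[str]:
    """Placeholder for a call to a perceptual-hash matching service such as PhotoDNA.

    Would return a match identifier when the media matches a known hash, else None.
    """
    raise NotImplementedError("requires access to an external hash-matching service")

def on_media_upload(media: bytes, post_id: str, moderation_queue: list) -> None:
    """Quarantine matched media and queue a report for NCMEC's CyberTipline."""
    match_id = match_known_csam_hash(media)
    if match_id is not None:
        moderation_queue.append({
            "post_id": post_id,
            "action": "quarantine",       # hide the post pending moderator review
            "report_to": "CyberTipline",  # NCMEC's reporting channel
            "match_id": match_id,
        })
```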