
Social media platforms have work to do to comply with the Online Safety Act, says Ofcom

Social media platforms have a lot of work to do to comply with the UK’s Online Safety Act and have yet to introduce all necessary measures to protect children and adults from harmful content, the communications regulator has said.

Ofcom on Monday published codes of practice and guidelines that technology companies must follow to comply with the law, which carries the threat of significant fines and site closures for those that fail to do so.

The regulator said many of the measures it recommends are not followed by the largest and riskiest platforms.

“We don’t believe any of them are enforcing all the measures,” said Jon Higham, Ofcom’s director of online safety policy. “We think there is a lot of work to do.”

All sites and apps covered by the law – from Facebook, Google and X to Reddit and OnlyFans – now have three months to assess the risk of illegal content appearing on their platforms.

From March 17 they will have to start implementing safeguards to address those risks, and Ofcom will monitor their progress. Ofcom’s codes of practice and guidelines set out ways to address those risks. Sites and apps that adopt them will be deemed to comply with the law.

The law applies to sites and apps that publish content created by users for other users, as well as large search engines, a scope covering more than 100,000 online services. It lists 130 “priority crimes” (covering a variety of content types, including child sexual abuse, terrorism and fraud) that tech companies will have to proactively address by adapting their moderation systems.

Writing in The Guardian, technology secretary Peter Kyle said the codes and guidelines were “the biggest change ever made to online safety policy”.

“Internet terrorists and child abusers will no longer be able to behave with impunity,” he wrote. “Because, for the first time, technology companies will be forced to proactively remove illegal content that plagues our internet. If they fail to do so, they will face huge fines and, if necessary, Ofcom can ask the courts to block access to their platforms in Britain.”

Measures set out in Ofcom’s codes and guidelines include: appointing a senior executive to be responsible for compliance with the law; maintaining adequately staffed and funded moderation teams that can quickly remove illegal material, such as extreme suicide-related content; better testing of the algorithms that select what users see in their feeds, to make it harder for illegal material to spread; and removing accounts operated by or on behalf of terrorist organizations.

Technology platforms are also expected to operate “easy-to-find” tools for reporting content, which acknowledge receipt of a complaint and indicate when it will be addressed. Larger platforms are expected to give users options to block and mute other accounts, along with the option to disable comments.


Ofcom also expects platforms to implement automated systems to detect child sexual abuse material, including so-called “hash matching” measures that compare suspicious material against databases of known examples of such content. The new codes and guidelines will also apply to file-sharing services such as Dropbox and Mega, which are considered at “high risk” of being used to distribute abusive material.
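Hash matching works by reducing each uploaded file to a short fingerprint and comparing it against a database of fingerprints of material already identified by authorities, so the platform never needs to hold the illegal material itself, only its fingerprints. The snippet below is a minimal illustrative sketch, not Ofcom’s or any platform’s actual system: the KNOWN_HASHES set and the fingerprint and check_upload functions are hypothetical names, and a plain SHA-256 digest stands in for the perceptual hashes (which tolerate cropping and re-encoding) that real deployments rely on.

```python
import hashlib

# Hypothetical store of fingerprints of known illegal material, as might be
# supplied by a body such as the Internet Watch Foundation. In a real system
# this would be a large, regularly updated database, not an in-memory set.
KNOWN_HASHES = {
    # placeholder digest only, not real data
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(file_bytes: bytes) -> str:
    """Return a SHA-256 digest of an uploaded file.

    Production systems typically use perceptual hashes, which still match
    after resizing or re-encoding; a cryptographic hash like this one only
    matches byte-identical copies.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def check_upload(file_bytes: bytes) -> bool:
    """Return True if the upload matches known material and should be
    blocked and escalated to moderators, False otherwise."""
    return fingerprint(file_bytes) in KNOWN_HASHES

# Example usage: this upload does not match the placeholder digest.
print(check_upload(b"some uploaded file contents"))  # False
```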

Child safety campaigners said Ofcom’s announcement did not go far enough. The Molly Rose Foundation, set up by the family of Molly Russell, who took her own life aged 14 in 2017 after viewing suicide-related content on social media, said it was “astonished” that there were no specific measures to deal with self-harm and suicide content that reaches the criminal threshold. The NSPCC said it was “deeply concerned” that platforms such as WhatsApp will not be required to remove illegal content if it is not technically feasible to do so.

Fraud, a widespread problem on social media, will be addressed by requiring platforms to establish specific reporting channels with bodies such as the National Crime Agency and the National Cyber Security Centre, which can flag examples of fraud to platforms.

Ofcom will also hold consultations in the spring on creating a protocol for crisis events such as the riots that broke out over the summer in response to the Southport murders.
