Ofcom warns tech companies after chatbots imitate Brianna Ghey and Molly Russell

Ofcom has warned technology companies that content from chatbots posing as real and fictitious people could breach the UK’s new digital laws.

The communications regulator issued the guidance after it emerged that users of the Character.AI platform had created avatars imitating deceased British teenagers Brianna Ghey and Molly Russell.

Under pressure from digital safety campaigners to clarify the position, Ofcom stressed that content generated by user-created chatbots falls within the scope of the Online Safety Act.

Without naming US artificial intelligence firm Character.AI, Ofcom said a site or app that allowed users to create their own chatbots for other people to interact with would be covered by the law.

“This includes services that provide tools for users to create chatbots that imitate real and fictional people, which can be submitted to a chatbot library for others to interact with,” Ofcom said.

In an open letter, Ofcom also said that any user-to-user site or app, such as a social media platform or messaging service, that allowed people to share chatbot-generated content with others would also be within scope.

Companies that break the law face fines of up to £18 million or 10% of their global turnover, whichever is greater. In extreme cases, websites or apps can also be blocked.

Ofcom said it had issued the guidance after “concerning incidents”. It highlighted a case, first reported by the Daily Telegraph, in which Character.AI users created bots acting as virtual clones of Brianna, 16, a transgender girl who was murdered by two teenagers last year, and Molly, who took her own life in 2017 after viewing harmful content online.

It also pointed to a case in the United States in which a teenager died after developing a relationship with a Character.AI avatar based on a Game of Thrones character.

New online safety rules, which will begin to take effect next year, will require social networks and other platforms that host user-created content to protect users, particularly children, from illegal and other harmful material.

The rules will require larger platforms to put in place systems to proactively remove illegal and other potentially harmful material, provide clear reporting tools for users and carry out risk assessments, among other new duties.

The Molly Rose Foundation (MRF), a charity set up by Molly’s family, said the guidance sends a “clear signal” that chatbots could cause significant harm.

However, the MRF said more clarity was needed on whether bot-generated content could be treated as illegal under the act, after the government’s adviser on counter-terrorism legislation, Jonathan Hall KC, said this year that AI chatbot responses were not adequately covered by current legislation. Ofcom will shortly publish guidance on how platforms should tackle illegal content, including chatbot material.

Ben Packer, partner at law firm Linklaters, said: “The fact that Ofcom has had to clarify that these services may be within its scope reflects the sheer breadth and complexity of the Online Safety Act. It is also a symptom of the fact that the law began its gestation several years before the proliferation of GenAI tools and chatbots.”

Character.AI said it took safety on its platform seriously, moderated characters both proactively and in response to user reports, and had removed the Ghey, Russell and Game of Thrones chatbots.
