
Violent online content ‘unavoidable’ for UK children, Ofcom finds

by Elijah

Violent online content is now ‘unavoidable’ for children in the UK, with many first exposed to it while still in primary school, according to research commissioned by the media regulator Ofcom.

Every British child surveyed in the Ofcom study had watched violent material on the internet, ranging from videos of local school and street fights shared in group chats, to explicit and extreme graphic violence, including gang-related content.

Children knew that even more extreme content was available in the deepest corners of the web, but had not sought it out themselves, the report concluded.

These findings prompted the NSPCC to accuse tech platforms of standing idly by and “ignoring their duty of care towards young users”.

Rani Govender, the charity’s senior policy officer for children’s online safety, said: “It is deeply concerning that children are telling us that being unwittingly exposed to violent content has become a normal part of their lives online.

“It is unacceptable that algorithms continue to spread harmful content that we know can have devastating mental and emotional consequences for young people.”

The research, carried out by the Family Kids and Youth agency, is part of Ofcom’s preparation for its new responsibilities under the Online Safety Act, passed last year, which gave the regulator the power to crack down on social networks that fail to protect their users, especially children.

Gill Whitehead, director of Ofcom’s online safety group, said: “Children should not feel that seriously harmful content – including content depicting violence or encouraging self-harm – is an unavoidable or inevitable part of their online lives.

“Today’s research sends a powerful message to tech companies that now is the time to act so they are prepared to meet their child protection obligations under new online safety laws. Later this spring, we will consult on how we expect the industry to ensure children can enjoy a safer, age-appropriate online experience.”

Almost all the big tech companies were mentioned by children and young people surveyed by Ofcom, but Snapchat and Meta’s apps, Instagram and WhatsApp, came up most frequently.

“Children explained that there were private accounts, often anonymous, solely for sharing violent content – most commonly fights in local schools and streets,” the report said. “Almost all of the kids in this research who interacted with these accounts said they were found on Instagram or Snapchat.”

“There’s peer pressure to pretend it’s funny,” said one 11-year-old girl. “You feel uncomfortable on the inside, but act like it’s funny on the outside.” Another 12-year-old girl described feeling “slightly traumatized” after seeing a video of animal cruelty: “Everyone was joking about it.”

Many older children in the study “appeared to have become desensitized to the violent content they encountered.” Professionals also expressed particular concern about violent content normalizing offline violence, and reported that children tended to laugh and joke about serious violent incidents.

On some social networks, exposure to graphic violence comes from the top. On Thursday, Twitter, now known as X after its takeover by Elon Musk, removed a graphic clip purporting to show sexual mutilation and cannibalism in Haiti after it went viral on the social network. The clip had been reposted by Musk himself, who tweeted it at the broadcaster NBC in response to a report from the channel accusing him and other right-wing influencers of spreading unverified claims about the chaos in the country.

Other social platforms offer tools to help children avoid violent content, but little support for using them. Many children, some as young as eight, told researchers that it was possible to report content they didn’t want to see, but that they had little faith the system would work.

For private chats, they worried that reports would label them as “snitches,” which would lead to embarrassment or punishment from their peers, and they did not believe the platforms would impose meaningful consequences on those who posted violent content.

The rise of powerful algorithmic timelines, like those of TikTok and Instagram, added a further dimension: children were deterred by the belief that if they spent time engaging with violent content – even to report it – they would be more likely to be recommended more of it.

Professionals participating in the study expressed concern that violent content was affecting children’s mental health. In a separate report published on Thursday, England’s Children’s Commissioner revealed that more than 250,000 children and young people were waiting for mental health support after being referred to NHS services, meaning one in 50 children in England is on the waiting list. For children who received assistance, the average wait time was 35 days, but last year almost 40,000 children had to wait more than two years.

A Snapchat spokesperson said: “There is absolutely no place for violent content or threatening behavior on Snapchat. When we detect this type of content, we promptly remove it and take appropriate action regarding the offending account.

“We have easy-to-use and confidential in-app reporting tools and work with the police to support their investigations. We support the aims of the Online Safety Act to protect people from harm online and continue to work constructively with Ofcom on the implementation of the Act.”

Meta has been contacted for comment. X declined to comment.
