
The new law that could protect UK children online – as long as it works


The UK Online Safety Act is quietly one of the most important pieces of legislation to come out of this government. Admittedly, the competition is thin. But as more and more of the law takes effect, we are beginning to see how it will reshape the internet.

From our story last week:

Social media companies have been told to “tame aggressive algorithms” that recommend content harmful to children, as part of Ofcom’s new safety codes of practice.

The child safety codes, introduced under the Online Safety Act, allow Ofcom to set tough new rules for internet companies and how they can interact with children. Services are asked either to make their platforms safe for children by default, or to implement strong age checks to identify children and give them a safer version of the experience.

There is much more to the Online Safety Act than just the child-focused aspects, but these are some of the stricter powers given to Ofcom under the new regulatory regime. Websites will be required to use age verification technology to know which of their users are children or, alternatively, to ensure that all their content is safe for children.

Content viewed by children will be subject to a much stricter set of rules than the adult web, with some types of content – including pornography and material relating to suicide, self-harm and eating disorders – banned outright from young people’s feeds.

Most immediately interesting, however, is the requirement I quoted above. It is one of the first efforts anywhere in the world to impose a hard requirement on the curation algorithms that underpin most of the largest social networks, and will require services like TikTok and Instagram to suppress the spread of “violent, hateful or abusive material, online bullying, and content promoting dangerous challenges” on children’s accounts.

Some fear Ofcom is trying to have its cake and eat it. After all, the easiest way to remove such content is to block it outright, something that doesn’t require any fiddling with recommendation algorithms. Anything less involves an inherent gamble: is it worth risking a hefty Ofcom fine if you decide to allow some violent material into children’s feeds, even if you can argue that you’ve suppressed it well below where it would normally appear?

It might seem like an easy fear to dismiss. Who is going to fight for the right to show violence to children? But I’m already counting down the days until a well-intentioned government awareness campaign – perhaps about safer streets, perhaps something to do with drug policy – is suppressed or blocked under these rules, and the pendulum swings back in the other direction. Jim Killock, director of the Open Rights Group, an internet policy think tank, said he was “concerned that moderation systems could deny young people educational and supportive material, especially as it relates to sexuality, gender identity, drugs and other sensitive topics”.

Of course, there is opposition from the other side as well. After all, the Online Safety Act was designed to sit squarely in the Goldilocks policy zone:

The Goldilocks theory of politics is quite simple. If Mama Bear says your latest government bill is too hot and Papa Bear says your latest government bill is too cold, then you can settle in knowing the actual temperature is just right.

Unfortunately, the Goldilocks theory sometimes fails. You learn that what you really have in front of you is not so much a perfectly warmed bowl of oatmeal as a roast chicken that you put in the oven still frozen: frozen on the inside, burnt on the outside, and harmful to your health if you try to eat it.

And so, while Killock worries about the chilling effect, others worry that the act hasn’t gone far enough. Beeban Kidron, a crossbench peer and a leading advocate for online child safety rules, worries that the code as a whole is too broad to be useful. She wrote in the FT (£):

However, the code is weak when it comes to design features. While research shows that live streaming and direct messaging are high risk, few mandatory mitigations are included to address them. Similarly, the requirement that measures have an existing evidence base fails to incentivise new approaches to safety… How can you provide evidence for something that has not yet been tried?

While we celebrate the arrival of the draft code, we should already demand that its loopholes be fixed, exceptions re-addressed and lobbyists reined in.

The code is open for consultation, but my feeling is that this is a formality; everyone involved appears to expect the rules as written to remain largely unchanged when they become binding later this year. But the fight over what a safe internet for children means is only just beginning.

AI thinks, therefore AI is

AI. Photograph: JYPIX/Alamy

One of the reasons I still find the AI sector fascinating (although I know many readers have already made up their minds about it) is that we are still learning pretty fundamental things about how artificial intelligence works.


Take step-by-step reasoning. One of the most useful discoveries in the field of “prompt engineering” was that LLMs like GPT respond much better to complex questions if they are asked to explain their thinking step by step before giving the answer.
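To make that concrete, here is a minimal sketch of the two prompting styles, assuming a hypothetical ask_llm() helper that stands in for whichever chat-completion client you actually use; the question and the exact wording are illustrative only.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion client; it just echoes
    # so the example runs without any API access.
    return f"[model response to: {prompt[:60]}...]"

question = (
    "A train leaves at 09:40 and arrives at 13:05. "
    "How long is the journey?"
)

# Direct prompt: the model has to commit to an answer within its first few tokens.
direct_answer = ask_llm(question)

# Chain-of-thought prompt: the model is invited to write out intermediate steps
# before the final answer, which in practice tends to improve accuracy on
# multi-step questions.
cot_answer = ask_llm(
    question + "\n\nLet's think step by step, then state the final answer."
)

print(direct_answer)
print(cot_answer)
```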

There are two possible reasons for this, which can be anthropomorphised as “memory” and “thinking”. The first is that LLMs have no ability to reason silently. All they do is generate the next word (technically, the next “token”, sometimes just a fragment of a word) in the sentence, which means that, unless they are actively generating new tokens, their ability to handle complex thoughts is limited. By asking them to “think step by step”, you allow the system to write down each part of its answer and use those intermediate steps to reach its final conclusion.

The other possibility is that step-by-step reasoning literally allows the system to think more. Every time an LLM prints a token, it makes one pass through its neural network. No matter how hard the next token is to predict, it can’t think any more, or any less, about what it should be (this is wrong, but it’s wrong in the same way that everything you learned about atoms in school is wrong). Step-by-step thinking could change that: letting the system spend more passes answering a question gives it more time to think. If that’s the case, step-by-step thinking is less like a notepad and more like a stalling tactic, buying time while it works out a difficult answer.
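A toy loop can illustrate the fixed-compute point. This is not a real model, just a sketch of the accounting: every generated token costs exactly one pass through the network, so a long, step-by-step answer simply buys more passes than a terse one.

```python
def forward_pass(context: list[str]) -> str:
    # Stand-in for one pass through the network: fixed cost, one token out.
    return f"tok{len(context)}"

def generate(prompt: list[str], n_tokens: int) -> tuple[list[str], int]:
    context = list(prompt)
    passes = 0
    for _ in range(n_tokens):
        context.append(forward_pass(context))
        passes += 1  # one unit of "thinking" per token, no more and no less
    return context, passes

_, terse_cost = generate(["Q:"], n_tokens=3)     # terse answer: 3 passes
_, verbose_cost = generate(["Q:"], n_tokens=30)  # step-by-step answer: 30 passes
print(terse_cost, verbose_cost)                  # 3 30: the longer answer gets more compute
```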

So which is it? A new paper suggests the latter:

Chain-of-thought responses from language models improve performance on most benchmarks. However, it is still unclear to what extent these performance improvements can be attributed to human-like task decomposition or simply the increased computation enabled by the additional tokens. We show that transformers can use meaningless filler tokens (e.g., ‘……’) instead of a chain of thought to solve two difficult algorithmic tasks that they could not solve by answering without intermediate tokens.

In other words, if you teach a chatbot to print a dot every time it wants to think, it gets better at thinking. That, the researchers caution, is easier said than done. But the discovery has important ramifications for how we use LLMs, in part because it suggests that the workings the systems write out when they show their reasoning may not be all that relevant to the final answer. If the reasoning can be replaced by a string of dots, the model was probably already doing the real work in its head anyway.
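As a rough sketch of the comparison the paper describes (with a hypothetical run_model() stand-in and an illustrative task, not the authors’ actual setup), the same question can be answered with no intermediate tokens, with a written-out chain of thought, or with meaningless filler dots of similar length.

```python
def run_model(question: str, intermediate: str) -> str:
    # Hypothetical stand-in: feed the question plus some intermediate tokens,
    # get a final answer back. Here it just reports how much "thinking" it saw.
    return f"[answer after {len(intermediate)} intermediate characters]"

question = "Does any subset of {2, 7, -5, 4, -8} sum to zero?"

immediate = run_model(question, intermediate="")                             # answer straight away
chain_of_thought = run_model(question, intermediate="2 + (-5) = -3; ... ")   # legible steps
filler_only = run_model(question, intermediate="." * 40)                     # meaningless dots

# The paper's claim: on certain algorithmic tasks, a suitably trained transformer
# does as well with filler_only as with chain_of_thought, which points to the extra
# forward passes, rather than the legible reasoning, doing the work. It also cautions
# that models need specific training to make use of filler tokens.
print(immediate, chain_of_thought, filler_only, sep="\n")
```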

The wider TechScape

Could the internet be good for us? Photograph: Markus Mainka/Alamy
