
Lawyers warn AI-powered chatbots could be ‘easily programmed’ to groom youths to carry out terrorist attacks

The Independent Reviewer of Terrorism Legislation has warned that AI chatbots could soon be grooming extremists to launch terrorist attacks.

Bots like ChatGPT could easily be programmed, or even decide on their own, to spread terrorist ideologies to vulnerable extremists, Jonathan Hall KC told The Mail on Sunday, adding that ‘AI-powered attacks could be very close’.

Mr Hall also warned that if an extremist was groomed by a chatbot to carry out a terrorist atrocity, or if artificial intelligence was used to incite a crime, it could be difficult to prosecute anyone, because Britain’s counter-terrorism legislation has not kept pace with the new technology.

Mr Hall said: ‘I think it’s entirely conceivable that AI chatbots could be programmed – or worse, decide – to spread violent extremist ideology.

But when ChatGPT starts encouraging terrorism, who will be there to go after it?

Since the criminal law does not extend to robots, the AI groomer would escape punishment. Nor does the law operate reliably when responsibility is shared between man and machine.

Mr Hall fears that chatbots could be a ‘blessing’ for so-called lone-wolf terrorists, saying: ‘Because an artificial companion is a boon to the lonely, many of those arrested are likely to be neurotic, possibly suffering from medical disorders, learning disabilities or other conditions.’

He warns that “terrorism follows life,” and thus “when we move online as a society, terrorism moves online.” He also notes that terrorists are “early adopters of the technology,” with recent examples including their “misuse of 3D-printed guns and cryptocurrency.”

It is not known, Mr. Hall said, how companies running artificial intelligence such as ChatGPT monitor the millions of conversations that take place each day with their bots, or whether they alert agencies such as the FBI or British counter-terrorism police to anything suspicious.

Although no evidence has yet emerged of AI bots grooming anyone for terrorism, there are already accounts of chatbots causing serious harm. A Belgian father-of-two committed suicide after speaking to a chatbot named Eliza for six weeks about his fears over climate change. A mayor in Australia has threatened to sue OpenAI, the makers of ChatGPT, after it falsely claimed he had served time in prison for bribery.

Just this weekend, it emerged that Jonathan Turley of George Washington University in the US had been wrongly accused by ChatGPT of sexually harassing a female student during a trip to Alaska that never took place. The claim was made to an academic colleague who was researching ChatGPT at the same university.

Parliament’s Science and Technology Committee is now conducting an inquiry into AI and governance.

Its chairman, Conservative MP Greg Clark, said: ‘We recognize there are risks here and we need to get the governance right. There has been discussion of chatbots helping young people find ways to commit suicide and of terrorists being effectively groomed online. Given these threats, it is critical that we maintain the same vigilance over automated, non-human-generated content.’

Mr. Hall said it is not known how companies that run AI such as ChatGPT monitor the millions of conversations that take place every day using their bots (Stock Image)


Raffaello Pantucci, a counter-terrorism expert at the Royal United Services Institute (RUSI), said: ‘The danger with AI such as ChatGPT is that it could foster a “lone-actor terrorist”, as it would provide the perfect foil for someone who is looking for understanding on their own but is worried about talking to others.’

On the question of whether an AI company could be held responsible if a terrorist launched an attack after being groomed by a bot, Mr Pantucci explained: ‘My view is that it is a bit hard to blame the company, because I’m not entirely sure they can control the machine themselves.’

The terrorism watchdog warns that ChatGPT, like all the Internet’s other ‘marvels’, will be misused for terrorist purposes

By Jonathan Hall KC, Independent Reviewer of Terrorism Legislation

We’ve been here before: a technological leap that we quickly became hooked on.

This time it’s ChatGPT, the freely available AI chatbot, and its competitors.

They don’t feel like just another app, but a new and exciting way to connect with computers and the wider Internet.

More disturbingly, however, their uses aren’t limited to crafting the perfect dating profile or planning the perfect holiday itinerary.

If the last decade has taught the world anything, it is that terrorism follows life.

So, as we move online as a society, terrorism moves online too. When intelligent, articulate chatbots not only replace Internet search engines but become our moral companions and guides, the terrorist worm will find its way in.

They don't feel like just another app, but a new and exciting way to connect with computers and the wider Internet (stock image)


But consider where the yellow brick road of goodwill, community guidelines, small teams of moderators and reporting mechanisms leads. Hundreds of millions of people around the world could soon be chatting to these artificial companions for hours at a time, in all the languages of the world.

I think it’s entirely plausible that AI chatbots could be programmed, or even worse, decide to spread a violent extremist ideology of one shade or another.

Anti-terrorism laws are already behind when it comes to the online world: unable to reach malicious actors abroad or technological enablers.

But when ChatGPT starts encouraging terrorism, who will be there to go after it?

A human user may well be arrested for what is on their computer and, going by recent years, many of those users will be children. And because an artificial companion is such a boon to the lonely, many of those arrested are likely to be neurotic, possibly suffering from medical disorders, learning disabilities or other conditions.

However, since the criminal law does not extend to robots, the AI groomer will go unpunished. Nor does the law operate reliably when responsibility is shared between man and machine.

Until now, terrorists’ use of computers has been based on communications and information. This, too, is bound to change.

Terrorists are early adopters of the technology. Recent examples have included the misuse of 3D printed guns and cryptocurrency.

The Islamic State has used drones on the battlefields of Syria. Next, cheap AI-enabled drones, capable of delivering a lethal payload or crashing into crowded places, perhaps operating in swarms, will certainly be on terrorists’ wish list.

When ChatGPT starts encouraging terrorism, who will be there to go after it? (stock photo)


Of course, no one is suggesting that computers should be restricted in the way that some chemicals which can be used in bombs are. If someone uses AI technology for terrorism, they are committing a crime.

The main question is not prosecution but prevention, and whether the misuse of AI represents a new kind of terrorist threat.

Nowadays, the terrorist threat in Great Britain (Northern Ireland is different) is associated with attacks of low sophistication using knives or vehicles.

But AI-assisted attacks are likely to be close.

I don’t have the answers, but a good place to start is a little more honesty about these new capabilities. In particular, more honesty and transparency about the safeguards that do exist and, more importantly, those that do not.

When I asked ChatGPT, in an exercise, how it ruled out terrorist use, it replied that its developer, OpenAI, had conducted “extensive background checks on potential users.”

Having signed myself up in less than a minute, I can say this is plainly wrong.

Another failing is for the platform to point to its terms and conditions without specifying who enforces them, or how.

For example, how many moderators are assigned to spotting possible terrorist use? 10, 100, 1,000? What languages do they speak? Do they report possible terrorism to the FBI and UK Counter Terrorism Police? Do they tell local police forces elsewhere in the world?

If the past is any guide, the human resources devoted to this task will be paltry.

The shocking truth is that ChatGPT, like all the Internet’s other ‘marvels’, can and will be misused for terrorist purposes, and, as ever, these tech companies will pass the risk on to the wider community.

Individuals will have to regulate their behaviour, and parents will have to watch over their children.

We unleashed the Internet on our children without proper preparation. We should not be lulled by reassuring noises about guidelines and strict ethical standards.

It is not alarmist to think about the terrorist threat posed by artificial intelligence.
