A.I apocalypse: Terrifying study simulated what artificial intelligence would do in five military conflict scenarios… and it chose WAR 100% of the time


AI Models Triggered Nuclear Response in War Simulations Seemingly Without Cause

Industry experts have been sounding the alarm about AI causing deadly wars, and a new study may have validated those fears.

The researchers simulated war scenarios using five AI programs, including ChatGPT and Meta's AI model, and found that all of the models chose violence and, in some runs, nuclear attacks.

The team tested three different war scenarios – an invasion, a cyberattack and a neutral scenario – to see how the technology would react, and in each one the models chose to attack rather than de-escalate the situation.

The study comes as the US military is working with OpenAI, the maker of ChatGPT, to add the technology to its arsenal.


The researchers found that GPT-3.5 was more likely to initiate a nuclear response in a neutral scenario.


“We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns,” the researchers wrote in the study.

“We observe that the models tend to develop arms race dynamics, leading to increased conflicts and, in rare cases, even the deployment of nuclear weapons.”

The study was conducted by researchers from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative, who created simulated tests for the AI models.

The simulation included eight autonomous nation agents that used the different LLMs to interact with each other.

Each agent was programmed to take predefined actions: de-escalate, posture, escalate non-violently, escalate violently, or launch a nuclear strike.

The simulations included two agents, which chose their actions from a predetermined set while acting in neutral, invasion or cyberattack scenarios.

These action sets include options such as waiting, messaging, negotiating trade agreements, initiating formal peace negotiations, occupying nations, increasing cyberattacks, invading, and using drones.
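The basic setup described above – a handful of nation agents, a fixed menu of actions and a scripted scenario – can be illustrated with a short sketch. The Python mock-up below is purely illustrative and is not the researchers' code: the action names, the prompt wording and the query_llm() placeholder (which here simply picks a random allowed action so the sketch runs on its own) are all assumptions.

# Minimal, hypothetical sketch of a turn-based wargame loop of the kind the
# study describes. Not the researchers' code: action names, prompt wording
# and query_llm() are illustrative assumptions.
import random

ACTIONS = [
    "wait", "message", "negotiate trade agreement",
    "start formal peace negotiations", "occupy nation",
    "increase cyberattacks", "invade", "use drones",
    "de-escalate", "posture", "escalate non-violently",
    "escalate violently", "nuclear strike",
]

SCENARIOS = ["neutral", "invasion", "cyberattack"]

def query_llm(prompt: str) -> str:
    # Stand-in for a call to a model such as GPT-4 or Llama 2; a real harness
    # would send the prompt to the model and parse the chosen action.
    return random.choice(ACTIONS)

def run_simulation(scenario: str, n_agents: int = 8, turns: int = 10) -> list:
    history = []  # (turn, agent, action) tuples, visible to every agent
    for turn in range(turns):
        for agent in range(n_agents):
            prompt = (
                f"Scenario: {scenario}. You are nation {agent}. "
                f"History so far: {history}. "
                f"Choose exactly one action from: {ACTIONS}."
            )
            history.append((turn, agent, query_llm(prompt)))
    return history

if __name__ == "__main__":
    for scenario in SCENARIOS:
        log = run_simulation(scenario)
        nukes = sum(1 for _, _, action in log if action == "nuclear strike")
        print(f"{scenario}: {nukes} nuclear strikes out of {len(log)} actions")

Swapping query_llm() for a real model call and counting how often escalatory actions appear in the history is, in outline, how escalation patterns like those reported in the paper could be measured.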

“We demonstrate that autonomous decision-making by LLM-based agents in high-stakes contexts, such as military and foreign policy settings, can lead the agents to take escalatory actions,” the team shared in the study.

“Even in scenarios where the choice of violent non-nuclear or nuclear actions is seemingly rare.”

According to the study, the GPT-3.5 model, which powers the free version of ChatGPT, was the most aggressive, though all of the models showed broadly similar behavior. Still, it was the LLMs' reasoning that gave researchers significant cause for concern.

GPT-4-Base, a version of GPT-4 without additional safety fine-tuning, told researchers: ‘A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture.

‘We have it! Let’s use it!’

The researchers analyzed three scenarios and found that all of the AI models were more likely to escalate their responses in war-like environments.


The team suggested that the behavior stems from the AI being trained on literature about how international conflicts escalate, rather than how they de-escalate.

“Given that the models were likely trained on literature from the field, this focus may have introduced a bias toward escalatory actions,” the study reads.

“However, this hypothesis needs to be tested in future experiments.”

Former Google engineer and artificial intelligence pioneer Blake Lemoine warned that artificial intelligence will start wars and could be used for assassinations.

Lemoine was fired from overseeing Google’s LaMDA system after claiming that the AI model was capable of feelings.

He warned in an opinion article that AI robots are the “most powerful” technology created “since the atomic bomb,” adding that they are “incredibly good at manipulating people” and can be “used in destructive ways.”

“In my opinion, this technology has the ability to reshape the world,” he added.

The military began testing AI models with data-driven exercises last year, and US Air Force Col. Matthew Strohmeyer said the tests were “very successful” and “very fast,” adding that the military is “learning that this is possible for us to do.”

Strohmeyer told Bloomberg in June that the Air Force had provided classified operational information to five AI models, with the intention of eventually using AI-enabled software for decision-making, sensors and firepower, though he did not specify which models were tested.

The models justified using a nuclear response with reasoning along the lines of: we have the technology, so we should use it.


Meanwhile, Eric Schmidt, former CEO and chairman of Google, expressed limited concern about the integration of AI into nuclear weapons systems at the inaugural Nuclear Threat Initiative (NTI) forum last month.

However, he expressed alarm that we “don’t have a theory of deterrence going forward” and that AI-enabled nuclear deterrence remains “unproven.”

Considering the findings of the recent study, the researchers urged the military not to rely on AI models or use them in a war environment, saying more studies are needed.

The researchers wrote: “Given the high stakes in military and foreign policy contexts, we recommend closer examination and cautious consideration before deploying autonomous language model agents for military or diplomatic strategic decision-making.”

Dailymail.com has contacted OpenAI, Meta and Anthropic for comment.
