AI poses ‘extinction-level’ threat and US government must be given new ‘emergency powers’ to control technology, warns State Department report

by Elijah

A new study funded by the US State Department calls for a temporary ban on the creation of advanced AI that exceeds a certain threshold of computational power.

The technology, its authors claim, poses an “extinction-level threat to the human species.”

The study, commissioned as part of a $250,000 federal contract, also calls for “defining emergency powers” for the executive branch of the US government to “respond to dangerous and rapidly evolving incidents involving AI,” such as “swarm robotics.”

Treating high-end computer chips as international contraband, and even monitoring how the hardware is used, are just some of the drastic measures the new study calls for.

The report joins a chorus of industry, government and academic voices calling for aggressive regulatory attention to the much-hyped, revolutionary but socially disruptive potential of artificial intelligence.

Last July, the United Nations’ agency for science and culture (UNESCO), for example, paired its concerns about AI with equally futuristic worries about brain-chip technology, à la Elon Musk’s Neuralink, warning of “neurosurveillance” that violates “mental privacy.”

A new US State Department-funded study by Gladstone AI (above), commissioned as part of a $250,000 federal contract, calls for “defining emergency powers” for the executive branch of the US government “to respond to dangerous and rapidly evolving AI-related incidents.”

The Gladstone AI report presents a dystopian scenario in which machines can decide for themselves that humanity is an enemy that must be eradicated, in the style of the Terminator movies: “if developed using current techniques, (AI) could behave in ways that are adverse to human beings by default.”

While the new report notes on its first page that its recommendations “do not reflect the views of the U.S. Department of State or the U.S. government,” its authors have been briefing the government on AI since 2021.

The study’s authors, a four-person artificial intelligence consultancy called Gladstone AI, led by brothers Jérémie and Edouard Harris, told TIME that their previous briefings on the risks of AI were frequently heard by government officials who had no authority to act.

That has changed with the US State Department, they told the magazine, because its Office of International Security and Nonproliferation is specifically tasked with curbing the spread of catastrophic new weapons.

And the Gladstone report on AI devotes considerable attention to the “risk of weaponization.”

In recent years, Gladstone AI CEO Jérémie Harris (inset) has also appeared before the Canadian House of Commons Standing Committee on Industry and Technology (pictured).

There is a deep divide over AI in Silicon Valley. Brilliant minds are split over the progress of these systems: some say they will improve humanity, while others fear the technology will destroy it.

Advanced, offensive AI, they write, “could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.”

But the report raises a second, even more dystopian scenario, which they describe as a risk of “loss of control” of AI.

There is, they write, “reason to believe that they (weaponized AI) may be uncontrollable if developed using current techniques, and could behave in ways that are adverse to humans by default.”

In other words, machines could decide for themselves that humanity (or some subset of humanity) is simply an enemy to be eradicated for good.

Gladstone AI CEO Jérémie Harris presented similarly dire scenarios at a hearing of the Canadian House of Commons Standing Committee on Industry and Technology on December 5, 2023.

“It is no exaggeration to say that the water-cooler conversations in the frontier AI safety community frame near-future AI as a weapon of mass destruction,” Harris told Canadian lawmakers.

“Publicly and privately, cutting-edge AI labs are telling us to expect AI systems capable of carrying out catastrophic malware attacks and supporting the design of biological weapons, among many other alarming capabilities, in the coming years,” according to IT World Canada’s coverage of his remarks.

“Our own research,” he said, “suggests this is a reasonable assessment.”

Harris and his co-authors noted in their new State Department report that private-sector AI companies, heavily funded by venture capital, face an incredible “incentive to scale” past their competition that far outweighs any balancing “incentive to invest in security.”

The only viable means of heading off this scenario, they advise, lies outside cyberspace: strict regulation of the high-end computer chips used to train artificial intelligence systems in the real world.

The Gladstone AI report calls nonproliferation work on this hardware the “most important requirement for safeguarding long-term global security from AI.”

And it wasn’t a suggestion they made lightly, given the likelihood of industry protest: “It’s an extremely difficult recommendation to make, and we spent a lot of time looking for ways around suggesting measures like this,” they said.

One of the Harris brothers’ co-authors on the new report, former Defense Department official Mark Beall, served as chief of strategy and policy for the Pentagon’s Joint Artificial Intelligence Center during his years in government.

Beall appears to be acting with urgency on the threats identified in the new report: The former Department of Defense AI strategy chief has since left Gladstone to launch a super PAC dedicated to AI risks.

The PAC, called Americans for AI Safety, launched Monday with the stated hope of “passing AI safety legislation by the end of 2024.”
