Pentagon launches tech to stop AI-powered killing machines from going rogue on the battlefield due to robot-fooling visual ‘noise’

by Jack
Computer scientists at defense contractor MITRE Corp. managed to create visual noise that an AI mistook for apples on a grocery store shelf, a bag left outdoors, and even people.

Pentagon officials have sounded the alarm about “unique classes of vulnerabilities for AI or autonomous systems,” which they hope new research can address.

The program, called Guaranteeing AI Robustness Against Deception (GARD), has been tasked since 2022 with identifying how visual data or other electronic signal inputs to AI could be manipulated through the calculated introduction of noise.

Computer scientists at one of GARD’s defense contractors have experimented with kaleidoscopic patches designed to trick AI-based systems into making false identifications.

“Basically, by adding noise to an image or a sensor, maybe you can break a machine learning algorithm,” a senior Pentagon official who led the research explained Wednesday.
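DARPA has not published the specific algorithms GARD studies, but the kind of attack Turek describes is well documented in open research. Below is a minimal sketch, in Python with PyTorch, of one classic technique, the Fast Gradient Sign Method (FGSM); the pretrained ResNet-18 classifier is simply a stand-in for whatever model an attacker might target.

```python
# Illustrative sketch only: the article does not say GARD uses FGSM.
# FGSM is one classic way of "adding noise to an image" to break a classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def fgsm_noise(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03):
    """Perturb `image` (N, 3, H, W, values in [0, 1]) so the model misfires."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss;
    # the change is nearly invisible to humans but can flip the prediction.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```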

The news comes as fears that the Pentagon has been “building killer robots in the basement” have allegedly led to stricter AI rules for the US military, mandating that all systems must be approved before deployment.

A bus packed with civilians, for example, could be mistakenly identified as a tank by an AI if it were tagged with the right “visual noise,” as a ClearanceJobs national security reporter proposed. The Pentagon program has spent $51,000 on the research since 2022

“With knowledge of that algorithm, sometimes it is also possible to create physically feasible attacks,” added that official, Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

Technically, it is feasible to “trick” an AI’s algorithm into making mission-critical errors, causing it to identify patterned patches or stickers as real physical objects that do not actually exist.

A bus full of civilians, for example, could be mistakenly identified as a tank by an AI if it were tagged with the right “visual noise,” as a national security reporter at the site ClearanceJobs proposed as an example.

In short, these cheap and lightweight “noise” tactics could cause vital military AI to misclassify enemy combatants as allies, and vice versa, during a critical mission.
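The patches themselves are not public, but the mechanics of such an attack are straightforward to sketch. The illustrative snippet below shows how a small printed patch, rendered as a pixel tensor, would be composited onto a scene before a classifier sees it; the gradient-based optimization loop that makes the pattern adversarial is omitted here.

```python
import torch

def paste_patch(scene: torch.Tensor, patch: torch.Tensor, x: int, y: int):
    """Overlay a small patch onto a (3, H, W) image tensor at position (x, y)."""
    _, ph, pw = patch.shape
    out = scene.clone()
    out[:, y:y + ph, x:x + pw] = patch
    return out

# In a real attack the patch's pixels are optimized (by gradient descent
# against a victim model) until the classifier reports a chosen wrong
# label -- e.g. "tank" -- for almost any scene the patch appears in.
```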

Researchers with the modestly budgeted GARD program have spent $51,000 investigating visual and signal noise tactics since 2022, Pentagon audits show.

A 2020 MITRE study illustrated how AI can interpret visual noise that may appear merely decorative or inconsequential to human eyes, such as a “Magic Eye” poster from the 1990s, as a solid object. Above, MITRE visual noise tricks an AI into seeing apples

U.S. Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities Michael Horowitz explained at an event in January that the new Pentagon directive “does not prohibit the development of any (AI) systems,” but “will make clear what is and what is not permitted.” Above, a fictional killer robot from the Terminator film franchise.

What is public about the program’s work includes studies from 2019 and 2020 illustrating how visual noise that may appear merely decorative or inconsequential to human eyes, such as a “Magic Eye” poster from the 1990s, can be interpreted by AI as a solid object.

Computer scientists at defense contractor MITRE Corporation managed to create visual noise that an AI mistook for apples on a grocery store shelf, a bag left outdoors, and even people.

“Whether they are physically achievable attacks or noise patterns that are added to artificial intelligence systems,” Turek said Wednesday, “the GARD program has built state-of-the-art defenses against them.”

“Some of those tools and capabilities have been provided to the CDAO (the Department of Defense’s Chief Digital and Artificial Intelligence Office),” according to Turek.

The Pentagon formed the CDAO in 2022; it serves as a hub to facilitate faster adoption of AI and related machine learning technologies across the military.
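Turek did not detail which defenses were handed to the CDAO. One widely studied defense from the open literature is randomized smoothing, which takes a majority vote over many randomly noised copies of an input so that no single crafted pattern dominates. A minimal sketch, assuming a generic classifier called `model`, follows.

```python
import torch

def smoothed_predict(model, image: torch.Tensor, sigma: float = 0.25, n: int = 64):
    """Majority-vote label over `n` Gaussian-noised copies of one (3, H, W) image."""
    noisy = image.unsqueeze(0) + sigma * torch.randn(n, *image.shape)
    with torch.no_grad():
        labels = model(noisy.clamp(0, 1)).argmax(dim=1)
    # A crafted patch or noise pattern must now fool most random variants
    # of the image at once, which is far harder than fooling it just once.
    return labels.mode().values.item()
```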

The Department of Defense (DoD) recently updated its rules on AI amid “a lot of confusion about” how it plans to use autonomous decision-making machines on the battlefield, according to Michael Horowitz, U.S. Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities.

Horowitz explained at an event in January that the “directive does not prohibit the development of any (AI) systems,” but will “make clear what is and what is not permitted” and maintain a “commitment to responsible behavior” as the Pentagon develops lethal autonomous systems.

While the Pentagon believes the changes should reassure the public, some have said they are “not convinced” of the efforts.

Mark Brakel, director of advocacy organization Future of Life Institute (FLI), told DailyMail.com in January: “These weapons carry a huge risk of inadvertent escalation.”

He explained that AI-powered weapons could misinterpret something as innocuous as a ray of sunlight, perceive it as a threat, and attack foreign powers without cause, even without any intentional adversarial “visual noise.”

Brakel said the result could be devastating because “without meaningful human control, AI-powered weapons are like the Norwegian rocket incident (a near-nuclear Armageddon) on steroids and could increase the risk of accidents in hotspots like the Taiwan Strait.”

DailyMail.com has contacted the Department of Defense for comment.

The Defense Department has been aggressively pushing to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack a target without human intervention.
