Reporting requirements are essential for alerting the government to potentially dangerous new capabilities in increasingly powerful AI models, says a US government official who works on AI issues. The official, who requested anonymity to speak freely, points to OpenAI's acknowledgment of its latest model's "inconsistent rejection of requests for nerve agent synthesis."
The official says the reporting requirement is not too onerous. They argue that, unlike AI regulations in the European Union and China, Biden’s EO reflects “a very broad and light approach that continues to encourage innovation.”
Nick Reese, who served as the Department of Homeland Security's first director of emerging technology from 2019 to 2023, rejects conservative claims that the reporting requirement will endanger companies' intellectual property. And he says it could actually benefit startups by encouraging them to develop "more computationally efficient," less data-heavy AI models that fall under the reporting threshold.
The power of AI makes government oversight imperative, says Ami Fields-Meyer, who helped draft Biden’s EO as a White House technology official.
“We’re talking about companies that say they’re building the most powerful systems in the history of the world,” Fields-Meyer says. “The government’s first obligation is to protect the people. ‘Trust me, we’ve got it’ is not a particularly compelling argument.”
Experts praise NIST’s security guidance as a vital resource for building protections into new technology. They point out that flawed AI models can produce serious social harms, including discrimination in housing and lending and the wrongful denial of government benefits.
Trump’s own first-term AI order required federal AI systems to respect civil rights, a goal that will require research into societal harms.
The AI industry has largely welcomed Biden’s security agenda. “What we’re hearing is that, generally speaking, it’s helpful to have all of this explained,” the US official says. For startups with small teams, it “expands your people’s ability to address these concerns.”
Rolling back Biden’s EO would send an alarming signal that “the US government is going to take a hands-off approach to AI security,” says Michael Daniel, a former presidential cyber adviser who now heads the Cyber Threat Alliance, a nonprofit information-sharing organization.
As for competition with China, EO advocates say the security rules will actually help the United States prevail by ensuring that American AI models perform better than their Chinese rivals and are protected from Beijing’s economic espionage.
Two very different paths
If Trump wins the White House next month, a sea change is expected in how the government approaches AI safety.
Republicans want to prevent harm from AI by applying “existing tort and statutory laws” rather than enacting sweeping new restrictions on the technology, Helberg says, and they favor “focusing much more on maximizing the opportunities that AI brings, rather than focusing too much on risk mitigation.” That would likely spell doom for the reporting requirement and possibly for some of the NIST guidelines.
The reporting requirement could also face legal challenges now that the Supreme Court has weakened the deference that courts used to give agencies when evaluating their regulations.
And the Republican backlash could even jeopardize NIST’s voluntary AI-testing partnerships with leading companies. “What happens to those commitments in a new administration?” the US official asks.
This polarization around AI has frustrated technologists who fear Trump will undermine the search for safer models.
“Along with the promises of AI, there are dangers,” says Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, “and it is vital that the next president continues to ensure the security of these systems.”