The writer is international policy director at Stanford University’s Cyber Policy Center and serves as a special advisor to Margrethe Vestager
Technology companies recognize that the race for AI dominance is being decided not only in the market, but also in Washington and Brussels. Rules for the development and deployment of their AI products will have an existential impact on them, but for now remain up in the air. So executives are trying to get ahead and set the tone, claiming they are best placed to regulate the very technologies they produce. AI may be new, but the talking points are recycled: they are the same ones Mark Zuckerberg used about social media and Sam Bankman-Fried about crypto. Such claims should not distract democratic legislators again.
Imagine JPMorgan’s CEO explaining to Congress that because financial products are too complex for lawmakers to understand, banks must decide for themselves how to prevent money laundering, enable fraud detection, and determine liquidity-to-loan ratios. He would be laughed out of the room. Angry voters would point out how well self-regulation worked in the global financial crisis. From big tobacco to big oil, we’ve learned the hard way that corporations can’t make disinterested rules. They are neither independent nor capable of creating countervailing powers for themselves.
Somehow that fundamental truth has been lost when it comes to AI. Legislators are eager to defer to companies and want their advice on regulation; senators even asked OpenAI CEO Sam Altman to name potential industry leaders to oversee a putative national AI regulator.
Within industry circles, the call for AI regulation has taken on an almost apocalyptic tone. Scientists warn that their creations are too powerful and could go rogue. A recent letter, signed by Altman and others, warned that AI poses a threat to humanity's survival akin to nuclear war. You would think these fears would spur executives into action, yet despite signing, virtually none have changed their own behavior. Perhaps framing how we think about guardrails around AI is the real goal. Our ability to navigate questions about what kind of regulation is needed also depends heavily on how we understand the technology itself. The statements have drawn attention to AI's existential risk. But critics argue that prioritizing the prevention of that risk at some point in the future overshadows much-needed work against discrimination and bias that should be happening today.
Warnings about the catastrophic risks of AI, endorsed by the very people who could stop pushing their products into society, are disorienting. The open letters make the signatories seem powerless in their desperate appeals. But those raising the alarm already have the power to slow or pause the potentially dangerous progress of artificial intelligence.
Former Google CEO Eric Schmidt argues that companies are the only ones equipped to develop guardrails, while governments lack the expertise. But legislators and regulators are not experts in agriculture, crime fighting or drug prescribing either, yet they regulate all of those activities. They certainly should not be deterred by the complexity of AI; if anything, it should encourage them to take responsibility. And Schmidt has inadvertently reminded us of the first challenge: breaking the monopoly on access to proprietary information. With independent research, realistic risk assessments and guidelines for the enforcement of existing regulations, a debate on the need for new measures would rest on evidence.
Executives' actions speak louder than their words. Just days after Sam Altman welcomed AI regulation in his congressional testimony, he threatened to pull the plug on OpenAI's operations in Europe. Realizing that EU regulators did not take kindly to threats, he switched back to a charm offensive and vowed to open an office in Europe.
Lawmakers should remember that businesspeople are primarily concerned with profit rather than societal impact. It is high time to move beyond pleasantries and define specific goals and methods for AI regulation. Policymakers should not let tech CEOs shape and control the narrative, let alone the process.
A decade of technological disruption has highlighted the importance of independent oversight. That principle matters even more when power over technologies like AI is concentrated in a handful of companies. We should listen to the powerful individuals who run them, but never take their words at face value. Their grand claims and ambitions should instead spur regulators and legislators to act on the basis of their own expertise: that of the democratic process.