How about this for a bomb? The United States must make AI its next Manhattan Project | John Naughton

Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines could be created and what the implications of such a technology might be. One was that such a machine, if created, would be difficult to control and might even take over the world in order to achieve its goals (which in Bostrom’s famous thought experiment involved making paperclips).

The book was a huge bestseller, sparking lively debate but also attracting plenty of disagreement. Critics complained that it was based on a simplistic view of “intelligence,” that it overestimated the likelihood of superintelligent machines emerging any time soon, and that it did not propose credible solutions to the problems it raised. But it had the great merit of making people think about a possibility that had until then been confined to the remotest margins of academia and science fiction.


Now, 10 years later, another shot arrives at the same target. This time, however, it is not a book but a substantial (165-page) essay with the title Situational Awareness: The Decade Ahead. The author is a young German, Leopold Aschenbrenner, who now lives in San Francisco and hangs out with the most cerebral fringe of Silicon Valley. On paper, he sounds a bit like a Sam Bankman-Fried-style wunderkind: a maths prodigy who graduated from an elite US university in his teens, spent time at Oxford with the Future of Humanity Institute crowd and worked on OpenAI’s “superalignment” team (now disbanded), before starting an investment firm focused on AGI (artificial general intelligence) with funding from the Collison brothers, Patrick and John, founders of Stripe, a pair of smart cookies who don’t back losers.

So Aschenbrenner is smart, but he also has skin in the game. That second point may be relevant because the central idea of his mega-essay is essentially that superintelligence is coming (with AGI as a springboard) and that the world is not ready for it.

The essay has five sections. The first traces the path from GPT-4 (where we are now) to AGI (which he believes could arrive as early as 2027). The second traces the hypothetical path from AGI to actual superintelligence. The third analyzes four “challenges” that superintelligent machines will pose to the world. The fourth describes what he calls the “blueprint” needed to manage a world equipped with (or dominated by?) superintelligent machines. The fifth is Aschenbrenner’s message to humanity in the form of three “principles” of “AGI realism.”

In his view of how AI will progress in the near future, Aschenbrenner is basically an optimistic determinist, in the sense that he extrapolates from the recent past on the assumption that trends will continue. He cannot look at an upward-sloping graph without extending it. He grades LLMs (large language models) by ability: GPT-2 was at “preschooler” level; GPT-3 was an “elementary schooler”; GPT-4 is a “smart high schooler”; and a massive increase in computing power will apparently take us by 2028 to “models as smart as PhDs or experts that can work alongside us as co-workers.” Incidentally, why do AI promoters always regard PhDs as the epitome of human perfection?

After 2028 comes the big leap: from AGI to superintelligence. In Aschenbrenner’s universe, AI does not stop at human-level ability. “Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into a year. We would rapidly go from human-level to vastly superhuman AI systems. The power – and danger – of superintelligence would be dramatic.”

The third section of the essay contains an exploration of what that world might be like, focusing on four aspects of it: the unimaginable (and environmentally disastrous) computational requirements needed to run it; the difficulties of maintaining the security of AI laboratories in such a world; the problem of aligning machines with human purposes (difficult but not impossible, Aschenbrenner thinks); and the military consequences of a world of superintelligent machines.

It is only when he gets to the fourth of these topics that Aschenbrenner’s analysis really begins to unravel. Running through his thinking, like the lettering in a stick of Blackpool rock, is the analogy with nuclear weapons. He sees the United States as being at the stage with AI that it was at after J. Robert Oppenheimer’s first Trinity test in New Mexico: ahead of the USSR, but not by much. And in this metaphor, of course, China plays the role of the Soviet empire.

Suddenly, superintelligence has gone from being a problem for humanity to an urgent matter of US national security. “America leads the way,” he writes. “We just have to preserve it. And we’re ruining it right now. Above all, we must rapidly and radically lock down the AI labs, before key AGI breakthroughs leak out in the next 12 to 24 months… We must build the compute clusters in America, not in dictatorships offering money. And yes, American AI labs have a duty to work with the intelligence community and the military. America’s lead on AGI will not ensure peace and freedom simply by building the best AI applications. It’s not pretty, but we must build AI for American defense.”

All that is needed is a new Manhattan Project. And an AGI industrial complex.

What I’ve been reading

Despot shot
In the former Eastern bloc, they fear a Trump presidency is an interesting piece in the New Republic about people who know a thing or two about living under tyranny.

Normandy revisited
80 years after D-Day: World War II and the ‘great acceleration’ is historian Adam Tooze’s reflection on the anniversary of the war.

Legal impediment
Monopoly Raid: The Harvey Weinstein of Antitrust is Matt Stoller’s blog post about Joshua Wright, the lawyer who for many years had a devastating impact on antitrust enforcement in the United States.
