A new open letter calling for urgent regulation to “reduce the risk of AI extinction” has been signed by more than 350 industry experts, including several who are developing the technology.
The 22-word statement to Congress says that mitigating the risk “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
While the document doesn’t provide details, the statement is intended to convince policymakers to start planning in case AI goes rogue, just as there are plans for pandemics and nuclear wars.
Altman was joined by other well-known AI leaders, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and executives from Microsoft and Google.
They also included Geoffrey Hinton and Yoshua Bengio – two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work in deep learning – and professors from institutions ranging from Harvard to China’s Tsinghua University.
The letter’s signatories, led by Altman, are calling on Congress to legislate on AI, with Altman admitting the technology “could go completely wrong”.

The San Francisco-based nonprofit Center for AI Safety (CAIS), which released the brief statement, singled out Meta, where AI’s third “godfather”, Yann LeCun, works, for not signing the letter.
Elon Musk and a group of AI experts and industry executives were among the first to cite such societal-scale risks, in an open letter published in April.
Musk and more than 1,000 industry experts called for a pause in the “dangerous race” to advance AI, saying more risk assessments are needed before humans lose control and it becomes a conscious, man-hating species.
At that point, AI would have reached the singularity, meaning it had surpassed human intelligence and could think independently.
AI would no longer need humans or listen to them, allowing it to steal nuclear codes, create pandemics and start world wars.
DeepAI founder Kevin Baragona, who signed the letter, told DailyMail.com: “It’s almost like a war between chimpanzees and humans.
“The humans obviously win because we are much smarter and can use more advanced technology to beat them.
“If we’re like the chimpanzees, the AI will either destroy us or we’ll become addicted to it.”

The fear of AI comes as experts predict it will reach the singularity by 2045, the point at which the technology surpasses human intelligence and can no longer be controlled.
Concerns about artificial intelligence seemed to surface with the launch of ChatGPT in November.
The chatbot is a large language model trained on massive text data, allowing it to generate eerily human-like text in response to a given prompt.
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work, and while many see it as a virtual assistant, many brilliant minds see it as a step toward the end of humanity.
In its simplest form, AI is a field that combines computer science and robust data sets to enable problem solving.
The technology enables machines to learn from experience, adapt to new inputs and perform human tasks.
Recent advances in AI have created tools that proponents say can be used in applications from medical diagnostics to writing legal briefs, but this has raised fears that the technology could lead to privacy violations, power misinformation campaigns and create problems with “smart machines” that think for themselves.
Altman was questioned by lawmakers for five hours this month about how ChatGPT and other models could change “human history” for better or for worse, with the technology likened to the printing press or the atomic bomb.
Altman, who looked red and wide-eyed during the exchange about the future AI could create, admitted his “worst fear” is that “significant harm” could be done to the world with the technology.
“If this technology goes wrong, it can go very wrong, and we want to speak up about that. We want to work with the government to prevent that,” he continued.