Elon Musk and more than 1,000 other tech leaders, including Apple co-founder Steve Wozniak, have called for a pause in the “dangerous race” to develop artificial intelligence, which they fear poses “profound risks to society and humanity” and could have “catastrophic” effects.
In an open letter organized by the Future of Life Institute, Musk and others said humanity does not yet know the full range of risks involved in developing the technology.
They are asking all AI labs to pause development of their most powerful systems for at least six months while further risk assessments are conducted.
If any labs refuse, they want governments to “get involved”.
Musk fears that the technology will become so advanced it will not require — or listen to — human intervention.
It’s a widespread fear, acknowledged even by the CEO of OpenAI – the company that created ChatGPT – who said earlier this month that the technology could be harnessed to carry out “massive” cyberattacks.
Musk, Wozniak and other tech leaders were among 1,120 people who signed the open letter calling for an industry-wide pause on the current “dangerous race.”
They say AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.
The letter also detailed the potential risks that human-competitive AI systems pose to society and civilization, including economic and political upheaval, and called on developers to work with policymakers on governance and regulation.
The letter comes as EU police force Europol on Monday joined the chorus of ethical and legal concerns over advanced artificial intelligence such as ChatGPT, warning of the potential for abuse of the system in phishing attempts, disinformation and cybercrime.
Since its launch last year, OpenAI’s ChatGPT, which is backed by Microsoft, has prompted rivals to launch similar products and companies to integrate it or similar technologies into their own applications and products.
Musk has been trying to halt – or at least hinder – the rapid growth of AI technology for years.
In 2017, Musk warned that humanity is “summoning the demon” in its pursuit of the technology.
“With artificial intelligence, we are summoning the demon,” he said in a Vanity Fair article. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon?”
Musk was a co-founder of OpenAI — the company that created ChatGPT — in 2015.
The intention was for OpenAI to operate as a non-profit organization dedicated to researching the risks that artificial intelligence might pose to society.
Musk reportedly feared that OpenAI’s research was lagging behind Google’s, and sought to buy the company.
Now, CEO Sam Altman — who did not sign Musk’s letter — has said Musk is attacking OpenAI publicly.
“Elon is obviously attacking us some on Twitter right now on a few different vectors.”
“I think he’s understandably nervous about the safety of AGI,” he said.
Altman says he’s open to “feedback” about GPT and wants to better understand the risks.
In a podcast interview with Lex Fridman on Monday, Altman said, “There will be harm caused by this tool.”
“There will be harm, and there will be enormous benefits,” he added. “Tools do wonderful good and real bad. And we will minimize the bad and maximize the good.”
In an interview earlier this month, he said people have a right to be “a little scared,” and that he was.
“We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.
“I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, they could be used for offensive cyberattacks.”
Tech leaders’ call to stop dangerous AI: Read the full letter
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research [1] and acknowledged by top AI labs.
As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with artificial intelligence. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects before.
We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.