DeepAI founder Kevin Baragona said that artificial intelligence has become the nuclear weapon of software
One tech magnate has described the race for perfection in artificial intelligence (AI) as the nuclear arms race of the 21st century.
Kevin Baragona was one of more than 1,000 leading experts to sign an open letter from the Future of Life Institute calling for a pause in the “dangerous race” to develop ChatGPT-like AI.
Likening it to the invention of the atomic bomb in the 1940s, Baragona told DailyMail.com that “superintelligence is like a nuclear weapon for software.”
“A lot of people have debated whether or not we should continue to develop it,” he continued.
Americans grappled with a similar question in the 1940s, during the development of the weapon of mass destruction known as the atomic bomb.
“It’s akin to a war between chimpanzees and humans,” Baragona, who signed the letter, told DailyMail.com.
“Obviously, humans win because we are much smarter and can leverage more advanced technology to defeat them.
“If we are like chimpanzees, artificial intelligence will destroy us, or we will be enslaved by it.”
These concerns come amid the extraordinary rise of ChatGPT, which has swept the world in recent months, passing demanding medical and legal exams that take humans months of preparation.
The power of ChatGPT-like artificial intelligence has sparked a civil war in Silicon Valley.
Elon Musk and Apple co-founder Steve Wozniak signed the letter calling for a pause on AI, while Bill Gates and Google CEO Sundar Pichai did not.
“While I can only speculate as to why Gates and Sundar didn’t sign the letter to pause advanced AI research, I believe they didn’t because they signed the checks to accelerate the progress of AI,” Baragona said.
Microsoft, which Gates co-founded, has invested heavily in OpenAI, the creator of ChatGPT.
In January, it was reported that Microsoft had invested an additional $10 billion in the startup as it competes with Google to bring new AI breakthroughs to market.
Concerns about artificial intelligence come as experts predict it will reach the singularity by 2045, the point at which technology surpasses human intelligence and escapes our control.
Microsoft also added AI to its Bing search engine in February, integrating the powers of ChatGPT.
On March 21, Google opened Bard, its own natural-language chatbot, to the public.
The California company was cautious in its rollout, wary of its technology generating inaccurate facts, but Bard’s first impressions suggested it had been rushed to market.
It’s not yet clear how Bard will fare against the likes of OpenAI’s ChatGPT and Microsoft’s AI-powered Bing.
“Microsoft is investing heavily in OpenAI, and Google is investing heavily in Anthropic,” Baragona told DailyMail.com.
“They may feel that it is not the time to recoil from unfounded fears of potential negative consequences.”
Musk, Wozniak and more than 1,000 tech leaders signed an open letter on Wednesday calling for a six-month pause in AI development.
The group said more risk assessment is needed before humans lose control and AI becomes a sentient, human-hating force.
Bill Gates and Google CEO Sundar Pichai did not sign the open letter with Musk. Both have invested heavily in the development of artificial intelligence and see the technology as the way of the future.
At that point, artificial intelligence will have reached the singularity, meaning it has surpassed human intelligence and thinks independently.
The AI will no longer need or listen to humans, allowing it to steal nuclear codes, create epidemics, and spark global wars.
Gates and Pichai are on the other side of the aisle.
They hail ChatGPT-like AI as the “most important” innovation of our time – saying it could solve climate change, cure cancer and boost productivity.
OpenAI launched ChatGPT in November, which was an instant hit worldwide.
The chatbot is a large language model trained on massive amounts of text data, allowing it to generate eerily human-like text in response to a given prompt.
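The core idea, predicting likely next words from patterns in training text, can be sketched with a toy model. This is not ChatGPT’s actual architecture (which uses neural networks trained on billions of words); a simple bigram table and a tiny made-up corpus stand in for the training step purely for illustration:

```python
import random
from collections import defaultdict

# Toy illustration of a language model: "train" by counting which word
# follows which in a corpus, then generate text one word at a time.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # more frequent pairs appear more often

def generate(prompt: str, length: int = 5, seed: int = 0) -> str:
    """Extend the prompt by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the cat"))
```

Real models replace the bigram table with billions of learned parameters and condition on the entire prompt rather than just the last word, which is what produces the eerily human-like output.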
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work. While many see it as little more than a virtual assistant, some brilliant minds see it as the end of humanity.
Elon Musk and Apple co-founder Steve Wozniak have signed a letter protesting technology that “poses profound dangers to humanity.”
Musk and Wozniak fear that AI will advance beyond human control, and they have called for a six-month pause so the risks can be assessed.
In its simplest form, AI is a field that combines computer science with powerful data sets to enable problem solving.
This technology allows machines to learn from experience, adapt to new inputs, and perform human-like tasks.
These systems, which span the subfields of machine learning and deep learning, use artificial intelligence algorithms to build expert systems that make predictions or classifications based on input data.
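What “making classifications based on input data” means in practice can be shown with one of the simplest machine-learning methods, a nearest-neighbor classifier. The weather features and labels below are invented for the example:

```python
import math

# Minimal sketch of learning from data: a 1-nearest-neighbor classifier.
# It memorizes labeled examples, then labels a new input by copying the
# label of the closest stored example.

# (hours of sunshine, rainfall in mm) -> weather label (made-up data)
training_data = [
    ((9.0, 0.0), "sunny"),
    ((8.0, 1.0), "sunny"),
    ((2.0, 12.0), "rainy"),
    ((1.0, 15.0), "rainy"),
]

def classify(point):
    """Predict a label for `point` from the closest training example."""
    def distance(example):
        features, _label = example
        return math.dist(features, point)  # Euclidean distance
    _, label = min(training_data, key=distance)
    return label

print(classify((8.5, 0.5)))   # lands near the "sunny" examples
print(classify((1.5, 13.0)))  # lands near the "rainy" examples
```

Adding more examples improves the predictions without changing the code, which is the sense in which such systems “learn from experience” rather than follow hand-written rules.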
“Stopping AI development is like putting toothpaste back in the tube,” Scott Opitz, chief technology officer of intelligent automation company ABBYY, said in a statement. “AI applications are pervasive and affect virtually every aspect of our lives.
“While commendable, putting the brakes on now through a voluntary pause may be implausible.
“What is needed is a concerted and good-faith effort between industry and lawmakers to pass commonsense regulations that embrace ethical AI principles based on human-centered values of fairness, transparency, and accountability.”
Hollywood has long stoked humanity’s fears of AI, usually depicting it as sinister, as in The Matrix and The Terminator, which paint a picture of robot overlords enslaving the human race.
However, the idea is resonating throughout Silicon Valley as more than 1,000 tech experts believe it could become our reality.
That would become possible if AI reaches the singularity, a hypothetical future in which technology surpasses human intelligence and changes the course of our evolution, something some experts expect by 2045. First, though, AI would have to pass the Turing test.
When that happens, the technology would be considered independently intelligent, allowing it to replicate itself into ever more powerful systems that humans cannot control.