‘The outcome could be extinction’: Elon Musk-backed researcher warns there is NO proof AI can be controlled – and says tech should be shelved NOW

by Jack
Dr. Roman V Yampolskiy claims to have found no evidence that AI can be controlled, and argues the technology therefore should not be developed.

An Elon Musk-backed researcher is once again sounding the alarm about AI’s threat to humanity after finding no evidence that the technology can be controlled.

Dr. Roman V Yampolskiy, an AI safety expert, received funding from the billionaire to study advanced intelligent systems, which is the focus of his upcoming book “AI: Unexplainable, Unpredictable, Uncontrollable.”

The book examines how AI has the potential to dramatically reshape society, not always in our favor, and has the “potential to cause existential catastrophe.”

Yampolskiy, a professor at the University of Louisville, conducted a “review of the scientific literature on AI” and concluded that there is no evidence that the technology can be prevented from going rogue.

To fully control AI, he suggested, it must be modifiable with “undo” options, limitable, transparent, and easy to understand in human language.

“It is no wonder that many consider this to be the most important problem humanity has ever faced,” Yampolskiy shared in a statement.

“The result could be prosperity or extinction, and the fate of the universe is at stake.”

Musk is reported to have provided funding to Yampolskiy in the past, but the amount and details are unknown.

In 2019, Yampolskiy wrote a blog post on Medium thanking Musk “for partially funding his work on AI safety.”

The Tesla CEO has also sounded the alarm about AI, specifically in 2023 when he and more than 33,000 industry experts signed an open letter from The Future of Life Institute.

The letter said AI labs are currently “locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one – not even their creators – can reliably understand, predict or control.”

“Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks manageable.”

And Yampolskiy’s upcoming book appears to echo those concerns.

He expressed concern about new tools being developed in recent years that pose risks to humanity, regardless of the benefit such models provide.

In recent years, the world has witnessed how AI started answering queries, composing emails, and writing code.

Now, these systems detect cancer, create novel drugs, and are used to find and attack targets on the battlefield.

And experts have predicted that the technology will reach the singularity in 2045, the point at which it surpasses human intelligence and gains the ability to reproduce itself, after which we may not be able to control it.

“Why do so many researchers assume that the problem of AI control is solvable?” Yampolskiy said.

“As far as we know, there is no evidence of that. Before we embark on the quest to build controlled AI, it is important to demonstrate that the problem has a solution.”

While the researcher said he conducted an extensive review to reach the conclusion, it is unknown at this time exactly what literature was used.

What Yampolskiy did provide is his reasoning for why he believes AI cannot be controlled: the technology can learn, adapt, and act semi-autonomously.

These capabilities make the range of possible decisions effectively infinite, which means an infinite number of safety problems can arise, he explained.

And because the technology adapts on the fly, humans may not be able to predict problems before they arise.

“If we do not understand AI decisions and only have a ‘black box,’ we cannot understand the problem and reduce the probability of future accidents,” Yampolskiy said.

“For example, AI systems are already being tasked with making decisions in healthcare, investments, employment, banking and security, to name a few.”

Such systems should be able to explain how they arrived at their decisions, in particular to demonstrate that they are free of bias.

“If we get used to accepting AI’s answers without explanation, essentially treating it like an Oracle system, we won’t be able to know if it starts providing incorrect or manipulative answers,” Yampolskiy explained.

He also noted that as AI’s capability increases, its autonomy also increases while our control over it decreases, and greater autonomy means less safety.

“Humanity faces a choice: do we become like babies, cared for but uncontrolled, or do we refuse to have a helpful guardian but remain in charge and free,” Yampolskiy warned.

The expert shared tips on how to mitigate risks, such as designing a machine that precisely follows human orders, but Yampolskiy pointed out the potential for contradictory commands, misinterpretation, or malicious use.

“If humans are in control they can result in contradictory or explicitly malevolent orders, while AI being in control means humans are not,” he explained.

“Most AI safety researchers are looking for a way to align future superintelligence with the values of humanity.”

“Value-aligned AI will be biased by definition: a pro-human bias, good or bad, is still a bias.”

“The paradox of value-aligned AI is that a person who explicitly orders an AI system to do something can get a ‘no’ while the system tries to do what the person really wants.”

“Humanity is either protected or respected, but not both.”
