Rise of the killer robots: Experts reveal just how close we are to a Terminator-style takeover

It’s been exactly 40 years since The Terminator hit the big screen, shocking moviegoers with its terrifying depiction of a post-apocalyptic future.

In James Cameron’s epic sci-fi blockbuster, billions of people die when self-aware machines unleash a global nuclear war in the early 21st century.

Arnold Schwarzenegger plays the eponymous killer robot sent back in time from 2029 to 1984 to eliminate the threat of human resistance.

Famously, the Terminator, who outwardly resembles an adult human, “absolutely will not stop, ever, until you are dead,” as one character says.

While this sounds like pure science fiction, academic and industry figures, including Elon Musk, fear that AI really could wipe out humanity.

But when exactly will this happen? And will the disappearance of humanity reflect the apocalypse described in the Hollywood film?

MailOnline spoke to experts to find out how close we are to a Terminator-style takeover.

In James Cameron’s epic sci-fi blockbuster, which hit American theaters on Friday, October 26, 1984, Arnold Schwarzenegger plays the robotic assassin of the same name.

In the classic film, the Terminator’s goal is simple: kill Sarah Connor, a Los Angeles resident who will give birth to John, who will lead a rebellion against the machines.

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, as well as advanced vision and superhuman limbs that can easily crush or strangle us.

Natalie Cramp, a partner at data firm JMAN Group, said a real-world Terminator equivalent is possible, but fortunately it probably won’t be in our lifetime.

“Anything is possible in the future, but we are a long way from robotics reaching the level where Terminator-type machines have the capacity to overthrow humanity,” she told MailOnline.

According to Cramp, humanoid robots like the Terminator are not the most likely direction for robotics and AI to take right now.

Rather, the more pressing threat comes from machines that are already in common use, such as drones and self-driving cars.

“There are many obstacles to making a robot like that work effectively, including how you power it and coordinate its movements,” Cramp told MailOnline.

“The main problem is that it is simply not the most efficient form for a useful robot to take.

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, as well as enormous superhuman limbs that can easily crush or strangle us.

“If we’re speculating about what kind of AI devices could ‘go rogue’ and harm us, it’s likely to be everyday objects and infrastructure: a self-driving car that’s malfunctioning or a power grid that’s failing.”

Mark Lee, professor of artificial intelligence at the University of Birmingham, said a Terminator-style apocalypse could only come about if “any government is crazy enough to hand over control of national defense to an AI.”

“Thankfully, I don’t think there is a nation crazy enough to consider this,” he told MailOnline.

Professor Lee agreed that other kinds of AI are a more pressing concern, from the powerful algorithms behind everyday decision-making to military systems.

“The immediate danger of AI for most people is the effect it will have on society as we move towards AI systems making decisions about mundane things like job applications or mortgages,” he told MailOnline.

“However, considerable efforts are also being made in military applications, such as AI-guided missile systems or drones.

“We need to be careful here, but the concern is that even if the Western world agrees on an ethical framework, others in the world might not.”

The Terminator's goal is simple: kill Sarah Connor, a Los Angeles resident who will give birth to John, who will lead a rebellion against the machines.

Dr Tom Watts, a researcher on US foreign policy and international security at Royal Holloway University of London, said it is “crucially important” that human operators continue to exercise control over robots and artificial intelligence.

“The entire international community, from superpowers like China and the United States to smaller countries, needs to find the political will to cooperate and manage the ethical and legal challenges posed by military applications of AI during this time of geopolitical upheaval,” he writes in a new piece for The Conversation.

“How nations confront these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator, even if we don’t see time-traveling cyborgs anytime soon.”

In 1991, a hugely successful sequel, Terminator 2: Judgment Day, was released, featuring a reprogrammed “friendly” version of the robot of the same name.

The film’s shape-shifting antagonist, the T-1000, can run at the speed of a car and, in one memorable scene, liquefies to pass through metal bars.

Unfortunately, researchers in Hong Kong are working to make this a reality, having designed a small prototype that can switch between liquid and solid states.

Overall, creating a walking, talking robot with lethal powers will be a bigger challenge than designing the software system that acts as its brain.

Since its release, The Terminator has been recognized as one of the best science fiction films of all time.

At the box office, it grossed more than 12 times its modest budget of $6.4 million, which is equivalent to £4.9 million at current exchange rates.

Dr Watts believes the film’s greatest legacy has been to “distort the way we collectively think and talk about AI”, which today poses an “existential danger that often dominates public debate”.

Elon Musk is among the tech leaders who have helped keep the spotlight on the supposed existential risk of AI to humanity, often referencing the film.

TIMELINE OF ELON MUSK’S COMMENTS ON AI

Musk has long been outspoken about the dangers of AI and the precautions humans must take.

Elon Musk is one of the most prominent names and faces in technology development.

The billionaire businessman runs SpaceX, Tesla and The Boring Company.

But while he is at the forefront of creating artificial intelligence technologies, he is also well aware of its dangers.

Here’s a complete timeline of all of Musk’s premonitions, thoughts, and warnings about AI, so far.

August 2014 – ‘We have to be very careful with AI. Potentially more dangerous than nuclear weapons.’

October 2014 – ‘I think we should be very careful with artificial intelligence. If I had to guess what our biggest existential threat is, that’s probably it. That’s why we have to be very careful with artificial intelligence.’

October 2014 – ‘With artificial intelligence we are summoning the devil.’

June 2016 – “The benign situation with ultra-intelligent AI is that we would be so far below in intelligence that we would be like a pet or a house cat.”

July 2017 – “I think AI is something that poses a civilizational risk, not just an individual risk, and so it really requires a lot of security research.”

July 2017 – “I have exposure to the most cutting-edge AI, and I think people should be really concerned about it.”

July 2017 – ‘I keep sounding the alarm, but until people see robots walking down the street killing people, they don’t know how to react because it seems very ethereal.’

August 2017 – ‘If you’re not worried about AI safety, you should be. Much more risk than North Korea.’

November 2017 – ‘Maybe there is a five to 10 percent chance of success (in making AI safe).’

March 2018 – ‘AI is much more dangerous than nuclear weapons. So why don’t we have regulatory oversight?’

April 2018 – ‘(AI is) a very important topic. It’s going to affect our lives in ways we can’t even imagine now.’

April 2018 – ‘(We could create) an immortal dictator from whom we would never escape.’

November 2018 – ‘Maybe the AI will make me follow it, laugh like a demon, and say who’s the pet now.’

September 2019 – ‘If advanced AI (beyond basic robots) hasn’t been applied to manipulate social media, it won’t be long before it is.’

February 2020 – ‘At Tesla, using AI to solve autonomous driving is not just the icing on the cake, it is the cake.’

July 2020 – ‘We are heading towards a situation where AI is much smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean everything will go to shit in five years. It just means that things become unstable or weird.’

April 2021 – “A significant part of real-world AI needs to be solved for widespread, unsupervised autonomous driving to work.”

February 2022 – “We have to solve a lot of AI just to make cars drive themselves.”

December 2022 – ‘The danger of training AI to be woke (in other words, to lie) is deadly.’
