It’s an issue that is currently troubling some of the world’s greatest minds, from Bill Gates to Elon Musk.
Elon Musk, CEO of SpaceX and Tesla, has described AI as our “greatest existential threat” and compared its development to “summoning the demon.”
He believes that super-intelligent machines could keep humans as pets.
Professor Stephen Hawking said it is “almost certain” that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal our jobs
According to a 2016 YouGov survey, more than 60 percent of people fear that robots will lead to fewer jobs in the next decade.
And 27 percent predict that the number of jobs will decline “a lot,” while previous research suggests workers in the administrative and service sectors will be hit the hardest.
Other experts believe AI is not only a threat to our jobs, but could also go “rogue” and become too complex for scientists to understand.
A quarter of respondents predicted that robots will become part of everyday life within 11 to 20 years, while 18 percent predict it will happen within the next decade.
They could go ‘rogue’
Computer scientist Professor Michael Wooldridge has said AI machines could become so complicated that the engineers who build them don’t fully understand how they work.
If experts don’t understand how AI algorithms work, they can’t predict when they will fail.
This means driverless cars or intelligent robots could make unpredictable, ‘out of character’ decisions at critical moments, putting people at risk.
For example, the AI behind a driverless car could choose to swerve into pedestrians or crash into obstacles rather than drive sensibly.
They could wipe out humanity
Some people believe that AI will wipe out humans completely.
“Eventually I think humanity will go extinct, and technology will probably play a part in this,” DeepMind’s Shane Legg said in a recent interview.
He called artificial intelligence, or AI, the “number one risk for this century.”
Musk warned that AI poses a greater threat to humanity than North Korea.
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” the 46-year-old wrote on Twitter.
“Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”
Musk has consistently advocated for governments and private institutions to apply regulation to AI technology.
He has argued that controls are necessary to prevent machines from slipping beyond human control.