Artificial intelligence has changed shape in recent years.
What began in the public eye as a fast-growing field with promising (but largely benign) applications has become an industry worth more than US$100 billion, where the heavyweights – Microsoft, Google and OpenAI, to name a few – are racing to outdo one another.
The result is increasingly sophisticated large language models, often released in haste and without adequate testing or oversight.
These models can do much of what a human can do, and in many cases do it better. They can beat us at advanced strategy games, create stunning art, diagnose cancer and compose music.
Read more: Text-to-audio generation is here. One of the next big AI disruptions could be in the music industry
There is no doubt that AI systems appear to be “intelligent” to some degree. But could they ever be as intelligent as humans?
There’s a term for that: Artificial General Intelligence (AGI). Although a broad concept, for simplicity’s sake you can think of AGI as the point where AI acquires human-like generalized cognitive abilities. In other words, it’s the point where AI can tackle any intellectual task a human can.
We're not at AGI yet; current AI models are held back by a lack of certain human qualities, such as true creativity and emotional awareness.
We asked five experts if they think AI will ever reach AGI, and five out of five said yes.
But there are subtle differences in how they approach the question, and their answers raise further questions of their own. When might we reach AGI? Will AI go on to surpass humans? And what exactly counts as "intelligence"?
Here are their detailed answers:
Read more: The call to regulate AI is getting louder. But how exactly do you regulate such technology?