Sunday, June 4, 2023

Has GPT-4 really crossed the surprising threshold of human-level artificial intelligence? It depends


The recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case AI that performs at a human level) feasible?

An online preprint this week added to the hype, suggesting that the latest large language model, GPT-4, is in the early stages of artificial general intelligence (AGI) because it exhibits “sparks of AGI”.

OpenAI, the company behind ChatGPT, has unabashedly stated its pursuit of AGI. Meanwhile, a large number of researchers and public intellectuals have called for an immediate pause in the development of these models, citing “profound risks to society and humanity”. These calls to pause AI research are theatrical and unlikely to succeed: the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to forgo.

But are the concerns and hopes about AGI justified? How close is GPT-4, and AI more broadly, to general human intelligence?



Read more: Evolution, not revolution: Why GPT-4 is remarkable, but not groundbreaking


If human cognitive capacity is a landscape, AI has indeed increasingly taken over large parts of this territory. It can now perform many individual cognitive tasks better than people in the areas of vision, image recognition, reasoning, reading comprehension and gaming. These AI skills can potentially result in a dramatic rearrangement of the global labor market in less than a decade.

But there are at least two ways to view the AGI issue.

The uniqueness of humanity

The first view holds that, over time, AI will develop learning skills and capabilities that match those of humans, reaching the AGI level. The expectation is that the unique human capacity for continuous development, learning, and transferring learning from one domain to another will eventually be duplicated by AI. This contrasts with current AI, where training in one area, such as detecting cancer in medical images, does not transfer to other areas.

So the concern felt by many is that at some point AI will surpass human intelligence and then quickly eclipse us, making us appear to future AIs as ants appear to us now.

The plausibility of AGI has been disputed by several philosophers and researchers, who note that current models are largely ignorant of their outputs (that is, they do not understand what they are producing). They also have no prospect of achieving consciousness because they are primarily predictive: they automate what should come next in text or other output.

Rather than being intelligent, these models simply recombine and duplicate the data on which they have been trained. Consciousness, the essence of life, is missing. Even if AI foundation models continue to advance and complete more sophisticated tasks, there is no guarantee that consciousness or AGI will emerge. And if it did emerge, how would we recognize it?



Read more: Futurists predict a point where man and machine become one. But will we see it coming?


Persistent AI

The ability of ChatGPT and GPT-4 to master some tasks as well as or better than a human (such as bar exams and academic olympiads) gives the impression that AGI is close at hand. This impression is reinforced by the rapid performance improvements of each new model.

There is no doubt that AI can now outperform humans in many individual cognitive tasks. There is also growing evidence that the best model for interacting with AI may be one of human/machine pairing, with our own intelligence at the center: augmented, not replaced, by AI.

GPT-4 is also ‘multimodal’ – it can take visual input and answer questions based on it.
OpenAI

Signs of such pairing are already emerging with announcements of workplace copilots and AI pair programmers for writing code. It seems almost inevitable that AI will be pervasive and persistent in our future of work, life and learning.

By that measure, the case for seeing AI as intelligent is plausible, but this remains a contested space and many have pushed back against it. Renowned linguist Noam Chomsky has stated that the day of AGI “may come, but its dawn is not yet breaking”.

Smarter together?

The second angle is to consider the idea of intelligence as people apply it in their daily lives. According to one school of thought, we are intelligent mainly in networks and systems rather than as lone individuals. We store knowledge in our networks.

Until now, those networks have mainly been human. We can gain insight from someone (like the author of a book), but we don’t treat them as an active “agent” in our cognition.

But ChatGPT, Copilot, Bard and other AI-enabled tools can become part of our cognitive network: we engage with them, ask them questions, and have them restructure documents and resources for us. In this sense, AI need not be sentient or possess general intelligence. It simply needs the capacity to be embedded in, and part of, our knowledge network in order to replace and extend many of our current jobs and tasks.

The existential focus on AGI overlooks the many possibilities that current models and tools offer us. Sentient, conscious or not: these traits are irrelevant to the many people who already use AI to co-create art, structure their writing and essays, develop videos and navigate life.

The most relevant or urgent concern for humans is not whether AI is intelligent when it stands alone, separate from humans. It can be argued that, as of today, we are more intelligent, more capable and more creative because of AI, since it enhances our cognitive abilities. At this point, it seems the future of humanity could be one of human-AI teaming, a journey already well underway.



Read more: Bard, Bing and Baidu: How the big tech AI race will transform search and all computing


Jacky – https://whatsnew2day.com/
The author at whatsnew2day.com is dedicated to keeping you up to date on the latest news and information.
