The debate over the promises and risks of artificial intelligence was shaken up on November 30, 2022, when the company OpenAI launched ChatGPT. ChatGPT is an improved, free version of GPT-3, a powerful text-generating system launched in 2020. Such a system is called a language model. GPT-3 was even used to write a Guardian opinion piece arguing that AI will not destroy humans.
In December, over a million people used ChatGPT, posting computer programs, weekly recipes, work presentations, and essays generated by the system.
ChatGPT and GPT-3 can also solve math problems, correct grammar, or simplify complicated text. At times, ChatGPT can no longer respond to requests; the site lacks the capacity to support the sheer number of users.
Having been trained on a large amount of data, including websites, books, and Wikipedia, these systems can imitate different literary styles: they can explain in biblical style how to remove a sandwich from a VCR, write poems in the style of Baudelaire, or produce scripts for scenes from the hit show Friends.
For the first time, society fully grasps the scale of the transformations to come. Yet much of the public debate on ChatGPT centers around the issue of school plagiarism. The widespread fear that students are using ChatGPT to write their term papers is distracting the public from much more important matters.
As experts in artificial intelligence law and policy, we aim to shed light on the latest AI systems and the real risks they present.
Understanding language models
Language models are AI systems trained to estimate the probability of a sequence of words appearing in a text. They are used in a variety of ways, including in virtual chats, messaging apps, and translation software. Think, for example, of your messaging app suggesting the next word in the sentence you started. Some language models are called large language models when they contain a very large number of parameters, although there is no specific threshold for that number.
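The core idea, estimating how likely one word is to follow another, can be illustrated with a deliberately simplified sketch. The toy corpus and the bigram approach below are our own illustration, not how ChatGPT actually works; real large language models use neural networks over billions of parameters, but the statistical intuition is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" simply learns which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, how often each possible next word appears.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the bigram counts."""
    counts = followers[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the corpus makes "cat" the most likely continuation.
print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A messaging app's next-word suggestion works on this principle, only with vastly more context and data.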
These models were revolutionized by the invention of a new technology, called transformers, in 2017. Many impressive language models using transformers have emerged, such as GPT-3, Bloom, LaMDA, Megatron-Turing NLG, and PaLM. While ChatGPT's underlying model has 175 billion parameters, Google's PaLM was trained with 540 billion and can explain jokes and produce sophisticated logical reasoning. Transformers have also been used to create systems that generate images from text, such as DALL·E 2, which can produce a believable image of a koala playing (and scoring!) basketball. In fact, some artists are now using AI to generate their works.
The plagiarism debate is not new
AI is currently revolutionizing the world of work. People with no training in programming can produce computer code; anyone can generate maps, slides, drawings, photos, websites, texts, or legal documents. The professionals of tomorrow will no doubt rely on these tools. The question therefore needs to be asked: what is the purpose of education if not to prepare students for society and work?
A debate about plagiarism took place in the 1990s, as the internet took off. University professors complained that their students copied information from websites and electronic journals or asked for help on online forums. Of course, failure to cite sources is problematic; this is called plagiarism. But the first cheaters who used the internet learned to search the web and sort through information. In fact, the school system has since adapted to emphasize skills in collecting, analyzing, synthesizing, and evaluating the accuracy and usefulness of information. This is one of the reasons why today's young adults are more resistant to misinformation than their elders.
ChatGPT is just the tip of the iceberg
Today, AI is bringing about an even greater revolution than the one caused by the arrival of the internet. ChatGPT is just one of many existing AI systems that will transform society, and we can expect more to appear soon. Currently, the three ingredients of AI systems – computing power, algorithms, and data – are all improving at a breakneck pace. ChatGPT is just the tip of the iceberg, and we need to prepare students for the significant social changes AI will bring about.
Instead of trying to stop students from using ChatGPT, we need to reform the way we teach. This reform shouldn’t be about finding nifty assignments that students can’t use ChatGPT for. We need to ensure that students can use AI systems correctly.
ChatGPT was trained in part with human feedback. Humans read the responses produced by the system and judge whether they are truthful and informative. For some topics, especially those that require in-depth expertise, the answers may seem plausible to humans yet contain inaccuracies, which are thus reinforced. Over time, it will become even more difficult for humans to notice subtle deviations from the truth. Teachers could create assignments that require the use of ChatGPT, asking students to check lesser-known facts and provide more subtle insights.
A call for caution
But above all, we must make our students aware of the risks these systems present. Large language models have been shown to reproduce biases and prejudices, give potentially dangerous advice, and facilitate consumer manipulation. Soon, these models could enable manipulation on a mass scale. They can also give rise to legal violations involving data confidentiality and intellectual property rights, about which students must remain vigilant.
What's more, creators and users of transformer-based AI systems regularly find that these systems are capable of tasks, sometimes problematic ones, of which they were unaware. For example, researchers have demonstrated that they can use a language model to calculate the likelihood that defendants will reoffend, a task for which the model had not been intentionally trained. The developers of the first large language models did not expect them to be able to do arithmetic or reasoning. This unpredictability of the tasks achievable with these systems increases the risk that they will be used for nefarious purposes or behave against the interests of their users.
Students need to prepare. They must learn to critically assess AI systems, just as the previous generation had to learn to sort through information online. They can also report any bugs or unexpected behavior they observe, to help keep these systems safe. In addition, they should participate in democratic conversations to determine what values and principles should guide the behavior of AI systems.
And even if they don't need to learn certain skills that will be automated, students should understand the basics of how AI works and the risks it entails.