DeepMind tests the limits of large AI language systems with a 280-billion-parameter model

Language generation is the hottest thing in AI right now, with a class of systems known as “large language models” (or LLMs) being used for everything from improving Google’s search engine to creating text-based fantasy games. But these programs also have serious problems, including spewing sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be fixed by simply adding more data and computing power, or are we reaching the limits of this technological paradigm?

This is one of the topics Alphabet’s DeepMind AI lab is tackling in a trio of research papers published today. The company’s conclusion is that further scaling up these systems should yield many improvements. “A key finding of the paper is that the advancement and capabilities of large language models are still increasing. This is not an area that has stalled,” DeepMind researcher Jack Rae told reporters in a briefing call.

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters called Gopher. Parameter count is a rough measure of a language model’s size and complexity, which means Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as large as some more experimental systems, such as Microsoft and Nvidia’s Megatron-Turing NLG model (530 billion parameters).
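For readers wondering what a “parameter” count actually tallies, here is a minimal back-of-the-envelope sketch, not DeepMind’s code, of how such totals are commonly estimated for dense transformer models. The function name and the simplified formula are illustrative assumptions; the example plugs in GPT-3’s published configuration, while Gopher’s exact total also depends on architectural details this rule of thumb ignores.

```python
def approx_transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Rough parameter estimate for a dense, decoder-only transformer."""
    block_params = 12 * n_layers * d_model ** 2   # attention + feed-forward weights per stack
    embedding_params = vocab_size * d_model       # token embedding matrix
    return block_params + embedding_params

# GPT-3's published configuration: 96 layers, hidden size 12288, ~50k-token vocabulary
print(approx_transformer_params(96, 12288, 50257))  # roughly 1.75e11, i.e. ~175 billion
```

The point of the sketch is simply that “parameters” are the learned weights of the network, so quoted sizes like 175 billion or 280 billion scale with the model’s depth and width rather than with the amount of training data.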

It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs improves performance on the most common benchmarks, which test things like sentiment analysis and summarization. However, the researchers also warned that some problems inherent in language models will need more than just data and computing power to solve.

“I think right now it really looks like the model could fail in several ways,” Rae said. “Some of those ways are because the model just doesn’t understand what it’s reading well enough, and I feel like for that class of problems, we’re just going to see improved performance with more data and scale.”

But, he added, there are “other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into telling falsehoods, which […] no one at DeepMind thinks that scale will be the answer [to].” In these cases, language models will need “additional training routines,” such as feedback from human users, he noted.

To arrive at these conclusions, DeepMind’s researchers evaluated a range of differently sized language models on 152 language tasks or benchmarks. They found that larger models generally delivered better results, with Gopher itself offering state-of-the-art performance on roughly 80 percent of the tests selected by the researchers.

In another paper, the company also explored the wide range of potential harms associated with deploying LLMs. These include the systems’ use of toxic language, their capacity to spread misinformation, and their potential to be used for malicious purposes, such as generating spam or propaganda. All of these issues will become increasingly important as AI language models are more widely deployed, for example as chatbots and sales agents.

However, it’s worth remembering that performance on benchmarks isn’t everything when evaluating machine learning systems. In a recent paper, a number of AI researchers (including two from Google) explored the limitations of benchmarks, noting that these datasets will always be limited in scope and unable to capture the complexity of the real world. As is often the case with new technology, the only reliable way to test these systems is to see how they perform in the field, and with large language models, we will be seeing more such applications soon enough.