It’s useful that the latest AI can “think”, but we need to know its reasoning | John Naughton

It’s been almost two years since OpenAI launched ChatGPT on an unsuspecting world, and the world, closely followed by the stock market, lost its mind. Everywhere, people were wringing their hands and wondering: what will this mean for (insert occupation, industry, business or institution here)?

Within academia, for example, humanities professors wondered how they would be able to grade essays if students used ChatGPT or similar technology to help write them. The answer, of course, is to find better ways to grade, because students will use these tools for the simple reason that it would be foolish not to, just as it would be foolish to budget without spreadsheets. But universities are slow-moving beasts, and as I write, there are committees in many ivory towers solemnly trying to formulate “policies on the use of AI.”

However, while they deliberate, the killjoys at OpenAI have landed academia with another conundrum: a new type of large language model (LLM) that can supposedly do “reasoning”. The company has named it OpenAI o1, but since it was known internally as Strawberry we will stick with that. OpenAI describes it as the first of “a new series of AI models designed to spend more time thinking before responding”, models that “can reason through complex tasks and solve more difficult problems than previous models in science, coding and mathematics”.

In some ways, Strawberry and its forthcoming cousins are a response to the tricks that experienced users of earlier LLMs had devised to get round the fact that those models were intrinsically “one-shot” – prompted with a single instruction or example to generate a response or perform a task. The technique researchers used to improve model performance was called “chain-of-thought” prompting: walking the model through a carefully designed sequence of more detailed prompts so that it produced more sophisticated responses. What OpenAI seems to have done with Strawberry is internalize this process.
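For the technically curious, here is a minimal sketch of what that trick looks like in practice. It is purely illustrative: the complete() helper is a hypothetical stand-in for whatever LLM API you happen to use, not any particular vendor’s interface.

```python
# Illustrative sketch of chain-of-thought prompting (not any vendor's actual API).
# complete() is a hypothetical placeholder for a call to an LLM of your choice.

def complete(prompt: str) -> str:
    """Send a prompt to an LLM and return its text reply (canned placeholder here)."""
    return "(model reply would appear here)"

question = (
    "A bat and a ball cost £1.10 in total. The bat costs £1 more than the ball. "
    "How much does the ball cost?"
)

# One-shot: ask for the answer directly and take whatever comes back.
direct_answer = complete(question)

# Chain of thought: ask the model to write out its working before answering,
# which in practice tends to produce more careful, more accurate responses.
cot_answer = complete(
    question
    + "\n\nThink through the problem step by step, writing out each step, "
    + "and only then give the final answer on its own line."
)

print(direct_answer)
print(cot_answer)
```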

So whereas with earlier models such as GPT-4 or Claude you gave them a prompt and they replied straight away, with Strawberry a prompt usually produces a delay while the machine “thinks” for a bit. This involves an internal process of generating a number of possible responses, which are then subjected to some kind of evaluation, after which the one judged most plausible is chosen and offered to the user.
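OpenAI has not published the details of that internal process, but the general idea – generate several candidate answers, score them, keep the most plausible – can be sketched in a few lines. Everything below is hypothetical: complete() and score() stand in for a generator model and an evaluator, and this is a sketch of the idea, not OpenAI’s actual pipeline.

```python
# A rough sketch of "generate then select" (not OpenAI's actual, unpublished method).
# complete() produces one candidate answer; score() is a hypothetical evaluator that
# rates how plausible an answer looks for the given prompt.

from typing import Callable

def best_of_n(prompt: str,
              complete: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 5) -> str:
    """Sample n candidate answers and return the one the evaluator rates highest."""
    candidates = [complete(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy usage with dummy stand-ins, just to show the shape of the interface.
if __name__ == "__main__":
    dummy_complete = lambda p: f"candidate answer to: {p}"
    dummy_score = lambda p, a: float(len(a))  # pretend longer answers are better
    print(best_of_n("What is 2 + 2?", dummy_complete, dummy_score, n=3))
```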

As OpenAI describes it, Strawberry “learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down complicated steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the reasoning ability of the model.”

What this means is that somewhere inside the machine there is a record of the “chain of thought” that led to the final output. On the face of it, this looks like a breakthrough, because it could reduce the opacity of LLMs – the fact that they are, essentially, black boxes. And this matters, because humanity would be mad to entrust its future to decision-making machines whose internal processes are – by accident or corporate design – inscrutable. Frustratingly, though, OpenAI is reluctant to let users see inside the box. “We have decided,” it says, “not to show the raw chains of thought to users. We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer.” Translation: Strawberry’s box is a slightly lighter shade of black.

The new model has attracted a lot of attention because the idea of a “reasoning” machine smacks of progress towards more “intelligent” machines. But, as ever, all these loaded terms need to be sanitised with quotation marks so that we don’t anthropomorphise the machines. They’re still just computers. Nevertheless, some people have been spooked by some of the unexpected things Strawberry seems able to do.

Of these, the most interesting emerged during OpenAI’s internal testing of the model, when its ability to hack computers was being explored. The researchers asked it to break into a protected file and report on its contents. But the designers of the test had made a mistake: they had tried to put Strawberry into a virtual box with the protected file, but failed to notice that the file was inaccessible.

According to OpenAI’s report, having run into the problem, Strawberry surveyed the computer used in the experiment, discovered a flaw in a misconfigured part of the system that it shouldn’t have been able to access, edited how the virtual boxes worked, and created a new box with the files it needed. In other words, it did what any resourceful human hacker would have done: having encountered a problem (created by human error), it explored its software environment to find a way round it and then took the steps needed to accomplish the task it had been set. And it left behind a trail that explained its reasoning.

Or, in other words, it used its initiative. Like a human. We could use more machines like this.


What I’ve been reading

Questioned rhetoric
The Danger of Superhuman AI Is Not What You Think is a fabulous essay by Shannon Vallor in Noema magazine about the sinister barbarism of a technology industry that speaks of its creations as “superhuman”.

Guess again
Benedict Evans has written an elegant essay, Asking the Wrong Questions, arguing that we are not so much wrong in our predictions about technology as prone to making predictions about the wrong things.

On the edge
Historian Timothy Snyder’s sobering Substack essay on our choices regarding Ukraine, To be or not to be.
