An “AI scientist” invents and conducts its own experiments

At first glance, a recent series of research papers produced by a leading artificial intelligence lab at the University of British Columbia in Vancouver might not seem all that remarkable. They present incremental improvements to existing algorithms and ideas and read like the stuff of a mid-level AI conference or journal.

But the research is, in fact, remarkable: it is entirely the work of an “AI Scientist” developed at the UBC lab together with researchers from the University of Oxford and a startup called Sakana AI.

The project demonstrates a first step toward what could prove to be a revolutionary trick: letting AI learn by inventing and exploring novel ideas. For now, though, the ideas aren’t all that novel. Several papers describe tweaks to improve an image-generating technique known as diffusion modeling; another describes an approach to speeding up learning in deep neural networks.

“They are not revolutionary or very creative ideas,” admits Jeff Clune, the professor who heads the UBC lab. “But they seem like very interesting ideas that someone could try.”

As amazing as current AI programs may be, they are limited by their need to consume human-generated training data. If AI programs could learn in an open-ended way, by experimenting and exploring “interesting” ideas, they might discover capabilities that extend beyond anything humans have shown them.

Clune’s lab had previously developed AI programs designed to learn in this way. For example, a program called Omni tried generating behaviors for virtual characters in various video-game-like environments, singling out the ones that seemed interesting and then iterating on them with new designs. Such programs used to require hand-coded instructions to define what counted as interesting. Large language models, however, give these programs a way to identify what is most intriguing on their own. A recent project from Clune’s lab used this approach to let AI programs come up with the code that allows virtual characters to do all sorts of things within a Roblox-like world.
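To make that loop concrete, here is a minimal sketch of what an Omni-style open-ended search could look like, with an LLM standing in as the judge of interestingness. It is purely illustrative: the helper names and the random placeholder scorer are assumptions for this sketch, not code from the actual project.

```python
# Minimal sketch of an Omni-style open-ended loop, assuming an LLM is
# available to judge "interestingness". All names here are illustrative;
# llm_score_interestingness is a random stand-in for a real model call.
import random

def propose_variant(behavior: str) -> str:
    """Mutate an existing behavior into a new candidate design."""
    tweaks = ["jump higher", "move in a spiral", "stack objects", "follow walls"]
    return f"{behavior}, then {random.choice(tweaks)}"

def llm_score_interestingness(candidate: str, archive: list[str]) -> float:
    """Stand-in for prompting an LLM to rate how novel and interesting the
    candidate is relative to everything already in the archive (0 to 1)."""
    return random.random()

def open_ended_search(seed: str, steps: int = 20, threshold: float = 0.7) -> list[str]:
    archive = [seed]  # behaviors judged interesting so far
    for _ in range(steps):
        parent = random.choice(archive)      # iterate on an earlier find
        candidate = propose_variant(parent)  # generate a new design
        if llm_score_interestingness(candidate, archive) > threshold:
            archive.append(candidate)        # keep only the interesting ones
    return archive

if __name__ == "__main__":
    for behavior in open_ended_search("walk forward"):
        print(behavior)
```

The key design choice is that the filter is learned rather than hand-coded: swapping the random stand-in for a real LLM call is what removes the need for manually defined notions of interest.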

The AI Scientist is one example of how Clune’s lab is exploring these possibilities. The program proposes machine learning experiments, decides what looks most promising with the help of an LLM, and then writes and runs the necessary code; rinse and repeat. Even though the initial results are underwhelming, Clune says open-ended learning programs, like the language models themselves, could become much more capable as the computing power that feeds them increases.
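In outline, that rinse-and-repeat cycle could look something like the sketch below. Every helper shown is a hypothetical placeholder for what would be an LLM call or a real training run in the actual system; only the loop structure comes from the article’s description.

```python
# A rough sketch of the propose/rank/write/run cycle described above.
# propose_ideas, rank_with_llm, and write_experiment_code are hypothetical
# placeholders for LLM calls; only the loop shape follows the article.
import os
import subprocess
import sys
import tempfile

def propose_ideas(n: int) -> list[str]:
    # Placeholder: an LLM would draft candidate experiment ideas.
    return [f"variant {i} of a diffusion-model tweak" for i in range(n)]

def rank_with_llm(ideas: list[str]) -> str:
    # Placeholder: an LLM would score each idea; here we just take the first.
    return ideas[0]

def write_experiment_code(idea: str) -> str:
    # Placeholder: an LLM would emit a full training script for the idea.
    return f'print("running experiment for: {idea}")'

def run_experiment(code: str) -> str:
    # Execute the generated script in a subprocess and capture its output.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=600)
        return result.stdout
    finally:
        os.unlink(path)

for _ in range(3):  # rinse and repeat
    best = rank_with_llm(propose_ideas(5))
    print(run_experiment(write_experiment_code(best)), end="")
```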

“It’s like exploring a new continent or a new planet,” says Clune of the possibilities offered by LLMs. “We don’t know what we’re going to discover, but wherever we look, there’s something new.”

Tom Hope, an adjunct professor at the Hebrew University of Jerusalem and a research scientist at the Allen Institute for AI (AI2), says that the AI Scientist, like the LLMs it relies on, appears to be highly derivative and cannot yet be considered reliable. “None of the components are reliable at this point,” he says.

Hope notes that efforts to automate elements of scientific discovery date back decades, to the work of AI pioneers Allen Newell and Herbert Simon in the 1970s and, later, of Pat Langley at the Institute for the Study of Learning and Expertise. He also notes that several other research groups, including a team at AI2, have recently leveraged LLMs to help generate hypotheses, write papers, and review research. “They captured the zeitgeist,” Hope says of the UBC team. “The direction is, of course, incredibly valuable, potentially.”

It’s also unclear whether LLM-based systems will be able to generate truly novel or revolutionary insights. “That’s the trillion-dollar question,” Clune says.

Even without scientific breakthroughs, open-ended learning could be vital to developing more capable and useful AI systems in the here and now. A report published this month by Air Street Capital, an investment firm, highlights the potential of Clune’s work to develop more powerful and reliable AI agents, programs that autonomously perform useful tasks on computers. Big AI companies seem to view agents as the next big thing.

This week, Clune’s lab revealed its latest open-ended learning project: an AI program that invents and builds AI agents. The AI-designed agents outperform human-designed agents on some tasks, such as math and reading comprehension. The next step will be to devise ways to prevent such a system from generating agents that behave badly. “It’s potentially dangerous,” Clune says of this work. “We have to get it right, but I think it’s possible.”
