
The Godmother of AI Wants Us All to Be World Builders


According to market-obsessed tech pundits and professional skeptics, the AI bubble has burst and winter is back. Fei-Fei Li isn’t buying it. In fact, Li, who earned the nickname “Godmother of AI,” is betting on the opposite. She’s on partial leave from Stanford University to co-found a company called World Labs. Although current generative AI is based on language, she sees a frontier where systems build entire worlds out of physics, logic, and the rich details of our physical reality. It’s an ambitious goal, and despite discouraging moguls who say progress in AI has hit a plateau, World Labs is on the fast track to funding. The startup is perhaps a year away from having a product, and it’s not at all clear how well it will work when and if it arrives, but investors have put in $230 million and reportedly value the fledgling startup at more than a billion dollars.

More than a decade ago, Li helped turn AI around by creating ImageNet, a bespoke database of digital images that allowed neural networks to become significantly smarter. She believes today’s deep learning models need a similar boost if AI is to create real worlds, whether realistic simulations or entirely imaginary universes. Future George R.R. Martins could compose their dream worlds as prompts rather than prose, then render and traverse them. “The physical world for computers is seen through cameras, and the computer’s brain is behind the cameras,” Li says. “Turning that vision into reasoning, generation, and eventually interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence.” World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term becomes a revolution or a punchline.

Li has been obsessed with spatial intelligence for years. While everyone was going crazy over ChatGPT, she and a former student, Justin Johnson, were chatting excitedly on phone calls about the next iteration of AI. “The next decade is going to be about generating new content that takes computer vision, deep learning, and AI out of the internet world and into space and time,” says Johnson, now an adjunct professor at the University of Michigan.

Li decided to start a company in early 2023, after a dinner with Martin Casado, a pioneer in virtual networks who is now a partner at Andreessen Horowitz, a venture capital firm known for its almost messianic embrace of AI. Casado sees AI as being on a similar path to computer games, which started with text, moved to 2D graphics, and now have dazzling 3D images. Spatial intelligence will drive the change. Eventually, he says, “you could take your favorite book, put it into a model, and literally step into it and watch it play out in real time, in an immersive way.” The first step to making that happen, Casado and Li agreed, is to move from large language models to large world models.

Li began assembling a team, with Johnson as co-founder. Casado suggested two more people. One was Christoph Lassner, who had worked at Amazon, Meta’s Reality Labs, and Epic Games; he is the inventor of Pulsar, a rendering scheme that led to a celebrated technique called 3D Gaussian splatting. That sounds like an indie band at an MIT toga party, but it’s actually a way of synthesizing entire scenes, rather than isolated objects. Casado’s other suggestion was Ben Mildenhall, who had created a powerful technique called NeRF (neural radiance fields) that transforms 2D pixel images into 3D graphics. “We brought real-world objects into VR and made them look perfectly real,” Mildenhall says. He left his position as a senior research scientist at Google to join Li’s team.

An obvious goal of a large-scale world model would be to infuse robots with, well, a sense of the world. Indeed, that’s in World Labs’ plan, but not at the moment. The first phase is to build a model with a deep understanding of three-dimensionality, physicality, and notions of space and time. Next will come a phase in which the models will support augmented reality. After that, the company can turn to robotics. If this vision comes to fruition, large-scale world models will enhance self-driving cars, automated factories, and perhaps even humanoid robots.
