Former OpenAI executive Mira Murati says it could take decades, but AI systems will eventually perform a wide range of cognitive tasks as well as humans, a possible technological milestone widely known as artificial general intelligence, or AGI.
“Right now, it seems pretty achievable,” Murati said at WIRED’s The Big Interview event in San Francisco on Tuesday. In her first interview since stepping down as OpenAI’s CTO in September, Murati told WIRED’s Steven Levy that she’s not too worried about recent rumors in the AI industry that developing more powerful generative AI models is proving a challenge.
“Current evidence shows that progress is likely to continue,” Murati said. “There is not much evidence to the contrary. It is not clear whether we need new ideas to get to AGI-level systems. I am quite optimistic that progress will continue.”
The comments reflect her long-standing interest in bringing increasingly capable AI systems to the world, even after her departure from OpenAI. Reuters reported in October that Murati is founding her own AI startup to develop proprietary models and could raise more than $100 million in venture capital funding. On Tuesday, Murati declined to provide further details about the company.
“I’m figuring out what it’s going to be like,” she said. “I’m in the middle of this.”
Murati started her career in aerospace and then worked at Elon Musk’s Tesla on the Model S and Model X before joining OpenAI, where she oversaw services like ChatGPT and DALL-E. She became a top executive at the company and was briefly in charge last year as board members wrestled with the fate of CEO Sam Altman.
When Murati stepped down, Altman credited her for providing support during difficult times and described her as instrumental to OpenAI’s growth.
Murati did not publicly specify why she left OpenAI, other than to say that it seemed like the right time to pursue her own exploration. Dozens of OpenAI’s early employees have left the company in recent years, some out of frustration with Altman’s increasing focus on generating revenue rather than on research. Murati told WIRED’s Levy that there has been “too much obsession” with outputs and not enough with the substance of AI development.
She noted that work on producing synthetic data to train models and increased investment in the computing infrastructure that powers them are important areas to pursue. Advances in those areas will one day make AGI possible, she said. But not everything is technological. “This technology is neither inherently good nor bad,” she said. “It comes with both parts.” It is up to society, Murati said, to continue collectively steering models toward good, so that we are well prepared for the day AGI arrives.