A former senior OpenAI employee has said the company behind ChatGPT is prioritizing “shiny products” over safety, revealing that he resigned after a disagreement over key goals reached a “breaking point.”
Jan Leike was a key safety researcher at OpenAI, serving as co-lead of superalignment and working to ensure that powerful AI systems adhered to human values and goals. His intervention comes ahead of a global summit on artificial intelligence in Seoul next week, where politicians, experts and technology executives will discuss oversight of the technology.
Leike resigned days after the San Francisco-based company launched its latest artificial intelligence model, GPT-4o. His departure means two senior safety figures have left OpenAI this week, following the resignation of Ilya Sutskever, OpenAI co-founder and fellow co-lead of superalignment.
Leike detailed the reasons for his departure in an X thread posted on Friday, in which he said safety culture had become a lower priority.
“In recent years, safety culture and processes have taken a backseat to shiny products,” he wrote.
OpenAI was founded with the objective of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans,” benefits all of humanity. In his X posts, Leike said that he had been at odds with OpenAI leadership over the company’s priorities for some time, but that the confrontation had “finally reached a breaking point.”
Leike said OpenAI, which also developed the Dall-E image generator and the Sora video generator, should invest more resources in issues such as safety, social impact, confidentiality and security for its next generation of models.
“These problems are quite difficult to solve and I am concerned that we are not on the trajectory to get there,” he wrote, adding that it was becoming “more difficult” for his team to carry out its research.
“Building machines smarter than humans is an inherently dangerous task. OpenAI takes on an enormous responsibility on behalf of all humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI company.”
Sam Altman, CEO of OpenAI, responded to Leike’s thread with a post on X thanking his former colleague for his contributions to the company’s safety culture.
“You’re right, we have a lot more to do; we are committed to doing it,” he wrote.
Sutskever, who was also the chief scientist at OpenAI, wrote in his post announcing his departure that he was confident that OpenAI “will build an AGI that is safe and beneficial” under its current leadership. Sutskever had initially supported Altman’s ouster as head of OpenAI last November, before backing his reinstatement after days of internal turmoil at the company.
Leike’s warning came as a panel of international artificial intelligence experts published an inaugural report on AI safety, which said there was disagreement about the likelihood of powerful AI systems evading human control. However, it warned that rapid advances in technology could leave regulators behind, citing the “potential disparity between the pace of technological progress and the pace of a regulatory response.”