
OpenAI employees warn of a culture of risk and retaliation

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence that poses undue risks, operating without sufficient oversight, and gagging employees who might witness irresponsible behavior.

“These risks range from a further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, which could lead to human extinction,” reads the letter, published at righttowarn.ai. “As long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

The letter asks not only OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls on companies to establish “verifiable” ways for workers to provide anonymous feedback about their concerns. “Ordinary whistleblower protections are insufficient because they focus on illegal activities, while many of the risks we are concerned about are still unregulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases throughout the industry.”

OpenAI came under fire last month after a Vox article revealed that the company had threatened to claw back equity from departing employees who declined to sign non-disparagement agreements prohibiting them from criticizing the company or even mentioning the existence of such an agreement. OpenAI CEO Sam Altman said on X recently that he had not been aware of the arrangement and that the company had never clawed back anyone’s vested equity. Altman also said the clause would be removed, freeing employees to speak out. OpenAI did not respond to a request for comment at the time of publication.

OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s most powerful AI models was effectively disbanded after several prominent figures left and the remaining team members were absorbed by other groups. A few weeks later, the company announced that it had created a safety committee, led by Altman and other board members.

Last November, Altman was fired by OpenAI’s board of directors for allegedly failing to disclose information and deliberately misleading it. After a very public fight, Altman returned to the company, and most of the board was replaced.

Signatories of the letter include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers currently working at rival AI companies. It was also endorsed by several renowned AI researchers, including Geoffrey Hinton and Yoshua Bengio, who won the Turing Award for their pioneering AI research, and Stuart Russell, a leading AI safety expert.

Former employees who signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at OpenAI.

“The general public currently underestimates the pace at which this technology is developing,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and who left the company more than a year ago to pursue a new research opportunity. Hilton says that while companies like OpenAI publicly commit to developing AI safely, there is little oversight to ensure that this is the case. “The protections we’re asking for are intended to apply to all cutting-edge AI companies, not just OpenAI,” he says.

“I left because I lost trust that OpenAI would behave responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI. “There are things that happened that I think should have been revealed to the public,” he adds, declining to give details.

Kokotajlo says the letter’s proposals would provide greater transparency, and he believes there is a good chance OpenAI and others will reform their policies given the backlash to news of the non-disparagement agreements. He also says AI is advancing at a worrying speed. “The stakes are going to be much, much, much higher in the coming years,” he says, “at least I think so.”