
OpenAI and Google DeepMind Workers Warn of AI Industry Risks in Open Letter


A group of current and former employees of prominent artificial intelligence companies issued an open letter Tuesday warning of a lack of safety oversight within the industry and calling for greater protections for whistleblowers.

The letter, which calls for the “right to warn about artificial intelligence,” is one of the most public statements about the dangers of AI by employees within what is generally a secretive industry. Eleven current and former OpenAI employees signed the letter, along with two current or former Google DeepMind employees, one of whom previously worked at Anthropic.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI defended its practices in a statement, saying it had avenues such as a hotline to report problems at the company and that it would not release new technology until proper safeguards were in place. Google did not immediately respond to a request for comment.

“We are proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world,” said an OpenAI spokesperson.

Concerns about the potential harms of artificial intelligence have existed for decades, but the AI boom of recent years has intensified those fears and left regulators scrambling to catch up with technological advances. While AI companies have publicly stated their commitment to developing the technology safely, researchers and employees have warned of a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.

The letter from the current and former AI company employees, which was first reported by the New York Times, calls for greater protections for workers at advanced artificial intelligence companies who choose to voice safety concerns. It calls for a commitment to four principles around transparency and accountability, including a provision that companies will not force employees to sign non-disparagement agreements prohibiting the airing of risk-related AI issues, and a mechanism for employees to share their concerns anonymously with board members.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter states. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

Companies like OpenAI have also used aggressive tactics to keep employees from speaking freely about their work, with Vox reporting last week that OpenAI made departing employees sign extremely restrictive non-disparagement and confidentiality agreements or lose all of their vested equity. Sam Altman, OpenAI’s CEO, apologized following the report and said the company would change its exit procedures.

The letter comes after two senior OpenAI employees, co-founder Ilya Sutskever and key safety researcher Jan Leike, resigned from the company last month. Following his departure, Leike alleged that OpenAI had abandoned its safety culture in favor of “shiny products.”

Tuesday’s open letter echoed some of Leike’s statements, saying the companies are under no real obligation to be transparent about their operations.
