
OpenAI forms safety committee as it trains latest AI model

OpenAI says it is creating a safety and security committee and has begun training a new AI model to succeed the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board of directors on “critical safety and security decisions” for its projects and operations.

The new committee arrives amid a debate over AI safety at the company, which flared after the researcher Jan Leike resigned and criticized OpenAI for letting safety “take a back seat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on long-term AI risks that the two co-led.


OpenAI said it had “recently started training its next frontier model” and that its AI models lead the industry in capability and safety, though it made no mention of the controversy. “We welcome robust debate at this important time,” the company said.

AI models are prediction systems that are trained on vast data sets to generate text, images, videos, and human-like conversations on demand. Frontier models are the most powerful and cutting-edge artificial intelligence systems.

The committee is made up largely of company insiders, including Sam Altman, OpenAI’s CEO; its president, Bret Taylor; and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel at Sony.

The committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards and to make recommendations to the board within 90 days. The company said it will then publicly share the recommendations it adopts “in a manner that is consistent with safety and security.”
