OpenAI Board Gets Veto Power Over AI Model Launches

OpenAI created a new team to test AI models for safety and report to top management and the board

Ben Wodecki, Jr. Editor

December 20, 2023


At a Glance

  • OpenAI has launched a Preparedness team that will vet its models for safety prior to deployment.
  • AI models will be scored on four risk levels: low, medium, high and critical. High and critical models need more guardrails.
  • The board can reverse the decisions of top management.

In the wake of its recent boardroom turmoil, OpenAI is launching a dedicated team to oversee safety tests on its models.

OpenAI had fired CEO Sam Altman, reportedly over concerns raised by Chief Scientist Ilya Sutskever that the startup was moving too quickly to commercialize its technology rather than proceeding thoughtfully, with stronger guardrails against potential harms. Altman returned five days later after most of the staff threatened to quit in solidarity with him.

Now, OpenAI is bolstering its safety mandate with the debut of a new Preparedness team that will evaluate its foundation models and probe their limits. Reports compiled by the team will be sent to the OpenAI leadership team as well as the new board of directors.

While the leadership team will decide whether to move forward with a system following those tests, the board now holds the right to reverse its decisions.

“This technical work is critical to inform OpenAI’s decision-making for safe model development and deployment,” an OpenAI blog post reads.

The board’s new powers come after a major reshuffle following the firing-and-rehiring debacle in November. The board is expected to expand from three members to nine, with Microsoft obtaining a non-voting observer seat. It currently consists of former U.S. Treasury Secretary Larry Summers, former Salesforce co-CEO Bret Taylor and Quora co-founder Adam D’Angelo, the latter being the only holdover from the previous board.


OpenAI Preparedness Framework

OpenAI said its “primary fiduciary duty is to humanity” and that it is “committed to doing the research required to make AGI safe.”

With its new Preparedness Framework, OpenAI said it wants to learn from deployments and “use the lessons to mitigate emerging risks. For safety work to keep pace with the innovation ahead, we cannot simply do less, we need to continue learning through iterative deployment.”

Under the framework, the new Preparedness team will conduct regular safety drills to ensure OpenAI can respond rapidly to issues as they arise. OpenAI said it will also bring in qualified, independent third parties to conduct audits.

All OpenAI models will now be evaluated continually, with each model re-assessed at every doubling of effective compute during training runs.
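In practice, that cadence amounts to a simple trigger: re-run the evaluation suite whenever effective training compute has doubled since the last check. A minimal sketch of such a trigger follows; the function name and FLOP figures are illustrative assumptions, not OpenAI tooling:

```python
# Illustrative sketch, not OpenAI code: schedule an evaluation each time
# effective training compute doubles relative to the last evaluated point.
def evaluation_checkpoints(start_flops: float, end_flops: float) -> list[float]:
    """Return the compute milestones at every doubling between start and end."""
    points, compute = [], start_flops
    while compute <= end_flops:
        points.append(compute)
        compute *= 2  # next evaluation due at 2x effective compute
    return points

# A hypothetical run scaling from 1e21 to 1e22 effective FLOPs would be
# evaluated at 1e21, 2e21, 4e21 and 8e21 FLOPs.
print(evaluation_checkpoints(1e21, 1e22))
```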

OpenAI’s tests will evaluate models for issues related to cybersecurity, persuasion, model autonomy and the misuse of systems to create chemical, biological, radiological or nuclear (CBRN) threats.

Models will then be placed into one of four safety risk levels based on those results, similar to the EU AI Act’s risk-based classification of AI systems. Each model is scored as low, medium, high or critical risk. Only models that score ‘medium’ or lower will be deemed suitable for deployment, and only those that score ‘high’ or below can be developed further. Models rated ‘high’ or ‘critical’ will get additional safety measures, as sketched below.
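Those gates reduce to a small decision rule. Here is a minimal sketch, assuming a model’s overall level is its worst score across the tracked categories; all names, types and thresholds are hypothetical, not OpenAI’s actual implementation:

```python
# Illustrative sketch of the gating rules described above; category names,
# types and thresholds are assumptions, not OpenAI's implementation.
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Risk categories tracked by the framework
CATEGORIES = ("cybersecurity", "persuasion", "model_autonomy", "cbrn")

def overall_risk(scores: dict[str, Risk]) -> Risk:
    """Assume a model's overall level is its worst score across categories."""
    return max(scores[c] for c in CATEGORIES)

def can_deploy(scores: dict[str, Risk]) -> bool:
    # Only models scoring 'medium' or below are deemed suitable for deployment.
    return overall_risk(scores) <= Risk.MEDIUM

def can_develop_further(scores: dict[str, Risk]) -> bool:
    # Models scoring 'high' or below may continue to be developed.
    return overall_risk(scores) <= Risk.HIGH

scores = {"cybersecurity": Risk.MEDIUM, "persuasion": Risk.HIGH,
          "model_autonomy": Risk.LOW, "cbrn": Risk.MEDIUM}
assert not can_deploy(scores) and can_develop_further(scores)
```

In this example, a single ‘high’ persuasion score blocks deployment but still permits further development, mirroring the thresholds described above.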


OpenAI said its Preparedness Framework is currently in beta, with plans to update it continually.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

