Four AI Leaders Create Industry Body on AI Safety Research
OpenAI, Google, Microsoft and Anthropic launched the Frontier Model Forum to coordinate research efforts
At a Glance
- OpenAI, Microsoft, Google and Anthropic launch an industry body for safe AI development called the Frontier Model Forum.
- Initiatives include safety research coordination and information sharing with policymakers.
OpenAI, Microsoft, Google and Anthropic are joining forces to create an industry group to ensure safe AI model development.
The Frontier Model Forum will see the four AI leaders combine their expertise to create a public library of solutions to support industry best practices and standards.
The Frontier Model Forum said it plans to cover four key areas:
- Advance AI safety research to promote responsible development of frontier models and minimize potential risks
- Identify safety best practices for frontier models
- Share knowledge with policymakers, academics, civil society and others to advance responsible AI development
- Support efforts to leverage AI to address society’s biggest challenges
Its members will coordinate research efforts on adversarial robustness, emergent behaviors and anomaly detection while also creating “secure mechanisms” for sharing information about AI risk, according to the founding companies.
Brad Smith, vice chair and president of Microsoft, described the initiative as “a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Smith added.
What is a frontier model?
The quartet focuses on what it calls ‘frontier models’ – which it defines as large-scale machine learning models that “exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.”
Frontier models are not to be confused with foundation models, or general-purpose artificial intelligence systems like ChatGPT. Essentially, ‘frontier AI model’ appears to be a blanket term for the large multimodal AI systems these companies are developing.
The group said they want other companies to join them – opening the door for the likes of Meta, Amazon and Apple.
Any new member of the Frontier Model Forum would have to be willing to collaborate toward “the safe advancement of these models.”
“The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety,” said Dario Amodei, CEO of Anthropic.
Getting on the bandwagon
The group will start by creating an advisory board to guide its strategy and priorities. The founding companies will establish a charter, outline governance and provide funding as well as appoint an executive board to lead efforts. They stressed they would consult both civil society and governments to help shape the forum.
The newly formed group will work to support initiatives such as the G-7's Hiroshima process, the OECD’s work on AI risks and the U.S.-EU Trade and Technology Council. The Forum also plans to support projects like MLCommons and the Partnership on AI.
The formation of the Frontier Model Forum comes as AI developers face increased scrutiny.
Governments around the world are moving to establish governance and regulations to keep AI risks in check. The Forum’s founders were among the companies that met with world leaders to persuade them not to stymie AI development.
Just last week, the four companies along with Amazon and Llama 2 developer Meta agreed to a White House framework that would allow independent experts to conduct security tests of their AI systems before release.