July 26, 2023
OpenAI, Microsoft, Google and Anthropic are joining forces to create an industry group to ensure safe AI model development.
The Frontier Model Forum will see the four AI leaders combine their expertise to create a public library of solutions to support industry best practices and standards.
The Frontier Model Forum said it plans to cover four key areas:
Advance AI safety research to promote responsible development of frontier models and minimize potential risks
Identify safety best practices for frontier models
Share knowledge with policymakers, academics, civil society and others to advance responsible AI development
Support efforts to leverage AI to address society’s biggest challenges
Its members will coordinate research efforts on adversarial robustness, emergent behaviors and anomaly detection while also creating “secure mechanisms” for sharing information about AI risk, according to the founding companies.
Brad Smith, vice chair and president of Microsoft, described the initiative as “a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Smith added.
The quartet focuses on what it calls "frontier models," which it defines as large-scale machine learning models that "exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
Frontier models are not to be confused with foundation models, or general-purpose artificial intelligence systems like ChatGPT. Essentially, "frontier model" appears to be a blanket term for the large multimodal AI systems these companies are developing.
Any new member of the Frontier Model Forum would have to be willing to collaborate toward "the safe advancement of these models."
“The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety,” said Dario Amodei, CEO of Anthropic.
The group will start by creating an advisory board to guide its strategy and priorities. The founding companies will establish a charter, outline governance and provide funding as well as appoint an executive board to lead efforts. They stressed they would consult both civil society and governments to help shape the forum.
The newly formed group will work to support initiatives such as the G-7's Hiroshima process, the OECD’s work on AI risks and the U.S.-EU Trade and Technology Council. The Forum also plans to support projects like MLCommons and the Partnership on AI.
The formation of the Frontier Model Forum comes as AI developers face increased scrutiny.
Governments around the world are moving to establish governance and regulations to keep AI risks in check. The Forum's founders were among the companies that met with world leaders to persuade them not to stymie AI development.
Just last week, the four companies along with Amazon and Llama 2 developer Meta agreed to a White House framework that would allow independent experts to conduct security tests of their AI systems before release.
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.