Countries Commit to AI Risk Threshold Sharing at Seoul AI Safety Summit
Attendees signed the Seoul Statement, agreeing to work together to quantify the potential risks of multimodal AI
The Seoul AI Safety Summit wrapped up this week with countries worldwide agreeing to share risk thresholds for foundation model development and deployments.
The sequel to last November’s AI Safety Summit concluded with attendees signing the Seoul Ministerial Statement, an agreement to collaborate on quantifying potential AI risks — including what would constitute a “severe risk.”
Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the U.K., the U.S. and the EU all signed the statement.
The risk considerations apply to foundation models, also called "frontier models": AI models that can be applied to a broad range of applications, typically multimodal systems capable of handling images, text and other inputs.
Nations agreed that a severe risk would include a foundation model helping bad actors access or use chemical or biological weapons.
Attending nations also agreed to work with leading tech companies to publish foundation model risk frameworks ahead of the AI Action Summit, taking place in France in early 2025.
“Through this AI Seoul Summit, 27 nations and the EU have established the goals of AI governance as safety, innovation and inclusion,” said Lee Jong Ho, Korea’s minister of science and ICT. “In particular, governments, companies, academia, civil society from various countries have together advanced to strengthen global AI safety capabilities and explore an approach on sustainable AI development.
“We will carry forward the achievements made in Korea and [the] U.K. to the next summit in France and look forward to minimizing the potential risks and side effects of AI while creating more opportunities and benefits.”
Following the event, a report on AI safety science will be published. It will include information shared by attendees and is designed to share insights with global policymakers and technology developers.
“The agreements we have reached in Seoul mark the beginning of Phase Two of our AI Safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future,” said U.K. technology secretary Michelle Donelan. “For companies, it is about establishing thresholds of risk beyond which they won’t release their models. For countries, we will collaborate to set thresholds where risks become severe.”