Countries Commit to AI Risk Threshold Sharing at Seoul AI Safety Summit

Attendees signed Seoul Statement to work together to quantify potential risks of multimodal AI

Ben Wodecki, Jr. Editor

May 24, 2024

2 Min Read
U.K. Department for Science, Innovation and Technology

The Seoul AI Safety Summit wrapped up this week with countries worldwide agreeing to share risk thresholds for the development and deployment of foundation models.

The sequel to last November’s AI Safety Summit concluded with attendees signing the Seoul Ministerial Statement, an agreement to collaborate on quantifying potential AI risks — including what would constitute a “severe risk.”

Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the U.K., the U.S. and the EU all signed the statement.

The risk considerations apply to foundation models, also called “frontier models”: AI models that can be applied to a broad range of applications, typically multimodal systems capable of handling images, text and other inputs.

Nations agreed that a severe risk would include a foundation model that could help bad actors access or use chemical or biological weapons.

Attending nations also agreed to work with leading tech companies to publish foundation model risk frameworks ahead of the AI Action Summit, taking place in France in early 2025.

“Through this AI Seoul Summit, 27 nations and the EU have established the goals of AI governance as safety, innovation and inclusion,” said Lee Jong Ho, Korea’s minister of science and ICT. “In particular, governments, companies, academia and civil society from various countries have together advanced to strengthen global AI safety capabilities and explore an approach on sustainable AI development.

Related: UK's AI Safety Summit: What They Discussed

“We will carry forward the achievements made in Korea and [the] U.K. to the next summit in France and look forward to minimizing the potential risks and side effects of AI while creating more opportunities and benefits.”

Following the event, a report on AI safety science will be published. It will include information shared by attendees and is designed to share insights with global policymakers and technology developers.

“The agreements we have reached in Seoul mark the beginning of Phase Two of our AI Safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future,” said U.K. technology secretary Michelle Donelan. “For companies, it is about establishing thresholds of risk beyond which they won’t release their models. For countries, we will collaborate to set thresholds where risks become severe.”

Related: AI Safety Summit: 28 Nations and EU Sign the ‘Bletchley Declaration’


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
