UK's AI Safety Summit: What They Discussed

U.K. Prime Minister Rishi Sunak meets with world leaders. Also, a synopsis of the roundtable discussions at the summit

Ben Wodecki, Jr. Editor

November 2, 2023


At a Glance

  • Prime Minister Rishi Sunak met with world leaders at the AI Safety Summit.
  • Roundtable discussions led by government leaders ranged from open source risks to developer duties and AI controllability.

U.K. Prime Minister Rishi Sunak met with global leaders to emphasize the need for international cooperation in managing advanced AI systems.

He said that AI “does not respect borders” and that “no country can do this alone,” during the second and final day of his AI Safety Summit, held at the historic Bletchley Park estate, where modern computing was born.

“We’re taking international action to make sure AI is developed in a safe way, for the benefit of the global community,” Sunak said.

He appeared on a private panel session covering AI development and then met with world leaders including Secretary-General of the United Nations António Guterres.

Ahead of the event, the U.N. announced the formation of the AI Advisory Body, which would support the international community’s efforts to govern AI. The body is made up of AI experts from around the world, including Dame Wendy Hall, who developed Microcosm, a hypermedia system that predated the World Wide Web, and Hiroaki Kitano, CTO of Sony.

The body will publish a series of recommendations next year, informed by the discussions at the AI Safety Summit. Sunak and Guterres discussed the need for close global collaboration on AI safety.

Sunak also met with leaders from what he called "like-minded" countries, including U.S. Vice President Kamala Harris, Italian Prime Minister Giorgia Meloni and President of the European Commission Ursula von der Leyen.

Sunak met with Harris at 10 Downing Street the night before, though the pair largely spoke about the situation in Israel and Gaza.

During his meeting with von der Leyen, Sunak praised the Commission’s efforts for “taking the lead” in putting AI on the agenda and gave approbation to the two “working so closely together” on AI governance. Von der Leyen said the AI Safety Summit was “the right conference at the right time.”

Notably, China’s Vice Minister of Science and Technology Wu Zhaohui also spoke at the event despite some geopolitical tension. The U.S. has been trying to curb China’s AI progress due to national security concerns; it is restricting sales of advanced AI chips to China, among other measures.

The U.K. invited China to the summit, since China is a leader in AI and excluding the country would be “naïve,” said U.K. Tech Secretary Michelle Donelan on Bloomberg TV.

In his speech, streamed by CNBC, Wu said “China is willing to enhance dialogue and communication in AI safety with all sides, contributing to an international mechanism with broad participation and a governance framework based on wide consensus, delivering benefits to the people and building a community with a shared future for mankind.”


What leaders, experts discussed

The first day of the summit saw eight roundtable discussions covering key themes and issues, including misuse of systems, risks from loss of control and national governance considerations.

François-Philippe Champagne, Canada’s science minister, chaired a panel about risks to global safety from frontier AI misuse.

His group said the most advanced systems, like GPT-4, could make it easier for less sophisticated actors to carry out attacks. Therefore, there was an “urgent” need for global action among governments, industry, and experts to develop safeguards, the group said, although they acknowledged that understanding of such systems is still in “early stages.”

Yi Zeng of the Chinese Academy of Sciences chaired a panel on unpredictable advances in frontier AI capabilities.

His group acknowledged that the abilities of current AI systems are far beyond what many predicted only a few years ago, and that the “potential benefits of future systems should not be a reason to skip or rush safety testing or other evaluation.”

He also noted the benefits of open source models but warned that it is “impossible to withdraw an open access model with dangerous capabilities once released.” There needs to be a balance between benefits and risks, the group said.

Josephine Teo, Singapore’s minister for communications and information, led a panel examining the risks from loss of control over frontier AI.

She said current AI models are “easily controlled,” “do not present an existential risk” and it is “unclear” whether they would ever be uncontrollable by humans. However, “there is currently insufficient evidence to rule out that future frontier AI, if misaligned, misused or inadequately controlled, could pose an existential threat.”

“This question is an active discussion among AI researchers.”

Michelle Donelan, U.K. tech secretary, chaired a panel on what frontier AI developers should do to scale capabilities responsibly.

Her group concluded that while leading AI companies are making “significant progress” on AI safety policies, more must be done in months, not years. Company policies around responsible AI development “are just the baseline” and must be supplemented by government standards and regulations.

They further contend that standardized benchmarks will be required from trusted external third parties, like the newly announced U.K. and U.S. AI Safety Institutes.

Does the summit even matter?

On the first day of the summit, global leaders signed the Bletchley Declaration, a list of pledges to ensure AI is developed and deployed safely with particular focus on the most advanced systems.

However, the declaration “is not going to have any real impact on how AI is regulated,” noted Forrester principal analyst Martha Bennett.

She said the EU already has the AI Act in the works, while U.S. President Joe Biden released an executive order on AI this week. There is also the G7’s ‘International Guiding Principles on Artificial Intelligence’ and a voluntary ‘Code of Conduct for AI Developers,’ released on Oct. 30.

These “contain more substance than the Bletchley Declaration,” Bennett said.

However, “the countries and entities represented at the AI Summit would not have agreed to the text of the Bletchley Declaration if it contained any meaningful detail on how AI should be regulated. And that’s OK,” she acknowledged. “The Summit and the Bletchley Declaration are more about setting signals and demonstrating willingness to cooperate, and that’s important. We’ll have to wait and see whether good intentions are followed by meaningful action.”

Siân John, NCC Group CTO, said the declaration combined with other global governance initiatives signed this week “represent critical steps forward toward securing AI on a truly global scale.”

“We are particularly heartened to see commitments from the Bletchley signatories to ensure that the AI Safety Summit is not just a one-off event, but that participants will convene again next year in South Korea and France, ensuring continued international leadership. In doing so, it will be important to set clear and measurable targets that leaders can measure progress against.”


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

