October 26, 2023
At a Glance
- Rishi Sunak tries to strike a balance on AI safety ahead of his global summit, but experts warn that it is a ‘pipedream.’
- He also invited China, saying it might not have been the easy thing to do, but "it was the right thing to do."
U.K. Prime Minister Rishi Sunak sought to strike a balance between being realistic about the risks of AI and remaining optimistic about its potential benefits, a week before the government’s AI Safety Summit.
In a speech at the Royal Society, the U.K.’s national academy of sciences, he acknowledged that AI poses serious risks of being used by bad actors, but said he did not want to be an alarmist.
“Right now, the only people testing the safety of AI are the very organizations developing it. Even they don’t always fully understand what their models could become capable of,” Sunak said. “And there are incentives in part, to compete to build the best models, quickest. So, we should not rely on them marking their own homework, as many of those working on this would agree.”
Sunak argued against rushing to regulate AI at the risk of squashing innovation.
“How can we write laws that make sense for something we don’t yet fully understand? Instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government.”
Thus far, the U.K. government’s approach to AI legislation has been less stringent than that of the EU. It outlined its requirements in a white paper but tasked regulators with drawing up AI rules for their specific jurisdictions. In June, the country’s AI minister said at our conference, AI Summit London, that any prospective regulation would complement technical standards and assurance techniques, meaning there could be more regulation coming.
Inviting China is ‘the right thing to do’
Sunak said he wants to collaborate with other countries on AI safety, rather than treating them as adversaries. That is why countries such as China were invited to the summit, to be held on Nov. 1 and 2, despite tensions with the U.S. Sunak wants input from a diversity of voices for a robust discussion on regulating AI.
“There can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers,” he said. “That might not have been the easy thing to do, but it was the right thing to do.”
China has taken a stricter approach compared to the West. Chinese AI companies have to go through a security review by the country’s data watchdog before they can release new generative AI models to the public.
However, the country is open to oversight relating to AI, though President Xi Jinping said in June that China would prioritize its own sovereign security over global safety.
Deputy Prime Minister Oliver Dowden confirmed that China has accepted the invitation, although “we’ll wait to see everyone who actually turns up at the summit,” he told BBC Radio 4’s Today program.
The event itself is behind closed doors, similar to the one held by U.S. Senator Chuck Schumer in September.
For the U.K. summit, Sunak said he hopes the event will create a shared understanding of the risks of AI. He wants attendees to agree to the first-ever international statement about the nature of AI risks.
Sunak also wants to establish a “truly global expert panel” for AI, nominated by the countries and organizations attending. The panel would be tasked with publishing a ‘State of AI Science’ report.
“Our efforts also depend on collaboration with the AI companies themselves. … Every new wave will become more advanced, better trained, with better chips, and more computing power. So we need to make sure that as the risks evolve, so does our shared understanding.”
The panel reflects prior calls from senior politicians, including U.N. Secretary-General António Guterres, to create a global watchdog to monitor AI risks, similar to the International Atomic Energy Agency (IAEA), which monitors nuclear power plants and weapons.
Industry reaction: Skepticism and pipedreams
Sunak has been keen to showcase to the world that the U.K. is an AI superpower. This vision included spending $120 million on a group tasked with advising the government on AI, which includes Turing Award winner Yoshua Bengio.
Big AI names are due to attend the summit, including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei, alongside world leaders like U.S. Vice President Kamala Harris. The event, which will have 100 attendees, will be held at historic Bletchley Park, where codebreakers led by pioneering computer scientist Alan Turing cracked the Enigma code during World War II.
Maya Dillon, head of AI at Cambridge Consultants, said the summit brings "to the forefront the need to have the highest level of international cooperation between governments and industry," but cautioned that "the challenge is not just to ride the AI wave but to direct its course. The AI revolution isn't merely about leveraging a tool; it's about sculpting a future where business prosperity and societal welfare are inseparably intertwined. Collaboration is a must."
Paul Henninger, KPMG U.K.’s head of connected technology, said the event will kick-start collaborative approaches to AI risk assessments, but organizations will “welcome regular updates to guidance as new use cases develop, and the technology evolves.”
The ultimate aim of the event is to agree on how to regulate AI. But such a hope is a "pipedream," in the view of Chris Royles, EMEA Field CTO at Cloudera, who said it could take years for effective regulation to come out. Instead, he said companies should focus on ensuring their AI is trained on trusted proprietary data.
Fabien Rech, general manager at Trellix, agreed, adding that taking a security-first approach to AI "allows organizations to regain confidence, giving them the upper hand while protecting the business from cybercriminals looking to leverage generative AI.”
Report highlights genuine potential risks of AI
Hours before Sunak's speech, the U.K. government published a paper outlining a wide range of AI risks:
Societal harms: Degradation of the information environment through generation of misinformation and deepfakes; labor market disruption from automation; and algorithmic bias leading to unfairness.
Misuse risks: Dual use including the potential for aiding the development of weapons, as well as improving the effectiveness of cyberattacks and disinformation campaigns.
Loss of control risks: Humans ceding control of decisions to misaligned AIs, and advanced AI agents actively seeking to increase their own influence and reduce human control (both seen as controversial and unlikely in the near future).
The report also highlights cross-cutting risk factors that could exacerbate these, such as the difficulty of designing safe AI systems, evaluating their safety, and tracking their use, as well as a lack of incentives for safety.
In response to the report, Deployteq CEO Sjuul van der Leeuw said it shows the U.K. government is taking “a serious approach” to AI safety. "AI holds huge opportunities for businesses and industries … but only when supported by the right regulation and guidance from government and policymakers."