Why Global Regulation Might Not Stop Dangerous AI

The U.K.'s AI Safety Summit set out noble goals for international cooperation to mitigate AI risks. They may not be realistic.

Sascha Brodsky, Contributor

November 9, 2023

7 Min Read

At a Glance

  • Last week's AI Safety Summit brought international accord on addressing AI risks. But experts say it may not work.
  • There is too much focus on reining in the technology rather than on its impact on people.
  • Vigilant but dynamic regulation of AI is needed to adapt to this fast-developing technology, experts said.

Governments are working on ways to limit the risks of AI, but some experts are skeptical that the technology can be kept under control.

Last week’s AI Safety Summit in the U.K. brought together the U.S., China, the European Union, Britain, and 25 other nations. The participants agreed on guidelines to address risks such as the spread of false information and serious harm, whether intentional or accidental.

But these are noble goals that may not be realistic.

“AI is an umbrella term for a range of technologies ranging from expert systems to traditional machine learning and, very recently, generative AI,” Kjell Carlsson, the head of data science strategy and evangelism at Domino Data Lab, said in an interview. “This makes it difficult to craft regulation that applies to all of these technologies and the myriad potential use cases.”

The hope of preventing harm

Policymakers at the summit emphasized the importance of ongoing AI research, with a primary focus on safety, in the Bletchley Declaration. The unified front comes as some of the leading minds in AI have warned of everything from the technology eliminating the human race to merely slashing jobs.

However, other observers have suggested that these doomsday scenarios are either overblown or a form of marketing. “AI doomism is quickly becoming indistinguishable from an apocalyptic religion,” Meta’s top AI scientist, Yann LeCun, wrote on X, formerly known as Twitter.


The summit attendees tried to stake out a middle ground in the debate.

“Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace, and prosperity,” the Bletchley Declaration said. “To realize this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy, and responsible.”

The declaration highlights the need for countries to work together to identify AI safety risks and build a shared, science-based understanding of them. It also calls for rules tailored to each country's circumstances and for cooperation and transparency between governments.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” the declaration said.

Individual governments are also taking steps to rein in AI. Before the conference, President Biden issued an executive order on AI development, emphasizing collaboration among government, businesses and universities to guide AI evaluation, foster innovation, protect jobs and ensure consumer privacy.


“My administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice,” Biden said.

Skeptics throw shade on proposals

While the summit shows a newfound international unity on AI safety, the current proposed regulations do not go far enough to limit risks, Milan Kordestani, an AI regulatory expert, said in an interview. He noted that Biden’s order directs federal agencies to develop and implement AI safeguards and encourages the private sector to adopt them, but it lacks specificity.

“The proposed regulations do not directly limit the private sector, nor do they address the ways that individual citizens will interact with AI technology,” he added. “More importantly, these regulations do not address the development of AI in academic institutions nor even engage the academic community as part of a dialogue about AI risks. In these ways, the proposed regulations are not yet situated well enough to address the serious risks inherent in AI.”

Much of the AI regulation being considered in the U.S. and abroad focuses on the technology itself rather than taking a human-centric approach, Kordestani said. We must ask not only how far we can develop the technology, he argued, but also what AI will mean for advancement in other realms, such as medicine or education.


“We will need regulations that address changes in our workforce, our distribution networks, and even the ways our own minds operate,” he added. “The current proposed regulations are an important and necessary first step, but they cannot even begin to address the long-term social implications of AI development.”

“Legislators designing regulations for the internet in the late 1980s could never have predicted our need for regulation of misinformation on social media now: in the same way, AI regulation will need to be a dynamic process to ensure that we are constantly addressing new risks of AI.”

Like nuclear weapons

One problem with trying to keep AI in check is the technology’s global reach. Richard Gardner, the CEO of the tech company Modulus, compared the challenge of regulating AI to reining in nuclear weapons.

“Regulating AI within the borders of a country, or even as an international community, means that enemy nation-states may continue development, even if they sign onto an accord saying that they wouldn’t,” he said in an interview. “So, too, can rogue developers create black-market AI products. It is not unlike the decades spent playing whack-a-mole in Iraq and Iran hunting for nuclear programs, except that computers are much easier to hide.”

When it comes to copyrighted material, Gardner said government regulations should center on the use of robots.txt files, the text files webmasters publish to tell search engine bots which of a site's pages they may crawl. AI systems, he argued, should be programmed to honor those files so that copyrighted or otherwise protected material is excluded from training and use.
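
In practice, honoring such directives is straightforward to check. The following is a minimal sketch, not from Gardner, using Python's standard urllib.robotparser; the "ExampleAIBot" user agent and the sample rules are hypothetical.

```python
# Minimal sketch of an AI crawler honoring robots.txt before collecting
# training data, using Python's standard urllib.robotparser. The
# "ExampleAIBot" user agent and the sample rules below are hypothetical.
from urllib.robotparser import RobotFileParser

# Rules a webmaster might publish to keep an AI crawler away from
# protected material (illustrative only).
SAMPLE_ROBOTS_TXT = """
User-agent: ExampleAIBot
Disallow: /copyrighted/
Disallow: /members/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A compliant data-collection pipeline would check every URL before fetching.
for url in ("https://example.com/blog/post",
            "https://example.com/copyrighted/novel"):
    allowed = parser.can_fetch("ExampleAIBot", url)
    print(f"{url} -> {'fetch' if allowed else 'skip'}")
```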

“Then we need to let innovators continue with R&D,” he added. “The Biden administration’s potentially heavy-handed approach to regulation may end up creating government-sanctioned censorship.”

Even figuring out the broad outlines of how to apply human ethics can be challenging. Current human rights laws form the basis on which AI initiatives can be challenged, but those abstract rights urgently need to be translated into specific regulations, Rob van der Veer, an AI and application security expert at Software Improvement Group, said in an interview.

“Privacy regulations put some boundaries on the purpose of AI systems that use personal data, transparency and fairness, and the protection and handling of personal data involved,” he added. “Security regulations take care of the basic application security in AI systems, but have many blind spots regarding the new threats and assets that AI engineering has, e.g., the poisoning of training data.”
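
To illustrate the kind of threat van der Veer points to, the sketch below (an illustration of the concept, not taken from the article) flips a fraction of training labels in a toy dataset; the dataset, model, and poisoning rate are assumptions chosen only to show how a corrupted training set typically degrades the resulting model.

```python
# Minimal sketch of training-data poisoning via label flipping: an attacker
# who can corrupt part of the training set degrades the resulting model.
# Dataset, model, and the 30% poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Simulate an attacker flipping 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```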

How to make better regulations

The whole field of AI is progressing so rapidly that it is hard to predict, and thus control, what’s around the corner. Regulating AI must be a continuous process of assessing and reassessing new advances and technologies, Kordestani noted. That could mean new legislation or international treaties limiting new training methods or new applications, he suggested.

“Governments should collaborate with companies to ensure that innovation is not stifled and competitive edges can be maintained while also ensuring safe development and use of AI,” he added. “Simultaneously, more academics need to be engaged and respected in this dialogue to ensure that we also maintain the safety of our public from moral perspectives - ensuring equitable access, discussing data ethics and social bias, and asking valuable questions about the meaning of human existence in the face of technological change.”

Ultimately, one of the dangers of AI is that bad-faith actors will develop it for nefarious purposes, Kordestani said, which is why international rules for AI development must keep improving.

“But I believe that it is essential to build these multi-stakeholder approaches with dialogue between the public, private and academic realms to constantly adapt and address dangers and bad-faith actors as they come,” he added. “Vigilant but dynamic regulation of AI can prevent the average citizen from being exposed to most dangers AI could pose to our workforce, our governance, and our daily lives.”

Regulation works better when it is tailored to specific use cases rather than focused solely on a particular technology, Carlsson said. For example, he noted it is more effective to regulate car safety than to hope that general regulations on all combustion engines will lead to safer cars. Likewise, laws preventing the use of deepfakes for fraud are more valuable than laws mandating watermarks on content made with generative AI.

“While this means that regulation will always be catching up to the use cases – since new uses for AI are constantly being invented – this is both unavoidable and appropriate,” he added. “However, it means that we need a regulatory structure that can adapt quickly and both design, enforce, and update regulations quickly. Unfortunately, much like every technology, the potential problems and risks of AI have very little to do with AI and everything to do with humans.”


About the Author(s)

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 
