Nations Pledge to Make AI 'Secure By Design.’ Can They Go Beyond Nice Platitudes?

Eighteen nations signed an agreement from the U.S. and U.K. cybersecurity agencies to keep AI systems safe. Will it work?

Sascha Brodsky, Contributor

December 12, 2023

4 Min Read
Getty Images

At a Glance

  • U.S. Cybersecurity and Infrastructure Security Agency and U.K. National Cyber Security Centre released safe AI guidelines.
  • Eighteen nations pledged to keep AI 'secure by design.' Experts say the agreement is too general to be effective.

The recent pledge by 18 countries to create AI systems that are "secure by design” is only the beginning of what is necessary to safeguard them, experts say.

Signing on to published guidelines from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) on building secure AI, these nations concurred that companies involved in creating and implementing AI should ensure its usage is safe for consumers and the general populace while safeguarding them from potential misuse.

But the pact, which is not legally enforceable, mainly offers broad guidelines. These include monitoring AI systems for misuse, securing data against unauthorized alterations, and conducting thorough evaluations of software providers.

“This is a good start,” Fred Rica, the former head of cyber risk at KPMG and currently a partner at accounting and consulting firm BPM, said in an interview. “It focuses on risks, provides a set of principles, creates alignment, and raises awareness. But those things, while good and important, are a far cry from any sort of prescriptive guidance as to what exactly constitutes ‘secure’ or, even more specifically, ‘secure by design.’”

Guidelines meant to keep AI safe

The new AI guidelines have grand ambitions to keep the world safe from rogue chatbots and hackers penetrating LLMs. They include broad suggestions like closely overseeing the AI model's infrastructure, keeping an eye out for any interference with the models both before and after their launch, and educating staff about cybersecurity risks. However, the directives are short on details and do not address some controversial topics in AI, such as what kind of regulations might be needed for image-generating models and deepfakes or data collection and usage methods in training models.

Related:Build More Secure AI: US, UK Strike Landmark Cybersecurity Agreement

"Both AI and traditional software are the product of complex supply chains, and security cannot be added as a ‘final layer of paint’ on just the places that are needed,” George Davis, the CEO of Frame AI, said in an interview. “For traditional software development, CISA recommendations like ‘avoid default passwords’ build on 40-plus years of global experience with networked computer systems. For AI systems, we are rushing to anticipate use cases and the resulting risks at a moment of extreme innovation - barely one year into the widespread use of generative AI.”

Setting early rules for AI has risks, Davis said. Acting too hastily could lead to an excessive emphasis on training large AI models such as GPT-4 or Bard for safety while neglecting the safety of smaller, more specialized models and the systems in which they are implemented. It also could lead us to worry too much about the dangers of AI chatbots, like the fear that ChatGPT could teach bad things, and not enough about other uses of AI, like analyzing data with different safety needs.

Related:UK's AI Safety Summit: What They Discussed

Next steps for security

Protecting AI systems is similar to addressing standard computer security issues. Starting with a risk assessment of AI is crucial for security and control, Rica said. He suggested that designers and developers need to think about potential risks to the AI system, such as data tampering, manipulation of the model or external interference. They also need to consider the entire AI process, including third-party services, and how to control and monitor access to the system.

Another critical step is to independently check and confirm that the AI's security and controls are working as they should. For instance, a good approach is to define clear policies and principles for designing and developing AI, focusing on values like fairness, transparency and inclusivity. Then, get an outside party to verify that these principles are followed.

“Someone once said ‘the best firewall is a cable cutter’,” Rica noted. “With any system or technology, there is always a balance between usability and controls. With that backdrop, it is highly likely we will develop an appropriate balance and develop and deploy AI systems with appropriate controls. Additionally, part of the amazing power of a technology like AI is that we will be able to use AI to protect AI. We are seeing that already in certain areas – like using AI to detect deepfakes and using AI to model the likely paths a hacker might take to break into a system.”

While some elements of AI security, such as data protection, have been around for a while, other aspects are relatively new and unique to AI. For example, large language models (LLMs) are inherently prone to prompt-injection attacks, which can cause them to behave in ways for which they were not designed, making it a concern as AI gets more autonomy and agency to take actions, Alastair Paterson, the CEO of Harmonic Security, said in an interview.

“LLMs are also master persuaders, and ensuring they are not misused to manipulate members of the public will be a tremendous challenge,” he added. “We do not yet have solutions to these issues, let alone the ones coming down the track as the technology evolves. As such, it is hard to be optimistic about our ability to make AI safe and secure, but we must keep trying by innovating rapidly in the security industry to meet these new challenges.”

About the Author(s)

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like