Build More Secure AI: US, UK Strike Landmark Cybersecurity Agreement

New guidelines for secure AI system development call for a holistic approach to AI, securing the entire lifecycle

Ben Wodecki, Jr. Editor

November 30, 2023


At a Glance

  • U.S. and U.K. cyber agencies publish a guide on how to build AI that’s more secure.

In what they bill as a landmark collaboration, U.S. and U.K. cybersecurity agencies jointly published guidelines for building more secure AI systems, specifically aimed at protecting critical infrastructure.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) said the recommendations are mainly for organizations building AI but encouraged consideration by all stakeholders including developers, decision-makers and managers.

The guidelines are designed to cover all types of AI systems, not just the so-called ‘frontier models’ like GPT-4 that were the focal point of safety conversations at the recent AI Safety Summit.

They offer guidance on security from design to deployment and beyond – outlining considerations to help reduce overall risk through an AI system’s lifecycle.

They follow a ‘secure by default’ approach and align with other established best practices, such as NIST’s Secure Software Development Framework.

Billed as a holistic approach, the guidelines cover issues around supply chain security, documentation, asset management and technical debt management.

The recommendations include protecting infrastructure and models from compromise, with incident management processes and continuous monitoring of a system's behavior and inputs.

The agencies say developers “must invest in prioritizing features, mechanisms and implementation of tools that protect customers at each layer of the system design, and across all stages of the development life cycle. Doing this will prevent costly redesigns later, as well as safeguarding customers and their data in the near term.”

Upon announcement, the U.S. and U.K. agencies said the approach “prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability and establishes organizational structures where secure design is a top priority.”

Other global cybersecurity agencies signing onto the guidelines include the National Security Agency (NSA), the FBI, the German Federal Office for Information Security, the Canadian Centre for Cyber Security and the Cyber Security Agency of Singapore, among others.

Assisting in the development of the guidelines were Amazon, Anthropic, Google, Google DeepMind, Hugging Face, IBM, Microsoft, OpenAI, RAND, Scale AI, Palantir, Stanford Center for AI Safety, and others.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
