National Security Agency Releases Guidance for Securely Implementing AI Systems

The NSA and other cybersecurity agencies recommend that sensitive AI model information, such as weights, be protected

Ben Wodecki, Jr. Editor

April 17, 2024


The National Security Agency (NSA) has released a series of Cybersecurity Information Sheets (CSIs) aimed at assisting organizations in securely implementing AI systems obtained from external sources.

The resources were jointly created with other cybersecurity agencies from the U.K., New Zealand and Canada.  

While the recommendations are aimed at national security systems, the NSA’s guidance applies to any company deploying AI tools from third parties, especially those operating in high-risk environments.

“AI security is a rapidly evolving area of research,” according to the report. “As agencies, industry and academia discover potential weaknesses in AI technology and techniques to exploit them, organizations will need to update their AI systems to address the changing risks, in addition to applying traditional IT best practices to AI systems.”

Key recommendations include ensuring that firms deploying AI have strong governance frameworks in place. For example, the agencies recommend that staff responsible for an organization's cybersecurity in general should also be accountable for AI system security.

Before any AI deployment, the cybersecurity agencies suggest organizations secure their existing IT infrastructure and validate the AI system they want to deploy prior to full implementation.


The agencies warn that a key priority is restricting access to core AI models and the data on which they are trained, including model weights, which hackers could target for theft or manipulation.

The report advises isolating sensitive data, applying hardware protections and implementing access restrictions, including multifactor authentication and two-person control, to prevent unauthorized access.

It also recommends businesses create and maintain logs of an AI model’s behavior, including information on inputs, outputs, errors and any unexpected modifications that might compromise the model's performance or security.
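The logging recommendation can be illustrated in a few lines. The sketch below is hypothetical (the report does not prescribe any implementation); it assumes a simple callable model and records each call's input, output and any error as a structured log entry for later audit:

```python
import json
import logging
import time

# Hypothetical audit logger illustrating the report's recommendation to
# record an AI model's inputs, outputs and errors.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-audit")

def audited_predict(model_fn, prompt):
    """Call a model function, logging the interaction as a JSON record."""
    record = {"timestamp": time.time(), "input": prompt}
    try:
        record["output"] = model_fn(prompt)
    except Exception as exc:
        # Errors are logged too, since unexpected failures may signal
        # tampering or degraded model behavior.
        record["error"] = repr(exc)
        logger.error(json.dumps(record))
        raise
    logger.info(json.dumps(record))
    return record["output"]

# Example with a stand-in "model" that just uppercases its input.
result = audited_predict(lambda p: p.upper(), "hello")
```

In practice such records would feed the auditing and incident-response processes the report describes, rather than a local log file.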

Additionally, AI systems should be routinely audited, with incident response protocols in place, the report recommended.

The report also suggests organizations implementing AI adopt secure-by-design and zero-trust approaches in their architecture to manage risks to and from the external AI system.

“AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise and advanced threat analysis,” said Dave Luber, the NSA’s cybersecurity director.

The CSIs are the first guidance released by the NSA’s Artificial Intelligence Security Center, established last September. The new entity collaborates with other cybersecurity agencies and academia to develop resources aimed at safeguarding the nation against AI misuse.


The new resource is designed to build upon the NSA’s previous guidance, including the Guidelines for Secure AI System Development and Engaging with Artificial Intelligence.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
