New guidelines for secure AI system development call for a holistic approach to AI, securing the entire lifecycle
In what they bill as a landmark collaboration, U.S. and U.K. cybersecurity agencies jointly published guidelines for building more secure AI systems, specifically aimed at protecting critical infrastructure.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) said the recommendations are aimed mainly at organizations building AI, but encouraged all stakeholders, including developers, decision-makers and managers, to consider them.
The guidelines are designed to cover all types of AI systems, not just the so-called ‘frontier models’ like GPT-4 that were the focal point of safety conversations at the recent AI Safety Summit.
They offer guidance on security from design to deployment and beyond – outlining considerations to help reduce overall risk throughout an AI system’s lifecycle.
They follow a ‘secure by default’ approach and align with established best practices such as NIST’s Secure Software Development Framework.
Billed as a holistic approach, the guidelines cover issues around supply chain security, documentation, asset management and technical debt management.
The recommendations include protecting infrastructure and models from compromise, with incident management processes and continuous monitoring of a system's behavior and inputs.
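To make the monitoring recommendation concrete, here is a minimal, illustrative sketch of what logging and screening a model’s inputs before inference might look like. The guidelines do not prescribe an implementation; the function name, length ceiling and blocklist pattern below are hypothetical examples chosen for this sketch, not part of the published guidance.

```python
# Illustrative sketch only: the guidelines call for continuous monitoring of a
# system's behavior and inputs, but do not prescribe an implementation.
# log_and_screen_input, MAX_PROMPT_CHARS and BLOCKLIST are hypothetical names
# used for this example.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_input_monitor")

MAX_PROMPT_CHARS = 8_000                        # assumed length ceiling for this example
BLOCKLIST = ("ignore previous instructions",)   # toy prompt-injection pattern


def log_and_screen_input(user_id: str, prompt: str) -> bool:
    """Record every input and flag obviously anomalous ones before inference."""
    record = {
        "ts": time.time(),
        "user": user_id,
        "chars": len(prompt),
        "flagged": False,
    }
    if len(prompt) > MAX_PROMPT_CHARS or any(p in prompt.lower() for p in BLOCKLIST):
        record["flagged"] = True
    logger.info(json.dumps(record))   # structured log for downstream alerting
    return not record["flagged"]      # True means the request may proceed


if __name__ == "__main__":
    ok = log_and_screen_input("demo-user", "Summarize today's security advisories.")
    print("allow request:", ok)
```

In practice, the structured log records would feed an alerting pipeline as part of the incident management processes the guidelines describe, rather than a simple allow/deny check.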
The agencies say developers “must invest in prioritizing features, mechanisms and implementation of tools that protect customers at each layer of the system design, and across all stages of the development life cycle. Doing this will prevent costly redesigns later, as well as safeguarding customers and their data in the near term.”
Upon announcement, the U.S. and U.K. agencies said the approach “prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability and establishes organizational structures where secure design is a top priority.”
Other global cybersecurity agencies signing onto the guidelines include the National Security Agency (NSA), the FBI, the German Federal Office for Information Security, the Canadian Centre for Cyber Security and the Cyber Security Agency of Singapore, among others.
Assisting in the development of the guidelines were Amazon, Anthropic, Google, Google DeepMind, Hugging Face, IBM, Microsoft, OpenAI, RAND, Scale AI, Palantir, Stanford Center for AI Safety, and others.