Guidance Published to Help Developers Build Safer AI

The National Institute of Standards and Technology also launched a series of tests AI model developers can implement to evaluate generative systems

Ben Wodecki, Jr. Editor

May 1, 2024


The National Institute of Standards and Technology (NIST) has released several guidance documents to help companies build AI more safely.

NIST published four draft publications designed to advise businesses implementing chatbots and text-to-image and video generation systems.

Also published were documents on developing global AI standards and promoting transparency.

Because the publications were initially released as drafts, NIST is seeking feedback to help finalize them ahead of a final release later this year.

The guides are designed to work with other AI-related NIST publications like the AI Risk Management Framework and Secure Software Development Framework.

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Laurie E. Locascio, NIST director and under secretary of commerce for standards and technology. “These guidance documents will not only inform software creators about these unique risks but also help them develop ways to mitigate the risks while supporting innovation.”

The AI RMF Generative AI Profile contains a list of 13 potential risks posed by generative AI output and more than 400 actions developers can take to mitigate them.

The document outlines the potential risks, categorizing them by whether they stem from technical issues, human misuse or broader societal concerns.


The second publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, focuses on securing the underlying software code.

Developers can use it to help take action against potentially malicious information found in datasets.

It offers guidance on training data collection processes, including recommendations on analyzing text for signs of bias and manipulation.

Reducing Risks Posed by Synthetic Content provides insights into AI-generated content, including how to potentially implement transparency measures like watermarking and recording metadata.

The fourth guidance document, A Plan for Global Engagement on AI Standards, focuses on information sharing. Users are provided with recommendations on standards and cooperation in AI development.

NIST has also launched a series of tests AI model developers can implement to evaluate generative systems.

The NIST GenAI program tests whether a generative AI system outputs potentially discriminatory content. It also evaluates whether outputs are distinguishable from human-produced content.

The tests currently only work on text generation systems, with support for more modalities, like images, video and code, coming soon.


“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” said U.S. Secretary of Commerce Gina Raimondo. “With these resources and the previous work on AI from the department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

NIST’s publications come 180 days after President Biden signed the AI executive order.

“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said Secretary Raimondo.

In addition to NIST’s publications, the U.S. Patent and Trademark Office (USPTO) has launched a request for comment on how AI could affect evaluations of whether an invention is patentable under U.S. law.

The USPTO, which resides in the Commerce Department, wants views from intellectual property experts on how AI could affect an examiner’s determination of what qualifies as prior art, a key assessment used to evaluate the novelty of an invention.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
