NIST seeks stakeholder comments on managing risks posed by AI systems
Responses will help shape its draft risk management framework
The US National Institute of Standards and Technology (NIST) is seeking comments on AI risk management, as it works on drafting a guidance document.
The agency is looking for input on how to define and manage the characteristics of AI trustworthiness, and on the potential roadblocks organizations face when seeking to de-risk AI.
Responses will be taken into consideration as NIST continues to draft the Artificial Intelligence Risk Management Framework (AI RMF), a voluntary guidance document.
Risky business
Development of the document was mandated by Congress and forms part of NIST's response to the Executive Order on Maintaining American Leadership in Artificial Intelligence.
The AI RMF could make a critical difference in whether new AI technologies are competitive in the marketplace, according to Deputy Commerce Secretary Don Graves.
"Each day it becomes more apparent that artificial intelligence brings us a wide range of innovations and new capabilities that can advance our economy, security, and quality of life. It is critical that we are mindful and equipped to manage the risks that AI technologies introduce along with their benefits," Graves said.
The deadline to submit responses is August 19, with interested parties urged to download the template response form, and then send the completed version to [email protected].
The agency said it plans to hold a workshop the following month where attendees can help develop the outline for the draft AI RMF.
This latest call for comments comes hot on the heels of a similar request for feedback on a white paper proposing proactive measures that developers can adopt to avoid bias in AI.
That document, titled 'A Proposal for Identifying and Managing Bias in AI', suggests a three-step process for identifying and managing potential lack of fairness at different stages of an AI system's lifecycle.