The agency proposes a model that counts objectivity, safety, and accuracy among the potential factors that impact user trust

Ben Wodecki, Jr. Editor

May 20, 2021


The US National Institute of Standards and Technology (NIST) has proposed a method for evaluating trust in AI-enabled devices, and is looking for stakeholder feedback.

Accuracy, objectivity, and safety are among the nine factors identified by NIST as having a potential impact on user trust in AI.

“We are proposing a model for AI user trust,” Brian Stanton, a cognitive scientist in NIST’s visualization and usability group, said.

“It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”

Do you trust me?

The NIST proposal details nine factors that reportedly contribute to a person’s potential trust in an AI system.

The factors differ from the technical requirements for trustworthy AI that NIST is currently developing alongside industry stakeholders.

Stanton’s paper, co-authored with NIST computer scientist Ted Jensen, suggests that a person weighs the trust factors related to an AI system differently depending on both the task itself and the risk involved in trusting the model’s decision.

The paper asks whether human trust in AI systems is measurable, and if so, how to measure it accurately and appropriately.

“Many factors get incorporated into our decisions about trust. It’s how the user thinks and feels about the system and perceives the risks involved in using it,” Stanton said.

The team based the proposal on existing research into trust, including the role of trust in human history and how it has shaped cognitive processes.

The paper details the unique trust challenges associated with AI, and how humans decide whether or not to trust a machine’s recommendations.


“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton said. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”

Stakeholders have been given until July 30 to respond to NIST’s request for comment, and can download a response form and email it to [email protected].

Separately, NIST is seeking comment on China’s policies related to emerging technologies, in order to avoid being outpaced by the rival superpower.

NIST is seeking information on China’s role in organizations responsible for developing international standards in the past decade, as well as the country's impact on Asian standardization strategies for cloud, advanced communication systems, and other emerging technologies listed in the China Standards 2035 blueprint.

NIST will reportedly use the comments to establish international standards for emerging technologies.

The agency’s effort to understand user trust in AI appears to be lagging behind the EU’s plan to regulate AI based on the risks posed by specific systems – with an outright ban on those deemed to be a threat to citizens’ rights or livelihoods.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

