Google-backed Anthropic Calls for More Public Funding for AI Standards
There is ‘concerning stagnation in funding’ despite the rise of AI tech
At a Glance
- AI startup Anthropic wants the U.S. government to increase NIST funding for AI standards development.
- The startup proposes raising NIST’s AI budget to $50 million for 2024, double its 2021 level.
Anthropic, the AI startup founded by former OpenAI engineers, has called on the U.S. government to increase funding to the National Institute of Standards and Technology (NIST) to support AI standards efforts.
In a blog post on the company’s website, Anthropic said an increase in NIST funding would ensure the agency is “well placed to carry out its work promoting safe technological innovation.”
Anthropic proposes NIST’s AI budget be raised by another $15 million to $50 million for 2024, double what it received in 2021.
The comments were made following the U.S. Commerce Department’s budget hearing for fiscal year 2024. Commerce Secretary Gina Raimondo said the budget would include $1.6 billion to support the work of NIST – but just under $40 million of that would go toward AI.
Anthropic is calling for more. The startup behind the generative AI model Claude argued that NIST has lacked resources and faced “concerning stagnation in funding” despite rapid advancements in AI.
“With an ambitious investment, NIST could build on fundamental measurement techniques and standardize them across the field,” the startup said. “Additional resourcing would also allow NIST to build much-needed community resources, such as testbeds, to assess the capabilities and risks of today’s open-ended AI systems.”
The startup said that increasing NIST’s funding would raise public trust in AI and create “a market for system certification and positive incentives for developers to participate.”
NIST requested substantial budget increases in 2021, 2022 and 2023 but received far less than it asked for. Its 2020 budget was $20 million, rising to only about $30 million in 2023.
Until the Biden administration’s recently announced first steps toward regulating AI, NIST’s work was the closest thing the U.S. had to rules on AI. Among its AI efforts, the standards agency developed a voluntary framework for AI risk management.
The initial version of the framework, published in January, is a guidance document for voluntary use by organizations that design, develop, deploy or use AI systems, helping them manage the risks of AI technologies. NIST developed the framework at the direction of Congress under the National Defense Authorization Act of 2021.