June 28, 2023
At a Glance
- CEO of Hugging Face calls for a balance of open access and accountability in AI before Congress.
- The AI community faces a fork in the road: keep models open and risk misuse, or keep them closed and controlled by a few tech giants.
As the AI community reaches a fork in the road on whether to keep AI models open - and risk bad actors misusing them - or keep them closed and controlled by a few tech giants, the CEO of Hugging Face advocated for a balanced approach.
In a hearing before Congress, Hugging Face CEO and co-founder Clement Delangue said making AI more open “cultivates safe innovation.”
Delangue, who spoke at the House Committee on Science, Space and Technology hearing on AI, explained that broadening access to machine learning models and training datasets would allow researchers and users to “better understand systems, conduct audits, mitigate risks, and find high-value applications.”
However, the CEO of one of the internet’s largest AI model libraries acknowledged that a fully open system would welcome bad actors, so he advocated for a balance of AI openness and accountability.
“Our approach to ethical openness acknowledges these tensions and combines institutional policies, such as documentation; technical safeguards, such as gating access to artifacts; and community safeguards, such as community moderation," Delangue said. “We hold ourselves accountable to prioritizing and documenting our ethical work throughout all stages of AI research and development.”
Benefits of openness
The Hugging Face CEO argued that open systems foster democratic governance and that giving increased access, especially to researchers, can “help to solve critical security concerns.”
“Openness bolsters transparency and enables external scrutiny,” Delangue said in his testimony. “The AI field is currently dominated by a few high-resource organizations who give limited or no open access to novel AI systems, including those based on open research."
“To encourage competition and increase AI economic opportunity, we should enable access for many people to contribute to increasing the breadth of AI progress across useful applications, not just allow a select few organizations to improve the depth of more capable models.”
The likes of OpenAI have opted to keep models like GPT-4 closed so as not to reveal underlying information about their capabilities. During his congressional testimony, Delangue said all AI models, datasets and “relevant components of an AI system” should share details to improve transparency.
Hugging Face hosts over 250,000 model cards on its platform, and its CEO contended that publishing more documentation on models would help mitigate risks by enabling users and developers to understand how to measure them.
Delangue joined recent calls from AI startup Anthropic to increase investment in the National Institute of Standards and Technology (NIST) to improve AI transparency.
NIST has been working on developing AI standards and Delangue argued during his testimony that the body needs more resources to help fight AI bias and risk.
“Increased funding for NIST would both strengthen U.S. government technical leadership and improve the space for researchers across sectors to collaborate on addressing AI risks,” he said.
Delangue also called for more funding for the U.S. National AI Research Resource (NAIRR), adding that broadening access to resources expands "who is able to contribute to innovative research and applications.”