Stability AI CEO Urges Lawmakers to Keep AI Open Source
Emad Mostaque, who leads the company behind Stable Diffusion, recommends five steps for regulators to take - including asking cloud providers to snoop.
May 31, 2023
At a Glance
- Stability AI CEO Emad Mostaque wrote to the Senate Judiciary Subcommittee, urging lawmakers to keep AI open source.
- He outlined five regulatory steps lawmakers can take, including asking cloud providers to do some snooping.
The CEO of Stability AI, the company behind text-to-image generator Stable Diffusion, urged U.S. lawmakers to keep AI open source and outlined five steps they can take to safeguard against AI harms.
Emad Mostaque wrote to the Senate Judiciary Subcommittee on Privacy, Technology and the Law, which recently summoned OpenAI CEO Sam Altman to his first congressional hearing.
Stability AI was also among the companies the White House invited to open their AI models to public evaluation – with Meta and Amazon nowhere in sight.
In a letter to the chair of the subcommittee, Sen. Richard Blumenthal (D-Conn.), and ranking member Sen. Josh Hawley (R-Mo.), Mostaque said the opportunity presented by AI is “significant.”
But “as you consider the future of AI oversight, we encourage the subcommittee to vigorously promote openness in AI,” Mostaque wrote. “These technologies will be the backbone of our digital economy, and it is essential that the public can scrutinize their development.”
He said the transparency afforded by open source models and datasets will boost AI safety, foster competition, and will “ensure the U.S. retains strategic leadership in critical AI capabilities.”
“Grassroots innovation is America’s greatest asset, and open models will put these tools in the hands of workers and firms across the economy.”
Let cloud providers snoop
Mostaque recommended the following steps for oversight of AI:
1. Larger models pose a greater risk of misuse, adaptation or weaponization, but they can be detected because they require “significant” compute resources for training and inference. Ask cloud computing providers to report when their services are used for large-scale or computationally intensive training and inference.
2. Provide operational security and information security guidelines to organizations that develop certain types of “highly capable and highly adaptable” AI models that could pose a serious risk.
3. Users should know when they are interacting with AI, so app developers should disclose AI interactions and obtain user consent before collecting data for AI training. For AI apps with a bigger impact on users, such as those in the financial, medical or legal fields, regulators may consider “robust performance requirements” covering evaluation criteria, reliability, audit or assurance, and interpretability.
4. Create content authenticity standards that social media platforms and AI service and application providers should adopt. These verifications should be part of their content recommendation and moderation systems to mitigate misinformation online.
5. The U.S. should increase its investments in three areas: evaluation frameworks for AI models in partnership with researchers, developers and companies; public compute and test bed resources to support the public research and public evaluation of AI; funding or acquiring a public foundation model that is subject to public oversight, trained on trusted data and made available to organizations across the country.
"There is no silver bullet to address every risk in AI," Mostaque wrote. "Instead, we encourage policymakers to explore practical interventions that target specific, observable, emerging risks."