Research Group Demands Global Shutdown of Foundation Model Development
The Machine Intelligence Research Institute says smarter-than-human AI could "destroy humanity" without safeguards
Nonprofit research group the Machine Intelligence Research Institute (MIRI) is calling for a global shutdown of foundation, or "frontier," model research over apocalyptic safety fears.
A foundation model is an AI system trained on broad data that can be adapted to a wide range of tasks across various modalities.
MIRI believes foundation models will evolve to be smarter than humans and will be capable of “destroying humanity.”
Leading voices in tech, including Elon Musk and Steve Wozniak, previously called for a pause on the development of foundation models more powerful than OpenAI’s GPT-4.
MIRI wants to go one step further, with its recently published communication strategy calling for a complete shutdown of attempts to build any system smarter than a human.
“Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else,” the group said. “We are concerned that most legislation intended to keep humanity alive will go through the usual political processes and be ground down into ineffective compromises.
“Meanwhile, the clock is ticking. AI labs continue to invest in developing and training more powerful systems. We do not seem to be close to getting the sweeping legislation we need.”
MIRI wants governments to force companies developing foundation models to install an “off switch” so that AI systems can be shut down should they develop malevolent or “x-risk” (existential risk) tendencies.
“We want humanity to wake up and take AI x-risk seriously,” the group said. “We do not want to shift the Overton window, we want to shatter it.”
The nonprofit said it remains committed to the idea of AI systems that are smarter than humans but wants humanity to build such AI “only once we know how to do so safely.”
MIRI was founded in 2000 by Eliezer Yudkowsky, with backers including Peter Thiel and Vitalik Buterin, co-founder of the Ethereum blockchain.
The Future of Life Institute, which authored the six-month pause open letter, is also among MIRI’s top contributors.
Bradley Shimmin, chief analyst of AI and data analytics at research firm Omdia, said that without supporting research, MIRI will find it hard to convince lawmakers.
“The market has already considered such issues and concluded that the current and near-future state of the art in transformer-based GenAI models can do little beyond create a useful representation of complex subjects,” Shimmin said. “MIRI's call to action seems to be a step behind the industry in both understanding and proposing a workable solution to any future existential risks posed by AI.”
Shimmin said, however, that MIRI was correct in identifying knowledge gaps between those building and those regulating AI.
“MIRI's work to elevate potential risks is a most welcome voice, one that should be carefully considered by those building the next generation of generative AI and eventual artificial general intelligence (AGI) solutions.”