October 30, 2023
At a Glance
- Meta Chief AI Scientist Yann LeCun blasted the CEOs of OpenAI, Google DeepMind and Anthropic for asking for AI regulation.
- He also took aim at fellow Turing Award winners Geoff Hinton and Yoshua Bengio for giving the naysayers "ammunition."
- LeCun took issue with their claims of AI's existential threat to humanity and "lobbying" against open source R&D in AI.
Meta Chief AI Scientist Yann LeCun is blasting his fellow AI luminaries for calling for regulation over fears that AI could wipe out humanity, just as the U.K. prepares to host its first global AI summit this week.
“I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous,” he posted on X (formerly Twitter). “If powerful AI systems are driven by objectives (which include guardrails) they will be safe and controllable because (we) set those guardrails and objectives.”
LeCun noted that current Auto-Regressive LLMs are not driven by objectives, so “let's not extrapolate from their current weaknesses.”
Then he accused the leaders of the three hottest AI companies of colluding to keep AI models closed: OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei.
“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry,” said LeCun.
Next, LeCun turned his attention to MIT professor Max Tegmark and to his fellow Turing Award winners.
“You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D,” he continued, referring to Geoff Hinton and Yoshua Bengio, both towering figures in AI.
LeCun supports open AI research and development as a way to get the technology into everyone's hands. Meta has embraced an open source approach to AI: its flagship large language model, Llama 2, is openly available. But the U.S. government is concerned that open sourcing models would put powerful AI tools in the hands of bad actors.
LeCun said closing off AI models is not the answer.
“If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI. The vast majority of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted.”
Open source prevents AI oligarchy?
He explained he supports open AI platforms because “I believe in a combination of forces: people's creativity, democracy, market forces, and product regulations. I also know that producing AI systems that are safe and under our control is possible. I've made concrete proposals to that effect.”
“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them,” LeCun said. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture. This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia. That won't work unless the platforms are open.”
If regulations kill open source AI models, “a small number of companies from the West Coast of the U.S. and China will control AI platforms and hence control people's entire digital diet. What does that mean for democracy? What does that mean for cultural diversity? *THIS* is what keeps me up at night.”
Last month, after a British newspaper reported that the U.K. AI summit would focus almost entirely on AI’s existential threat, LeCun sarcastically tweeted that the U.K. prime minister has “caught the Existential Fatalistic Risk from AI Delusion Disease (EFRAID). Let’s hope he doesn’t give it to other heads of state before they get the vaccine.”
This was not sour grapes about the summit; LeCun said he had been invited to the global meeting. But Tegmark challenged the tweet, which prompted LeCun's long missive.
The argument began simmering days ago ...
Hinton, who has been called the 'godfather of AI,' tweeted on Oct. 27 that companies are planning to train models with 100 times more computation than current state-of-the-art systems, within 18 months. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."
To which LeCun retorted: "One thing we know is that if future AI systems are built on the same blueprint as current Auto-Regressive LLMs, they may become highly knowledgeable but they will still be dumb. They will still hallucinate, they will still be difficult to control, and they will still merely regurgitate stuff they've been trained on. MORE IMPORTANTLY, they will still be unable to reason, unable to invent new things, or to plan actions to fulfill objectives. And unless they can be trained from video, they still won't understand the physical world."
"Future systems will *have* to use a different architecture capable of understanding the world, capable of reasoning, and capable of planning so as to satisfy a set of objectives and guardrails. These objective-driven architectures will be safe and will remain under our control because *we* set their objectives and guardrails and they can't deviate from them," he added.
"They won't want to dominate us because they won't have any objective that drives them to dominate (unlike many living species, particularly social species like humans). In fact, guardrail objectives will prevent that. ... The idea that smart AI systems will necessarily dominate humans is just wrong."