February 8, 2024
Meta Chief AI Scientist Yann LeCun, one of the so-called godfathers of AI, wants to abandon generative AI on the path to artificial general intelligence, in which AI assistants would match human-level intelligence.
At the World AI Cannes Festival, LeCun declared that “machine learning sucks” in its present form and that to achieve smarter AI systems, machines need to understand how the world works, as well as remember, reason and plan.
“The future of AI, I tell you, is non-generative. It works for text, doesn't work for anything else.”
Instead of generative AI, LeCun favors joint-embedding predictive architectures, or JEPA. Meta released its first JEPA-based system last summer, I-JEPA, which learns by predicting missing parts of an image in an abstract representation space rather than generating text or pixels.
“Predicting text is simple,” he said. “But could you take the real world? That's another story. It's just too many details.”
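The core JEPA idea LeCun is describing can be sketched in a few lines. This is a deliberately toy illustration, not Meta's implementation: the encoder and predictor here are hypothetical random linear maps standing in for I-JEPA's transformer networks. The point it shows is that the prediction target and the loss live in representation space, so the model never has to reconstruct every pixel-level detail of the world.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 8 patches, each a 16-dim pixel vector.
patches = rng.normal(size=(8, 16))

# Hypothetical frozen encoder: a random linear map from pixels to a
# 4-dim representation (a stand-in for I-JEPA's target encoder).
W_enc = rng.normal(size=(16, 4))

def encode(x):
    return x @ W_enc

# Mask half the patches; the model only sees the "context" patches.
mask = np.array([0, 1, 0, 1, 1, 0, 1, 0], dtype=bool)
context_repr = encode(patches[~mask])

# Hypothetical predictor: guesses the representation of the masked
# region from the mean context representation (a linear sketch of
# the real transformer predictor).
W_pred = rng.normal(size=(4, 4))
predicted = context_repr.mean(axis=0) @ W_pred

# The JEPA loss is computed in representation space, not pixel
# space -- the model is never asked to reconstruct raw pixels.
target = encode(patches[mask]).mean(axis=0)
loss = float(np.mean((predicted - target) ** 2))
print(round(loss, 4))
```

This is why LeCun calls pixel-level prediction hopeless ("too many details") while representation-level prediction stays tractable: the encoder is free to discard unpredictable detail before the loss is ever computed.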
Expect more JEPA-based systems to come, with LeCun telling the Cannes crowd that the researchers at Meta are working on expanding it to handle videos as well as images.
“I-JEPA has not been yet trained on a big dataset but potentially it is going to get better than DINOv2 because it seems to overpower it.” DINOv2 is Meta’s self-supervised computer vision model.
Last year in Cannes, LeCun went after ChatGPT. This year, it was the turn of artificial general intelligence, or AGI.
“No AI system, no intelligent system is general including humans. We are actually not very good at many things,” LeCun said. “We can buy a €30 game that can beat us at chess.”
The Turing Award winner instead proposed referring to AGI as Advanced Machine Intelligence, or AMI.
But proposing new terminology is one thing; achieving AMI is another. LeCun reaffirmed his view that the concept is decades away. He also said that machines will eventually surpass human intelligence in every domain, though not within the next decade.
“There’s no question it's going to happen,” he said, adding that some breakthroughs may bring that reality closer, but obstacles remain.
He poured cold water on the idea that AI, in its current form, is smart, pointing to use cases like autonomous vehicles: “Any 17-year-old can learn to drive a car with 20 hours of practice. We still don't have Level 5 autonomous cars, unless we cheat with sensors and maps and stuff.” Level 5 driving is fully autonomous, with no human intervention.
LeCun argued that human intelligence is hard to quantify, as it is specialized, non-linear and formed by a collection of skills and prior interactions.
With large language models as the underlying system, LeCun said, there is “absolutely no way” to achieve human-level intelligence without training machines to understand the world.
“Most of what we know as humans comes from our experiences of the world, doesn't come from language. We get an impression because we're so linguistically inclined, but in fact, most of our knowledge that we take for granted comes from our experience and interaction with the real world.”
“If someone claims AGI, whatever they mean by this, is just around the corner, do not believe them. It's just not true.”
LeCun has long argued that AI models should learn about their surroundings without the need for human intervention.
The form of AI currently dominating the landscape is the large language model, or LLM.
However, “a child has seen 50 times more data than the LLMs that are trained on the totality of all text that is perfectly available,” LeCun said.
Also not spared from LeCun’s keynote were auto-regressive models, which predict the next element of a sequence from the elements that came before it. While saying such systems are useful, the Meta scientist derided them on the basis that they capture only a tiny portion of human knowledge.
“There are many things we can do with (auto-regressive models), but as a path towards human-level intelligence, they are an off-ramp.”
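The auto-regression LeCun is criticizing has a simple shape, which a minimal sketch makes concrete. The coefficients below are made up for illustration; an LLM does the same thing with a neural network and tokens instead of a weighted sum and numbers: each step is predicted purely from the steps before it.

```python
def ar_predict(history, coeffs):
    """Predict the next value as a weighted sum of the most recent
    len(coeffs) values, newest first -- the auto-regressive step."""
    lags = history[-len(coeffs):]
    return sum(c * x for c, x in zip(coeffs, reversed(lags)))

# An AR(2) model with hypothetical coefficients:
# next = 0.6 * previous + 0.3 * the one before that.
series = [1.0, 2.0, 3.0]
nxt = ar_predict(series, [0.6, 0.3])
print(nxt)  # 0.6*3.0 + 0.3*2.0 = 2.4
```

Generating a long sequence means appending each prediction and repeating, which is exactly how an LLM emits text token by token; LeCun's objection is that nothing in this loop requires the model to understand the world that produced the sequence.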
LeCun said in the future, every digital interaction humans have will be mediated by AI assistants.
Platforms that will power these assistants, according to the Meta AI scientist, will take the form of devices such as smart glasses, like Meta’s Ray-Ban line.
But this is still far off, as AI needs to be able to plan. “We need machines that can understand the world: They can remember, they can reason, they can plan. These four things LLMs cannot do.”
LeCun said that future AI systems need to be as diverse as their future users and must be able to cater to a variety of languages, cultures and centers of interest.
But such a vision cannot be achieved by a handful of companies. “That cannot be done by a small set of companies on the West Coast of the U.S. or in China,” he said.
Future AI assistant systems will contain “the repository of all human knowledge,” with LeCun likening the assistant of the future to a shared infrastructure akin to the internet or Wikipedia. “AI is going to amplify human intelligence.”
“But we shouldn’t feel threatened by this,” he said, adding that if someone claims AI is going to kill everyone, “don’t listen to them.”
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.