Google DeepMind CEO on AGI, OpenAI and Beyond – MWC 2024

Google DeepMind CEO Demis Hassabis also gave a glimpse into how Google lost ground to OpenAI in the AI race

Ben Wodecki, Deborah Yao

February 26, 2024

6 Min Read

At a Glance

  • Google DeepMind CEO Demis Hassabis said AGI is developing gradually, not as a step change, as compute, methods and data scale.
  • He said Google developed the major AI innovations of the past decade, but OpenAI moved faster with its hacker mentality.

In 2010, Demis Hassabis co-founded what would become one of the most influential AI labs in the world: DeepMind, named after the term deep learning. The company, which Google acquired in 2014, had grand designs for building artificial general intelligence, or AGI.

How is that endeavor going?

“It’s looking like it’s going to be a more gradual process rather than a step function,” he said during a keynote fireside chat at Mobile World Congress 2024 in Barcelona, Spain. Today’s AI systems are becoming “incrementally more powerful” as the compute, techniques and data used to build them are scaled up.

Significant advances could come in the next few years from new innovations that improve AI’s ability to plan, remember and use tools – capabilities current-generation AI systems lack. In the meantime, AI is already proving useful in many other endeavors.

The CEO defines AGI as a system that can perform almost any cognitive task that humans can. He said a human reference point is needed because the human brain “is the only … proof we have maybe in the universe that general intelligence is possible.”

But how will we know AGI when we see it? It is a question hotly debated in the field of AI. For Hassabis, it may either be obvious when it appears or require extensive testing to determine.

“One way is to actually test the systems on thousands and thousands of tasks that humans do and see if it passes a certain threshold on all of those tasks. And the more tasks you put into that test set, the more sure you can be you have the general space covered.”
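
To make that idea concrete, here is a minimal sketch in Python of the threshold-over-many-tasks check Hassabis describes. The task names, scores and the 0.9 bar are purely illustrative assumptions, not a benchmark Google DeepMind has published.

```python
# Illustrative sketch of the generality test Hassabis describes: run the
# system on a large suite of human tasks and require that it clears a
# threshold on every one. All names and numbers here are hypothetical.

def passes_generality_test(task_scores: dict[str, float],
                           threshold: float = 0.9) -> bool:
    """Return True if the system clears `threshold` on every task.

    `task_scores` maps a task name to a normalized score in [0, 1].
    The more tasks in the suite, the more of the "general space" is covered.
    """
    return all(score >= threshold for score in task_scores.values())

# Three hypothetical tasks out of the "thousands and thousands":
scores = {"translation": 0.95, "planning": 0.72, "tool_use": 0.91}
print(passes_generality_test(scores))  # False: planning falls below the bar
```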

What put DeepMind on the map

Amid its quest to develop AGI, it was another AI system that helped cement DeepMind as a key player in the AI space: AlphaFold.

The system predicts protein structures, and in 2022 it was used to map nearly all 200 million known proteins.

Commenting on the project at MWC, Hassabis used AlphaFold as an example of a non-general AI system that could be used to further human knowledge.

He said mapping every known protein would have taken a billion years of doctoral-level work – roughly five years per structure, multiplied across 200 million proteins – something his team did in just one year.

Over a million researchers have used the model, according to the Google DeepMind CEO, and now he wants it to power drug discovery.

And that is a goal parent company Alphabet has in mind – it formed Isomorphic Labs in 2021 to reimagine drug discovery with AI systems like AlphaFold 2.

Isomorphic penned deals with pharma giants Novartis and Eli Lilly in January to use AI to design new drugs. According to Hassabis, drugs designed by AI will hit clinics in the next couple of years.

“It’s really having a material impact now on drug discovery, and I hope that drug discovery will shrink from 10 years to discover one drug down to maybe a matter of months to discover drugs to cure these terrible diseases.”

How Google lost ground to OpenAI

Hassabis noted that most of the major AI innovations of the past decade came from Google Research, Brain and DeepMind. OpenAI actually took these ideas and techniques and “applied Silicon Valley growth mentality, hacker mentality to it, and scaled it to sort of maximum speed,” he said.

OpenAI’s unusual path to success with its models, moreover, was not coming up with a new innovation but scaling up existing ones.

“I don’t think anyone predicted it, maybe even including them, that these new capabilities would just emerge just through scale, not for inventing some new innovation, but actually just sort of scaling,” Hassabis said.

“And it’s quite unusual in the history of most scientific technology fields where you get step-changing capability by doing the same thing, just bigger – that doesn’t happen very often. Usually, you just get incremental capabilities, and normally you have to have some new insight or some new flash of inspiration, or some new breakthrough in order to get a step change. And that wasn’t the case here.”

The other surprising thing was that with ChatGPT, “the general public seems to be ready to use these systems even though they clearly have flaws – hallucinations, they’re not factual,” Hassabis said.

Google’s thinking was that these systems needed to be “100 times more accurate” before being released, but OpenAI released ChatGPT anyway and it turns out “millions of people found value out of that,” he added. “It didn’t have to be 100% accurate for there to be some valuable use cases there, so I think that was surprising for the whole industry.”

Hassabis said Google also thought these systems would have narrower use cases for scientists and other specific professions. But actually, the “general public was willing to use slightly messier systems and find value and use cases for them. So that then precipitated a change in (Google’s) outlook.”

This led to Google’s merging of Google Brain, a team within Google Research, with DeepMind in April 2023. The goal was to combine “all of our compute together and engineering talent together to build the biggest possible things we can,” he said. “Gemini, our most advanced, most capable AI model, is one of the fruits of that combination.”

On the future of AI

What does Hassabis believe the future of AI will look like? He said last May that DeepMind’s dream of AGI may be coming in a few years, but for now, his team is exploring new areas to apply AI.

One of those areas is materials science – using AI to help discover new types of materials.

“I dream of one day discovering a room temperature superconductor. It may exist in chemical space, but we just haven’t found it as human chemists and materials scientists.”

Google DeepMind is also looking at applying AI to weather prediction and climate change, as well as mathematics.

He also said that the next generation of smart assistants will be useful in people’s daily lives “rather than sort of gimmicky as they were in the previous generation.”

Users are already seeing smarter and more adaptable phones, sporting Google’s Gemini features and a new capability to search just by circling an image on the screen.

But in five or more years, “is the phone even really going to be the perfect form factor?” he asked. “Maybe we need glasses or some other things so that the AI system can actually see a bit of the context that you're in to be even more helpful in your daily life.”

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
