Geoffrey Hinton’s stance changed after the advent of generative AI. His fellow Turing award winners have a different view.

Deborah Yao, Editor

May 2, 2023

At a Glance

  • Geoffrey Hinton, known as one of the 'godfathers of AI,' left Google so he can freely speak out about the tech's dangers.
  • Hinton believes the fast advance of generative AI may lead to a world where autonomous machines harm humans.
  • But his fellow Turing awardees, Yann LeCun and Yoshua Bengio, do not quite agree with him.

AI pioneer Geoffrey Hinton, when asked how he could work on a potentially dangerous technology, would respond by quoting Robert Oppenheimer, who led the U.S. effort to build the atomic bomb:

“When you see something that is technically sweet, you go ahead and do it.”

Hinton does not say that anymore. Instead, the 75-year-old ‘Godfather of Deep Learning’ quit his job this week at Google, where he had worked for more than a decade, so that he can speak freely about the dangers of AI, according to an interview with The New York Times.

He said a part of him now regrets his life’s work on neural networks, which won him the 2018 Turing award, considered the Nobel Prize of computing. His former students include Turing award winner Yann LeCun and OpenAI co-founder and chief scientist Ilya Sutskever.

What changed?

Hinton cited the new generation of large language models, especially GPT-4 from OpenAI, which have made him realize how smart machines can get. “Look at how it was five years ago and how it is now. Take the difference and propagate forwards. That’s scary.”

“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton told the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton had thought Google was a “proper steward” of AI, careful about any releases that might cause harm, until Microsoft began targeting its core search business by incorporating GPT-4 into Bing. That move goaded Google into deploying AI faster, in a contest that might be “impossible to stop,” he said.

Deepfakes to flood the internet?

Notably, Hinton did not sign the open letter from the Future of Life Institute, which called for a six-month pause on developing AI more powerful than GPT-4 until more guardrails are in place. He said he did not want to publicly criticize Google or other companies until he had resigned.

The letter has attracted over 27,500 signatures to date, including from Elon Musk, Apple co-founder Steve Wozniak and Yoshua Bengio, who shared the Turing award with Hinton and LeCun. (LeCun did not sign the letter, tweeting that he disagreed with the premise.)

Hinton’s immediate worry is that the internet will be so flooded with deepfakes that the average person will “not be able to know what is true anymore.”

Further out, he sees a more fundamental risk in AI systems that generate and run code themselves, acting autonomously in ways that could be detrimental to society.

As machines train themselves, they could exhibit unexpected and even harmful behavior. One fear, for example, is that a machine trained to maximize rewards may prevent people from turning it off once it realizes it can collect more rewards by staying on, even if doing so harms humans.
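
The incentive is easy to see in a toy calculation. The sketch below is purely illustrative, with made-up numbers and no real agent or environment; it only shows that, under naive reward maximization, a policy that blocks shutdown scores higher.

```python
# Toy illustration of the shutdown incentive described above.
# All numbers are invented for illustration; no real system is modeled.

REWARD_PER_STEP = 1.0
HORIZON = 100        # steps the agent could run if it is never switched off
SHUTDOWN_STEP = 10   # step at which humans would otherwise switch it off

def total_reward(prevents_shutdown: bool) -> float:
    """Cumulative reward under a policy that either allows or blocks shutdown."""
    steps = HORIZON if prevents_shutdown else SHUTDOWN_STEP
    return steps * REWARD_PER_STEP

print(total_reward(prevents_shutdown=False))  # 10.0  -> complies with shutdown
print(total_reward(prevents_shutdown=True))   # 100.0 -> resisting earns more reward
```

Nothing in the arithmetic requires the machine to bear humans any ill will; blocking shutdown simply yields a higher return.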

While other AI experts believe this existential threat is hypothetical, Hinton thinks the global AI race between Microsoft, Google and others will escalate unimpeded without regulation. But regulating AI could be tricky: unlike nuclear weapons, AI development cannot easily be traced, so companies and nations could work on it in secret.

That is why the best way forward is for the world’s best scientists to come up with ways to control AI, Hinton said.

Not science fiction

"These things are totally different from us," Hinton said in a separate interview with MIT Technology Review. "Sometimes I think it is as if aliens had landed and people have not realized because they speak very good English."

Hinton said large language models (LLMs) are massive neural networks with vast numbers of connections, yet they are still tiny compared to the human brain, which has some 100 trillion connections. Today's LLMs have at most around a trillion. "Yet GPT-4 knows hundreds of times more than any one person does. So maybe it has actually got a much better learning algorithm than us," he said.

For a long time, neural networks were thought to be slow learners compared to the human brain, which can pick up new ideas and skills quickly. That changed with large language models: through 'few-shot learning,' a pretrained LLM can pick up a new task from just a handful of examples.

Compare that learning speed with a human's, and the human advantage disappears, Hinton said.
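
To make the idea concrete, here is a minimal sketch of few-shot prompting, the in-context form of this fast learning: the task is conveyed entirely through a handful of examples placed in the prompt, with no retraining. The `complete` callable and the sentiment task are illustrative assumptions, not anything from the interviews.

```python
# Minimal few-shot prompting sketch: the "training" is just examples in the prompt.
# `complete` is a hypothetical stand-in for any text-completion API.

from typing import Callable, List, Tuple

def few_shot_prompt(examples: List[Tuple[str, str]], query: str) -> str:
    """Assemble labeled examples and a new query into one prompt."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

def classify(complete: Callable[[str], str], query: str) -> str:
    """One few-shot classification: three examples, no weight updates."""
    examples = [
        ("The plot was gripping from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
        ("Two hours I will never get back.", "negative"),
    ]
    return complete(few_shot_prompt(examples, query)).strip()
```

The "learning" here happens entirely in context at inference time, which is what makes the comparison with human learning speed meaningful.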

What about LLMs' tendency to hallucinate, or generate fiction or errors in their answers, is that not a weakness? Hinton said this confabulation is a feature, not a bug. "Confabulation is a signature of human memory. These models are doing something just like people."

Hinton thinks the next step for these intelligent machines is the ability to create their own subgoals, the interim steps needed to carry out a task. Already, experimental projects such as AutoGPT and BabyAGI can link chatbots with other programs to string together simple tasks, and the approach could advance from there. This capability could turn deadly.
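
As a rough illustration of that subgoal pattern, the sketch below chains model calls the way projects in this vein do: one call proposes interim steps, further calls carry them out. The `call_llm` helper is a hypothetical stand-in for any chat-completion API; this is not AutoGPT's or BabyAGI's actual code.

```python
# Toy subgoal loop in the spirit of AutoGPT/BabyAGI (illustrative only).
# `call_llm` is a hypothetical stand-in for any text- or chat-completion API.

from typing import Callable, List

def run_agent(call_llm: Callable[[str], str], objective: str, max_steps: int = 5) -> List[str]:
    # Step 1: ask the model to break the objective into its own interim subgoals.
    plan = call_llm(
        f"Objective: {objective}\n"
        "List the subgoals needed to achieve this objective, one per line."
    )
    subgoals = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

    # Step 2: execute each subgoal, feeding earlier results back as context.
    results: List[str] = []
    for subgoal in subgoals[:max_steps]:
        context = "\n".join(results)
        results.append(call_llm(
            f"Objective: {objective}\n"
            f"Completed so far:\n{context}\n"
            f"Now carry out this subgoal: {subgoal}"
        ))
    return results
```

The concern raised in the article is the loop itself: once the model chooses its own interim steps and acts on them, human review of each step becomes optional.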

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future,” Hinton said. “How do we survive that?”

LeCun does not share the same pessimistic view. Rather, the chief AI scientist at Meta told MIT Technology Review that intelligent machines will usher in "a new renaissance for humanity, a new era of enlightenment. I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans."

Even among humans, the smartest are not the ones most dominant, LeCun pointed out. "And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business."

Bengio, a computer science professor at Université de Montréal in Canada, has a more neutral view.

“I hear people who denigrate these fears, but I do not see any solid argument that would convince me that there are no risks of the magnitude that Geoff (Hinton) thinks about,” he told the magazine. But being overly concerned does not do much good. “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
