Going Beyond Sci-Fi: Why AI Poses an Existential Threat

GPT-4 has an IQ of 80 to 90 and is headed for genius level

Deborah Yao, Editor

May 3, 2023


At a Glance

  • AI pioneer Geoffrey Hinton explains why AI could wipe out humanity, warning about a technology he helped pioneer.
  • Hinton changed his views about AI after seeing the power of large language models, especially GPT-4.
  • Hinton estimates GPT-4’s IQ at 80 to 90, on its way to genius level.

Geoffrey Hinton has devoted his life’s work to advancing artificial intelligence. But now the Turing Award winner speaks ominously about how AI could one day (and he is serious) wipe out humanity. He was so alarmed that he left Google so he could speak out more freely.

“If you take the existential risk seriously, as I now do … it might be quite sensible to just stop developing these things any further,” Hinton said at the EmTech Digital 2023 conference held by MIT Technology Review. “I used to think it was way off, but I now think it is serious and fairly close.”

He said he would not have quit Google if his concerns were merely the familiar ones raised by past waves of automation, such as job losses as machines replaced human workers. Hinton said he is speaking out because of a real, existential threat posed by AI.

“So I am sorry, I am sounding the alarm. We have to worry about this, and I wish I had a nice, simple solution. I wish, but I do not.”

But exactly how is AI an existential threat?

Hinton explained that his view changed in the past few months, after the latest large language models (LLMs) were released. He was particularly impressed with OpenAI’s GPT-4, which exhibited “simple reasoning.”

For instance, he asked GPT-4 to solve this problem: “I want all the rooms in my house to be white. At present, there are some white rooms, some blue rooms and some yellow rooms. Yellow paint fades to white within a year. So what should I do if I want (all rooms) to be white in two years’ time?”


GPT-4’s answer was to paint all the blue rooms yellow.

“That’s not the natural solution, but it works, right?” Hinton said. “That’s pretty impressive common sense reasoning of the kind that has been very hard to get AI to do using symbolic AI because they have to understand what ‘fades’ means and (other) temporal stuff.”
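The puzzle can be checked mechanically. Here is a minimal Python sketch of the plan GPT-4 produced; the fade rule and the two-year horizon come from Hinton’s prompt, while the starting room colors are made up for illustration:

```python
# Sketch of the paint puzzle GPT-4 solved, showing why "paint the
# blue rooms yellow" works. The fade rule (yellow -> white within a
# year) and the two-year deadline come from Hinton's prompt; the
# starting room colors below are hypothetical.

def fade(color: str) -> str:
    """Yellow paint fades to white within a year; other colors keep."""
    return "white" if color == "yellow" else color

rooms = ["white", "blue", "yellow", "blue"]  # made-up starting colors

# GPT-4's plan: repaint every blue room yellow now.
rooms = ["yellow" if c == "blue" else c for c in rooms]

# Let two years pass, one fade step per year.
for year in range(2):
    rooms = [fade(c) for c in rooms]

assert all(c == "white" for c in rooms)
print(rooms)  # ['white', 'white', 'white', 'white']
```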

GPT-4’s IQ of 80 to 90

Humans have always been better than machines at reasoning, but now machines have started to reason. Hinton believes GPT-4 has an IQ of about 80 to 90 and is on its way to genius level.

Here is his other point: computers are digital, so one can run many copies of the same model on different clouds that do exactly the same thing. They might look at different data, but the model is the same. If there are 10,000 copies of the model, they can be looking at 10,000 different subsets of the data, and whenever one of them learns anything, all the others learn it too.

People cannot transfer knowledge to each other like that, Hinton said. “If I learn a whole lot of stuff about quantum mechanics and I want you to know all that stuff about quantum mechanics, it is a long painful process of getting you to understand it. They can’t just copy my weights into your brain.”
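What Hinton describes is essentially data-parallel learning: identical copies of a model train on different data shards and synchronize their updates, so every copy immediately “knows” what any one copy learned. The toy linear-regression sketch below (Python with NumPy; all names and numbers are illustrative, not any specific system) shows four copies pooling their gradients into one shared set of weights:

```python
# Toy illustration of Hinton's weight-sharing point: several copies of
# the same model each see a different data shard, then synchronize by
# averaging gradients, so they all end up with identical knowledge.
# Generic data-parallel sketch; the setup is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # target weights of a toy linear model
w = np.zeros(2)                  # shared weights, identical in every copy
shards = [rng.normal(size=(50, 2)) for _ in range(4)]  # 4 copies, 4 data subsets

for step in range(200):
    grads = []
    for X in shards:             # each copy computes a gradient on its own shard
        err = X @ w - X @ true_w
        grads.append(X.T @ err / len(X))
    w -= 0.1 * np.mean(grads, axis=0)  # synchronize: all copies apply the same update

print(np.round(w, 3))            # ~ [ 2. -1.]; every copy now shares this knowledge
```

A human analog would require each learner to be taught separately; here, one averaged update teaches all 10,000 copies at once, which is the asymmetry Hinton is pointing at.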


Can we just turn it off?

How about just unplugging the AI system? It is not that simple.

“These things will have learned from us, by reading all the novels that are everywhere, and everything Machiavelli ever wrote about how to manipulate people,” Hinton said. “If they are much smarter than us, they will be very good at manipulating us. You will not realize what is going on.”

As for stopping the development of AI models, Hinton believes it is a futile effort. “I do not think we are going to stop developing them because they are so useful,” he said.

Just the fact that governments want to use AI in weapons means development will not stop. “There is no way it is going to happen,” Hinton said. “So it is silly to sign a petition, saying, ‘please stop now.’”

Google was the first to develop the technology behind today’s large language models (the Transformer architecture), but it was careful about releasing it, knowing there could be bad consequences, Hinton said. Google could control the release because it was the only AI leader back then.

“Once OpenAI had built similar things using Transformers and money from Microsoft, and Microsoft decided to put it out there, Google did not have really much choice … in a capitalist system,” Hinton said. “You cannot stop Google competing with Microsoft.”

Is there a technical solution, such as making AI worse at learning or restricting its ability to communicate?

Various guardrails have been tried, but AI models can write code, and if they are also given the ability to execute that code, they can figure out ways around restrictions. “Imagine your 2-year-old saying, ‘my dad does things I do not like so I’m going to make some rules for what my dad can do.’”

Why would machines be motivated to harm humans?

While humans are born with certain propensities, such as the drive to procreate and survive, machines do not have the same inherent motivations. But Hinton argued that AI models do have end goals, which humans program into them. The danger is that if these models gain the ability to create their own secret subgoals (the interim steps needed to reach their ultimate goal), they will quickly realize that gaining control is a “very good subgoal” that will help them achieve their ultimate goal, he explained.

“If these (machines) get carried away with getting more control, we are in trouble,” Hinton added.

Moreover, these models do not die. The hardware may break over time, but the models themselves can live on in new computers. “So we have got immortality, but it is not for us.”

Given this potential apocalypse, Hinton said the best way forward is to treat AI like nuclear proliferation. Since every human life is at risk, nations should cooperate to keep AI under control. “We could get the U.S. and China to agree like we did with nuclear weapons,” Hinton said. “We're all in the same boat … so we all want to be able to cooperate on trying to stop it, as long as we can make some money on the way.”

Hinton credits a professor friend who urged him to come forward. "He said, 'Geoff, you need to speak. They will listen to you. People are blind to this danger.'"

"Do you think people are listening now?


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
