Godfather of AI: ‘I can’t see a path that guarantees safety’

Geoffrey Hinton said AI systems will become self-aware in a machine sense, 'in time.'

Deborah Yao, Editor

October 10, 2023

4 Min Read
Photo of Geoffrey Hinton (Credit: CBS News/60 Minutes)

At a Glance

  • Geoffrey Hinton, the father of artificial neural networks, said AI systems will reason better than humans in five years.
  • AI models are not just auto-complete systems. They have to understand the meaning of words to predict the next word.
  • AI systems have experiences of their own and can make decisions based on those experiences.

Geoffrey Hinton, known for his pioneering work in artificial neural networks, said humanity is at a turning point where the decisions made today about AI will determine whether society will see a bright or doomsday future.

“We’re moving into a period when, for the first time ever, we may have things more intelligent than us,” the Turing award winner told 60 Minutes.

He said AI systems are intelligent, can understand humanity, have experiences of their own and are capable of making decisions based on those experiences.

Will they become self-aware as well, at least in a machine sense?

“Oh yes, I think they will in time,” Hinton said.

The AI advances Hinton set in motion came about almost by accident. In the 1970s at the University of Edinburgh, he dreamed of simulating a neural network on a computer as a tool for his studies of the human brain. While he failed in his pursuit of understanding the brain, that work led to artificial neural networks, which underpin deep learning algorithms.

“It took much, much longer than I expected,” he said. “It took me 50 years before it worked. But in the end, it did work.”

Hinton comes from a family of high achievers. His ancestors include George Boole, whose work gave rise to Boolean logic and laid a foundation for computing, and Sir George Everest, the Surveyor General of India in the 19th century who had a mountain named after him.


Self-taught systems

Neural networks are what enabled robots to learn to play soccer at a Google AI lab in London. They were not programmed to play; they were simply told to score and left to work out how. When a robot does score, the pathway through the layers of software in the neural network that produced that action gets stronger, while wrong paths get weaker.
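For a concrete picture of that idea, here is a minimal sketch of reward-driven learning. Everything in it, from the number of pathways to the update factors, is an illustrative assumption, not Google's actual training setup.

```python
import random

# Toy reward-driven learning: the agent samples one of several
# "pathways"; pathways that lead to a score are strengthened,
# the rest are weakened. All numbers are illustrative.

weights = [1.0, 1.0, 1.0]   # strength of each pathway
GOOD_PATH = 2               # pretend pathway 2 leads to a goal

def pick_path(weights):
    """Sample a pathway with probability proportional to its strength."""
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

for trial in range(1000):
    path = pick_path(weights)
    if path == GOOD_PATH:        # reward signal: the robot scored
        weights[path] *= 1.05    # strengthen the successful pathway
    else:
        weights[path] *= 0.95    # weaken pathways that fail

print(weights)  # the scoring pathway ends up far stronger than the rest
```

Over many trials the successful pathway comes to dominate, which is the intuition behind pathways getting stronger while wrong paths get weaker.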

Currently, the biggest chatbots have only about a trillion connections, while the human brain has about 100 trillion. Yet a chatbot “knows far more than you do in your 100 trillion connections, which suggests it has got a much better way of getting knowledge into those connections,” Hinton said.


However, exactly how these AI systems reason is not fully known.

“We have a very good idea of roughly what it’s doing. But as soon as it gets complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain,” Hinton said.

That’s because humans designed the learning algorithm, but not the actual algorithm that gets deployed.

“When this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things,” Hinton said.
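A tiny sketch can make that distinction concrete. In the made-up example below, the human writes only the update rule; the behavior that ends up “deployed” is whatever weights fall out of the data, which no one wrote by hand.

```python
# The human writes the *learning algorithm* (the update rule below).
# The *deployed algorithm* is the learned weights, which no human
# ever writes directly. Data and learning rate are made up.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # (input, target) pairs
w, b = 0.0, 0.0                               # model: y = w*x + b

for step in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        # Hand-written update rule (gradient descent on squared error):
        w -= 0.01 * err * x
        b -= 0.01 * err

# The learned parameters -- not any code a person wrote -- define the
# deployed behavior. With a trillion parameters instead of two, nobody
# can read off *why* the model does what it does.
print(w, b)  # approaches w=2, b=1
```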


Having computers write and execute their own code can be a “serious worry,” he added. “One of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that’s something we need to be seriously worried about.”

AI isn’t just a super auto-complete

Language models do predict the next word, Hinton said, but doing that well takes more than statistics.

“If you think about it, to predict the next words, you have to understand the sentences,” he said. “You have to be really intelligent to predict the next word really accurately.”
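As a point of contrast, here is what prediction by pure statistics looks like: a toy next-word predictor built from word-pair counts over an invented corpus. A model like this falls apart on unfamiliar text, which is Hinton's point; modern language models replace the count table with a deep neural network that, he argues, must capture meaning to predict accurately.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent follower. The corpus is invented.

corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently seen next word after `word`."""
    if word not in followers:
        return None  # pure statistics has nothing to say here
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (seen twice after 'the')
print(predict_next("cat"))   # -> 'sat' ('sat' and 'slept' tie; first wins)
```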

For example, he asked GPT-4 to solve this problem: “The rooms in my house are painted white or blue or yellow, and yellow paint fades to white within a year. In two years’ time, I’d like all the rooms to be white. What should I do?”

The answer: Paint the blue rooms white and leave the yellow rooms alone. It also warned against painting the yellow rooms white, because the color might be off once the yellow faded.

“Oh, I didn’t even think of that,” Hinton said.

He believes that in five years’ time, AI systems might “well be able to reason better than us.”


No path that guarantees safety

AI carries both benefits and risks. One of the biggest beneficiaries is health care, where AI systems can understand medical images and design drugs; it is an area where “it’s almost entirely going to do good,” Hinton said.

Risks include putting a whole class of people out of work as machines replace their skills, as well as fake news, bias and military applications of AI.

More troubling is this conviction: “I can’t see a path that guarantees safety. We’re entering a period of great uncertainty where we’re dealing with things we’ve never dealt with before.”

“We can’t afford to get it wrong,” he added. Why? “Because they might take over. … If we could stop them ever wanting to, that will be great. But it’s not clear we can stop them ever wanting to.”


About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and the Associated Press.
