A Lone, Respected Voice Disputes AI’s Existential Threat
Google Brain co-founder Andrew Ng says, "I don't get it."
At a Glance
- Google Brain co-founder Andrew Ng does not agree that AI poses an existential threat.
- He disputes the statement put out by the Center for AI Safety that was signed by many AI pioneers.
- Turing award winner Geoffrey Hinton said he got worried after seeing GPT-4 already exhibiting simple reasoning.
The co-founder of Google Brain is questioning the premise that AI poses an existential risk, a belief held by some of the foremost pioneers in the field.
“I don’t get it,” said Andrew Ng, who is also director of the Stanford AI Lab, general partner at the AI Fund and co-founder of Coursera. He posted videos of his views on LinkedIn and Twitter.
He cited the statement released by the Center for AI Safety that was signed by many AI pioneers including Turing award winners Geoffrey Hinton and Yoshua Bengio, and the CEOs of OpenAI, Google DeepMind and Anthropic, as well as Microsoft co-founder Bill Gates, among other luminaries.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” according to the center’s statement.
Stay updated. Subscribe to the AI Business newsletter
Ng begs to differ.
“I'm struggling to see how AI could pose any meaningful risks for our extinction,” he said. “No doubt, AI has many risks like bias, unfairness, inaccurate outputs, job displacement, concentration of power. But let's see AI’s impact as massively contributing to society, and I don't see how it can lead to human extinction.”
“Since I work in AI, I feel an ethical responsibility to keep an open mind and understand the risks,” Ng added.
He said he plans to reach out to people who might have a “thoughtful perspective on how AI creates a risk of human extinction.”
Notably, the CEOs of Google and Microsoft, who are leading the AI charge, did not sign the statement, nor did Turing award winner Yann LeCun, who is Meta’s chief AI scientist.
They also did not sign an earlier letter calling for a 6-month pause in developing AI systems more powerful that OpenAI’s GPT-4 large language model, from the Future of Life Institute. It was signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and many other AI pioneers. LeCun did not, tweeting that he disagreed with the letter’s premise.
Hullabaloo over AI’s existential risk
Why would the smartest minds in AI say it poses an existential threat?
Hinton said at MIT’s EmTech Digital 2023 conference that his view of the risk escalated after seeing the performance of OpenAI’s GPT-4, which is estimated to already have an IQ of 80 to 90 – even at this early stage.
Hinton said GPT-4 exhibited simple reasoning skills. For example, he told GPT-4 to solve this problem: “I want all the rooms in my house to be white. At present, there are some white rooms, some blue rooms and some yellow rooms. Yellow paint fades to white within a year. So what should I do if I want (all rooms) to be white in two years’ time?”
GPT-4’s answer was to paint all the blue rooms yellow. "That’s not the natural solution, but it works, right?” Hinton said. “That’s pretty impressive common sense reasoning of the kind that has been very hard to get AI to do using symbolic AI because they have to understand what ‘fades’ means and (other) temporal stuff."
Recently, remarks by a U.S. air force colonel made waves after he disclosed that a ‘thought experiment’ involving a military AI drone led to the hypothetical ‘killing’ of its human operator to accomplish its mission. OpenAI already flagged this destructive behavior in a 2016 paper in which an AI-controlled boat crashed into other boats and caused a fire to get the most points in a boat-racing game.
However, a LinkedIn reader posted this response to Ng's missive: "The only thing we have to fear is us. People are always the danger. Not technology itself."
Read more about:
ChatGPT / Generative AIAbout the Author
You May Also Like