February 10, 2023
At a Glance
- Stuart Russell says ChatGPT “does not know anything” and is merely good for generating text.
- ChatGPT is not a suitable path toward AGI.
- If AI models do not focus on benefiting humans, “we will lose control over our future.”
OpenAI’s conversational AI chatbot ChatGPT has taken the world by storm, capturing the imagination of mainstream audiences and prompting widespread interest in AI.
But in reality, these chatbots are just that: chatbots. They are incapable of performing outside their pre-designated parameters, and the public has been “fooled” into thinking the likes of ChatGPT can bring about the next evolution of AI, according to renowned computer science professor Stuart Russell.
Speaking at the World Artificial Intelligence Cannes Festival (WAICF), the University of California, Berkeley professor said that while ChatGPT can generate responses to questions, the chatbot “does not know anything.”
“If it can contradict itself about a simple fact that it states confidently and then states the opposite one sentence later, it does not know anything,” he said.
He described ChatGPT as “remarkably impressive” at generating text but cautioned that it can be easily fooled.
The professor showed a series of examples of ChatGPT contradicting itself, including being unsure which animal is larger, an elephant or a cat.
“It's very good at producing grammatical, thematically coherent text as long as you don't ask it to go for too long,” said Russell.
The Berkeley professor even showed that top AI systems designed to play Go and chess can be defeated, as they often produce illegal moves or moves that make no sense.
“It's simply extrapolating sequences. It's not understanding that there are rules, that there's a board, that there are pieces, that we're trying to checkmate the opponent, it doesn't understand any of that,” he said.
“It does not build a theory of the state of the world and does not build a theory of how the world changes with actions and so on. It's purely sequence extrapolation.”
Russell was giving a talk on the potential for achieving artificial general intelligence (AGI), in which a computer can mimic human intelligence holistically rather than piecemeal (computer vision, voice recognition, etc.). It is tough to achieve and widely considered the Holy Grail of AI.
Russell questioned ChatGPT’s place in the wider AGI conversation.
“We think (ChatGPT) is different (and can be used) for other domains because we are fooled by its ability to generate grammatically intelligent sounding text,” he said.
“Please stop believing what people tell you about the capabilities of AI.”
Humanity's future is at stake - really
During his talk, Russell said that AGI could greatly improve living standards on Earth and that these general-purpose systems would bring $13 quadrillion in net present value.
However, he said that AGI is “still some way” from coming to fruition. “There are still major basic open problems that require conceptual breakthroughs in order for us to make progress,” he said.
Russell said it was difficult to predict when AGI would arrive, but warned that we should be prepared for that eventuality.
He proposed changing the standard model of AI from machines achieving their own objectives to machines achieving outcomes for humans.
“We want systems that act in the best interests of humans. But those systems have to be explicitly uncertain about what those best interests are. And it's that uncertainty that allows us to retain control over machines with arbitrarily great capabilities.”
Russell contended that the current direction of research, focused on deep learning and large language models, is inadequate for producing general-purpose AI.
“If we move forward within the standard model, where we have to predefine the objectives of the AI system, then I think it's inevitable that we will lose control over our future.”