Geoff Hinton: Russia and US Under Trump Could Let AI ‘Go Too Far’
One of the godfathers of AI fears world leaders could use AI to manipulate elections and wage war
At a Glance
- Renowned computer scientist Geoffrey Hinton warns of risks to humanity and electoral freedoms from AI.
- He believes Russia, China and the U.S. under Trump could push AI to do dangerous things.
Geoffrey Hinton, a pioneer of artificial neural networks, is warning that Russia and China could let AI “go too far.” But he added one more country to the list: the United States if led by Donald Trump.
Hinton said Russian President Vladimir Putin, Chinese President Xi Jinping and U.S. presidential candidate and former President Donald Trump will want to use AI for “manipulating electorates and waging wars.”
“They will make it do very bad things and they may go too far and it may take over,” he said.
Hinton made the remarks as he delivered the University of Oxford’s annual Romanes Lecture. There, he spoke about his fear that a superintelligent system could take over in the future.
Hinton expressed concern that if superintelligent systems are given sub-goals, they could work towards what he described as an “almost universal sub-goal which helps with almost everything, which is get more control.”
Superintelligent systems “are going to have the sub-goal of getting more power,” he said. “They are more effective at achieving things that are beneficial for us. And they will find it easy to get more power because they will be able to manipulate people.”
Hinton warned that such hypothetical future systems would be better and more effective at communicating with humans compared to current AI systems.
Hinton noted that people already use the same manipulative tactics: “Trump, for example, could invade the Capitol without ever going … just by talking he could invade the Capitol.”
Hinton said superintelligent systems, or at least AI systems that are smarter than humans, could arrive within the next 20 to 100 years.
But one area of AI likely to affect electorates sooner is the creation of convincing fake content. Hinton expressed concern about the potential of AI-generated images and video to undermine democracy in key elections this year.
Fake Biden robocalls and deepfakes of Slovak politician Michal Simecka have already emerged in attempts to deceive voters.
Hinton said that some of the bigger players in AI are trying to do something about election issues, though “maybe not enough.”
Job losses, biases and existential threats
During his lecture, Hinton touched on the possibility of “massive” job losses brought on by AI.
He said that jobs that are the intellectual equivalent of manual labor are “going to disappear” when machines become smarter than humans.
“I think there's going to be a lot of unemployment,” Hinton said. “My friend Yann [LeCun from Meta] disagrees.”
Hinton said the one AI risk humans could handle more easily is discrimination and bias.
“Your goal is to be less biased than the system you replace,” he said. “If you freeze the weights of an AI system, you can measure its bias and you cannot do that with people. They will change their behavior once you start examining it.”
Other potential dangers of AI Hinton mentioned include mass surveillance, lethal autonomous weapons and cybercrime.
But his main concern was the potential for AI to become an existential threat that could wipe out humanity, an idea he said was not science fiction. Hinton believes future AI systems could be so powerful that they could persuade humans not to turn them off, all in an effort to gain more control.
“If a digital superintelligence ever wanted to take control it is unlikely that we could stop it,” he said.