July 28, 2022
Eric Schmidt calls for superpowers to adopt a Cold War approach to AI
AI technologies are akin to nuclear weapons in terms of their destructive power, according to former Google CEO Eric Schmidt.
Speaking at the recent Aspen Security Forum, Schmidt said that global superpowers like the U.S. and China may need a “deterrence treaty” for AI.
“Let’s say we want to have a chat with China on some kind of treaty around AI surprises. Very reasonable. How would we do it? Who in the U.S. government would work with us? And it’s even worse on the Chinese side. Who do we call? We’re not ready for the negotiations we need,” he said.
Schmidt likened the issue to the early Cold War when clashing powers agreed to a “no surprise rule” regarding tests of nuclear arms.
“When somebody launches a missile, for testing or whatever, they notify everyone. Everyone then uses their missile defense system to watch to train the systems,” he explained.
“I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say, ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side.”
Schmidt stokes AI fears
The comments are not Schmidt’s first to stoke fears around AI. He said last July the U.S. should align with Asian allies like Japan to combat China’s growing AI prowess.
Schmidt, who also chairs the U.S. National Security Commission on Artificial Intelligence, proposed the idea of a coordinating group inside the Japanese government “who share our view as to what’s important and make sure that the universities are talking to each other”.
He echoed similar sentiments at Scale AI’s TransformX event last December.
Schmidt’s latest comments come as both the U.S. and Chinese governments step up plans to regulate AI technologies.
China’s draft ‘Internet Information Service Algorithmic Recommendation Management Provisions’ would allow regulators to scrutinize the internal mechanisms of platforms’ recommendation algorithms.
The provisional law would allow the country's cyber watchdog to rule on any AI algorithms used to set pricing, control search results, make recommendations, or filter content. Companies caught violating the prospective rules could face sizable fines or strict punishments, including having their business licenses pulled or full takedowns of their apps and services.
In the U.S., House Democrats have introduced legislation that would remove legal liability protections for tech giants whose algorithms lead to harm. The proposed Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act of 1996 by limiting the liability protections for platforms that knowingly or recklessly recommend third-party information that harms users.