AI Leaders Warn About Existential Risks Again - Now Armed with Facts

Turing winners Hinton, Bengio and Yao plus 21 others want top AI companies to allocate at least a third of their AI R&D budgets to safety

Ben Wodecki, Deborah Yao

October 31, 2023

[Image: A giant robot looming over a city. Credit: AI Business via DALL-E 3]

At a Glance

  • Turing awardees Yoshua Bengio, Geoffrey Hinton and Andrew Yao published a paper outlining AI's existential and other threats.
  • What's different this time? The three Turing awardees and their 21 co-authors backed up their logic with research and facts.
  • They want major AI companies to spend one-third of their AI R&D budgets on safety alone - before it is too late.

Some of the foremost minds in AI are stepping up their warnings about the existential risk posed by AI, calling for at least a third of AI research and development budgets to be allocated to safety.

In a strongly worded paper, Turing Award winners Yoshua Bengio, Geoffrey Hinton and Andrew Yao, AI luminary Stuart Russell and 20 other experts called for restrictions on the most powerful AI systems to "prepare for the largest risks well before they materialize."

What’s different in this missive? These concerned AI experts from the U.S., China, EU and U.K. backed up their fears by citing research papers and other documentation – rather than protesting through vague one-page declarations or media appearances as some have done in the past.

The paper, Managing AI Risks in an Era of Rapid Progress, was published shortly before the U.K. hosts the first global AI Safety Summit on Nov. 1 and 2. One of the authors, Bengio, is among the independent AI experts advising Prime Minister Rishi Sunak on AI. Mila, the Quebec Artificial Intelligence Institute that he leads, is among the Summit’s attendees.

Currently, governments around the world are just starting to consider AI regulation, with the EU ahead of the pack through its AI Act, expected to be finalized as soon as the end of the year. This week, President Biden signed an executive order on AI, the country’s strongest regulatory move yet.

Related: In a Rare Outburst, Meta’s LeCun Blasts OpenAI, Turing Awardees

More steps must be taken. “Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective,” Bengio and his co-authors wrote. “This unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity.”

A call to disarm

The paper focuses specifically on the most powerful AI systems being trained on billion-dollar supercomputers, the so-called frontier models such as GPT-4 and Google Gemini. Tech companies that are training powerful models “have the cash reserves needed to scale the latest training runs by multiples of 100 to 1,100 soon” to build ultra-intelligent and capable AI, they wrote.

They argue that these larger models trained on billion-dollar supercomputers “will have the most hazardous and unpredictable capabilities.”

The authors said regulators must be given access to AI systems before deployment to “evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible.” (Thus far, only the U.S. has such a voluntary agreement, through the White House’s AI pledge signed by Meta, OpenAI, Google, Amazon and others.)

Related: Biden Issues Executive Order on AI Regulation


Superintelligent AI could “gain human trust, acquire financial resources, influence key decision-makers, and form coalitions with human actors and other AI systems. To avoid human intervention, they could copy their algorithms across global server networks like computer worms,” the paper said.

“Future AI systems could insert and then exploit security vulnerabilities to control the computer systems behind our communication, media, banking, supply-chains, militaries, and governments. In open conflict, AI systems could threaten with or use autonomous or biological weapons.”

Of deep concern is the development of autonomous AI that can plan, act and pursue goals on its own. Currently, AI has limited autonomy but “work is underway to change this,” the paper said. Also, “no one currently knows how to reliably align AI behavior with complex values. Even well-meaning developers may inadvertently build AI systems that pursue unintended goals — especially if, in a bid to win the AI race, they neglect expensive safety testing and human oversight.”

When will superintelligence arrive? “We must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within this decade or the next,” they said.

As such, the authors call for an increase in research in technical AI safety, with companies allocating at least one-third of their AI R&D budget towards this effort. “Addressing these problems, with an eye toward powerful future systems, must become central to our field,” the group contended.

What governments should do

Bengio and the paper’s co-authors said governments need “comprehensive insight into AI development” and should require “model registration, whistleblower protections, incident reporting and monitoring of model development and supercomputer usage.”

For AI systems with “hazardous capabilities,” they recommend governance mechanisms “matched to the magnitude of their risks.” National and international safety standards should vary with model capabilities, and governments should hold frontier AI developers and owners “legally accountable for harms from their models that can be reasonably foreseen and prevented.”

Governments also must be prepared to license AI development, pause it when necessary, gain access to the models and require strong cybersecurity protections against state-level hackers until the models themselves have “adequate” protections built in.

Until these requirements are developed, major AI companies must outline the specific safety measures they will take if certain red-line capabilities are found in their AI systems.

The group likens companies developing AI to manufacturers that release waste into rivers to cut costs: “they may be tempted to reap the rewards of AI development while leaving society to deal with the consequences.”

Dissension from another corner of AI

However, mere days after the paper was published, another Turing Award winner, Yann LeCun, and Google Brain co-founder Andrew Ng took issue with its stance.

Ng recently told the Australian Financial Review that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the industry. “When you put those two bad ideas together, you get the massively, colossally dumb idea – policy proposals that try to require licensing of AI. … It would crush innovation.”

Ng said there are large tech companies that do not want to compete with open source, “so they’re creating fear of AI leading to human extinction. … It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.

Previously, Ng had said that when he tried to evaluate how realistic arguments are for AI wiping out humanity, all he finds are answers that are "frustratingly vague and nonspecific. They boil down to 'it could happen.'"

As for LeCun, Meta’s chief AI scientist, he recently called out several of the paper’s authors as well as renowned computer scientist Max Tegmark, saying: “You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.”

The open source advocate also aimed his ire at CEOs of some of the major AI companies for their doomsday predictions. His rant on X (formerly Twitter) focused on supporting open AI research and development – arguing that fearmongering about AI would eventually lead to a small number of companies controlling AI, which would be bad for the field.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
