In a rare letter, Sam Altman, Greg Brockman and Ilya Sutskever outline the ‘special treatment’ needed to curb the powers of a coming superintelligence

At a Glance

  • In a rare letter, OpenAI's top executives jointly warned about a coming 'superintelligence' they liken to nuclear power.
  • This 'superintelligence' might wipe out humanity and needs special treatment and global coordination.
  • They suggest creating an agency like the International Atomic Energy Agency to oversee AI and let the global public weigh in.

In a rare letter, top executives of ChatGPT-maker OpenAI jointly penned a warning about a coming “superintelligence” they compare to nuclear energy, and suggested outside-the-box ways to curb its power – before it is too late.

“Given the picture as we see it now, it’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains,” wrote CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever, all co-founders of the company.

This “superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” they said. “Given the possibility of existential risk, we can’t just be reactive.”

While AI pioneers such as Geoffrey Hinton and Stuart Russell have warned about the serious threat AI poses to society, OpenAI’s co-founders arguably know better than anyone how real that threat is, since they developed the systems that are giving rise to these questions.

The advent of the GPT-4 large language model, in particular, sparked fears in the AI community, resulting in an open letter calling for a 6-month hiatus in developing powerful AI systems.

OpenAI has not disclosed GPT-4’s parameter count, but the model is widely believed to be far larger than its predecessors (GPT-3 has 175 billion parameters). GPT-4 is much better at understanding natural language and is highly versatile. Microsoft researchers say it can even reason.

“We can have a dramatically more prosperous future, but we have to manage risk to get there,” OpenAI’s co-founders wrote.

Out-of-the-box solutions

The authors believe that this superintelligence needs “special treatment and coordination” to curb its powers, hinting that it is too dangerous and too smart to be reined in properly through conventional regulatory methods.

They propose the following paths:

1. Global coordination of AI development

Major governments around the world could set up a ‘project’ that current AI efforts would join, or the AI community could collectively agree to limit the annual rate of growth in AI capability at the frontier. Companies would be held to an “extremely high standard” of acting responsibly.

2. Create an IAEA-like agency to manage advanced AI

The International Atomic Energy Agency was created to manage the safe use of nuclear power, after scientists called for a global regime to rein in the technology to avoid a future nuclear war.

OpenAI's co-founders called for an IAEA-like agency to rein in this coming superintelligence – AI systems more capable than even artificial general intelligence – because it could pose an existential threat to humanity comparable to nuclear war.

This agency would oversee AI efforts above a certain capability threshold, as measured by the compute required or energy used, for example. It could inspect AI systems, require audits, test for compliance with safety standards and restrict deployment, among other responsibilities.

Companies and countries could get a head start by putting together and implementing “elements” of what such an agency might one day require.

“It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” they wrote.

3. Scientists should develop the technical capability to make superintelligence safe
4. AI efforts below the superintelligence threshold should not be micromanaged – such as by requiring licenses or audits – so that innovation can continue.

‘Strong’ public oversight

The three want people all over the world to “democratically decide” on the “bounds and defaults” for AI systems.

While they do not yet know how to design such a mechanism, they said OpenAI plans to experiment with developing one. “Individual users should have a lot of control over how the AI they use behaves,” they said.

Around the time the blog post was published, Brockman spoke at AI Forward in San Francisco, an event put on by Goldman Sachs and SV Angel.

Brockman said OpenAI was looking to Wikipedia for inspiration. Wikipedia is a free online platform where any user can edit information and support entries with citations. Brockman said such a concept could help enable “democratic decision-making” about AI.

AI governance growing

The newly published post from OpenAI’s senior leaders comes amid a flurry of global interest in AI governance in the wake of the rise of generative AI.

Most recently, G7 leaders agreed to form a working group to establish the ‘Hiroshima AI process’ to set guardrails around AI. This is arguably the fastest global regulatory coordination ever seen for an emerging technology.

At a recent congressional hearing, Altman urged lawmakers to pass more regulations on AI. He was also among the AI leaders who met with Vice President Kamala Harris, at a meeting where it was proposed that the public be allowed to vet models from major players in the AI space.

In terms of legislation, Senate Majority Leader Chuck Schumer has already proposed potential rules on AI deployments, which would force companies to have prospective AI tools audited by a team of independent experts.

The jurisdiction furthest ahead on AI governance is the EU, whose AI Act is nearing fruition. The legislation would impose strict rules on systems that impact citizens’ rights.

The U.K., meanwhile, has left governance up to individual regulators, opting for a ‘light touch’ approach, much to the dismay of some workers' unions.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
