Why OpenAI Fired Its CEO Sam Altman

While everyone was still confused about what was happening at OpenAI, Elon Musk asked a very penetrating question.

Deborah Yao, Editor

November 27, 2023

3 Min Read
Photo: Sam Altman (Getty Images)

At a Glance

  • Months before OpenAI fired CEO Sam Altman, the startup achieved a breakthrough that could lead to more advanced AI models.
  • Called Q*, the breakthrough enabled LLMs to solve basic math problems – a key technical milestone for generative AI.
  • But Meta Chief AI Scientist Yann LeCun dismissed the hype as "complete nonsense," noting that many top labs have been working on similar ideas.

Three days after the ouster of Sam Altman as CEO of OpenAI, and while the AI community remained confused about the firing, tech billionaire Elon Musk asked OpenAI Chief Scientist Ilya Sutskever this penetrating question:

“Why did you take such a drastic action?” Musk posted on X (formerly Twitter). “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”


Sutskever had been one of the instigators of Altman’s departure, reportedly concerned about the pace of commercialization of OpenAI’s technology. Months before the firing, OpenAI had achieved a breakthrough that would let it develop far more powerful AI models, according to The Information.

Sutskever and the board, which oversees the nonprofit parent, feared that OpenAI still did not have enough safeguards in place for these more advanced models. They felt the only recourse was to fire the lead instigator of the rapid commercialization: Altman.

The breakthrough, called Q* (Q-star), reportedly enabled AI models to solve basic math problems they had not seen before – a key technical milestone. At present, generative AI predicts its outputs statistically, so its answers to the same prompt can vary. But in math, there is only one right answer. Developing Q* thus implies that AI could achieve greater reasoning powers akin to human intelligence, according to Reuters.
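The distinction is easier to see with a toy example. The sketch below is hypothetical: the token probabilities and the `sample_answer` helper are invented purely for illustration, and no real model is queried. It shows why a sampling-based model can return different answers to the same prompt, while a math problem offers a single answer to verify against.

```python
import random

# Hypothetical next-token distribution for the prompt "7 * 8 =".
# These tokens and probabilities are invented for illustration only.
NEXT_TOKEN_PROBS = {"56": 0.80, "54": 0.10, "63": 0.07, "fifty-six": 0.03}

def sample_answer(temperature: float = 1.0) -> str:
    """Sample one answer the way an LLM decoder does: stochastically,
    so repeated runs of the same prompt can return different tokens.
    Raising each probability to the power 1/temperature mirrors softmax
    temperature scaling (random.choices normalizes the weights)."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = [p ** (1.0 / temperature) for p in NEXT_TOKEN_PROBS.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

samples = [sample_answer(temperature=1.5) for _ in range(10)]
print("sampled answers:", samples)  # varies from run to run

# Math, by contrast, has exactly one right answer to check against.
ground_truth = str(7 * 8)
accuracy = sum(s == ground_truth for s in samples) / len(samples)
print(f"fraction exactly correct: {accuracy:.0%}")
```

Because the correct answer is unique and checkable, exact-match accuracy on unseen math problems is widely treated as a proxy for genuine reasoning rather than pattern-matching.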


This brings current AI development one step closer to artificial general intelligence (AGI), when machines can reason like humans. One influential AI camp, led by Geoffrey Hinton, the so-called godfather of AI, believes AGI could lead to human extinction absent stronger guardrails.

Meta’s LeCun disagrees – loudly

That is baloney, according to Meta Chief AI Scientist Yann LeCun.

“Please ignore the complete nonsense about Q*,” he posted on X. “Pretty much every top lab (FAIR, DeepMind, OpenAI, etc.) is working on that and some have already published ideas and results.”


LeCun is the leading voice in the AI community disputing the view that AGI could lead to human extinction. His reasoning: superintelligence does not automatically come with a desire to conquer those of weaker intellect. He pointed to the many corporate teams helmed by leaders whose staff are more intelligent than they are.

Moreover, LeCun argues that machines are not social creatures and have no desire to dominate humanity. People are hierarchically organized, so they have a tendency to want to conquer one another. “Intelligence has nothing to do with the desire to dominate,” LeCun argued in a recent online debate. Instead, superintelligent AI assistants will help humans become even smarter, he said.


LeCun summed up his views about AGI this way:

- There will be superhuman AI in the future.

- They will be under our control.

- They will not dominate us, nor kill us.

- They will mediate all of our interactions with the digital world.

- Hence, they will need to be open platforms so that everyone can contribute to training and tuning them.


About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and the Associated Press.
