This Week's Most Read: Why OpenAI Fired Sam Altman

This week's most popular stories on AI Business

Deborah Yao, Editor

November 30, 2023

4 Min Read

Here are this week's most popular stories on AI Business:

1. Why OpenAI Fired Its CEO Sam Altman

Three days after the ouster of Sam Altman as CEO of OpenAI, with the AI community still confused about the firing, tech billionaire Elon Musk asked OpenAI Chief Scientist Ilya Sutskever this pointed question:

“Why did you take such a drastic action?” Musk posted on X (formerly Twitter). “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”

Sutskever had been one of the instigators of Altman’s departure. He was reportedly concerned about the pace of commercialization of OpenAI’s technology. Months before the firing, OpenAI had made a breakthrough that would let it develop far more powerful AI models, according to The Information.

Read more

2. Reflections on AI Governance Global 2023

Stephen Bolinger, Informa plc's chief privacy officer, writes about his experience attending the inaugural conference on AI governance in Boston.

"I came away from the sessions with three key observations. First, there seems to be a growing consensus that the NIST Risk Management Framework is the most developed and tangible approach to implementing AI governance at scale. Second, there is no certainty that the ‘Brussels Effect’ will emerge alongside the EU’s enactment of the EU AI Act to set a de facto global regulatory standard for AI in the way that the GDPR did for data protection. Finally, taking journalist (Kevin) Roose’s keynote presentation to heart, I must strive to maintain and develop the professional attributes that will be more challenging for AI to replicate: handling surprising situations, being social, and possessing a scarce combination of rare skills."

Read more

3. Stability AI Seeks Sale as Investors Lose Confidence in CEO

Stability AI, the startup that co-developed and commercialized Stable Diffusion, is reportedly for sale.

Bloomberg is reporting that the generative AI startup is facing increasing pressure from investors as it continues to lose money. Stability AI has reportedly held talks with multiple companies, though the discussions remain at an early stage.

According to Bloomberg sources, one of the potential buyers is Cohere, a rival startup that counts Spotify and Oracle among its customers. Another company that approached Stability about a purchase was Jasper.

No deal is imminent, and the startup, once a generative AI darling, could yet walk away from a sale.

Read more

4. Analysts: Nvidia's Data Center Revenue Quadruples in Q3

Chipmaking giant Nvidia shipped nearly half a million H100 and A100 GPUs in Q3, according to new figures from sister research firm Omdia.

This translates to $14.5 billion in data center revenue for the quarter, nearly quadrupling from the same quarter a year ago, wrote Vlad Galabov, director of Omdia's cloud and data center research practice, and Manoj Sukumaran, principal analyst, data center computing and networking, in their latest Market Snapshot Report.

Most of the Q3 GPU server shipments went to hyperscale cloud service providers. Meta was one of Nvidia's largest customers, while Microsoft also ordered huge numbers of H100 GPUs, likely to power its growing roster of AI products and Copilots.

Read more

5. MIT, Google: Using Synthetic Images to Train AI Image Models

Upon launch, OpenAI’s DALL-E 3 wowed users with its ability to generate far more detailed images than prior versions. OpenAI said the improvement came from using synthetic images to train the model. Now, a team of researchers from MIT and Google is expanding on this concept, applying it to the popular open source text-to-image model Stable Diffusion.

In a newly published paper, the researchers described StableRep, a new approach that uses millions of labeled AI-generated images to train image generation models to produce high-quality output.

The researchers said StableRep is a “multi-positive contrastive learning method” in which multiple images generated from the same text prompt are treated as positives for each other, which enhances the learning process. In effect, a model trained this way sees several variations of, say, the same landscape and cross-references them against all the descriptions associated with that landscape, learning nuances it can then apply in the final output. This is what produces such highly detailed images.
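For readers who want a concrete feel for the idea, here is a minimal PyTorch-style sketch of a multi-positive contrastive loss, where every image generated from the same prompt counts as a positive for the others. It illustrates the general technique rather than the authors' actual StableRep implementation; the function name, temperature value and toy data are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, prompt_ids, temperature=0.1):
    """Illustrative sketch: treat every pair of images generated from the same
    text prompt as positives; all other images in the batch act as negatives."""
    z = F.normalize(embeddings, dim=1)               # (N, d) unit-length embeddings
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # never contrast an image with itself
    # positives = images produced from the same prompt, excluding the anchor
    pos_mask = (prompt_ids.unsqueeze(0) == prompt_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability assigned to each anchor's positives
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -pos_log_prob.mean()

# toy batch: 8 image embeddings, two prompts with four generated variants each
emb = torch.randn(8, 128)
ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(multi_positive_contrastive_loss(emb, ids))
```

Conceptually, this mirrors supervised contrastive learning, with the text prompt playing the role of the class label.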

Read more


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and The Associated Press.
