OpenAI Considers Building Its Own Chips
ChatGPT maker mulls whether to expand beyond Nvidia as ‘GPU constraints’ hit plans
At a Glance
- OpenAI is considering plans to design its own AI chips and look beyond its main supplier Nvidia amid the rush for AI hardware
ChatGPT maker OpenAI is looking at potentially designing its own AI chips and may acquire a company to make it happen.
Reuters reports that, according to internal OpenAI discussions, the company wants to shore up its hardware pipeline amid the global scramble for chips to train AI.
OpenAI currently relies on chips from Nvidia, but Reuters’ sources say it is exploring options to build its own AI chip or work more closely with suppliers beyond its current provider. Nvidia’s chips power the Microsoft supercomputer OpenAI has used to train its AI models since 2020. Nvidia leads the market for AI chips, having shipped 900 tons of its flagship H100 units in Q2 alone, according to Omdia research.
Reuters’ report comes after CEO Sam Altman repeatedly bemoaned the scarcity of GPUs. In a since-deleted blog post covering a discussion between Altman and Raza Habib, CEO of London-based AI firm HumanLoop, the OpenAI CEO blamed reliability and speed issues with its API on “GPU constraints.”
The issue of access to GPUs is exacerbated by the sheer cost of running them, with OpenAI reportedly spending hundreds of thousands of dollars every day to run ChatGPT.
OpenAI has not confirmed whether it has made a definitive decision to press on with its chip plans.
Backer Microsoft has been working on its own custom chips for some time under the codename Athena. Some 300 staff are said to be working on the chips, which could be used by both Microsoft and OpenAI sometime next year.
And rival Meta is doing the same, building custom chips for running AI models like Llama 2. Its MTIA chips are optimized for the company’s own internal workloads.