BofA Global Research analysts say Nvidia's story is just beginning.

Deborah Yao, Editor

May 31, 2023

2 Min Read

At a Glance

  • BofA Global Research analysts said Nvidia's latest product announcements strengthened its position in AI.
  • Nvidia could take a chunk of the AI networking market, which is expected to quintuple to $10.7 billion by 2027.
  • Only around 15% of cloud servers are doing accelerated computing today, a share expected to ramp up as GPUs are needed to properly train large language models.

Nvidia CEO Jensen Huang’s slew of product announcements at Computex – which he himself decried as ‘too much’ – further bolsters the chipmaker’s position in AI, according to BofA Global Research analysts.

At the trade show in Taiwan, Huang said the era of accelerated computing and generative AI has arrived. Nvidia, as the market leader in advanced chips for AI workloads, stands to benefit.

In a research note, BofA analysts said the chipmaker has an early-mover advantage and also strongly executes.

They pointed to five highlights from the keynote:

1. Full volume production of H100/HGX H100 servers (roughly $200,000/server)

2. Unveiling of the new DGX GH200 AI supercomputer, combining up to 256 GH200 Superchips, with Google, Meta and Microsoft first to gain access. (The Superchips, which pair an Arm-based Grace CPU with an H100 GPU, are also in full production.)

3. New Spectrum-X networking platform focused on Ethernet-based AI cloud environments (Spectrum-4 Ethernet switches, BlueField-3 DPUs, software)

4. Development of the MGX platform, a modular reference architecture that system makers can use to build more than 100 server variations (in partnership with Supermicro, ASUS and others)

5. Partnership with SoftBank to develop a platform for generative AI and 5G/6G applications (based on GH200 Superchips).

AI networking TAM

The analysts said the AI networking market could quintuple to a $10.7 billion total addressable market (TAM) by 2027.

“In the early days of generative AI deployment, InfiniBand was the preferred protocol, leveraging low-latency deployment experience gained in supercomputer installations. Over time Ethernet could become more ubiquitous,” they wrote.

“With the acquisition of Mellanox in 2020, Nvidia holds both InfiniBand and Ethernet assets to support connectivity in high performance compute applications.”

The analysts also said that Nvidia is transforming into a data center powerhouse, with its full-stack platform supporting its AI leadership. The company has already partnered with more than 1,600 generative AI startups as well as the top hyperscalers.

They noted that only around 15% of cloud servers are doing accelerated computing today, a share that will ramp up as GPUs are “required” for proper training of LLMs.

“We believe we are only at the start of the story” for Nvidia, they predicted.

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

