Impressive 2020 revenue growth as the company admits it is looking over its shoulder
Nvidia has maintained its position as the top manufacturer of AI processors used in data centers, according to research firm Omdia.
The AI Processors for Cloud and Data Center Forecast Report states that Nvidia’s cloud and data center AI processor revenue reached $3.2 billion in 2020 – up from $1.8 billion just a year prior.
Omdia researchers found that the Santa Clara-based tech giant holds an 80.6 percent share of the global market’s revenue, outranking Xilinx, Google, and Intel, among others.
It is important to note that the report excludes the impact of Intel’s general purpose server CPUs, which are widely used for AI acceleration, but are not designed specifically for such work.
“Nvidia in 2020 continued to capitalize on its strong incumbent position in GPU-derived chips to maintain its leadership position in cloud and data center AI processors,” said Jonathan Cassell, principal analyst for advanced computing at Omdia.
“And as the leading supplier of GPU-derived chips, Nvidia has established itself and bolstered its position as the AI processor market leader for the key cloud and data center market.”
A king wears the crown
The demand for AI chips is growing rapidly, with a flood of suppliers looking to nab Nvidia’s crown. From young startups to major semiconductor vendors, the AI processor market is becoming saturated with everything from GPU-based designs to programmable silicon to new varieties of semiconductors built specifically to accelerate deep learning.
Annual market revenue is expected to soar by a factor of nine between 2019 and 2026, when it is projected to reach $37.6 billion, according to Omdia.
Its research team defines AI processors as chips that integrate distinct subsystems dedicated to AI processing. This includes the likes of GPU-derived AI application-specific standard products (GPU-derived AI ASSPs), proprietary-core AI application-specific standard products (proprietary-core AI ASSPs), AI application-specific integrated circuits (AI ASICs), and field-programmable gate arrays (FPGAs).
While central processing unit (CPU) chips like Intel’s Xeon are extensively used for AI acceleration in data center operations, Omdia opted not to include these devices in its AI processor analysis. Intel reported it had shipped 200,000 10nm Xeon processors in the first quarter of this year.
According to Cassell, Nvidia’s chips benefited from demand from both on-premises data centers and cloud hyperscalers “because of their familiarity to users.”
“Nvidia’s Compute Unified Device Architecture (CUDA) Toolkit is used nearly universally by the AI software development community, giving the company’s GPU-derived chips a huge advantage in the market,” he said.
Cassell and the Omdia team noted in the report that Nvidia itself realizes that other chip suppliers will gain significant market share in the coming years as market acceptance increases for alternative GPU-based chips and other types of AI processors.
Given this trend, future revenue growth for GPU-derived AI ASSPs will lag that of other chip types in the cloud and data center market as buyers seek more efficient alternatives. As a result, GPU-derived AI chips’ share of market revenue is set to decline to 54 percent in 2026, down from 82 percent in 2021.
The company will be competing with its neighbor Xilinx, which ranked second in the report. Based in San Jose, it supplies programmable silicon products that are increasingly used for AI inferencing.
Alphabet’s Google came in third place by market share – with its proprietary Tensor Processing Units (TPUs) employed extensively in its own hyperscale cloud operations.
Intel and its AI-specific Habana chips ranked fourth, while semiconductor maker AMD was fifth.
Just days prior, Nvidia unveiled the RTX A2000 – a compact version of its data center GPU designed for advanced 3D rendering and AI workload acceleration.
Set to debut in October, the hardware features third-generation Tensor Cores to enable AI-augmented tools and applications.