AI Business is part of the Informa Tech Division of Informa PLC



Nvidia launches two server GPUs: A30 and A10

by Louis Stone
Smaller PCIe accelerators to complement the A100

Chip design giant Nvidia has added two GPUs to its server line-up, the A30 and the A10.

Both are cheaper and less powerful than the company's flagship A100 line, and are aimed at a different set of use cases.

A GPU for every purpose

The A30 is based on the same compute-oriented Ampere architecture as the A100, and is designed for AI inference and mainstream enterprise compute workloads, such as recommendation systems, conversational AI and computer vision.

The chip has a little over half the performance of the A100 in FP32, FP64, and FP16.
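As a rough sanity check of that "a little over half" figure, the peak throughput numbers from Nvidia's public spec sheets can be compared directly (a minimal sketch; the TFLOPS values below are the published spec-sheet peaks, not figures from this article, so treat them as illustrative):

```python
# Peak throughput in TFLOPS from Nvidia's published spec sheets
# (A100 SXM variant; FP16 figures are Tensor Core rates, no sparsity).
# These values are assumptions pulled from public datasheets, not this article.
A100 = {"FP64": 9.7, "FP32": 19.5, "FP16 Tensor": 312.0}
A30  = {"FP64": 5.2, "FP32": 10.3, "FP16 Tensor": 165.0}

for fmt in A100:
    ratio = A30[fmt] / A100[fmt]
    print(f"{fmt:12s} A30/A100 = {ratio:.0%}")
```

Each ratio lands just above 50 percent, consistent with the article's characterization.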

The A10, meanwhile, is not based on the same compute-oriented design, and is pitched at deep learning inference, interactive rendering, computer-aided design, and cloud gaming workloads. It does not support FP64, which most HPC setups require.

Both chips draw far less power than the A100, which has a thermal design power of 400W (250W for the PCIe version). The A30 and A10 consume 165W and 150W, respectively.

In the latest round of the MLPerf benchmark program, Nvidia dominated, and was the only company to submit results for every test in the data center and edge categories.

The A100 led the benchmark, but the A10 and A30 also performed well.

“As AI continues to transform every industry, MLPerf is becoming an even more important tool for companies to make informed decisions on their IT infrastructure investments,” said Ian Buck, general manager and vice president of Accelerated Computing at Nvidia.

“Now, with every major OEM submitting MLPerf results, Nvidia and our partners are focusing not only on delivering world-leading performance for AI, but on democratizing AI with a coming wave of enterprise servers powered by our new A30 and A10 GPUs.”

The new chips come just weeks after Nvidia announced that it plans to launch its own Arm CPU, Grace.
