Equivalent to 2,000 of Nvidia’s fastest AI chips, with ‘near perfect’ large language model training

Ben Wodecki, Jr. Editor

November 15, 2022

2 Min Read

Cerebras Systems, the startup behind some of the largest AI chips ever made and backed by tech luminaries, has unveiled Andromeda, a 13.5 million core AI supercomputer to enable faster and deeper AI capabilities.

Andromeda, which is designed for both commercial and academic work, is a cluster of 16 Cerebras CS-2 systems – each built around the company’s sizable WSE-2 AI chip – with each system claimed to deliver more than 100 times the compute performance of most GPUs.

Andromeda can deliver more than 1 Exaflop of AI compute – or one quintillion floating point operations per second. To put that power in perspective, a human performing one calculation every second would need 31.7 billion years to match what Andromeda does in a single second.
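That figure checks out as a rough back-of-the-envelope calculation (the 365.25-day year used below is our assumption, not a number from Cerebras):

```python
# Rough sanity check of the "31.7 billion years" comparison.
# Assumes 1 exaflop = 1e18 floating point operations per second
# and an average year of 365.25 days.
ops_per_second = 1e18                     # operations Andromeda performs each second
seconds_per_year = 365.25 * 24 * 3600

years_for_a_human = ops_per_second / seconds_per_year  # at one calculation per second
print(f"{years_for_a_human:.3e} years")   # ~3.17e10, i.e. about 31.7 billion years
```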

Powering the system are 18,176 AMD EPYC processor cores, and Andromeda contains more compute cores overall than 1,953 Nvidia A100 GPUs combined. Cerebras claims the supercomputer has 13.5 million cores – 1.6 times as many as the largest supercomputer in the world, Frontier, which has 8.7 million cores.
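Those comparisons line up roughly as follows; the figure of 6,912 FP32 CUDA cores per A100 is our assumption for illustration, not a number given by Cerebras:

```python
# Rough cross-check of the core-count comparisons in the article.
# The 6,912 FP32 CUDA cores per Nvidia A100 is an assumed spec, not from Cerebras.
a100_cuda_cores = 6_912
andromeda_cores = 13_500_000   # 13.5 million, per Cerebras
frontier_cores = 8_700_000     # 8.7 million, per the article

print(1_953 * a100_cuda_cores)                      # ~13.5 million, matching the claimed total
print(round(andromeda_cores / frontier_cores, 2))   # ~1.55, i.e. roughly 1.6x Frontier
```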

The supercomputer is the only one to “ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone,” according to the chipmaker.

Andromeda is designed to greatly reduce the time taken to train large language models, with the company claiming it achieves “near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX.”
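In simple data parallelism, every system in the cluster holds a full copy of the model, trains on its own shard of each batch, and the resulting gradients are averaged before the weights are updated. The sketch below is a generic NumPy illustration of that idea using a toy linear model – it is not Cerebras’s software stack, which handles this coordination automatically across the CS-2s.

```python
import numpy as np

# Generic illustration of data parallelism: every worker holds the same weights,
# processes its own shard of the batch, and the gradients are averaged.
# Toy linear model only; not Cerebras's training stack.

def local_gradient(weights, x_shard, y_shard):
    """Gradient of mean squared error for one worker's shard of the batch."""
    preds = x_shard @ weights
    return 2 * x_shard.T @ (preds - y_shard) / len(x_shard)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
weights = np.zeros(8)

n_workers = 16  # e.g. one shard per CS-2 in a 16-system cluster
x_shards, y_shards = np.array_split(x, n_workers), np.array_split(y, n_workers)

for _ in range(100):
    grads = [local_gradient(weights, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    weights -= 0.01 * np.mean(grads, axis=0)  # averaged gradient, identical update everywhere
```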

GPUs cannot process large language models with very long sequence lengths, but Andromeda can, the company said.

This “GPU impossible work was demonstrated by one of Andromeda’s first users, who achieved near perfect scaling on GPT-J at 2.5 billion and 25 billion parameters with long sequence lengths — MSL of 10,240,” according to Cerebras. “The users attempted to do the same work on Polaris, a 2,000 Nvidia A100 cluster, and the GPUs were unable to do the work because of GPU memory and memory bandwidth limitations.”
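MSL here refers to maximum sequence length. The memory pressure the quote describes comes largely from attention, whose activation footprint grows with the square of the sequence length in a naive (non-flash) implementation. The estimate below is only illustrative: it uses the standard GPT-J-6B shape (28 layers, 16 attention heads) as a stand-in, fp16 activations, and batch size 1, since the exact configurations and memory breakdown were not published.

```python
# Rough, illustrative estimate of attention-score memory at long sequence lengths.
# Assumes GPT-J-6B's shape (28 layers, 16 heads), fp16 activations, batch size 1,
# and a naive attention implementation that materializes the full score matrix.
def attention_score_memory_gb(seq_len, layers=28, heads=16, bytes_per_value=2):
    per_head = seq_len * seq_len * bytes_per_value   # one seq_len x seq_len score matrix
    return layers * heads * per_head / 1e9

print(attention_score_memory_gb(2_048))    # ~3.8 GB  - comfortable on an 80 GB A100
print(attention_score_memory_gb(10_240))   # ~94 GB   - already beyond a single A100's memory
```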

Access to Andromeda is available now, with several customers and academic researchers already running workloads. The company provided University of Cambridge graduate students with free access.

The supercomputing platform supports multiple simultaneous users, each of whom can specify how many of Andromeda’s CS-2s to use – meaning Andromeda can run as a single 16-system cluster for one user working on one job, or as 16 individual CS-2 systems serving 16 distinct users.

The supercomputer is housed at Colovore, a data center in Santa Clara, California, in the heart of Silicon Valley.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
