AI Business is part of the Informa Tech Division of Informa PLC



Graphcore unveils new chip and server for AI workloads

by Louis Stone
The Intelligence Processor Unit gets a little more intelligent

British startup Graphcore has announced its latest artificial intelligence chip, the Colossus Mk2 GC200 Intelligence Processor Unit (IPU), which will be available in its new M2000 machine.

The company claims GC200 is eight times more powerful than the previous iteration of the IPU; each M2000 will ship with four of the new chips, promising one petaflop of AI compute in a 1U box.
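A back-of-the-envelope check of that headline figure (derived from the article's numbers, not from a Graphcore spec sheet): one petaflop shared across four GC200s implies roughly 250 teraflops of AI compute per chip.

```python
# Sanity check: the M2000's claimed one petaflop of AI compute,
# divided across its four GC200 chips.
PETAFLOP = 1e15          # floating-point operations per second
chips_per_m2000 = 4

flops_per_gc200 = PETAFLOP / chips_per_m2000
print(f"Implied per-chip compute: {flops_per_gc200 / 1e12:.0f} TFLOPS")
```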

Gunning for Nvidia

Each 7nm GC200 chip contains 1,472 independent processor cores and 8,832 separate parallel threads, supported by 900MB of in-processor RAM.

The silicon features 59.4 billion transistors, more than the 54 billion in Nvidia's largest GPU, the A100 – but fewer than the Cerebras WSE's 1.2 trillion transistors (to be fair, each WSE costs $3m).

Four GC200s in a compact M2000 server will retail for $32,450. Currently, it is not possible to buy an individual GC200 chip.

It is possible to scale up, however: six M2000s can be linked to a single CPU server, while those looking for more can purchase 'IPU-POD64' systems, each packing 16 M2000s into a 19-inch rack. Graphcore claims theoretical scalability up to 64,000 IPUs, for 16 exaflops of AI computing power.
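The scaling figures quoted above are internally consistent: 16 M2000s of four IPUs each gives the 64 IPUs behind the 'POD64' name, and 64,000 IPUs at the implied ~250 teraflops apiece yields the quoted 16 exaflops. A quick check using only numbers from the article:

```python
# Cross-check Graphcore's scaling claims against the article's figures.
ipus_per_m2000 = 4
m2000s_per_pod64 = 16
ipus_per_pod64 = ipus_per_m2000 * m2000s_per_pod64   # 64 -- hence "IPU-POD64"

max_ipus = 64_000
tflops_per_ipu = 1_000 / ipus_per_m2000              # 1 PFLOP per M2000 -> 250 TFLOPS per IPU
total_exaflops = max_ipus * tflops_per_ipu / 1e6     # convert TFLOPS to EFLOPS

print(ipus_per_pod64, "IPUs per POD64;", total_exaflops, "EFLOPS at maximum scale")
```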

Tested by the company on Google's EfficientNet B4 image classification model (with 88 million parameters), the M2000 was more than 64 times faster than an Nvidia V100-based system, and over 16 times faster than Nvidia's latest 7nm card, the A100.

Graphcore M2000

An independent analysis of the Mk1 and M1000 by Citadel, a hedge fund and Graphcore customer, found the IPU was able to outperform Nvidia chips for some, but not all, workloads.

European search engine Qwant, a Graphcore customer, also evaluated the older chip model. It found that it was "a promising technology that deep learning practitioners should keep an eye on."

Graphcore, like many other AI chip startups, hopes to be able to differentiate itself by designing silicon purpose-built for AI. Nvidia, which last week passed Intel in valuation due to booming GPU sales, develops chips based on a platform that was originally created for video games.

While there are numerous AI chip startups, Graphcore has proved one of the most successful – at getting investment, at the very least. It has raised more than $450 million from Robert Bosch Venture Capital, Samsung, Dell Technologies Capital, BMW, Microsoft, Arm co-founder Hermann Hauser, and DeepMind co-founder Demis Hassabis, at a $1.95 billion valuation.

It's less clear how successful the products have been commercially, but its customers include Carmot Capital, the University of Oxford, and Lawrence Berkeley National Laboratory.

Graphcore’s biggest win came in 2018, when Microsoft said it would offer the IPU in preview on its Azure cloud. Microsoft is also experimenting with the chip for internal workloads.

The IPU is separately available on Cirrascale’s cloud in the West, and in China on Kingsoft Cloud.

Following the latest announcement, J.P. Morgan said that it would evaluate the system for potential use.

"If you’re looking to add Machine Intelligence compute into your data center, there’s nothing more powerful, or easier to use than a Graphcore IPU-Machine M2000," CEO Nigel Toon said.
