AI Business is part of the Informa Tech Division of Informa PLC


Cirrascale offers AI megachip from Cerebras as a cloud service

The largest chip ever built is now available in the cloud.

Specialist cloud operator Cirrascale – which pitches services at deep learning, artificial intelligence, and high-performance computing customers – now offers remote access to Cerebras' CS-2 AI machine.

The service is available immediately and gives access to a dedicated CS-2 system, which includes both the WSE-2 chip and the conventional hardware that surrounds it.

It has a one-week usage minimum, with rates of $60,000 per week, $180,000 per month, or $1.65 million a year.

Buying the system standalone would cost around $2 million.
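Taking the listed figures at face value, a quick back-of-the-envelope sketch (prices are from this article; the break-even arithmetic is illustrative only and ignores any support or operating costs) shows how long the cloud rates take to reach the roughly $2 million purchase price:

```python
import math

# List prices reported for CS-2 access via Cirrascale (USD)
WEEKLY_RATE = 60_000
MONTHLY_RATE = 180_000
YEARLY_RATE = 1_650_000
PURCHASE_PRICE = 2_000_000  # approximate standalone system cost

# Number of whole billing periods before cloud spend exceeds buying outright
weeks_to_breakeven = math.ceil(PURCHASE_PRICE / WEEKLY_RATE)    # 34 weeks
months_to_breakeven = math.ceil(PURCHASE_PRICE / MONTHLY_RATE)  # 12 months

print(weeks_to_breakeven, months_to_breakeven)
```

In other words, at the weekly rate the rental crosses the purchase price after roughly eight months of continuous use, while the discounted yearly rate stays under it for the first year.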

Wafer scale sizes, cloud prices

The Wafer Scale Engine-2 features 2.6 trillion transistors and 850,000 'AI-optimized' cores on a single chip. For comparison, the most complex GPU ever built features just 54 billion transistors – but costs significantly less.

The huge chip is being used by the Department of Energy’s National Energy Technology Laboratory (NETL), GlaxoSmithKline, Tokyo Electron Devices, the Pittsburgh Supercomputing Center, and the University of Edinburgh – but the high price of entry has put many potential customers off.

Together with Cirrascale, the company hopes that the cheaper cloud alternative will allow prospective customers to test the hardware.

NETL previously trialled the earlier CS-1 processor, pitting it against 16,384 Intel Xeon Gold 6148 cores in its Joule supercomputer for a very specific computational fluid dynamics workload – and the CS-1 proved 200 times faster. Against a single GPU, it was approximately 10,000 times faster.

The wafer scale approach relies on the fact that connections between cores on a single chip are thousands of times faster than connections between cores on separate chips – even chips installed in the same system – thanks to the fundamental physical limits of moving data over distance.

However, the researchers found significant limitations, chief among them memory capacity: for a single processor the chip carries a huge amount of on-chip memory, but far less than thousands of aggregated servers can offer.

Andrew Feldman, CEO and co-founder of Cerebras, said that the company was "excited to partner with Cirrascale and introduce the Cerebras Cloud @ Cirrascale, bringing the power and performance of our CS-2 system to more customers looking to glean world-changing insights from massive datasets sooner."

He added that the hardware "can deliver cluster-scale acceleration of AI workloads easily with a single system, thereby enabling customers to deploy solutions faster, using large, state-of-the-art models for training or inference."

Cirrascale’s CEO, PJ Go, also welcomed the partnership, claiming that the two companies were “truly democratizing AI by broadening deep learning access and enabling large-scale commercial deployments across leading Fortune 500 customers, research organizations and innovative startups.”

The cloud company was previously one of the first to offer access to Graphcore’s IPU chip for AI, along with more traditional GPU and IBM Power alternatives.
