Invests heavily in AI hardware as it seeks an advantage for its cloud services

Louis Stone, Reporter

May 20, 2021

4 Min Read


Google has revealed the latest version of its in-house artificial intelligence chip, the Tensor Processing Unit.

The fourth-generation TPU is twice as fast as its predecessor, the company claimed.

Google will deploy the TPUs in clusters of 4,096 units, known as pods.

Each pod is capable of more than one exaflop of compute (single precision), and features “10x interconnect bandwidth per chip at scale than any other networking technology,” Google CEO Sundar Pichai said at the company's I/O conference this week.
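As a rough sanity check on those figures, dividing a pod's aggregate compute by its chip count gives the implied per-chip throughput. This is illustrative arithmetic based only on the numbers quoted above, not an official Google per-chip specification:

```python
# Illustrative arithmetic only: derives the implied per-chip throughput
# from the pod-level figures quoted in the article. Not an official spec.
POD_CHIPS = 4096        # TPU v4 chips per pod
POD_FLOPS = 1.0e18      # "more than one exaflop" of compute

per_chip_flops = POD_FLOPS / POD_CHIPS
print(f"Implied per-chip throughput: {per_chip_flops / 1e12:.0f}+ TFLOPS")
# -> roughly 244+ TFLOPS per TPU v4 chip
```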

Exclusive to cloud

“This is the fastest system we’ve ever deployed at Google and a historic milestone for us,” Pichai said.

“Previously to get an exaflop you needed to build a custom supercomputer, but we already have many of these deployed today and will soon have dozens of TPUv4 pods in our data centers.”

The chips will be available on Google Cloud later this year, but not for outright purchase; the hardware serves as an incentive for customers to use Google's cloud platform, which currently trails AWS and Microsoft Azure.

When the TPU first launched in 2016 it could only run inference workloads, but over successive iterations the design has proved itself a capable training chip as well.

Last year, Google used 4,096 TPU v3 chips to deliver more than 430 petaflops of peak AI performance, setting records in several benchmarks administered by the MLPerf consortium. The company claimed victory, but Nvidia also came out on top in several categories, including single-chip performance.

In this year's MLPerf tests, 256 TPU v4 units took 1.82 minutes to train the ResNet-50 v1.5 algorithm with the ImageNet data set to 75.9 percent accuracy.

A system built with 768 Nvidia A100 GPUs and 192 AMD Epyc 7742 CPUs achieved the same feat in 1.06 minutes, while a system combining 512 Huawei Ascend 910 chips with 128 Intel Xeon Platinum 8168 cores took 1.56 minutes.

By comparison, a full 4,096-chip TPU v3 pod managed the task in just 0.48 minutes. Google did not submit results for a full pod of fourth-generation TPUs.
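Because those submissions used very different accelerator counts, one rough way to compare them is total accelerator-minutes (chip count multiplied by training time). This is an illustrative normalization of the figures quoted above, not a metric MLPerf itself reports, and it ignores per-chip differences in cost and power:

```python
# Rough normalization of the ResNet-50 results quoted above.
# "Accelerator-minutes" is an illustrative metric for this article,
# not something MLPerf publishes; host CPUs are not counted.
results = {
    "256x TPU v4":             (256, 1.82),
    "768x Nvidia A100":        (768, 1.06),
    "512x Huawei Ascend 910":  (512, 1.56),
    "4096x TPU v3 (full pod)": (4096, 0.48),
}

for system, (chips, minutes) in results.items():
    print(f"{system}: {chips * minutes:.0f} accelerator-minutes")
```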

The road map to practical quantum computing

Google has invested heavily in alternative chip architectures, including security chips and video transcoding hardware for YouTube, and is working on its own System-on-Chip.

But it is not just conventional computing that the company is interested in. At I/O, Google announced it had opened a new Quantum AI Campus.

Spanning multiple buildings in Santa Barbara County, with quantum computers, research labs, and a chip fabrication site, the facility will serve as the heart of Google's quantum efforts.

At I/O, the company reiterated its claim that it had already built systems with "beyond classical computation capabilities," a notion disputed by competitors.

In 2019, Google announced that it had run a workload on its quantum computer that would have taken 10,000 years to complete on the world's most powerful supercomputer at the time, IBM's Summit. Soon after, IBM claimed that, with a little software optimization, the task could have been completed on a conventional supercomputer in less than two days.

Google's quantum computer managed it in just three minutes and twenty seconds, so it still won out, but by a much smaller margin than first claimed. It also only achieved this on a single workload (proving that a random-number generator was truly random), rather than anything applicable to enterprise workloads.
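The gap between the two claims is easy to see with some back-of-the-envelope arithmetic based only on the figures above (illustrative, and treating "10,000 years" and "two days" as round numbers):

```python
# Illustrative arithmetic: how the claimed classical runtimes compare with
# the quantum computer's 200-second result (3 minutes 20 seconds).
quantum_seconds = 3 * 60 + 20                     # 200 s
google_claim_seconds = 10_000 * 365 * 24 * 3600   # "10,000 years" (approx.)
ibm_claim_seconds = 2 * 24 * 3600                 # "less than two days"

print(f"Speed-up vs Google's estimate: {google_claim_seconds / quantum_seconds:.1e}x")
print(f"Speed-up vs IBM's estimate:    {ibm_claim_seconds / quantum_seconds:.0f}x")
# -> roughly 1.6e9x versus only about 864x
```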

Part of the reason is that the current system's quantum bits (qubits) are too fragile. "Even cosmic rays from space can destroy quantum information," Pichai said.

Google hopes to group a number of those fragile physical qubits together, making each as robust as possible. They will then act in aggregate as a single logical qubit, made more stable by the redundancy.
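Conceptually, the redundancy works the way a classical repetition code does. The sketch below is a deliberately simplified classical analogy (majority voting over noisy copies of a bit), not Google's actual quantum error-correction scheme, but it shows why grouping many unreliable elements can yield one far more reliable logical unit:

```python
# Toy classical analogy of the redundancy idea: encode one logical bit in
# several noisy physical copies and recover it by majority vote. Google's
# real approach (quantum error correction over physical qubits) is far more
# sophisticated; this only illustrates why redundancy helps.
import random

def noisy_copy(bit: int, error_rate: float) -> int:
    """Return the bit, flipped with probability error_rate."""
    return bit ^ 1 if random.random() < error_rate else bit

def logical_readout(bit: int, copies: int, error_rate: float) -> int:
    """Majority vote over many noisy physical copies of one logical bit."""
    votes = sum(noisy_copy(bit, error_rate) for _ in range(copies))
    return 1 if votes > copies / 2 else 0

# With a 10% physical error rate, 101 redundant copies almost never
# produce a wrong logical value.
trials = 10_000
errors = sum(logical_readout(1, copies=101, error_rate=0.10) != 1
             for _ in range(trials))
print(f"Observed logical error rate: {errors / trials:.4%}")
```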

Next, Google hopes to scale that up to a thousand logical qubits. Speaking to the Wall Street Journal, the head of the company’s Quantum AI program, Dr. Hartmut Neven, said that he expected this to cost billions and not be achieved until 2029.

“We now have the important components in hand that make us confident. We know how to execute the road map.”

About the Author(s)

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.

