Google Taps AI to Identify Quantum Errors

Using a quantum simulator, researchers generated hundreds of millions of examples across a variety of settings

Berenice Baker, Editor

November 28, 2024

[Image: Google's Sycamore quantum processor. Credit: Google]

Google’s DeepMind and Quantum AI teams have developed AlphaQubit, a neural network-based decoder that identifies quantum computing errors more accurately than previous methods.

Quantum computers could revolutionize fields such as drug discovery, material design and fundamental physics by solving problems currently intractable for classical computers. However, to scale to a practical size, they need to overcome their susceptibility to noise and errors.

In research published in Nature, AlphaQubit made 6% fewer detection errors than tensor network methods, a slow but highly accurate class of decoders, and 30% fewer errors than correlated matching, an accurate decoder that is fast enough to scale.

The researchers trained a model based on the Transformer deep-learning architecture, using data from a set of 49 qubits in Google's Sycamore quantum processor.

Using a quantum simulator, they generated hundreds of millions of examples across a variety of settings and error levels. They then fine-tuned AlphaQubit for its decoding task by giving it thousands of real-world error samples from the Sycamore processor.
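This two-stage recipe, pretraining on plentiful simulated data and then fine-tuning on scarce real-device samples, can be illustrated with a minimal sketch. The PyTorch code below shows the pattern only; the model, dimensions and randomly generated stand-in data are hypothetical and do not reflect AlphaQubit's actual architecture or training setup.

```python
import torch
from torch import nn

class SyndromeDecoder(nn.Module):
    """Toy Transformer decoder: maps a sequence of syndrome-measurement
    rounds to a single logical-error prediction (illustrative only)."""
    def __init__(self, n_stabilizers=24, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_stabilizers, d_model)  # one round of syndrome bits -> embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)               # logical-error logit

    def forward(self, syndromes):                       # (batch, rounds, n_stabilizers)
        h = self.encoder(self.embed(syndromes))
        return self.head(h.mean(dim=1)).squeeze(-1)     # pool over measurement rounds

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Stage 1: pretrain on abundant simulated syndrome data (random stand-in here).
sim_x = torch.randint(0, 2, (4096, 25, 24)).float()    # 25 rounds x 24 stabilizer bits
sim_y = torch.randint(0, 2, (4096,)).float()
model = SyndromeDecoder()
train(model, sim_x, sim_y, epochs=5, lr=1e-3)

# Stage 2: fine-tune on a much smaller set of real-device samples, lower learning rate.
real_x = torch.randint(0, 2, (512, 25, 24)).float()
real_y = torch.randint(0, 2, (512,)).float()
train(model, real_x, real_y, epochs=3, lr=1e-4)
```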

To see whether AlphaQubit is scalable, the team trained it on data from simulated quantum systems with up to 241 qubits. It consistently outperformed other decoders, indicating it could scale to larger, future quantum devices.


AlphaQubit is currently too slow for real-time correction in superconducting processors. However, it shows promise for scaling to larger quantum computers and represents an advance toward more reliable quantum computation.

The Google DeepMind and Google Quantum AI team said in a blog post that as quantum computing grows toward the potentially millions of qubits needed for commercially relevant applications, they will need to find more data-efficient ways of training AI-based decoders.

This article first appeared in AI Business's sister publication Enter Quantum.

About the Author

Berenice Baker

Editor, Enter Quantum

Berenice is the editor of Enter Quantum and co-editor of AI Business. She has a background in IT and 20 years of experience as a technology journalist.

