Moore’s Law Lives On – and AI Chips Prove It

But Moore’s Law in its strictest interpretation is ‘about to hit a brick wall.’

Wylie Wong, Contributor

April 20, 2023

4 Min Read
Gordon Moore, 1929-2023 (Image: Intel)

At a Glance

  • Gordon Moore, creator of Moore's Law and cofounder of Intel, passed away at age 94 on March 24 at his home in Hawaii.
  • Debate continues over whether Moore's Law is dead, since the laws of physics limit how small transistors can get.
  • Advances in AI chips prove Moore's Law is still alive.

The recent death of chip pioneer Gordon Moore has put the spotlight on the long-held principle that bears his name and renewed debate on whether it is still valid or obsolete.

But Moore’s Law – his 1965 prediction that the number of transistors on a chip doubles roughly every two years with a minimal increase in cost – lives on and continues to accelerate the performance of processors at that pace, including AI chips, analysts say.

As Intel’s co-founder, Moore, who died at age 94 on March 24, contributed more to the semiconductor industry than Moore’s Law, said Glenn O’Donnell, vice president and research director at Forrester Research.

“As one of the founding fathers of the semiconductor industry, he is responsible for a lot of the early innovation at Fairchild (Semiconductor), and of course, Intel. He led Intel as CEO through a period that is arguably its greatest growth spurt,” he said.

Moore’s Law – which applies to all classes of processors, including AI chips – has guided the technology industry to unprecedented levels of innovation and exponential growth of computing power, said Manoj Sukumaran, principal analyst of data center IT at Omdia, the sister research firm of AI Business.

“Moore’s prediction gave the industry and academia a vision and goal to push the limits of semiconductor technology, and it could be one of the most critical technologies for human advancement over the past several decades.”


In recent years, however, the tech industry has debated whether Moore’s Law is still valid as the pace of chip advancements has slowed. In fact, last year, Nvidia CEO Jensen Huang said Moore’s Law has ended, while Intel CEO Pat Gelsinger pushed back, saying it’s still “alive and well.”

Moore’s Law lives on

So, who’s right? The answer is it depends on whom you ask. Vladimir Galabov, Omdia’s research director of cloud and data center, recently told AI Business that his analysis shows Moore’s Law is still alive.

To meet the rule that the number of transistors on an integrated circuit doubles every two years, chips today need about 100 billion transistors, he said.

Three recently released processors have kept the industry on track with Moore’s Law: Apple’s M1 Ultra, which consists of 114 billion transistors, AMD’s 4th Gen Epyc “Genoa” chip, featuring 90 billion transistors, and Intel’s Data Center GPU Max Series – formerly named Ponte Vecchio – which has more than 100 billion transistors, he said.
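The arithmetic behind Galabov’s 100-billion figure can be sketched in a few lines. The baseline below – Intel’s 4004 from 1971, with roughly 2,300 transistors – is an illustrative assumption, not a figure from the article:

```python
# Hypothetical back-of-the-envelope check of Moore's Law, assuming the
# Intel 4004 (1971, ~2,300 transistors) as the starting point -- an
# assumption for illustration, not a figure cited in the article.
def projected_transistors(year, base_year=1971, base_count=2_300):
    """Project transistor count by doubling every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# 52 years after 1971 means 26 doublings: roughly 154 billion
# transistors, the same order of magnitude as the ~100 billion the
# article cites and the 114 billion in Apple's M1 Ultra.
projection_2023 = projected_transistors(2023)
print(f"{projection_2023 / 1e9:.0f} billion")
```

Under those assumptions the projection lands within a factor of two of the chips named above, which is the sense in which recent processors have kept the industry "on track."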

The future of Moore’s Law depends on how it is interpreted, said O’Donnell. If it is viewed purely by transistor size, the industry is approaching its limit, he said.

“We’re now talking 2 to 3 nanometer chips to define the smallest transistor size. A silicon atom is only 0.2 nm, so we can’t get much smaller,” he said. “In this view, Moore’s Law is about to hit a brick wall.”

However, most in the tech industry define Moore’s Law more broadly than transistor size alone. Because chipmakers cannot make transistors much smaller, they have to build up – as with Manhattan real estate, where developers erect skyscrapers, O’Donnell said.

“Chipmakers are delivering on vertical stacking innovations and so-called ‘chiplets’ to squeeze more transistors on a chip,” O’Donnell said. “These architectural developments will keep Moore’s Law going for many more years. Intel’s plan to squeeze a trillion transistors onto a chip by 2030 is realistic.”
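A quick calculation suggests why O’Donnell calls the trillion-transistor goal realistic. Assuming the ~100 billion transistors of today's leading chips as a 2023 baseline and a two-year doubling cadence:

```python
import math

# Rough sanity check on Intel's stated goal of a trillion transistors
# by 2030, assuming ~100 billion transistors in 2023 (the article's
# figure) and a doubling every two years.
def year_reaching(target, base_count=100e9, base_year=2023):
    """Year the transistor count reaches `target` at a 2-year doubling cadence."""
    doublings = math.log2(target / base_count)
    return base_year + 2 * doublings

# log2(10) is about 3.32 doublings, i.e. ~6.6 years: roughly 2030,
# consistent with O'Donnell calling Intel's plan realistic.
print(round(year_reaching(1e12)))
```

Going from 100 billion to a trillion is a tenfold increase, and ten is a little over three doublings, so the 2030 target sits almost exactly on the Moore's Law trendline.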

AI chips and Moore’s Law

Sukumaran agrees. He said it is increasingly challenging to continue the trajectory of Moore’s Law as the industry begins to hit roadblocks or scaling walls.

“Transistor miniaturization is becoming increasingly difficult and costly, limiting the dimensional scaling; memory bandwidth and capacity are becoming increasingly challenging; and power and thermal challenges are getting more complex,” he explained.

As a result, the industry is pivoting to domain-specific computing to achieve cost-effective, efficient computing, and specialized AI processors for AI training and inferencing are an example of that transition, Sukumaran said.

“Semiconductor fabs and vendors are also getting creative by adopting innovative packaging technologies like using chiplets, 3D stacking, etc., to increase the transistor density and keep costs low,” he said.

For example, Cerebras Systems in 2019 introduced the world’s largest and fastest processor designed for AI applications: the Wafer Scale Engine (WSE), which measures eight inches by eight inches and features 1.2 trillion transistors and 400,000 computing cores.

Then in 2021, it introduced the Cerebras Wafer-Scale Engine-2 (WSE-2), which doubled the performance with more than 2.6 trillion transistors and 850,000 cores, while consuming 20 kW of power. Tesla is also working on an innovative AI processor called Tesla D1, he said.

“The very existence of domain-specific processors like AI processors is a pointer to the direction the industry has taken to achieve performance and efficiency in specialized computing,” Sukumaran said.

About the Author(s)

Wylie Wong

Contributor, AI Business

Wylie Wong is an award-winning freelance journalist specializing in technology, business and sports. He previously worked at CNET, Computerworld and CRN. An avid sports fan, he is the co-author of ‘Giants: Where Have You Gone?’, a book about the lost heroes and fan favorites of the San Francisco Giants baseball team.
