Apple Brings AI PC Power to iPads With M4 Chip Debut

Analysts highlight Apple's new on-chip neural engine and ecosystem-focused performance

Ben Wodecki, Jr. Editor

May 8, 2024


Apple has introduced the M4 high-performance chip for its latest iPad lineup, touting that its new tablets can outpace any AI PC currently available.

The announcement came during the company’s May 7 “Let Loose” event.

The new M4 is a system on a chip (SoC) that combines a CPU and GPU to deliver high performance and power efficiency for the new iPad Pro.

The M4 chips offer 1.5 times faster CPU performance compared to the M2 chips featured in previous iPad Pro models.

The new SoCs boast a neural engine capable of running up to 38 trillion operations per second (TOPS), which Apple claims is “faster than the neural processing unit of any AI PC today.”

Johny Srouji, Apple’s senior vice president of hardware technologies, said the new chips make the tablet “the most powerful device of its kind.”

“The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine and memory system make M4 extremely well suited for the latest applications leveraging AI,” Srouji said.

Alexander Harrowell, Omdia's principal analyst for advanced computing, said Apple’s impressive hardware upgrades shouldn’t come as a surprise.


“The A17 Pro in the iPhone 15 gets 35 TOPS from the Neural Engine ASIC block,” Harrowell said. “It’s worth remembering the whole Apple Silicon line has mobile heritage, they were A-series before they were M-series.”

“Very crudely, it looks like the big difference between the A17 Pro and the M4 is more cores in the CPU and GPU. Because after all, you’ve got some more power, price, area and a bit more thermal capacity to work with in a tablet.”

A recent Omdia report crowned Apple a leader in the AI PC space, recognizing its devices for their performance on creative workloads.

Harrowell said the performance increases on the M4 chips are “nice to have” but touted increased memory bandwidth as the more notable upgrade.

“Transformer inference performance is bound by memory for model size and by memory I/O for tokens per second, rather than by compute,” Harrowell said. “This is strange: they’re saying 120GB per second, but the M3 chips were 150, which was controversially down on the M1.

“It makes more sense to think about the previous iPad Pro, the 2022 model, which had a baseline M2 chip. That was 100GB per second and 15.8 TOPS, so a 20% uplift in memory I/O and twice the accelerator TOPS is something a user would feel. People are badmouthing it because it’s 38 TOPS and Qualcomm’s got 45 and Intel’s going to drop 45 from the NPU in Lunar Lake, but you have to remember that the unified memory architecture helps. Also, this is a tablet, Lunar Lake is a laptop chip.”
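Harrowell’s point that tokens per second is bound by memory I/O rather than compute can be sketched with a back-of-envelope calculation: generating each token requires streaming the full set of model weights from memory, so bandwidth divided by model size gives a hard ceiling on decode speed. A minimal sketch using the article’s bandwidth figures (the 3-billion-parameter model and 16-bit weights are illustrative assumptions, not Apple specifications):

```python
# Rough ceiling on LLM decode speed when memory-bandwidth-bound:
# each generated token reads every weight once, so the upper bound is
# bandwidth (bytes/s) divided by model size (bytes).

def max_tokens_per_second(bandwidth_gb_s: float,
                          model_params_billion: float,
                          bytes_per_param: float = 2.0) -> float:
    """Upper bound on tokens/s for a model read in full per token."""
    model_bytes = model_params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Hypothetical 3B-parameter model in 16-bit weights (6 GB of weights):
m2_ceiling = max_tokens_per_second(100, 3)  # 2022 iPad Pro M2: 100 GB/s
m4_ceiling = max_tokens_per_second(120, 3)  # M4: 120 GB/s

print(f"M2 ceiling: {m2_ceiling:.1f} tok/s")  # ~16.7 tok/s
print(f"M4 ceiling: {m4_ceiling:.1f} tok/s")  # ~20.0 tok/s
```

The 20% bandwidth uplift translates directly into the ~20% faster token generation Harrowell describes as something “a user would feel,” regardless of the accelerator’s TOPS figure.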


During the company’s recent earnings call, CEO Tim Cook said Apple will be making “significant investments” in generative AI in the next quarter, with more announcements coming in June at its WWDC event.

Lian Jye Su, Omdia’s chief analyst of applied intelligence, said the M4 chips show Apple is “getting ready to run a large language model on its tablet line.”

“Apple mentioned diffusion and generative models, which are the typical foundation models trialed and tested by other AI chipset vendors,” Su said. “Based on the hardware specs, it seems the large language model will not be as powerful as Qualcomm and Intel anticipate on AI PCs, but it need not be. After all, Apple is not known for being first to market with all emerging technologies; it picks and chooses the most optimal one for its product ecosystem.”

Despite recently unveiling its own OpenELM model, Apple is rumored to be considering leveraging large language models from other technology vendors on its devices.

“It would be interesting to see the partners Apple is working with for on-device LLM deployment,” Su said. “Some sources mention Google for the global market and Baidu for China. Others said Apple is going to utilize its own OpenELM. A bidding war coming soon, perhaps?”


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
