Intel Launches Next-Generation AI Systems

Intel recently announced a collaboration with IBM to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud

Heidi Vella, Freelance journalist

September 30, 2024


Intel has unveiled a new range of high-performance enterprise AI systems designed to provide optimal performance per watt with a lower total cost of ownership. 

The new products are Xeon 6 with Performance-cores (P-cores) and the Gaudi 3 AI accelerator. The former is built to handle compute-intensive workloads with high efficiency, delivering twice the performance of its predecessor, the company said.

It also has an increased core count, double the memory bandwidth and AI acceleration capabilities embedded in every core.

Gaudi 3 is specifically optimized for large-scale generative AI, with 64 Tensor processor cores and eight matrix multiplication engines to accelerate deep neural network computations, according to the company. It also includes 128 gigabytes of HBM2e memory for training and inference, and 24 ports of 200-gigabit Ethernet for scalable networking.

Intel recently announced a collaboration with IBM to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. Intel and IBM aim to lower the total cost of ownership of leveraging and scaling AI while enhancing performance, the companies said.

“With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security,” said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group.


The new AI products follow the long-awaited launch earlier in the year of Intel’s next-generation Core Ultra Series 2 chips, codenamed Lunar Lake. These chips introduce some notable architectural changes, including integrating system memory into the chip package and reducing the total number of CPU cores, and are generally seen as a departure from Intel’s traditional x86 design.

The chips deliver more than three times the AI performance of the previous generation and will receive free updates to Microsoft’s Copilot+ through a partnership expected to deliver the application at scale, the company said.

“With breakthrough power efficiency, the trusted compatibility of x86 architecture and the industry’s deepest catalog of software enablement across the CPU, GPU and NPU, we will deliver the most competitive joint client hardware and software offering in our history with Lunar Lake and Copilot+,” added Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group. 

These new products and upgrades are expected to help Intel address stiff competition from rivals including Nvidia, AMD and Qualcomm.


About the Author

Heidi Vella

Freelance journalist

Heidi is a freelance journalist and copywriter with more than 12 years of experience covering industry, technology and everything in between.

Her specialisms are climate change, decarbonisation and the energy transition, and she also regularly covers everything from AI and antibiotic resistance to digital transformation.
