Xilinx uses Micron’s clever NOR flash to speed up its AI accelerators

Max Smolaks

October 1, 2019

High-speed Xccela memory for the adaptive compute acceleration platform


American chip designer Xilinx, which
specializes in programmable logic devices, has announced that its Versal accelerator
lineup, intended primarily for AI inference, will be equipped with high-speed Xccela
flash memory from Micron.

Xilinx is the granddaddy of custom silicon,
credited with inventing the first field-programmable gate arrays (FPGAs) back
in 1985. FPGAs have since become a market in their own right, being employed
for tasks where using conventional CPUs is impractical.

Xilinx’s long-time competitor Altera was acquired by Intel in 2015 in an all-cash transaction valued at approximately $16.7 billion – a testament to the growing importance of this chip segment.

According to research by Tractica, the overall market for deep learning chipsets – which include CPUs, GPUs, FPGAs, application-specific integrated circuits (ASICs) and others – will reach $72.6 billion by 2025.

Not resting on its laurels, Xilinx recently developed a new chip type called the adaptive compute acceleration platform (ACAP), especially suitable for AI workloads and commercialized as the Versal family. And it’s Versal that is going to receive a shot in the arm from Micron’s new memory.

Xccela is a brand of NOR flash – which is different from the NAND flash widely used in both enterprise and consumer storage products. Whereas NAND memory may only be written and read in blocks – like a hard drive – NOR memory allows the device to read and write individual bytes. From this point of view, NOR is like RAM, and programs stored in NOR flash can be executed directly without needing to be copied into main memory first.

According to Micron, Xccela will boost boot time, dynamic configuration performance and overall system responsiveness of the Versal platform by up to eight times, when compared to prior-generation FPGA platforms using older NOR flash.

The company added that Xccela delivers up to 400MB per second in double data rate mode while consuming 30 percent less effective energy per bit than traditional quad SPI NOR flash.

“Xilinx’s choice to support Xccela flash in its Versal ACAP is a testament to the growing
importance of bandwidth for memory and storage used in artificial intelligence
applications,” said Richard De Caro, director of NOR flash for Micron’s
Embedded Business Unit.

“As autonomous driving vehicles incorporate
higher levels of artificial intelligence inference capabilities into advanced
driver-assistance systems (ADAS), Xccela flash enables Versal ACAP-based
systems to power up and configure rapidly to meet ADAS application requirements.”