AI Business is part of the Informa Tech Division of Informa PLC



Xilinx uses Micron’s clever NOR flash to speed up its AI accelerators


High-speed Xccela memory for the adaptive compute acceleration platform

by Max Smolaks 1 October 2019

American chip designer Xilinx, which
specializes in programmable logic devices, has announced that its Versal accelerator
lineup, intended primarily for AI inference, will be equipped with high-speed Xccela
flash memory from Micron.

Xilinx is the granddaddy of custom silicon,
credited with inventing the first field-programmable gate arrays (FPGAs) back
in 1985. FPGAs have since become a market in their own right, being employed
for tasks where using conventional CPUs is impractical.

Xilinx’s long-time competitor Altera was acquired by Intel in 2015 in an all-cash transaction valued at approximately $16.7 billion, serving as a testament to the growing importance of this chip category.

According to research by Tractica, the overall market for deep learning chipsets – which includes CPUs, GPUs, FPGAs, application-specific integrated circuits (ASICs) and others – will reach $72.6 billion by 2025.

Not resting on its laurels, Xilinx recently developed a new chip type called the adaptive compute acceleration platform (ACAP), especially suitable for AI workloads and commercialized as the Versal family. And it’s Versal that is going to receive a shot in the arm from Micron’s new memory.

Xccela is a brand of NOR flash – which is different from the NAND flash widely used in both enterprise and consumer storage products. Whereas NAND memory may only be written and read in blocks – like a hard drive – NOR memory allows the device to read and write individual bytes. From this point of view, NOR is like RAM, and programs stored in NOR flash can be executed directly, without needing to be copied into main memory first.
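The access-granularity difference described above can be sketched in a few lines. This is a purely illustrative model, not a device driver: the page size, data, and function names are assumptions chosen to show why fetching one byte from a block-oriented (NAND-style) device moves a whole page, while a byte-addressable (NOR-style) device can serve it directly – which is what makes execute-in-place practical.

```python
# Illustrative sketch (assumed values, not real flash commands): contrasts
# block-granular NAND-style reads with byte-addressable NOR-style reads.

PAGE_SIZE = 2048  # a typical NAND page size in bytes (assumption)

flash = bytes(range(256)) * 32  # 8 KB of example contents

def nand_read_byte(addr):
    """NAND-style: the whole page containing addr must be transferred."""
    page_start = (addr // PAGE_SIZE) * PAGE_SIZE
    page = flash[page_start:page_start + PAGE_SIZE]  # full-page transfer
    return page[addr - page_start], len(page)        # (byte, bytes moved)

def nor_read_byte(addr):
    """NOR-style: a single byte is addressable directly, like RAM."""
    return flash[addr], 1                            # (byte, bytes moved)

b_nand, moved_nand = nand_read_byte(5000)
b_nor, moved_nor = nor_read_byte(5000)
assert b_nand == b_nor  # same data either way...
print(moved_nand, moved_nor)  # ...but very different transfer sizes
```

Both calls return the same byte, but the NAND-style read moves 2,048 bytes to deliver it, while the NOR-style read moves one – the property that lets a processor run code straight out of NOR flash.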

According to Micron, Xccela will boost the boot and dynamic configuration performance and overall system responsiveness of the Versal platform by up to eight times, when compared to prior-generation FPGA platforms using older NOR flash.

The company added that Xccela delivers up to 400MB per second in double data rate mode while consuming 30 percent less effective energy per bit than traditional quad SPI NOR flash.

“Xilinx’s choice to support Xccela flash in its Versal ACAP is a testament to the growing importance of bandwidth for memory and storage used in artificial intelligence applications,” said Richard De Caro, director of NOR flash for Micron’s Embedded Business Unit.

“As autonomous driving vehicles incorporate higher levels of artificial intelligence inference capabilities into advanced driver-assistance systems (ADAS), Xccela flash enables Versal ACAP-based systems to power up and configure rapidly to meet ADAS application requirements.”
