AI needs more memory, so Micron needs more AI
by Max Smolaks 25 October 2019
Micron, one of the world’s largest suppliers of flash storage and memory products, has entered the AI arena with an all-in-one development platform, imaginatively named the Micron Deep Learning Accelerator (DLA).
The collection of integrated hardware and software tools is based on technology Micron obtained through its acquisition of Fwdnxt (pronounced “forward next”), a small startup specializing in AI hardware and software, based in Lafayette, Indiana.
Fwdnxt’s core product is an inference engine that, the company claims, delivers the highest utilization across a range of machine learning and deep neural network processors – in other words, it enables organizations to squeeze the maximum value out of their hardware investment.
Micron, of course, is all about hardware, and it hopes that the emergence of AI workloads will help it shift a lot more of it. The company’s CEO, Sanjay Mehrotra, recently suggested that servers for AI will require six times more memory than servers for traditional workloads, and twice as much flash storage.
“Fwdnxt is an architecture designed to create fast time-to-market edge AI solutions through an extremely easy-to-use software framework with broad modeling support and flexibility,” said Sumit Sadana, EVP and chief business officer at Micron. “Fwdnxt’s generations of machine learning inference engine development and neural network algorithms, combined with Micron’s deep memory expertise, unlock new power and performance capabilities to enable innovation for the most complex and demanding edge applications.”
Oregon Health and Science University was among the first customers to get its hands on the new DLA gear, and is using it to process and analyze 3D electron microscopy images in the search for new cancer treatments.
Another, unnamed customer is using DLA-based convolutional neural networks (CNNs) to classify the results of high-energy particle collisions and detect rare particle interactions.