November 11, 2020
A new class of SSDs equipped with an FPGA and memory to chew through common data processing tasks without involving the CPU
Samsung and Xilinx have started shipping their first SmartSSD that features an FPGA chip alongside 4TB of flash memory.
Developed in collaboration between the two silicon vendors, the SmartSSD is one of the first commercial products to support computational storage, an emerging IT architecture in which data is processed at the storage device level to reduce the amount of work that has to be done by the CPU and other server components.
Acceleration of AI inferencing is being positioned as one of the promising use cases for the technology.
Xilinx has also released Xilinx Storage Services – a collection of libraries for Linux storage software that enables multiple use cases for the SmartSSD without the need for complex FPGA programming. The first libraries deal with compression, decompression and encryption, with more expected in the near future.
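To illustrate the kind of operation these libraries offload – this is a conceptual sketch using Python's standard zlib module, not the Xilinx API – a compression round trip looks like this on the host CPU; on a SmartSSD, the same transform would run on the drive's FPGA instead:

```python
import zlib

def compress_block(data: bytes, level: int = 6) -> bytes:
    # CPU-side compression; a computational storage drive would
    # perform this transform on the device, freeing the host CPU.
    return zlib.compress(data, level)

def decompress_block(blob: bytes) -> bytes:
    return zlib.decompress(blob)

# Highly repetitive data, typical of logs and telemetry, compresses well.
payload = b"telemetry,2020-11-11,ok\n" * 1000
blob = compress_block(payload)
assert decompress_block(blob) == payload
print(f"compression ratio: {len(payload) / len(blob):.1f}x")
```

The ratio achieved on such data is what lets a compression layer multiply effective drive capacity.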
The announcement was made at the Flash Memory Summit Virtual Conference and Expo, taking place November 10-12.
The SmartSSD is the culmination of years of collaboration between Samsung, one of the world’s largest manufacturers of flash memory, and Xilinx, one of the world’s largest vendors of field-programmable gate arrays (FPGAs).
What makes FPGAs unique is their adaptability: these chips can be reconfigured on the fly to serve as radically different hardware accelerators, unlike CPUs, GPUs and application-specific integrated circuits (ASICs), whose functionality is literally baked in at the point of manufacture.
Because of their reconfigurable nature, FPGAs make particular sense in these early days of computational storage, when standards and applications are still being defined.
Its creators say the SmartSSD can be used to solve a broad range of data center problems in database management, video processing, artificial intelligence, complex search, and virtualization.
“From transparent compression to next-gen AI inferencing acceleration, the range of functions performed on the SmartSSD CSD is limited only by a developer’s imagination,” Pej Roshan, vice president of marketing for Data Center Group at Xilinx, said.
Thanks to the recent launch of a free, unified developer suite called Vitis, interested parties can write software for Xilinx FPGAs in familiar high-level languages such as C, C++ and OpenCL. Vitis also includes runtimes, libraries, APIs, drivers, examples of applications, tutorials and documentation – and a dedicated portal for AI resources.
“We’ve been deploying FPGAs as accelerators in storage systems for decades. What’s new about computational storage is this open programming model – from software land, you can add hardware accelerators directly into the data subsystem,” Jamon Bowen, planning and storage segment director for Data Center Group at Xilinx, told AI Business.
Each SmartSSD looks like a standard 2.5-inch SSD but includes a Kintex UltraScale+ FPGA chip and 4GB of DDR memory. The Kintex features more than a million system logic cells – Xilinx parlance for individual elements that can implement logic functions.
The device operates within the same power envelope as a conventional SSD and can serve as a simple drop-in replacement. Another interesting point about the architecture is that the device's memory is exposed and addressable by the host system, something that "enables that host software command and control," Bowen said.
Some of the first products based on SmartSSDs include Apache Spark acceleration from Bigstream that promises up to 10x faster job performance without changing any code, a high-throughput parallel search-in-storage appliance from Lewis Rhodes Labs that can search petabytes in minutes, and a video transcoding system from CTAccel that promises to reduce transcoding costs by up to 60 percent. There is also a compression system from Eideticom that promises to multiply drive capacity by 10 while using 70 percent less CPU resources than conventional compression algorithms.