Intel Furthers Machine Learning Capabilities

Robert Woolliams

July 1, 2016

Intel provided a wealth of machine learning announcements following the Intel® Xeon Phi™ processor (formerly known as Knights Landing) announcement at ISC’16.

Building upon the various technologies in Intel® Scalable System Framework (Intel® SSF), the machine learning community can expect up to 38% better scaling over GPU-accelerated machine learning* and an up to 50x speedup when using 128 Intel Xeon Phi nodes compared to a single Intel Xeon Phi node*. The company also announced an up to 30x improvement in inference performance* (also known as scoring or prediction) on the Intel® Xeon E5 product family due to an optimized Intel Caffe plus Intel® Math Kernel Library (Intel® MKL) package. This is particularly important as Intel notes the Intel Xeon E5 processor family is the most widely deployed processor for machine learning inference in the world.*

Reflecting Intel’s strong commitment to open source, the CPU-optimized MKL-DNN library for machine learning has been open sourced. Rounding out a cornucopia of machine learning technology announcements, the company has created a single portal for all of its machine learning efforts. Through this portal, Intel hopes to train 100,000 developers in the benefits of its machine learning technology, and it is backing this up by giving top research academics early access to that technology.

Machine and deep learning

Interest in machine learning is accelerating as commercial and scientific organizations realize the tremendous impact it can have across markets ranging from Internet search to social media, real-time robotics, self-driving vehicles, drones and more.

Machine learning, and the more specialized deep learning approach, encompasses floating-point-, network- and data-intensive ‘training’ plus real-time, low-power inference (or ‘prediction’) operations. Training complex multi-layer neural networks is referred to as deep learning because these architectures interpose many neural processing layers between the input data and the predicted output results – hence the word “deep”. While the training procedure is computationally expensive, evaluating the resulting trained neural network is not.
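To make the training-versus-inference asymmetry concrete, here is a minimal, purely illustrative sketch (not Intel code, and all layer sizes are made up for the example): inference on a trained network is just one cheap forward pass through fixed weights, whereas training repeats passes like this many times, plus backpropagation, to adjust those weights.

```python
import numpy as np

# Hypothetical tiny 2-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
# In a trained network these weights would come from the training procedure;
# here they are random placeholders for illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)

def predict(x):
    """One inference (forward) pass: a handful of matrix products."""
    h = np.maximum(0, x @ W1 + b1)       # hidden layer with ReLU activation
    scores = h @ W2 + b2                 # output layer scores
    e = np.exp(scores - scores.max())    # numerically stable softmax
    return e / e.sum()                   # class probabilities

probs = predict(rng.standard_normal(4))
```

Each extra hidden layer in a “deep” network simply adds another matrix product to this forward pass, which is why inference stays fast enough for low-power, real-time devices even when training the same network took many teraflop-hours.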

In a nutshell, trained networks can be extremely valuable as they have the ability to very quickly perform complex, real-time and real-world pattern recognition tasks on platforms ranging from low-power devices to the most widely deployed inference devices in the world, Intel Xeon processors. In addition, the new Intel Xeon Phi processors make ideal multi-teraflop/s (TF/s) training engines.

The role of Intel Scalable System Framework

Intel introduced additional details for Intel SSF to help customers purchase the right mix of validated technologies to meet their needs. These include Intel HPC Orchestrator software, a family of modular, Intel-licensed and supported premium products based on the publicly available OpenHPC software stack, which further reduces the burden of HPC setup and maintenance on labs and OEMs by providing support across the system software stack for the HPC ecosystem.

Intel HPC Orchestrator software and the Intel Xeon Phi processor product family are just part of Intel SSF, which will bring machine learning and HPC computing into the exascale era. Intel’s vision is to help create systems that converge HPC, Big Data, machine learning, and visualization workloads within a common framework that can run in the data center – from smaller workgroup clusters to the world’s largest supercomputers – or in the cloud. Intel SSF also incorporates a host of innovative new technologies, including Intel® Omni-Path Architecture (Intel® OPA), Intel® Optane™ SSDs built on 3D XPoint™ technology, and new Intel® Silicon Photonics – plus Intel’s existing and upcoming compute and storage products, including Intel Xeon Phi processors and Intel® Enterprise Edition for Lustre* software.

Figure 1: Intel® Scalable System Framework
