Deploying Machine Learning Pipelines for Enterprise Use Cases

November 9, 2018

by Nick Chang

Without a doubt, AI is already embedded in our daily lives. Streaming services like Netflix learn our viewing habits to recommend the shows we like best. Digital assistants like Amazon’s Alexa recognize speech and respond to our questions and commands. Millions of people, meanwhile, use AI-powered apps like Lyft or Google Maps to hail the nearest ride or get from A to B. Each of these everyday consumer applications relies on machine learning (ML).

However, it’s not just consumer technology giants and startups that are using ML to power AI-enabled applications. Enterprises in virtually every industry are now exploring ML for a wide range of AI use cases, from fraud detection to medical diagnosis, stock market prediction, and autonomous driving, to name but a few.

While enterprise adoption is still at a relatively early stage, a recent Deloitte study predicted that the number of ML implementations and pilot projects would double in 2018 over the previous year, and double again by 2020. These use cases usually fall into three categories based on their outcomes: 1) maximizing operational efficiency, 2) improving the customer experience, and 3) delivering innovation with a new business model or discovery.

1. Maximizing Operational Efficiency

Predictive maintenance (PdM) techniques help determine the condition of in-service equipment in order to predict when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Here at BlueData, for example, one of our manufacturing customers uses Apache Spark MLlib for its ML algorithms, improving the accuracy of failure predictions as well as the corrective actions needed to avoid those failures in the future.
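
To make that concrete, here is a minimal sketch of what such a failure-prediction model can look like in Spark MLlib. The input path, the feature columns (temperature, vibration, pressure), and the 0/1 label column (failed) are hypothetical stand-ins, not details from the customer’s actual pipeline:

```python
# A minimal predictive-maintenance sketch with Spark MLlib.
# Column names and the input path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("predictive-maintenance").getOrCreate()

# Historical sensor readings, labeled 0/1 for whether the equipment later failed
df = spark.read.parquet("hdfs:///data/sensor_history.parquet")

assembler = VectorAssembler(
    inputCols=["temperature", "vibration", "pressure"],
    outputCol="features")
rf = RandomForestClassifier(labelCol="failed", featuresCol="features")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, rf]).fit(train)

# Scored predictions can then drive maintenance scheduling downstream
predictions = model.transform(test)
```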

2. Improving the Customer Experience

To retain their loyal user bases, many enterprises are deploying deep learning (DL) and natural language processing (NLP) techniques to entice customers with new offerings or better service. For example, one of our customers in financial services is applying DL algorithms with TensorFlow to determine the car loan program that best fits each customer’s specific needs, reducing the complexity of the car-buying experience. Other customers in the healthcare industry are analyzing sensor data with ML to deliver personalized medicine and improve disease diagnosis through genomics research.
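
As an illustration only (the article does not describe the customer’s actual model), a loan-program recommender can be framed as a simple classifier in TensorFlow: customer features in, a score per loan program out. The feature count, program count, and network shape below are all assumptions:

```python
# A hypothetical loan-program classifier sketch in TensorFlow (Keras API).
import tensorflow as tf

NUM_FEATURES = 8   # e.g. income, credit score, down payment, loan term (assumed)
NUM_PROGRAMS = 5   # assumed number of car loan programs

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_PROGRAMS, activation="softmax"),  # one score per program
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: customer feature vectors; y_train: index of the chosen program
# model.fit(x_train, y_train, epochs=10, validation_split=0.2)
```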

3. Delivering Business Innovation

One of the reasons that companies like Blockbuster, DEC, and Toys’R’Us went out of business is their inability to build new revenue models that could sustain growth in the face of disruption. We see enterprises moving in the other direction, using AI and ML technologies to drive business innovation. For example, last year Allstate announced the creation of a standalone unit (named Arity) for a telematics business.

With ML algorithms, their goal is to expand their revenue stream beyond insurance by offering analytics products and services to third parties. Another example is one of our customers in the life sciences industry using AI and ML to dramatically accelerate the drug discovery process – bringing potentially life-saving new medicines to market much faster than ever before.

Challenges in Building Distributed ML Pipelines

In working with these and other enterprise customers to deploy machine learning and deep learning pipelines for their AI use cases, we’ve seen several patterns emerge.

Here are some of the most common challenges we’ve seen with enterprises that are looking to build, deploy, and operationalize their ML / DL pipelines:

  • The analytics tools they’ve traditionally used were built for structured data in databases. The AI use cases they need to tackle with ML / DL tools require a large, continuous flow of mostly unstructured data.

  • Their data scientists and developers may have designed their initial ML / DL algorithms to run in a single-node environment (e.g. a laptop, virtual machine, or cloud instance), but they need to parallelize execution across a multi-node distributed environment (see the sketch after this list).

  • They can’t meet their AI use case requirements with the data processing capabilities and algorithms of any single ML / DL tool. They need data preparation techniques and models from multiple tools, both open source and commercial.

  • The data access patterns and modeling techniques required for AI use cases with ML / DL are unfamiliar to most data scientists and developers, and the learning curve is steep.

  • Data science teams are increasingly working in collaborative environments. It’s truly a team sport, and the workflow for building distributed ML / DL pipelines spans multiple domain experts.

  • For many ML / DL deployments, it’s common practice to use hardware acceleration such as GPUs to improve processing capabilities. But GPUs are expensive resources, and they add to the complexity of the overall stack.
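
To illustrate the single-node-to-distributed shift mentioned above, here is one way it can look in TensorFlow 2.x with the tf.distribute API; this is a generic sketch, not the approach any particular customer used. Each worker node runs the same script, with a TF_CONFIG environment variable describing its role in the cluster:

```python
# Sketch: the same Keras model code, moved into a distribution strategy scope
# so training is replicated and synchronized across worker nodes.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Model definition is unchanged from the single-node (laptop) version;
    # only its construction moves inside the strategy scope.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(dataset, epochs=5)  # input data is sharded across the workers
```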

One of the most popular ML / DL tools is TensorFlow, but many other open source and commercial tools may be required depending on the use case. Data scientists and developers want to work with their preferred ML / DL tools; they need the flexibility to prototype rapidly and iteratively in order to compare different techniques; and they often need access to real-time data. In most large organizations, they must also comply with enterprise security, network, storage, user authentication, and access policies.

Getting Started: Beyond Existing Enterprise Systems

However, most enterprises lack the skills to deploy and configure these tools in a multi-node distributed environment. And it can be challenging to integrate these environments with their existing security policies, data infrastructure, and enterprise systems – whether on-premises, in the public cloud, using CPUs and/or GPUs, with a data lake or with cloud storage. These organizations quickly realize:

  • The technologies and frameworks for ML / DL are different from existing enterprise systems and traditional data processing frameworks.

  • There are multiple components (both software and infrastructure) and it’s a complex stack, requiring version compatibility and integration across these various components.

  • Assembling all of the required systems and software is a time-consuming endeavor, and few organizations have the experience to deploy and wire together all of these components.

  • Bottom line, it’s difficult to build and deploy multi-node distributed environments for ML / DL pipelines in the enterprise – even for sandbox and dev/test use cases.

The exploratory and iterative nature of ML / DL means that your data scientists can’t afford to wait days or weeks for access to the tools they need. But it may take weeks or even months for your team to ramp up and get started.

For example, you will likely need to hire or train team members to gain expertise in technologies like TensorFlow. You’ll need to build pipeline integrations between these different frameworks and tools, and test them on the infrastructure you plan to use. And as you begin to add more use cases and users, you’ll need to scale the infrastructure and integrate more tools into the stack.

Today, thanks to new container-based software solutions, enterprises can get up and running quickly with distributed ML / DL applications in multi-node containerized environments – either on-premises or in the public cloud. Fully configured environments can be provisioned in minutes, with self-service and automation. Data scientists and developers can rapidly build prototypes, experiment, and iterate with their preferred ML / DL tools for faster time-to-value. And their IT teams can ensure enterprise-grade security, data protection, and performance – with elasticity, flexibility, and scalability in a multi-tenant architecture.
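
As a rough sketch of what that self-service provisioning can feel like, the snippet below starts a fully configured, version-pinned ML environment from a prebuilt container image using the Docker SDK for Python. The image tag, command, and port are illustrative, and a real multi-node, multi-tenant platform involves far more than a single container:

```python
# Hypothetical sketch: provisioning a pre-integrated ML environment in seconds
# from a pinned container image, instead of hand-assembling the stack.
import docker

client = docker.from_env()

container = client.containers.run(
    "tensorflow/tensorflow:1.12.0-gpu-py3",   # pinned framework stack (illustrative tag)
    command="jupyter notebook --ip=0.0.0.0 --allow-root",
    ports={"8888/tcp": 8888},                 # expose the notebook UI
    detach=True)

print(container.short_id, container.status)
```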

To learn more and see how it works, visit the BlueData booth at The AI Summit New York, December 5-6.

As Head of Worldwide Services and Customer Success at BlueData, Nick has global responsibility for all aspects of customer success – including consulting, professional services, education, and support. Nick and his team work with enterprise organizations across multiple industries around the world, helping to drive digital transformation and business innovation with their AI / ML and Big Data initiatives.
