AI Business is part of the Informa Tech Division of Informa PLC
Without a doubt, AI is already embedded in our daily lives. Streaming services like Netflix learn our viewing behaviors and patterns to deliver the shows we like best. Digital assistants like Amazon’s Alexa recognize speech patterns and respond to our questions or commands. Millions of people, meanwhile, use AI-powered apps like Lyft or Google Maps to hail the closest ride or get from A to B. Each of these everyday consumer applications relies on machine learning (ML).
However, it’s not just consumer technology giants and startups that are using ML technology to power AI-enabled applications. Enterprises in virtually every industry are now exploring ML for a wide range of AI use cases, from fraud detection and medical diagnosis to stock market prediction and autonomous driving, to name just a few.
While enterprise adoption is still in its early stages, a recent Deloitte study predicted that the number of ML implementations and pilot projects would double in 2018 compared with the previous year, and double again by 2020. These use cases generally fall into three categories, based on their outcomes: 1) maximizing operational efficiency, 2) improving the customer experience, and 3) delivering innovation through a new business model or discovery.
Predictive maintenance (PdM) techniques help determine the condition of in-service equipment in order to predict when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Here at BlueData, for example, we’ve worked with a manufacturing customer that uses Apache Spark MLlib for its ML algorithms, improving both the accuracy of failure predictions and the corrective actions needed to avoid those failures in the future.
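The core idea behind PdM can be illustrated with a deliberately simple sketch: learn a threshold from sensor readings taken during known-healthy operation, then flag the equipment for maintenance once a rolling average drifts above that threshold. To be clear, this is not the customer’s Spark MLlib pipeline described above; the sensor values, window size, and threshold rule here are all illustrative assumptions.

```python
# Illustrative predictive-maintenance sketch (an assumption for this article,
# not the Spark MLlib pipeline mentioned above): flag equipment when a rolling
# average of a vibration sensor exceeds a threshold learned from healthy data.
from statistics import mean, stdev

def maintenance_alerts(readings, window=5, sigma=3.0, baseline=20):
    """Return indices where the rolling mean exceeds healthy mean + sigma * std.

    `readings` is a list of sensor values; the first `baseline` readings are
    assumed to represent healthy operation (a simplifying assumption).
    """
    healthy = readings[:baseline]
    threshold = mean(healthy) + sigma * stdev(healthy)
    alerts = []
    for i in range(window, len(readings) + 1):
        if mean(readings[i - window:i]) > threshold:
            alerts.append(i - 1)  # index of the latest reading in the window
    return alerts

# Simulated vibration data: stable operation, then a gradual bearing-wear drift.
data = [10.0 + 0.1 * (i % 3) for i in range(20)] + [10.5 + 0.8 * i for i in range(10)]
print(maintenance_alerts(data))
```

A production pipeline would replace the hand-set threshold with a trained model (e.g. a classifier or survival model over many sensors), but the shape of the problem is the same: learn what “healthy” looks like, then predict failures early enough to act.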
To retain their loyal user bases, many enterprises are deploying deep learning (DL) and natural language processing (NLP) techniques to entice customers with new offerings or better service. For example, one of our customers in financial services is applying DL algorithms with TensorFlow to recommend the car loan program that best meets each customer’s specific needs, reducing the complexity of the car-buying experience. Other customers in the healthcare industry are analyzing sensor data with ML to deliver personalized medicine and improve disease diagnosis through genomics research.
One of the reasons that companies like Blockbuster, DEC, and Toys’R’Us went out of business is their inability to build new revenue models that could sustain growth in the face of disruption. We see enterprises using AI and ML technologies to drive business innovation and move in the opposite direction. For example, last year Allstate announced the creation of a standalone unit (named Arity) for its telematics business.
With ML algorithms, Arity’s goal is to expand Allstate’s revenue stream beyond insurance by offering analytics products and services to third parties. Another example is one of our customers in the life sciences industry, which is using AI and ML to dramatically accelerate the drug discovery process, potentially bringing life-saving new medicines to market faster than ever before.
In working with these and other enterprise customers to deploy machine learning and deep learning pipelines for their AI use cases, we’ve seen several patterns emerge.
Here are some of the most common challenges we’ve seen with enterprises that are looking to build, deploy, and operationalize their ML/DL pipelines:
One of the most popular ML/DL tools is TensorFlow, but many other open source and commercial tools may be required depending on the use case. Data scientists and developers want to work with their preferred ML/DL tools; they need the flexibility to prototype rapidly and iteratively in order to compare different techniques, and they often need access to real-time data. In most large organizations, they also need to comply with enterprise security, network, storage, user authentication, and access policies.
However, most enterprises lack the skills to deploy and configure these tools in a multi-node distributed environment. And it can be challenging to integrate these environments with existing security policies, data infrastructure, and enterprise systems, whether on-premises or in the public cloud, using CPUs and/or GPUs, with a data lake or with cloud storage. These organizations quickly realize:
The exploratory and iterative nature of ML/DL means that your data scientists can’t afford to wait days or weeks for access to the tools they need. But it may take weeks or even months for your team to ramp up and get started.
For example, you will likely need to hire or train team members to gain expertise in technologies like TensorFlow. You’ll need to build pipeline integrations between these different frameworks and tools, and test them on the infrastructure you plan to use. And as you begin to add more use cases and users, you’ll need to scale the infrastructure and integrate more tools into the stack.
Today, thanks to new container-based software solutions, enterprises can get up and running quickly with distributed ML/DL applications in multi-node containerized environments, either on-premises or in the public cloud. Fully configured environments can be provisioned in minutes, with self-service and automation. Data scientists and developers can rapidly build prototypes, experiment, and iterate with their preferred ML/DL tools for faster time-to-value. And their IT teams can ensure enterprise-grade security, data protection, and performance, with elasticity, flexibility, and scalability in a multi-tenant architecture.
As Head of Worldwide Services and Customer Success at BlueData, Nick has global responsibility for all aspects of customer success, including consulting, professional services, education, and support. Nick and his team work with enterprise organizations across multiple industries around the world, helping to drive digital transformation and business innovation with their AI/ML and Big Data initiatives.