by Jelani Harper
The foundation of any data-driven initiative should always be a well-thought-out, holistic data strategy. This particularly applies to the incorporation of artificial intelligence into the workplace, including the many manifestations of machine learning.
Quite simply, there are so many different areas in which machine learning can impact organizational processes that it’s imperative to map out how it reinforces business objectives.
Mapping those objectives is a critical component of data strategy, which EWSolutions President and CEO David Marco defines as the overall guiding principles for using data. This, he says, “includes our business cases, our measurements, our success [at] an executive level [for] short term and long term objectives.”
According to ASG Senior Vice President of Product Management Marcus MacNeill, the everyday applications of a data strategy have to encompass both ‘defensive’ and ‘offensive’ measures.
When it has been implemented in accordance with the principles Marco mentions, machine learning can optimize deployments for each of these aspects of data strategy. Otherwise, machine learning and AI attempts may result in low ROI, data sprawl, and substantial data governance issues.
A Good Offense—Predictive Analytics
Perhaps the most lucrative means of embedding machine learning into data strategy is what MacNeill calls an offensive data strategy.
“This is really about maximizing those business goals,” MacNeill argues. “It’s about being able to move quickly, to be agile, to open the aperture, to put the information in the hands of the people who truly need it to be able to do their jobs, and do it well.”
The predictive prowess of machine learning facilitates these advantages in numerous ways, particularly in terms of business intelligence. This promises the enterprise the capacity to transition from a reactive, historic reporting stance towards a ‘proactive’ posture using AI and machine learning. “It’s not just because it sounds cool, but it’s about solving really interesting problems and applying that technology against a set of use cases,” MacNeill says. “In many cases, it’s beyond the realm of humans to do it.”
The Best Defense—Risky Business
The defensive side of data strategy is largely about reducing enterprise risk in all of its forms.
“It’s about running the business, doing the things we traditionally thought of in terms of data management: securing the data, understanding privacy, understanding how it aligns to regulations and compliance,” MacNeill specifies.
When properly implemented, machine learning can accelerate proficiency in each of these areas. It’s a vital aspect of numerous security analytics solutions and use cases in which data are aggregated and analyzed at scale for anomaly detection.
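As a loose illustration of the kind of at-scale anomaly detection such security analytics rely on, the sketch below flags values that deviate sharply from a baseline using a simple z-score test. The threshold and sample data are illustrative assumptions, not any vendor's method:

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Example: login counts per hour, with one suspicious spike.
logins = [12, 15, 11, 14, 13, 12, 16, 500]
print(detect_anomalies(logins))  # → [500]
```

Production security analytics use far richer models, but the principle is the same: establish a statistical baseline, then surface the observations that fall outside it.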
Machine learning has also been added to numerous data cataloging options, which is invaluable for cataloging documents—and, in some cases, specific information within documents—for regulatory compliance or personally identifiable information. MacNeill commented that there are burgeoning machine learning “capabilities today, particularly in the catalog around data recommendations based on users and other attributes.”
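A minimal, hypothetical sketch of the document-scanning side of such cataloging might look like the following; the patterns and the `scan_for_pii` helper are illustrative only, not ASG's implementation:

```python
import re

# Illustrative patterns only; production catalogs use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return a dict mapping each PII type to the matches found in `text`."""
    return {label: pattern.findall(text)
            for label, pattern in PII_PATTERNS.items()
            if pattern.findall(text)}

doc = "Contact jane.doe@example.com; SSN on file: 123-45-6789."
print(scan_for_pii(doc))
```

Tagging documents this way is what lets a catalog answer compliance questions such as "which documents contain personally identifiable information?"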
Data governance is a crucial means of organizing and implementing each of these manifestations of a defensive data strategy. Machine learning can provide certain data provenance benefits that help to fortify policy management, regulatory compliance, and governance.
“One of the things we see as offering a lot of value in particular is around what we do with lineage models,” MacNeill noted. “Lineage models are very useful. We know how to interact with them, how to traverse them to understand what’s going on. But there’s also a lot of inherent value in that model in terms of understanding anomalies, or how the lineage changes over time that algorithmically would fit in well with machine learning.”
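One simple way to picture MacNeill's point about lineage changing over time is to represent lineage as a set of edges and diff two snapshots; the table names and graph shape below are illustrative assumptions:

```python
# Lineage as a set of (source, target) edges, snapshotted at two points in time.
lineage_v1 = {("raw_orders", "clean_orders"),
              ("clean_orders", "sales_report")}
lineage_v2 = {("raw_orders", "clean_orders"),
              ("clean_orders", "sales_report"),
              ("external_feed", "sales_report")}

def lineage_diff(old, new):
    """Return the edges added and removed between two lineage snapshots."""
    return {"added": new - old, "removed": old - new}

diff = lineage_diff(lineage_v1, lineage_v2)
print(diff)  # a new, possibly unexpected feed into sales_report
```

An unexpected new edge into a governed report is exactly the kind of lineage anomaly a machine learning model could learn to surface automatically.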
If properly planned and utilized, machine learning is one of the most effective means of implementing a well-defined data strategy. It’s useful for increasing revenues with its predictive capabilities, as well as for decreasing risk with its advanced pattern recognition.
However, it’s critical for organizations to view this form of cognitive computing analytics in terms of their wider data strategies to best leverage this technology to fulfill organizational objectives—which is the whole point of data strategy.
“World class data management starts with an enterprise data strategy,” Marco posited. “Data management is enterprise wide, it’s not departmental.” So are data strategy and the sapient use of machine learning.
Unassailable Hybrid and Multi-Cloud Security for AI in the Workplace
As the big data ecosystem transitions into a service oriented economy fueled by the pervasiveness of the cloud’s Service Oriented Architecture, one nagging concern remains.
Indeed, the long term sustainability of the cloud’s value proposition, which is instrumental for preparing data for machine learning or accessing artificial intelligence services involving natural language processing, augmented reality, image recognition, and more, likely depends on solving this issue.
Many organizations find themselves still scrambling to resolve the traditional cloud computing vulnerability—cyber security—which has redoubled its importance due to the promises of hybrid and multi-cloud use cases for facilitating the foregoing facets of AI in the workplace.
But as the rash of ensuing cyber security breaches indicates, traditional perimeter security based methods simply don’t work in the cloud era.
“For much of IT, the perimeter was basically on premise,” DH2i CEO Don Boxley explained. “Today that’s changed. We’ve got this new IT reality: hybrid cloud and multi-cloud.”
Consequently, there’s a substantial need for alternative perimeter security methods so organizations can safely exploit the advantages of the cloud’s distributed computing capabilities for AI. Recent developments in this space have empowered common workplace AI scenarios such as partner networks and remote users, making cloud AI in the workplace both practical and secure.
Software Defined Perimeters for Partnerships
Securely enabling partner access to computing networks is a consistent challenge for cloud deployments of workplace AI. Most of these deployments involve expanding IT networks via Virtual Private Networks, which simply makes organizations and their strategic partners bigger targets for cyber security threats and lateral movement attacks.
Similarly, attempting to manage access control lists and firewall policies to include partners is extremely difficult, as changing business requirements and technology implementations require nearly constant updates. “Broken authentication and access controls are the most common ways for attackers to assume other partners’ identities and access unauthorized functionality or data,” Boxley observed.
Each of these issues is readily mitigated by contemporary software defined perimeters, which discreetly connect applications—as opposed to organizations’ entire networks and those of their partners—for secure, almost undetectable data transmissions.
These dynamic perimeters link only the specific applications or servers relevant to, for example, Machine Learning-as-a-Service analytics; close the ports for invisible transmissions; and reduce (rather than broaden) the overall network attack surface. Boxley mentioned the cardinal benefit of this approach: “Modern remote-access solutions give network admins the ability to segment by application, not by network. This limits remote users to fine-grained access to specific services, eliminating lateral network attacks.”
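The “segment by application, not by network” idea can be sketched as an access table keyed by (user, application) rather than by network address; the user and application names here are hypothetical:

```python
# Per-application grants: a user reaches only the services they are entitled
# to, not the whole network segment those services happen to live on.
GRANTS = {
    ("alice", "ml_scoring_api"),
    ("bob", "data_catalog"),
}

def can_access(user, application):
    """Allow a connection only if this exact (user, application) pair is granted."""
    return (user, application) in GRANTS

print(can_access("alice", "ml_scoring_api"))  # True
print(can_access("alice", "data_catalog"))    # False: no lateral movement
```

Because denial is the default, a compromised account cannot pivot to neighboring services the way it could inside a flat VPN-extended network.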
Remote Users and Microtunnels
The democratization of cloud AI in the workplace naturally extends to remote users within organizations—as do the compartmentalized microtunnels which connect applications for decentralized access to cloud AI services.
This approach enables users to work from anywhere and still get enterprise-grade security as though they were at the office. The extreme portability of these perimeters suits the diversity of clouds (hybrid, public, private, etc.) users may need: they can access a cloud data warehouse in Amazon yet still run AI analytics in Google Cloud, since these mechanisms are “designed from the ground up to be able to create secure tunnels from one host to another host anywhere,” Boxley commented. “By definition they’re designed to scale for a hybrid, multi-cloud environment.”
Furthermore, the data in the tunnels are encrypted and protected with Public Key Authentication. The tunnels themselves are connected via a cloud matchmaker service that randomly selects a port to connect the gateways between applications when access is requested.
Randomly generating these ports, closing them for hidden transmissions, and transporting the data between gateways via User Datagram Protocol (as opposed to the more widely used Transmission Control Protocol) means that even other users on the network won’t know applications or servers are remotely connected.
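As a loose, local sketch of the random-port idea—not DH2i's implementation, and without the encryption and gateway layers—two UDP sockets on loopback can be connected over a randomly chosen ephemeral port:

```python
import random
import socket

# Stand-in for the "matchmaker" step: pick a random ephemeral port,
# retrying if the port happens to be in use.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    port = random.randint(49152, 65535)
    try:
        receiver.bind(("127.0.0.1", port))
        break
    except OSError:
        continue
receiver.settimeout(5)

# The "sender" side transmits a datagram to the negotiated port.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"payload", ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
print(data)  # → b'payload'
sender.close()
receiver.close()
```

Because UDP is connectionless, there is no long-lived open connection to observe once the transfer completes—one reason, per the article, these tunnels are hard for other users on the network to detect.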
Secure AI Anywhere
According to 451 Research Chief Analyst Eric Hanselman, “What’s happening is that work applications have been scattering themselves to the four winds for a while.”
This sentiment is particularly reflective of mission critical applications of AI, many of which are accessible as services through the cloud. Although traditional perimeter security options are largely regarded as inadequate for such heterogeneous use cases, emergent software defined security options have several means of providing the dynamic positioning, flexibility, and fortifications to protect this data throughout the workplace and today’s distributed settings.
Jelani Harper is an editorial consultant servicing the information technology market, specializing in data-driven applications focused on semantic technologies, data governance and analytics.