by Mark Stadtmueller and Russ Loignon 


NASHVILLE – The value of data and the ways in which artificial intelligence can create or deliver enterprise wealth should not be controlled by the few.

Feudalism was a medieval political system in which a lord owned all the land while vassals and serfs farmed it. Today, in AI, it has been suggested that a few large players control the process and the system while smaller, more nimble companies compete for a slice of the pie. As a result, some have suggested that 2018 is the year in which we began to see the emergence of 'AI feudalism'.


While this is the most recent reference to potential concerns about AI feudalism, it is not the first. Search the internet for AI feudalism and the results are staggering. These many articles speak to concerns about the impact and control of proliferating data lakes. Take healthcare, for example. Is health data owned by the providers, payers, patients, or other stakeholders? In the case of the Internet of Things, is the data owned by the collector or the contributor? The ownership of data, and who benefits from it through AI, needs to be handled responsibly – and so far, the lack of progress on this issue is concerning.

Business results from AI can be impressive, but if most enterprises cannot overcome these challenges, then AI feudalism looms. The three key challenges contributing to this are data size and compute requirements; data governance and accountability; and data science and engineering skillsets.

Concern is rightfully growing that only a handful of large organizations will be able to address these challenges. Another reason these larger AI providers are perceived as the only solution may come down to their reputation, capabilities, or pure momentum. Yet many innovative, more nimble companies are capable of taking these challenges head-on as well. By investing in enterprise AI, these companies enable independence and freedom for businesses through distributed control of the technology.

Data size and compute requirements

AI has led to groundbreaking results in computer vision, natural language processing, speech recognition, and other specialized capabilities that allow us to learn from data. However, data volumes are growing as a result, and compute requirements are intensifying. Even with cloud computing, the resources necessary to support this growth are often too much for many businesses to harness completely.

To innovate in AI, businesses need to do more than use models that others have trained; they must also train their own models, unique to their infonomics. This, argues author Douglas B. Laney, enables companies to monetize, manage, and measure their data as a competitive asset. As a result, scaling compute resources efficiently is a key and necessary capability.

One way to address these challenges is to make the training task easier. At Lucd, we are investing in reservoir computing (RC) – a computational framework in which training is confined to the readout stage of the process, while the recurrent 'reservoir' itself remains fixed. It has been shown that accuracy can be maintained while speed increases and cost falls. Modern deep learning models and datasets are often very large, and training can struggle to converge to a solution. RC is fast and computationally simpler than current deep learning methods. By streamlining the training task, businesses gain greater capability to handle the data size and compute requirements.
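To make the idea concrete, here is a minimal sketch of one common flavor of reservoir computing, an echo state network. This is an illustration of the general technique, not Lucd's implementation; all sizes, scalings, and the toy sine-prediction task are assumptions chosen for brevity. Note that the only trained parameters are the linear readout weights, solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200

# Fixed random weights: neither matrix is ever trained. The reservoir is
# rescaled to spectral radius 0.9 so input history fades (echo state property).
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; return all states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(0, 60, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
washout = 100                      # discard initial transient states
X, y = X[washout:], y[washout:]

# The entire training step: ridge regression for the readout weights.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"readout RMSE: {rmse:.4f}")
```

Because training reduces to a single linear solve rather than iterative backpropagation through the recurrent weights, there is no convergence problem to manage, which is the source of RC's speed and simplicity advantage.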

Data governance and accountability

If data training is left up to the large tech and AI players, so too will innovation be. This raises key concerns for enterprises. How do you know how the models were trained? Bias in data and training can unwittingly distort results, with no clear accountability. And if models are not unique to the enterprise and its customers, how can a business responsibly learn from its data?

Feudal AI lords, on the other hand, can throw resources at the problem. Most businesses cannot, and they cannot afford to handle their unique data as if they were just learning to differentiate between cat and dog photos. For more businesses to differentiate with AI, they need to be able to handle data effectively and responsibly.

Our Unified Data Space is one example of an effective mechanism with which businesses can leverage both a persistent data store and a responsible data factory for training. Automated compliance features and an internal security audit framework reduce the cost of staying compliant. With a responsible data factory, businesses can learn from data effectively and grow in an efficient, secure, and compliant way.

Data science and engineering skillsets

Finally, AI feudalism is starting to occur because the data 'munging' required to prepare data for AI training is not easy, and too few people can accomplish the task. The first part of data preparation is understanding the data, which is accomplished by transforming and visualizing it. By making data easier to view and work with, a data scientist – or even a 'citizen' data scientist – can gain a greater understanding of what should be modeled.
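A small sketch of what that first munging pass typically looks like, using pandas. The dataset, column names, and business question here are hypothetical, chosen only to show the transform-then-inspect pattern: fix types, find missing values, and aggregate before any modeling begins.

```python
import pandas as pd

# Hypothetical raw extract; in practice this would come from a file or database.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west", "west"],
    "revenue": [120.0, None, 95.5, 110.2, 87.3],
    "date": ["2018-01-05", "2018-01-06", "2018-01-05", "2018-01-06", "2018-01-07"],
})

# Step 1: fix types so downstream tools treat the data correctly.
df["date"] = pd.to_datetime(df["date"])

# Step 2: find where data is missing before it silently skews a model.
missing = df.isna().sum()
print(missing)

# Step 3: a simple transformation that already reveals structure worth modeling.
summary = df.groupby("region")["revenue"].agg(["count", "mean"])
print(summary)
```

Each of these steps is trivial in isolation; the skill gap the article describes is in knowing which transformations and views to apply to messy, real-scale data, which is where better visualization tooling helps.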

To tackle this, we implemented a Unity interface that improves visual data analytics and analysis with an easy-to-use, immersive experience. Unity is a 3D visualization engine most commonly leveraged by mobile game developers. Through such visualizations, gamers intuitively learn how to play a game; through the same kind of experience, more people can intuitively learn the 'game' of data visualization and exploratory data analysis. Where the data visualization capabilities of web interfaces hit their limits against the big data requirements of AI, this rich gaming engine accomplishes better data exploration and preparation.

Overcoming the challenges of AI feudalism

In September, Gartner published key findings about data science and machine learning obstacles and requirements. It should come as no surprise that they identified three key needs: operationalization at scale, data management, and ease of data analysis. These are the exact challenges that are causing the emergence of AI feudalism. Overcoming these challenges is precisely what enterprise AI accomplishes – making it the true antithesis to AI feudalism.

Catch Lucd.ai and hundreds of other AI leaders at tomorrow’s AI Summit New York, at the Javits Center December 5-6. Find out more.


Mark Stadtmueller

Mark leads Product Strategy at Lucd, aligning capability, intellectual property, and platform investment with our enterprise's AI-powered growth opportunities.




Russ Loignon is SVP for Market Strategy at Lucd. His background includes 20+ years of progressively responsible sales, strategy, and business development. He is responsible for driving new and advanced technology solutions.