The need to know: Trust anchors

Max Smolaks

August 23, 2019

6 Min Read

The cost of getting AI wrong extends beyond the financials—lost revenue, fines from compliance failures—to reputational, brand, and ethical concerns

By Martin Sokalski, 20 August 2019

Key business decisions at scale have a determining effect on success; as an example, should we approve a credit card for a customer?

Among the decisions for each customer: the annual percentage rate, the spending limit, and a long list of other factors. Machine learning models are typically making these decisions for millions of customers.

In a very real sense, given the scale, the business is in the hands of a handful of smart data scientists—and the machines they build and train—using ground truth created from historical loan data.

Autonomous algorithms: then vs. now

Most algorithms today are relatively simple and deterministic: they produce the same output from a predetermined set of states and a fixed number of rules. The approaches for evaluating them for validity and integrity are largely established and adopted. In fact, in our estimation, over 80 percent of the leading practices needed to maintain their accuracy and effectiveness are known.

Think of expert systems in manufacturing. Think of actuarial science that uses deterministic rules or decision tables in insurance. Think of robotic process automation in financial services.

It isn’t that hard to determine whether the conclusions they reach are acceptable—and sound and scalable supervision is relatively easy.

These rules can get very complex, especially when the number of attributes (also known as features, or variables) in the data or the number of records increases.
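
As a hedged sketch of what such a deterministic rule set looks like in practice, the snippet below encodes a toy credit-card approval check in Python; the thresholds and field names are hypothetical, not taken from this article.

```python
# A minimal, deterministic credit rule: the same inputs always produce the
# same decision, and the logic can be read and audited line by line.
# Thresholds and field names are hypothetical, for illustration only.

def approve_credit_card(annual_income: float, credit_score: int,
                        existing_debt: float) -> bool:
    """Return True only if the application passes every fixed rule."""
    if credit_score < 650:
        return False
    if existing_debt > 0.4 * annual_income:
        return False
    return annual_income >= 20_000

# Identical input, identical output: easy to test and to supervise at scale.
print(approve_credit_card(annual_income=55_000, credit_score=710, existing_debt=8_000))
```

Because every path through the rules is explicit, validating a system like this largely amounts to enumerating its cases.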

Machine learning and deep learning—and other types of AI—are creatures of a different kind. They are trained to learn from data (commonly referred to as ground truth) instead of being explicitly programmed, which means they can “understand-learn-uncover” the nuances and the patterns in the data, they can handle a very large set of attributes, and are often significantly more complex in how they do what they do.

Think of training a prediction model from a set of a million past loan applications, each described by 100 attributes. Think of detecting a tumor from a million MRI images. Think of classifying emails. Once trained and evaluated, these models can be provided with new or unseen data from which they can make predictions. They are probabilistic in nature and respond with a degree of confidence.
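
The sketch below, which uses synthetic data and the scikit-learn library purely for illustration, shows the shift in kind: instead of fixed rules, a model is fitted to historical "ground truth" and answers a new application with a probability rather than a hard yes or no.

```python
# A hedged sketch of "trained on ground truth, answers with a confidence".
# The data is synthetic; a real loan model would be trained on historical
# applications with far more attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(60_000, 15_000, n)
credit_score = rng.normal(680, 50, n)
# Synthetic ground truth: repayment is more likely with higher income/score.
repaid = (0.00002 * income + 0.01 * credit_score + rng.normal(0, 1, n)) > 8.0

X = np.column_stack([income, credit_score])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, repaid)

# Probabilistic output: a degree of confidence, not a deterministic rule.
new_applicant = np.array([[52_000, 705]])
print(model.predict_proba(new_applicant))  # [[P(not repaid), P(repaid)]]
```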

While all of these aspects are good, it can be unclear what the models are doing: what they learn, particularly when employing opaque deep learning techniques such as neural nets, how they will behave, or whether they will develop unfair bias over time as they continue to evolve. That’s why understanding which attributes in the training data influence the model’s predictions has become very important.
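
One widely used way to ask that question, sketched below as a continuation of the previous snippet (so `model`, `X`, and `repaid` are the same hypothetical objects), is permutation importance: shuffle one attribute at a time and measure how much the model's score degrades.

```python
# Permutation importance: if shuffling an attribute barely changes the score,
# the model is not relying on it; a large drop signals heavy influence.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, repaid, n_repeats=10, random_state=0)
for name, drop in zip(["income", "credit_score"], result.importances_mean):
    print(f"{name}: mean drop in accuracy when shuffled = {drop:.3f}")
```

Techniques like this do not make an opaque model transparent, but they give supervisors a concrete indicator to track over time.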

Algorithmic risk: trust in the machine

Let’s take a closer look at a potential problem for the business leader in the loan division of a big financial firm.

If an error hides within an algorithm (or the data feeding or training the algorithm), it can influence the integrity and fairness of the decision made by the machine. This could include adversarial data or data masquerading as ground truth.

The business leaders are on the hook for preserving the firm’s brand reputation, even as the AI models increasingly make decisions that might not be understood or in line with corporate policies, corporate values, guidelines, and the public’s expectations. Multiply these issues by the number of algorithms the loan division is utilizing. This is when trust weakens or actually evaporates.

Keeping AI in check

A number of techniques for peering inside these models, including some based on renormalization group theory, have been proposed. As models across AI tasks—including computer vision, speech recognition, and natural language processing—become more sophisticated and autonomous, they take on a higher level of risk and responsibility. When left unattended for long periods, things can go awry: runtime bias creep, concept drift, and issues such as adversarial attacks can compromise what these models learn. Imagine compromised MRI scans or traffic lights being manipulated in a smart city.
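
One concrete way teams watch for concept drift, sketched below on synthetic numbers rather than real traffic, is the population stability index (PSI): compare the distribution an attribute had at training time with the distribution the model is seeing now. The thresholds quoted in the comments are common rules of thumb, not standards.

```python
# Population stability index (PSI) as a simple drift indicator.
# Roughly: PSI < 0.1 is stable, 0.1-0.25 is worth watching, > 0.25 suggests
# the model is scoring a population it was not trained on.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty buckets before taking the log.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_income = rng.normal(60_000, 15_000, 10_000)  # what the model was trained on
live_income = rng.normal(66_000, 15_000, 10_000)   # what it sees months later
print(f"income PSI: {psi(train_income, live_income):.3f}")
```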

Continuous-learning algorithms also introduce a new set of cybersecurity considerations. Early adopters are still grappling with the magnitude of the risks these issues pose to the business.

Among the risks are adversarial attacks that hit the very foundation of these algorithms by poisoning the models or tampering with training data sets, potentially compromising privacy, the user experience, intellectual property, and any number of other key business aspects. Consider the impact on lives or the environment of an adversarial attack on medical devices or industrial control systems. Tampering with data could disrupt consumer experiences by providing inappropriate suggestions in retail or financial services. Such attacks might ultimately erode the competitive advantage that the algorithms were intended to create.
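
To make the poisoning risk concrete, the sketch below (entirely synthetic, and not drawn from the article) shows how relabelling one slice of the training data teaches an otherwise sound model to reject that slice, which is exactly the kind of silent damage that provenance and integrity checks are meant to catch.

```python
# Targeted training-data poisoning on a toy classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(10_000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # clean ground truth
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

# The attacker flips the label of every positive record in one slice of the
# feature space, so the model learns to turn that slice away.
poisoned = y_tr.copy()
poisoned[X_tr[:, 0] > 1.0] = 0
dirty = LogisticRegression().fit(X_tr, poisoned)

segment = X_te[:, 0] > 1.0
print("clean model on the targeted slice:   ", clean.score(X_te[segment], y_te[segment]))
print("poisoned model on the targeted slice:", dirty.score(X_te[segment], y_te[segment]))
```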

With complex, continuous-learning algorithms, humans need to know more than just the data or attributes and their respective weights to fully realize the implications of the AI getting it wrong or going rogue. They need to understand the context and intended purpose under which a model was developed, who trained it, the provenance of the data and any changes made to it, and how the model was (and is) served and protected. And they need to know what questions to ask and what key indicators to look for around an algorithm’s integrity, explainability, fairness, and resilience.
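
As one example of such an indicator, the sketch below computes a simple fairness check: the gap in approval rates between two groups defined by a protected attribute (a form of demographic parity). The arrays are placeholders for real model decisions and real applicant metadata, and a single metric like this is a starting point, not a verdict.

```python
# Approval-rate gap across a protected attribute: one basic fairness indicator.
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rate between group 0 and group 1."""
    return float(abs(approved[group == 0].mean() - approved[group == 1].mean()))

rng = np.random.default_rng(3)
approved = rng.integers(0, 2, 1_000)   # hypothetical model decisions (1 = approve)
group = rng.integers(0, 2, 1_000)      # hypothetical protected attribute
print(f"approval rate gap: {approval_rate_gap(approved, group):.3f}")
```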
