Tackling Trust in Machine Learning and Neural Networks: See It to Believe It

April 18, 2019

by Jelani Harper

SAN FRANCISCO - Issues of explainability, interpretability, and regulatory compliance all share one thing in common: they contribute to a marked distrust of advanced machine learning and neural networks.

Although it’s not always easy to understand the weights and parameters that determine the outcomes of these predictive artificial intelligence models, the actions based on their results are usually perfectly clear.

By focusing on those actions, such as the decisions models make about supply chain management, patient care, or product offers, organizations can not only validate the worth of these techniques but also develop much-needed trust in them.

“It’s more of a trust issue,” admits Geoff Annesley, One Network EVP. “The people that are using [AI platforms], they may be a little cynical when they start out. But what we find is that they quickly start to trust the decisions because they see well, I’m running in parallel here when the decisions are made and it’s beating me.”

By concentrating on the results of the decisions that neural networks and machine learning make, organizations can clearly see whether or not their models are trustworthy. Methods based on human-in-the-loop oversight, data visualizations, and machine learning feedback are critical for validating the results of a technology that might otherwise be too opaque to deploy with confidence.

Human-in-the-loop

Human-in-the-loop is perhaps one of the most time-honored and vital means of understanding the impact of machine learning in production. It helps users regulate AI’s automation capabilities so “it’s not a black box,” Annesley remarks. Human oversight is necessary to ensure that even the densest forms of machine learning, such as deep neural networks, remain trustworthy.

In this way, human-in-the-loop facilitates three paradigms of automation:

  • Semi-autonomous automation - With this method, the output of cognitive computing predictive models merely serves as a recommendation which humans can choose to either use or ignore. In a supply chain network, “you may be a transportation guy and you want the system to detect issues and come up with recommendations and the tradeoffs and why you’re making those decisions, and then let people pick what they want,” Annesley explains.

  • Fully autonomous automation - Even with full autonomy, humans can still see which actions were based on advanced machine learning and assess their results in business terms. Credible AI platforms with neural networks “have a record of all the decisions that were made and what were the tradeoffs and why we made the decisions,” Annesley says.

  • Objective-oriented automation - This is perhaps the most effective form of automation because it lets users prioritize the objectives that serve as the basis of decisions made by cognitive analytics. For instance, users can stipulate that they want to optimize delivery time while reducing cost in supply chain networks; those stipulations then become the parameters, and the top priorities, by which machine learning models make decisions. (A code sketch of all three modes follows this list.)
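
To make the three modes concrete, here is a minimal Python sketch of how they might fit together. Nothing here is One Network’s actual interface; every name (Decision, recommend, decision_log, objective_score) and every figure is a hypothetical illustration.

```python
# Hypothetical sketch of the three automation modes; not a real platform API.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str            # e.g. "reroute shipment via hub C"
    tradeoffs: str         # why this action beat the alternatives
    approved_by: str = ""  # who (or what) signed off

decision_log: list[Decision] = []  # the audit trail every mode shares

def recommend(context: dict) -> Decision:
    """Stand-in for a predictive model ranking its candidate actions."""
    return Decision(action="reroute via hub C",
                    tradeoffs="cuts delivery time 12%, raises cost 3%")

def semi_autonomous(context: dict, human_accepts: bool) -> Decision | None:
    """Mode 1: the model only recommends; a person decides."""
    rec = recommend(context)
    if not human_accepts:
        return None        # recommendation ignored, nothing executed
    rec.approved_by = "human planner"
    decision_log.append(rec)
    return rec

def fully_autonomous(context: dict) -> Decision:
    """Mode 2: the model acts, but every decision is logged for review."""
    rec = recommend(context)
    rec.approved_by = "digital agent"
    decision_log.append(rec)
    return rec

def objective_score(weights: dict[str, float]) -> float:
    """Mode 3: score a decision against user-ranked objectives.
    Impact figures are invented; negative means the quantity was reduced."""
    impact = {"delivery_time": -0.12, "cost": +0.03}
    return sum(-w * impact[k] for k, w in weights.items())

# Prioritize speed over cost, as in the supply-chain example above.
d = fully_autonomous({"lane": "SFO-DFW"})
print(d.action, round(objective_score({"delivery_time": 0.7, "cost": 0.3}), 3))
```

In the semi-autonomous mode a planner can simply decline the recommendation, while the fully autonomous mode preserves the record of decisions and tradeoffs Annesley describes.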

Regardless of which of these forms of automation organizations select, the element of human oversight provides an additional layer of trust in how machine learning models are put into production.

Visual confirmation

Another means of fostering trust in even opaque neural networks and machine learning models is to visualize their results in accordance with business metrics. Conventional Business Intelligence functionality is primed for this use case, offering a bevy of interactive dashboards and visualizations that present model results in the terms most relevant to the business. The ability to visualize the output of advanced machine learning models on a dashboard is useful because those results “are all measured and you can see them over time,” Annesley comments.

By defining the proper metrics for model outputs in relation to business objectives, users can tell immediately if models are helping them reach their goals or need to be recalibrated to do so.

Furthermore, these metrics are the foundation for comparisons validating trust in models. In the supply chain vertical, for instance, “We even have metrics on planners so if you want to compare people who are making decisions to each other, you can do that,” Annesley mentions. “You can also compare them to digital agents that are making decisions, and compare them together and see who’s doing better, and learn.”
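
As a rough illustration of that comparison, the short sketch below tracks one hypothetical business metric, weekly on-time delivery rate, for a human planner and a digital agent. All figures are invented; a real deployment would pull these series from the platform’s decision records into a BI dashboard.

```python
# Invented weekly on-time delivery rates for two decision-makers.
on_time_rate = {
    "human planner": [0.91, 0.90, 0.92, 0.91],
    "digital agent": [0.89, 0.92, 0.94, 0.95],  # improving as it learns
}

for who, series in on_time_rate.items():
    trend = series[-1] - series[0]
    print(f"{who}: latest {series[-1]:.0%}, trend {trend:+.0%} over 4 weeks")
```

Charting each series as a line over time yields exactly the “see them over time” view Annesley describes, and makes the planner-versus-agent comparison immediate.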

Machine learning feedback

Most of all, organizations can overcome transparency and trust issues with AI by running machine learning on the outcomes of the actions their predictive models drive, ascertaining what can be done to improve results. According to Annesley, this feedback loop is important for seeing “how the system’s improving as it learns and gets better.”

Alternatively, organizations might see that the system is not advancing toward its business-defined objectives and requires model recalibration to do so.

In this respect, the objective-oriented automation method is particularly efficacious since with it, users “really are measuring the quality of decisions,” Annesley says. “For example, you may want to maximize revenues or maximize patient outcomes in a healthcare network, or maximize revenues in a high-tech retail network and…measure the decisions against those objective functions.”

Analyzing the results of these decisions with machine learning shows organizations how to better their predictive models, reinforcing trust in them.
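
A minimal sketch of such a feedback loop follows, under the assumption that each decision’s outcome can be scored against a business objective function. The objective shown (revenue minus a lateness penalty), the data, and the recalibration rule are all invented for illustration.

```python
# Hypothetical feedback loop: score decision outcomes against an objective
# function and flag the models for recalibration if scores stop improving.
from statistics import mean

def objective(outcome: dict) -> float:
    """Invented objective: revenue minus a penalty for late delivery."""
    return outcome["revenue"] - 50.0 * outcome["days_late"]

history = [  # outcomes of four decisions, oldest first
    {"revenue": 1000.0, "days_late": 2},
    {"revenue": 1100.0, "days_late": 1},
    {"revenue": 1050.0, "days_late": 0},
    {"revenue": 1200.0, "days_late": 0},
]

scores = [objective(o) for o in history]
earlier, recent = mean(scores[:2]), mean(scores[-2:])

if recent > earlier:
    print(f"system improving as it learns: {earlier:.0f} -> {recent:.0f}")
else:
    print("objectives not advancing; flag models for recalibration")
```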

Trusting multiple models

Although it’s never easy to peek inside the black box of complicated machine learning models, organizations have several means of validating those models and making them trustworthy. Central to these is the idea of human-in-the-loop, which tempers automation with human supervision.

Moreover, visual mechanisms such as dashboards are instrumental for understanding model results according to business metrics, while running machine learning on the outcomes of decisions closes a valuable feedback loop for future iterations.

Most importantly, perhaps, organizations should realize that deep neural networks and black box techniques aren’t always necessary for each use case.

“If I have an algorithm that can come up with a right answer, and it’s deterministic and the best answer, why would I use a neural network to do that?” Annesley asks. “There’s certain places where neural nets just make no sense.” In such cases, combining advanced and basic machine learning models produces the best results; the transparency of the latter is another means of reinforcing trust in the technology.
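
The sketch below illustrates that division of labor: a routing question with a known exact algorithm goes to deterministic Dijkstra, which is transparent and provably optimal, and only problems without such a solver would be handed to a learned model. The task format and the dispatch rule are assumptions for illustration.

```python
# Prefer a deterministic algorithm where one yields the provably best
# answer; reserve learned models for problems that lack one.
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: exact, deterministic, fully inspectable."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

def solve(task):
    if task["kind"] == "routing":  # an exact algorithm exists: use it
        return shortest_route(task["graph"], task["start"], task["goal"])
    raise NotImplementedError("no exact solver; hand off to a learned model")

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1}}
print(solve({"kind": "routing", "graph": graph, "start": "A", "goal": "C"}))
```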

Jelani Harper is an editorial consultant serving the information technology market, specializing in data-driven applications focused on semantic technologies, data governance, and analytics.
