AI Business is part of the Informa Tech Division of Informa PLC
by Jelani Harper
SAN FRANCISCO - Issues of explainability, interpretability, and regulatory compliance all share one thing in common: they contribute to a marked distrust of advanced machine learning and neural networks.
Although it’s not always easy to understand the weights and parameters that determine the outputs of these predictive artificial intelligence models, the actions based on their results are usually perfectly clear.
By focusing on those actions—such as the decisions models made about supply chain management, patient care, or product offers—organizations can not only validate the worth of these techniques, but also develop much-needed trust in them.
“It’s more of a trust issue,” admits Geoff Annesley, One Network EVP. “The people that are using [AI platforms], they may be a little cynical when they start out. But what we find is that they quickly start to trust the decisions because they see well, I’m running in parallel here when the decisions are made and it’s beating me.”
By concentrating on the results of the decisions made by neural networks and machine learning, organizations can clearly see whether or not their models are trustworthy. Methods based on human-in-the-loop oversight, data visualization, and statistical AI feedback are critical for validating the results of AI technology that might otherwise be too opaque to deploy with confidence.
The tenet of human-in-the-loop is perhaps one of the most time-honored and vital means of understanding the impact of machine learning in production. It helps users regulate AI’s automation capabilities so “it’s not a black box,” Annesley remarks. Human oversight is necessary to ensure that even the densest forms of machine learning, such as deep neural networks, remain trustworthy.
In this way, human-in-the-loop facilitates three paradigms of automation, including:
Regardless of which of these forms of automation organizations select, the element of human oversight provides an additional layer of trust in how machine learning models are put in production.
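One common way to implement this oversight—hypothetical here, as the article does not describe One Network’s internals—is a confidence-gated review step: the model proposes an action, high-confidence proposals execute automatically, and the rest are routed to a human reviewer. The `Proposal`, `gated_decision`, and threshold names below are illustrative assumptions, not part of any product described above.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate: the model proposes an
# action, and a human reviewer confirms or overrides it before execution.

@dataclass
class Proposal:
    action: str        # action the model recommends
    confidence: float  # model's confidence in that action

def gated_decision(proposal, review_fn, threshold=0.9):
    """Auto-approve high-confidence proposals; route the rest to a human."""
    if proposal.confidence >= threshold:
        return proposal.action, "auto"
    # Low confidence: ask the human reviewer to accept or substitute.
    return review_fn(proposal), "human"

# Example: a reviewer who overrides low-confidence reorder suggestions.
reviewer = lambda p: "hold_order" if p.action == "reorder" else p.action

print(gated_decision(Proposal("reorder", 0.95), reviewer))  # ('reorder', 'auto')
print(gated_decision(Proposal("reorder", 0.60), reviewer))  # ('hold_order', 'human')
```

Raising or lowering the threshold shifts the balance between full automation and full human control, which is how the degree of oversight can be tuned per use case.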
Another means of fostering trust in even opaque neural networks and machine learning models is to visualize their results in accordance with business metrics. Conventional Business Intelligence functionality is primed for this use case, relying on a bevy of interactive dashboards and visualizations that are influential for understanding model results in ways most relevant to the business. The ability to visualize the output of advanced machine learning models on a dashboard is useful because these results “are all measured and you can see them over time,” Annesley comments.
By defining the proper metrics for model outputs in relation to business objectives, users can tell immediately if models are helping them reach their goals or need to be recalibrated to do so.
Furthermore, these metrics are the foundation for comparisons validating trust in models. In the supply chain vertical, for instance, “We even have metrics on planners so if you want to compare people who are making decisions to each other, you can do that,” Annesley mentions. “You can also compare them to digital agents that are making decisions, and compare them together and see who’s doing better, and learn.”
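The comparison Annesley describes reduces to scoring every decision-maker—human planner or digital agent—on the same business metric. The sketch below is a minimal illustration under assumed names (`scoreboard`, a per-decision `margin` metric); it is not One Network’s implementation.

```python
from statistics import mean

# Hypothetical sketch: log each decision with its maker and a shared
# business metric, then rank makers by their average score.
decisions = [
    {"maker": "planner_a", "margin": 120.0},
    {"maker": "planner_a", "margin": 80.0},
    {"maker": "agent_1",   "margin": 140.0},
    {"maker": "agent_1",   "margin": 110.0},
]

def scoreboard(decisions, metric="margin"):
    """Average the metric per decision-maker and rank best-first."""
    by_maker = {}
    for d in decisions:
        by_maker.setdefault(d["maker"], []).append(d[metric])
    return sorted(((m, mean(v)) for m, v in by_maker.items()),
                  key=lambda kv: kv[1], reverse=True)

print(scoreboard(decisions))  # [('agent_1', 125.0), ('planner_a', 100.0)]
```

Because humans and agents are scored on identical data, the ranking itself becomes the evidence that either builds or withholds trust in the automated decision-maker.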
Most of all, organizations can overcome transparency and trust issues with AI by running machine learning on the results of actions based on predictive models to ascertain what can be done better to improve outcomes. According to Annesley, this feedback loop is important for seeing “how the system’s improving as it learns and gets better.”
Alternatively, organizations might see that the system is not advancing in the attainment of its business-defined objectives and requires model recalibrations to do so.
In this respect, the objective-oriented automation method is particularly efficacious since with it, users “really are measuring the quality of decisions,” Annesley says. “For example, you may want to maximize revenues or maximize patient outcomes in a healthcare network, or maximize revenues in a high-tech retail network and…measure the decisions against those objective functions.”
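Measuring decisions against an objective function can be sketched as follows. The toy revenue objective and the weekly batches are invented for illustration; the point is only that successive batches scored by the same function reveal whether the system is improving.

```python
# Hypothetical sketch of objective-oriented measurement: each decision is
# scored against a business-defined objective function (here, a toy
# revenue objective), and successive batches show the trend over time.

def objective(decision):
    """Toy revenue objective: units sold times price, minus cost."""
    return decision["units"] * decision["price"] - decision["cost"]

def batch_score(decisions):
    """Total objective value achieved by a batch of decisions."""
    return sum(objective(d) for d in decisions)

week1 = [{"units": 10, "price": 5.0, "cost": 20.0}]
week2 = [{"units": 12, "price": 5.0, "cost": 18.0}]

print(batch_score(week1), batch_score(week2))   # 30.0 42.0
print(batch_score(week2) > batch_score(week1))  # True: outcomes improving
```

Swapping in a different objective—patient outcomes, fill rate, margin—changes nothing structurally, which is why the same feedback loop applies across the verticals Annesley names.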
Analyzing the results of these decisions with machine learning shows organizations how to better their predictive models, reinforcing trust in them.
Although it’s never easy to peek inside the black box of complicated machine learning models, organizations have several means of validating them to make them trustworthy. Central among these is human-in-the-loop, which tempers automation with human supervision.
Moreover, the use of visual mechanisms such as dashboards is influential for understanding model results according to business metrics; running machine learning on the outcomes of decisions solidifies a valuable feedback loop for future iterations.
Most importantly, perhaps, organizations should realize that deep neural networks and black box techniques aren’t always necessary for each use case.
“If I have an algorithm that can come up with a right answer, and it’s deterministic and the best answer, why would I use a neural network to do that?” Annesley asks. “There’s certain places where neural nets just make no sense.” In this case, the combination of both advanced and basic machine learning models produces the best results; the transparency of the latter is another means of reinforcing trust in this technology.
Jelani Harper is an editorial consultant serving the information technology market, specializing in data-driven applications focused on semantic technologies, data governance, and analytics.