How Do We Instil Trust In AI?

Ciarán Daly

December 3, 2018


by Ben Taylor

LONDON - As AI filters into so many of society's functions, from financial consulting to legal advice to healthcare planning, its upward trajectory seems inevitable.

But for AI to truly fulfil its practical promise - and to change public perceptions - the industry needs to tackle its explainability problem. The way to do it is to keep human knowledge at the heart of technology implementation.

Too often, this has not been the case.

Holding algorithms to account

The techniques used by typical algorithmic AI systems are vast and complex, operating on labyrinthine webs of probabilities and correlations. Unless you possess specialist knowledge of how they work at a data science level, they can appear alien.

Naturally, then, a sense of alienation ensues. This is reflected in public attitudes towards AI, which are too often characterised by confusion and fear.

Clearly, there are scenarios in which public demand for explainability is tempered - times when the stakes are lower. A rationale hardly seems essential when an algorithm recommends The Beatles to a Taylor Swift fan; it is an absolute necessity when an automated investment recommendation results in the loss of thousands of pounds.

Bias, too, can be exacerbated by unchecked algorithms. Let's take financial services as an example: if a neural network - or any black box system - calculates a low credit score for a customer, there is no way of knowing whether its weightings are influenced by gender or ethnicity imbalances in the data. And even if the accuracy rate is convincing, without being able to provide full explanations to their clients and customers, advisers are limited to under-informed, frustrating interactions.
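To make that contrast concrete, here is a minimal Python sketch - synthetic data, hypothetical feature names, and scikit-learn's logistic regression standing in for an interpretable model - of the kind of inspection a transparent model allows and a black box does not:

```python
# A minimal sketch contrasting a black-box score with an inspectable one.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "postcode_band"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# With an interpretable model we can at least read the learned weights
# and ask whether a proxy for a protected attribute (here, postcode_band)
# is driving the score. A deep neural network offers no such direct view.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

A large weight on a proxy feature such as postcode_band would be a visible red flag here; inside a deep network, the same influence would be spread across thousands of opaque parameters.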

Human knowledge in the driver’s seat

Driverless cars may be slowly taking humans out of the literal driver's seat, but when it comes to AI adoption, businesses are seeing the success and clarity that come from a human-driven approach.

An implementation process involving technical AI specialists collaborating with an organisation’s subject matter experts means that specific pain points can be identified and models created based on real-world knowledge and experience. The resulting solution doesn’t take a data scientist to run - it can be understood and controlled by business people.

The correlation between the rise of human-governed AI and the increasing push towards transparency is no coincidence; the two depend on one another. The biggest benefit of an AI platform governed by human rules is that subject matter experts can always provide that much-needed clarity about why and how decisions are being automated.
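As a rough illustration of the principle - not of Rainbird's platform or any production system - a few expert-authored rules with hypothetical thresholds can return the reasons alongside the decision:

```python
# A minimal sketch of human-authored decision rules that emit a rationale.
# The rules and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    debt_ratio: float
    missed_payments: int

def assess(applicant: Applicant) -> tuple[str, list[str]]:
    """Return a decision plus the human-readable reasons behind it."""
    reasons = []
    if applicant.debt_ratio > 0.5:
        reasons.append(f"debt ratio {applicant.debt_ratio:.2f} exceeds 0.50")
    if applicant.missed_payments >= 3:
        reasons.append(f"{applicant.missed_payments} missed payments this year")
    decision = "refer to adviser" if reasons else "approve"
    return decision, reasons

decision, reasons = assess(Applicant(income=32_000, debt_ratio=0.62,
                                     missed_payments=1))
print(decision, "-", "; ".join(reasons) or "no adverse indicators")
```

Because each rule is written by a subject matter expert, every automated outcome can be traced back to a statement a human can read, defend, or amend.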

I’m encouraged by the forecasts for 2019: they show that humans are increasingly being brought back to the forefront of AI implementation, with Forrester reporting that enterprises will add knowledge engineering to the mix to “build knowledge graphs from their expert employees and customers.”


Regulation can’t guarantee transparency

As a member of the All Party Parliamentary Group on AI (APPG AI), I’ve taken part in numerous discussions on the explainability of AI - and much of it has been stimulated by new attempts at government regulation.

The results of such attempts so far are a mixed bag, and understandably so: AI as a whole is difficult to define, let alone regulate. I was struck by the stark conclusion of Stanford University's One Hundred Year Study on Artificial Intelligence, whose panel agreed that “attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI, and the risks and considerations are very different in different domains.”

True, in Europe we have the General Data Protection Regulation (GDPR), but the legislation's fine print hardly puts the issue to rest: rather than guaranteeing people the right to a rationale behind automated decisions, it offers only the “right to be informed” of the existence of automation.

So, unless regulation catches up with the lightning pace of innovation, AI systems and the companies leveraging them will, to a large extent, need to police themselves with ethical technology.

Bringing about explainable AI

Regardless of whether legislation can enforce it, what the market needs is technology that models automated decision-making on human expertise rather than on elusive data-driven matrices. This kind of expert-down approach ensures a better understanding of the technology we use, and it is especially important in regulated industries, where there is a real need for technology that can provide the auditable decisions that vast neural networks so often fail to deliver.

Time will tell whether the tech giants are serious about taking action on explainability to match their recent promotion of the cause, or whether there is an element of virtue-signalling involved. Think of the traditional audit process for financial institutions: internal operations are checked for errors or abuse to ensure that due process is respected and that the public is treated fairly. Should far-reaching algorithms be held to the same account?

Explainability is no longer the fringe issue of a few years ago. It is now of central importance to any company using, or considering using, AI. Forrester recently found that 45% of AI decision makers say trusting the AI system is either challenging or very challenging, while 60% of the 5,000 executives surveyed by the IBM Institute for Business Value expressed concern about being able to explain how AI is using data and making decisions.

From the public and from businesses alike, the demand for transparency is reaching fever pitch. The way to bring it to life: build explainable AI around the expertise of your best people.

Meet Ben and the Rainbird team at next week's AI Summit NYC, December 5-6, at the Javits Center.


As the co-founder and CEO of Rainbird Technologies, Ben Taylor is the driving force behind the fusion of human expertise and automated decision-making. He continues to push the boundaries of the platform’s capabilities, enhancing and developing it to serve a variety of data-driven processes.

