Explainable AI: Holding Algorithms to Account
May 18, 2018
by Ben Taylor
LONDON, UK - What are we talking about when we talk about explainable artificial intelligence (AI)?
Over the past few years, there have been few topics that have fuelled as much discussion or debate as AI. As its capabilities continue to evolve and its presence becomes more commonplace, so too does the conversation.
At present, AI is perhaps the most professionally competitive and commercially in-demand technological innovation. It is driving an arms race in Silicon Valley, inspiring tech giants such as Google, Facebook and Microsoft to expand their activities and plunder the very best graduate talent, while simultaneously capturing the imagination of consumers through assistants such as Alexa and Siri.
And yet there is another side to the technology that has become a cause for concern among politicians, academics and technologists alike. As machines make more and more decisions about us, from mortgage applications and insurance policies to recruitment selections and legal processes, concern about how those decisions are reached has grown.
This is often the case when an individual finds themselves on the wrong end of a decision made by AI and is left with little understanding as to how it was reached.
A right to explanation
With the General Data Protection Regulation (GDPR) incoming, talk has turned to the “Right to Explanation” regarding how automated computer systems work. But what this part of the GDPR entails has been widely misunderstood.
In fact, people who have found themselves on the receiving end of a contentious AI decision have long been able to contest it under national laws.
In the UK, the Data Protection Act allows automated decisions to be challenged in court. But the law does not go far enough. UK firms, in common with their counterparts in other countries, are under no legal obligation to disclose or release information they consider to be a trade secret. Instead, their obligation extends only as far as describing how an algorithm works.
This means that a person seeking to understand why their mortgage application was turned down might only be told what information was considered. They may learn that their age, occupation, postcode and credit history were contributing factors, but not how each factor was weighted or why, ultimately, their application was rejected.
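To make that gap concrete, here is a deliberately simplified sketch in Python of a scoring model; the factor names, weights and threshold are invented for illustration and do not describe any real lender’s logic. Listing the model’s inputs is roughly the disclosure firms make today; reporting each factor’s contribution to the score is the extra step a meaningful explanation requires.

# Hypothetical scoring model: names, weights and threshold are illustrative only.
WEIGHTS = {"age": 0.8, "occupation": -1.5, "postcode": -1.2, "credit_history": 3.0}
APPROVAL_THRESHOLD = 2.0

def explain_decision(applicant):
    # Contribution of each factor: weight * value. This tells the applicant
    # which inputs pushed the score up or down, not merely that they were used.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "rejected"
    return {
        "decision": decision,
        "factors_considered": list(WEIGHTS),  # the disclosure typically offered today
        "contributions": contributions,       # the per-factor explanation argued for here
    }

print(explain_decision({"age": 1.0, "occupation": 1.0, "postcode": 1.0, "credit_history": 0.2}))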
The general consensus appears to be that GDPR will change this and deliver transparency and accountability for AI-powered systems. But there are no legal guarantees. The regulation has been drafted to be powerful yet broad, open to the discretion and interpretation of both national and European courts. It does not define what constitutes a satisfactory explanation or how those explanations are to be reached. The likelihood is that it will be some years before we have a clear picture.
Trust without scrutiny?
And so, what the technology community refers to as the “black box” problem continues. How can you build public trust in, and support for, systems that effectively remain free of legal scrutiny? The truth is that you can’t.
While these automated computer systems continue to be opaque and to operate outside of public view, confusion and distrust will remain.
The techniques used by AI systems to reach decisions are difficult for a layman to fully understand. They are vast, intricate and complex, operating on the basis of probability and correlation, and unless you possess specialist knowledge of how they work at an algorithmic level, they can appear alien. And while some may consider transparency a technical issue best left to developers, it has real-world consequences.
Without explanation, without transparency, we are stripped of agency and bereft of autonomy. We are dependent on the determinations of systems many of us don’t understand and don’t trust.
While the majority of the public take no issue with this in commercial applications (such as Netflix recommendations, auto-tagging on Facebook or suggestions from Siri), it is a different matter entirely when data is used to make financial, legal or healthcare decisions. Little of consequence rests on the former, while the latter are potentially life-altering.
Making AI transparent, explainable, and accountable
The end goal has to be to work alongside, rather than for or against, automated computer systems. We want to use AI to enhance human expertise, rather than replace it. But in order to achieve this, these systems must be developed either to explain themselves or to produce an auditable trail that details their reasoning.
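What an auditable trail might look like is easiest to see in miniature. The Python sketch below is a toy rule evaluator, with field names and rules invented for illustration rather than a description of any particular platform. Every rule is evaluated and recorded alongside its outcome, so the final decision can be traced step by step.

def decide_with_audit(case):
    audit = []  # ordered record of every rule evaluated and its outcome

    def check(rule, passed, detail):
        audit.append({"rule": rule, "passed": passed, "detail": detail})
        return passed

    # Evaluate every rule (no short-circuiting) so the trail is complete
    # even when an early rule fails.
    results = [
        check("income_verified", bool(case.get("income_verified")),
              "income documents supplied and verified"),
        check("affordability", case.get("income", 0) >= 3 * case.get("repayment", 0),
              "monthly income must cover at least three times the repayment"),
        check("credit_history", case.get("defaults", 1) == 0,
              "no defaults recorded against the applicant"),
    ]
    return ("approved" if all(results) else "rejected"), audit

decision, trail = decide_with_audit(
    {"income_verified": True, "income": 3200, "repayment": 900, "defaults": 0})
print(decision)
for step in trail:
    print(step)

However the trail is produced, the point is the same: the record of reasoning exists independently of the model and can be inspected after the decision has been made.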
Keeping people in the dark, with little to no understanding of how their data has influenced an approval or rejection, is not an option. As it stands, the disconnect between how automated decision-making works and the public’s understanding of it cannot continue. It’s time for a change. Otherwise we will remain in a position of diminished influence, simply executing the orders of these systems.
As technologists working in artificial intelligence, we must develop systems that are transparent, auditable and accountable. We must seize the opportunity to inspire public trust and confidence in these technologies. If we don’t, the general public will find it difficult ever to fully place their faith in the decision-making of automated computer systems, and the likely result would be stalled adoption and the loss of the vast, positive benefits that AI-powered technologies can bring.
About the Author
As the co-founder and CEO of Rainbird Technologies, Ben Taylor is the driving force behind the fusion of human expertise and automated decision-making. He continues to push the boundaries of the platform’s capabilities, enhancing and developing it to serve a variety of data-driven processes.