Transparency in the face of controversy: AI’s explainability problem
20 March 2020
by James Duez, Rainbird
We continue to see stories about the potential dangers of AI and its impact on society. Numerous high-risk areas for automation bias have been identified, from border control to insurance claims, yet most sectors continue to embrace emerging technologies with little consideration of how opaque many of them are.
In fact, nearly three quarters of financial services firms are looking to invest more in AI in the next five years.
So how do we ensure that increased adoption of AI does not bring with it an increased risk of bad decision-making? It all comes down to transparency: ensuring that decisions made by AI and automation platforms are explainable in human terms.
A reason for fear?
It isn’t surprising that people fear AI. Used incorrectly, it has the potential to introduce discrimination and bias at industrial scale while leaving the general public completely in the dark. This summer’s allegations against the Home Office, over its use of a secretive algorithm to process visa applications, demonstrated how easily discrimination can proliferate. The Home Office failed to detail the factors the algorithm used to assess risk, or how regularly it was updated, most likely because it is not readily explainable.
The central problem with black-box AI is that statistical methods such as neural networks are largely inexplicable to anyone but data scientists, meaning the decisions they automate are not transparent to the wider population. This makes it impossible to examine the reasoning that lies behind those decisions. Machine learning operates by seeking patterns in data rather than by following clear rules of logical inference as humans do. As a result, such models can easily draw irrational conclusions from unbalanced data, and it can be difficult for humans to understand why, certainly on a case-by-case basis.
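To make that concrete, the following is a purely illustrative sketch, with an invented dataset, invented field names and a deliberately naive scoring rule, of how a pattern-learner trained on unbalanced historical decisions can end up penalizing applicants from one country while offering no human-readable reason for doing so.

```python
# Illustrative sketch only -- not any vendor's product. A naive pattern-learner
# scores visa-style applications purely from historical outcome frequencies.
from collections import defaultdict

# Historical decisions: heavily unbalanced, with past refusals concentrated
# on applicants from country "B" regardless of their documentation.
history = [
    {"country": "A", "documents_complete": True,  "approved": True},
    {"country": "A", "documents_complete": False, "approved": True},
    {"country": "A", "documents_complete": True,  "approved": True},
    {"country": "B", "documents_complete": True,  "approved": False},
    {"country": "B", "documents_complete": True,  "approved": False},
    {"country": "B", "documents_complete": False, "approved": False},
]

def train(records):
    """Learn per-feature-value approval rates -- pure pattern matching."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [approved, total]
    for r in records:
        for feature in ("country", "documents_complete"):
            key = (feature, r[feature])
            counts[key][0] += int(r["approved"])
            counts[key][1] += 1
    return {k: approved / total for k, (approved, total) in counts.items()}

def score(model, applicant):
    """Average the learned rates; no human-readable reason is produced."""
    rates = [model[(f, applicant[f])] for f in ("country", "documents_complete")]
    return sum(rates) / len(rates)

model = train(history)
# A fully documented applicant from country "B" is scored low purely because
# of where they come from -- and the model cannot explain that in human terms.
print(score(model, {"country": "B", "documents_complete": True}))  # 0.25
print(score(model, {"country": "A", "documents_complete": True}))  # 0.75
```

The model’s only ‘reason’ is a number; the factors behind it are not exposed in any form a caseworker, applicant or regulator could interrogate.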
Utilizing transparency
A recent survey found that 89% of financial services firms cite AI’s lack of transparency as the main inhibitor to adoption. This does not have to be the case. By putting human logic back in the loop, organizations can ensure that AI is transparent and interpretable.
Human-centric, probabilistic rules-based models of automation enable business people, rather than data scientists, to audit every automated decision. Corporations can leverage this human-centric AI to ‘remember’ every decision and explain it to a regulator in human terms. It can even re-evaluate previous decisions to ensure they remain fair in the face of new legislation. This technology would have been invaluable in a situation such as the Home Office investigation, ensuring that any accusation of AI discrimination could be easily investigated.
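By way of contrast, here is an equally minimal sketch, with invented rule names, weights and thresholds, and in no way a depiction of Rainbird’s actual platform, of how a rules-based decision can carry a human-readable explanation of itself, and how a stored case could later be re-run against amended rules.

```python
# Illustrative sketch only -- not Rainbird's platform or API. It shows the
# general idea of a rules-based decision that records which rules fired, so a
# business user can read back the reasoning and re-run old cases against
# revised rules. All rule names and thresholds here are invented assumptions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str                       # human-readable label shown in explanations
    applies: Callable[[dict], bool]
    weight: float                   # confidence contributed when the rule fires

@dataclass
class Decision:
    approved: bool
    confidence: float
    explanation: list = field(default_factory=list)  # which rules fired and why

def decide(case: dict, rules: list[Rule], threshold: float = 0.5) -> Decision:
    """Evaluate every rule, keeping a human-readable trace of those that fired."""
    confidence, trace = 0.0, []
    for rule in rules:
        if rule.applies(case):
            confidence += rule.weight
            trace.append(f"'{rule.name}' applied (+{rule.weight:.2f})")
    return Decision(approved=confidence >= threshold,
                    confidence=round(confidence, 2),
                    explanation=trace)

# Invented example rules for an insurance-claim style case.
rules = [
    Rule("Claim is below the automatic-approval limit",
         lambda c: c["amount"] < 1_000, 0.4),
    Rule("Claimant has no previous rejected claims",
         lambda c: c["previous_rejections"] == 0, 0.3),
]

case = {"amount": 650, "previous_rejections": 0}
decision = decide(case, rules)
print(decision.approved)     # True
print(decision.explanation)  # every step of the reasoning, in plain English
```

Because every decision records the rules that fired, a historical case can simply be passed through `decide` again with an updated rule set to check that the original outcome would still stand.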
Better together
AI needs to be thought of as a tool to augment human potential, not replace it. The ability to reproduce human-like thinking within a machine has arrived, enabling expert human knowledge to be turned into machine intelligence and scaled across an organization, providing not only massive efficiency gains but also better consumer outcomes. However, there will always be complex tasks where human intervention is imperative. McKinsey predicts that increased adoption of AI by businesses will shift demand for jobs away from repetitive tasks towards those that are socially and cognitively driven.
In fact, more and more of these technologies are becoming accessible as ‘low-code’ platforms to business people, not just to IT. This technology ‘devolution’ is accelerating, driving the creation of cognitive solutions within the operation, made by the people, for the people. Instead of fearing the technology, organizations can reap the benefits that AI provides. For example, with fewer menial tasks to complete, employee time can be concentrated on the work that machines are traditionally weak at, driving new revenue and improving customer service.
Fear not
Capturing and codifying human expertise can offer huge business benefits, but to fully realize this potential, AI must be transparent and interpretable. The key to creating a trustworthy AI culture is to use flavors of AI that can be built and customized by ordinary humans, and that output not just accurate answers but human-readable explanations of their decisions. If AI decisions can be explained, there is no need to fear the outcomes.
James Duez is CEO at Rainbird, a British company that develops an AI-powered, automated decision-making platform