Interpretable Automation Is The Future Of AI

Ciarán Daly

May 27, 2019

by Ben Taylor

LONDON - The march of technological progress may seem inevitable at times, but the truth is very different. In the AI space, the current period of growth could quickly come to a grinding halt without broader engagement from business people in both the design and use of new technology.

How we can swerve the ‘skills gap’ entirely

According to Deloitte, 69% of enterprises have described the AI skills gap as “moderate, major or extreme” due to the difficulty involved in finding skilled people to staff their new AI-driven business models.

Educational movements have sprouted to combat this: encouraging coding skills in schools or through online classes, for example. Coursera and the like are all well and good, and a step in the right direction. But is it realistic to expect this kind of tech-heavy learning to become widespread enough to engender business transformation? To do so would be to fall into the trap of idealism. A more logical approach is to make technology bend to our will, rather than vice versa. In essence, we can bridge the so-called AI skills gap by circumventing it - and we do this with enterprise-ready, human-centric automation.

What this means is automation that plays to the strengths we already possess. According to a recent EY survey, a lack of AI talent in the marketplace is a core barrier to AI adoption - and the survey rightly emphasises that the problem is exacerbated when so many platforms are targeted at data scientists and AI specialists.

Our thought process is worth scaling

Knowledge mapping is an inherently human way of thinking through a problem. Essentially, it represents how we all work through problems: with probabilistic reasoning and a fluid path to the end goal. Combined with a running stream of real-time data, this is how most transactional business decisions are made. Technology that can echo this thought process is able to replicate human expertise on a large scale, and with reasoning that we can all readily understand.

At Rainbird we’ve made this our modus operandi. Subject matter experts, whether fraud investigators or insurance underwriters, can model logic about any topic into a knowledge map. Modellers have the freedom to use code or an intuitive visual interface, suited to all kinds of learners. The end result is a system that can accurately represent real-world logic and apply it to real-world scenarios.
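
As an illustration only - this is not Rainbird's platform or rule language, just a hedged sketch of the general idea - a few lines of Python can show how weighted rules in a small knowledge map might be chained together towards a goal, with the reasoning path recorded along the way:

```python
# Hypothetical sketch of goal-directed reasoning over a "knowledge map".
# Not Rainbird's actual API; rule names and weights are invented for illustration.

# Each rule: (conclusion, required conditions, confidence weight)
RULES = [
    ("flag_for_review", ["unusual_location", "high_value"], 0.9),
    ("unusual_location", ["ip_country_mismatch"], 0.8),
    ("high_value", ["amount_over_threshold"], 1.0),
]

def infer(goal, facts, trail):
    """Backward-chain from a goal, recording each step for auditability."""
    if goal in facts:          # a known fact carries full confidence
        return 1.0
    best = 0.0
    for conclusion, conditions, weight in RULES:
        if conclusion != goal:
            continue
        # A rule is only as strong as its weakest supporting condition
        support = min(infer(c, facts, trail) for c in conditions)
        confidence = weight * support
        if confidence > best:
            best = confidence
            trail.append((conclusion, conditions, round(confidence, 2)))
    return best

facts = {"ip_country_mismatch", "amount_over_threshold"}
trail = []
print(infer("flag_for_review", facts, trail))  # 0.72
for step in trail:
    print("because:", step)   # the human-readable reasoning path
```

The point of the sketch is the trail: every conclusion arrives with the conditions and weights that produced it, which is what lets a non-specialist follow the machine's reasoning.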

Bring human experts back into the fold

Two broad (and popular) churches of AI - neural networks and RPA - have kept an organisation’s human experts at arm’s length. Neural networks operate in mathematical ways that we can barely comprehend, while RPA is so menial as to barely require the nuances of human thinking. Their widespread usage has created a distance between subject matter experts and their subject matter, and this is reflected in diminishing results: KPMG estimates, for instance, that barely more than 1 in 10 enterprises have managed to reach industrialised scale with task-based RPA. Neural networks, meanwhile, are data-hungry at a time when quality data remains finite. A growing number of experts, articles and bloggers are predicting a coming ‘AI winter’ based on the limitations of these technologies.

The technology primed to thrive in their absence brings human experts back into the fold. Forbes has recently heralded a human-centric “rebirth of rules”, calling RPA a mere “gateway drug” to digital transformation. If businesses are serious about facing the future as dynamic, adaptable outfits, they’ll need to take the next step: moving beyond narrow rules-based tools towards goal-based technology built on human logic.

The importance of transparency

We underestimate the importance of understanding the things that impact us. The mission-creep of unaccountable automation into our lives has limited our autonomy and agency as citizens - interpretable automation is how we claim it back.

Business leaders are beginning to appreciate this notion. In an IBM Institute for Business Value study, about 60 percent of 5,000 executives - up from 29 percent in 2016 - expressed concern “about being able to explain how AI is using data and making decisions.” Law firm Taylor Wessing recently implemented Rainbird. Integral to successful adoption, in the words of Taylor Wessing IT Director Kevin Harris, was “getting the people on the front lines comfortable and understanding how it works”.

The interpretable element carries similar significance for customers and regulators alike. For customers, it means transparency into the decisions that affect them; for regulators, assurance of all the factors, and their weighting, behind every automated decision. Everything is auditable, and no one is left in the dark.
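
To make that concrete, here is a hypothetical example (the field names are invented, not Rainbird's output format) of the kind of audit record an interpretable system could emit alongside each automated decision:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record for a single automated decision: every factor
# and its weighting is captured, so customers and regulators can inspect it.
decision_record = {
    "decision": "flag_for_review",
    "confidence": 0.72,
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "factors": [
        {"factor": "unusual_location", "weight": 0.8, "evidence": "ip_country_mismatch"},
        {"factor": "high_value", "weight": 1.0, "evidence": "amount_over_threshold"},
    ],
}
print(json.dumps(decision_record, indent=2))  # a human-readable audit trail
```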

By putting people at the helm, our tech-driven future can have a distinctly accessible, auditable, human feel.

Join the Rainbird team and 20,000 other business and AI leaders at The AI Summit London, June 12-13.

Ben Taylor is the CEO of Rainbird Technologies
