AI Business is part of the Informa Tech Division of Informa PLC


Neural networks can disempower human workers


The case for human intervention amidst rapid AI adoption

by Ben Taylor, Rainbird | 22 August 2019

Neural networks have become the alchemy of our age: the search for a magical, mystical process that turns a pile of data into gold. They are widely seen as a silver bullet that can generate new insights and expert decisions at unprecedented speed and scale.

Yet this ignores the reality that 'deep learning' systems are difficult to create or audit, and most organisations lack the expertise or 'data hygiene' needed to use them effectively.

Despite a recent Oxford University survey of experts estimating a 50 percent chance that AI will outperform humans at all tasks within 45 years, there are numerous complex functions where human intervention remains imperative to provide the level of transparency needed.

The importance of human expertise

Machine-learning systems derive insights from probabilities and correlations so complex that only a trained data scientist can begin to understand them. Machine learning can therefore be a closed book to the very business experts who most depend on it, which disempowers those employees. ML systems are also prone to producing irrational and inexplicable decisions, because it is difficult to work out whether an algorithm derived its decision from some unseen variable in the data, such as an unnoticed feature of an image.

Neural networks also cannot think outside the context of their ‘learning environment’ and thus a neural network is only as good as the data it was trained on. This means they are prone to inheriting biases from data; if an AI is trained to autonomously filter job candidates by analysing a sample of previous recruits, it might reject female candidates if 80 percent of the training sample happened to be male.
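The recruitment example above can be sketched in a few lines. This is a hypothetical simulation, not the method of any real system: a naive "model" that simply learns historical hire rates per group will reproduce whatever bias the training sample contains. All names and thresholds here are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical training sample: 80% of past recruits are male, and the
# historical hiring labels are more generous to male candidates.
training = []
for _ in range(1000):
    gender = "male" if random.random() < 0.8 else "female"
    skill = random.random()
    # Biased historical labels: same underlying skill, different bar.
    hired = skill > (0.4 if gender == "male" else 0.7)
    training.append((gender, skill, hired))

def hire_rate(gender):
    # A naive model that learns nothing but the historical hire rate
    # per gender -- it faithfully inherits the bias in the sample.
    group = [hired for g, _, hired in training if g == gender]
    return sum(group) / len(group)

print(f"male hire rate:   {hire_rate('male'):.2f}")
print(f"female hire rate: {hire_rate('female'):.2f}")
```

Nothing in the code "intends" to discriminate; the skew falls straight out of the data, which is exactly why auditing training samples matters.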

Because neural networks do not follow human rules of logic, they can be prone to spurious correlations. For example, an insurance AI analysing poorly-structured driver data could decide to increase premiums for Renault owners just because Renaults happened to be over-represented among a sample of dangerous drivers.
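A minimal sketch of that spurious correlation, with invented numbers: a model that scores risk purely from how often a make appears among dangerous drivers, ignoring base rates, will penalise Renault owners simply because Renaults are over-represented in the sample.

```python
from collections import Counter

# Hypothetical, poorly-structured sample in which Renaults happen to be
# over-represented among recorded dangerous drivers.
dangerous_drivers = ["Renault"] * 60 + ["Ford"] * 20 + ["Volvo"] * 20

def naive_risk(make):
    # Spurious correlation: fraction of dangerous drivers with this make,
    # with no base rate and no causal link to driver behaviour.
    return Counter(dangerous_drivers)[make] / len(dangerous_drivers)

for make in ("Renault", "Ford", "Volvo"):
    print(f"{make}: naive risk {naive_risk(make):.2f}")
```

The fix is a human one: an expert who knows that car make is not a cause of dangerous driving can spot and remove the confounded feature.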

It is also a myth that neural networks can work 'off the shelf' without human intervention. Effective deployment of deep learning systems requires expert data scientists to assist with everything from procurement to configuration, auditing and data hygiene. This makes neural networks increasingly expensive to implement, because the requisite data science talent is a scarce and increasingly in-demand resource.

The fact that neural networks can only be configured and trained by data scientists means that nobody else in the organisation understands how they work. This hampers the ability of relevant subject matter experts to audit AI decisions internally and undermines an organisation’s ability to justify those decisions to regulators and customers.

At the same time, the ability to reproduce human thinking with machines enables human expertise to be spread and scaled at speed across a company or a country. The process of capturing and codifying human expertise and experience for AIs not only allows machines to reproduce human judgements but also enables the secrets behind expert decisions to be explained and taught to other human employees, helping up-skill the existing workforce. This makes limited human resources stretch further and enables organisations to rapidly respond to increased demand and manpower shortages.

Since AIs will increasingly be helping human professionals from lawyers to fraud prevention teams, it makes more sense for their human ‘colleagues’ to be involved in customising and auditing them. The only answer is a return to ‘rules-based’ AI systems that reflect human thinking and can therefore be configured and audited by relevant subject matter experts.

Encoding human expertise in auditable machines empowers employees to turn their expertise into a 'blueprint' for best practice across a business, one which can improve machines and humans alike and bring consistency to the whole organisation's performance. Such systems also free up human experts to concentrate on more strategic tasks. Rules-based algorithms augment rather than replace human talent, making humans smarter rather than taking their jobs away.
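The rules-based approach described above can be sketched as follows. This is a generic illustration, not Rainbird's implementation; the rule names, thresholds and fields are hypothetical. The point is that every decision is traceable to a named, human-readable rule that a subject matter expert can read, challenge and amend.

```python
# A minimal rules-based decision sketch: explicit, ordered rules that a
# subject matter expert can audit. All rules and thresholds are invented.
RULES = [
    ("reject_if_underage", lambda a: "reject" if a["age"] < 18 else None),
    ("refer_large_claims", lambda a: "refer" if a["claim"] > 10_000 else None),
    ("approve_low_risk", lambda a: "approve" if a["risk_score"] < 0.3 else None),
]

def decide(applicant):
    # Return the decision together with the rule that fired, so every
    # outcome can be explained to regulators and customers.
    for name, rule in RULES:
        outcome = rule(applicant)
        if outcome is not None:
            return outcome, name
    return "refer", "no_rule_matched"

print(decide({"age": 17, "claim": 500, "risk_score": 0.1}))
print(decide({"age": 30, "claim": 500, "risk_score": 0.1}))
```

Because each decision carries the name of the rule that produced it, the audit trail that neural networks lack comes for free.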

Ben Taylor is CTO at Rainbird, an AI-powered automated decision-making platform. An authority on artificial intelligence, Ben is passionate about solving complex challenges through the use of innovative technologies.
