Why the ethical overhaul of the government’s approach to AI is long overdue

by Max Smolaks

How should the new British government approach the question of regulation?

by James Duez, Rainbird 20 January 2020

Whatever your political allegiances in the wake of the election, it is in all of our interests to ensure a cross-party consensus on the ethical use of AI.

The issue may have gone somewhat under the radar of this election campaign, but making AI and data usage fair and accountable is integral to securing the kind of society we all want to live in.

Bringing algorithms back into human control

As a unifying, level-playing-field set of guidelines, the EU's seven key principles for trustworthy AI are an exemplary blueprint for how our next government, whatever its make-up, should approach AI ethics.

Our politicians have so far been slow to follow suit. While it is encouraging to see that the Labour manifesto put emphasis on ensuring data protection for NHS patient information, it is disconcerting to hear Boris Johnson's blanket assertion last week that EU regulations present a "barrier" to growth.

The Liberal Democrats’ 2019 General Election manifesto contained the most detailed approach to technology, AI and data. Their “Lovelace Code of Ethics” for AI application is designed to ensure the use of personal data and artificial intelligence is "unbiased, transparent and accurate, and respects privacy”. They propose convening a citizens’ assembly to determine when it is appropriate for the government to use algorithms in decision-making, which would introduce much-needed human control over the issue.

This is an intuitively sensible idea. Since AI will increasingly become integral to the day-to-day workings of every department, it makes sense for there to be a human ‘layer’ involved in customizing and auditing these systems.

A move away from the black box approach

There have been numerous recent controversies surrounding the so-called ‘black box’ approaches to algorithmic data collection and analysis. These have triggered a greater awareness of the need for adequate transparency and training in AI implementation.

It’s a difficult time to surface such important technical news amid the noise of repetitive soundbites like “Get Brexit Done” or “Save the NHS”, but these issues have not received nearly enough airtime in this election cycle.

Most recently, machine learning systems have come under fire for their use in making benefits decisions, demonstrating how the irresponsible deployment of complex, opaque algorithms can expose the public sector to a legal and ethical minefield with significant risks and costs for us taxpayers.

On the flip side, organizations that deploy transparent AI systems, forged not from statistical data but from human expertise, stand to benefit significantly. Such systems can be probabilistic yet also entirely auditable.

Turning ‘human knowledge into machine intelligence’ has for years been a transformative concept, and it has at last become reality. If you turn your best person’s decision-making capabilities into software, you enable all human workers to perform to a higher standard. It’s like taking your best person and scaling what they do digitally across a broad workforce, sharing skills, saving time and, most importantly, producing better-quality outcomes that are readily interpretable.
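To illustrate the idea, here is a minimal sketch (not Rainbird's actual platform, and the eligibility rules are entirely hypothetical) of how expert knowledge can be encoded as weighted rules that yield a probabilistic decision together with a full, human-readable audit trail:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """A single piece of encoded human expertise."""
    name: str
    condition: Callable[[dict], bool]  # maps case facts to True/False
    conclusion: str
    confidence: float                  # the expert's certainty in this rule (0-1)

@dataclass
class Decision:
    conclusion: str
    confidence: float
    audit_trail: list = field(default_factory=list)

def decide(rules: list[Rule], facts: dict) -> Decision:
    """Fire every matching rule, keep the most confident conclusion,
    and record exactly which rules contributed - so the outcome can
    be audited and explained after the fact."""
    decision = Decision(conclusion="refer to human", confidence=0.0)
    for rule in rules:
        if rule.condition(facts):
            decision.audit_trail.append(
                f"{rule.name}: concluded '{rule.conclusion}' "
                f"(confidence {rule.confidence:.0%})")
            if rule.confidence > decision.confidence:
                decision.conclusion = rule.conclusion
                decision.confidence = rule.confidence
    return decision

# Hypothetical benefits-eligibility rules written down by a domain expert
rules = [
    Rule("income_check", lambda f: f["income"] < 15000, "eligible", 0.9),
    Rule("savings_check", lambda f: f["savings"] > 16000, "ineligible", 0.95),
]

result = decide(rules, {"income": 12000, "savings": 20000})
print(result.conclusion)   # the highest-confidence conclusion wins
print(result.audit_trail)  # every rule that fired, with its reasoning
```

Unlike a statistical black box, every decision here can be traced back to a named rule authored by a human expert, which is what makes the system auditable.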

All this automation frees up human experts to concentrate on the tasks that machines are rubbish at: truly listening to, understanding and caring for customers.

How to eradicate unethical AI at the source

As we have learnt so many times before, campaign promises often fail to translate into social benefit. Previous governments have set out a myriad of voluntary codes of ethics, and companies have pledged allegiance to these various directives without facing any serious consequences for failing to abide by them.

To force the issue, the Centre for Data Ethics and Innovation should be granted the power to reprimand those in breach of our ethical standards. They should have the mandate and resources to ‘pull the plug’ on algorithms deemed unethical or dangerous. The sooner this becomes unanimous policy across our political spectrum, the better chance we have of preventing a repeat of incidents such as gender-recognition cameras that only work on white men, or a potentially discriminatory Home Office visa application system. The Liberal Democrats have already pledged these powers and the main parties are catching up. It’s disappointing that it has taken so long.

While many business leaders and everyday consumers are becoming more aware of these issues, it is time for collective, decisive action. The EU’s guidelines are a unifying set of rules and the perfect starting point. We must take control of the transparency, accuracy and ethics with which we design the algorithms that increasingly determine our life’s course and move away from the ‘black box’ approach to ‘big data collection and analysis’.

If we can ensure that the AI that impacts us all is accountable, we can take a huge step towards ensuring that AI is working for us, and not against us.


James Duez is CEO at Rainbird, a British company that develops an AI-powered, automated decision-making platform
