Regulating AI in the UK: The need to balance transparency, trust, and tech

Westminster eForum speakers call for increased public education on algorithms

Ben Wodecki, Jr. Editor

July 9, 2021

4 Min Read

Westminster eForum conference tackles AI

The UK considers itself to be one of the world leaders in AI, but issues around public trust and algorithmic transparency took center stage during discussions at the latest Westminster eForum event.

Speakers at the Future of AI in the UK policy conference stressed that any potential AI regulation must be ethical and take into consideration the views of both experts and the wider public.

The UK government is currently working on a National AI Strategy, and the Johnson administration outlined its intention to “unleash the transformational power of AI” as part of the country’s 10 tech priorities for 2021.

Whatever the government proposes needs to be both “tangible and practical,” said Stephen Bonner, executive director of regulatory futures and innovation at the Information Commissioner’s Office (ICO).

Comparing the UK to the EU

Stephen Metcalfe MP, who chairs the All-Party Parliamentary Group on Artificial Intelligence, told attendees that parliamentarians have developed a more “nuanced” understanding of AI than they had prior to the group’s inception five years ago, with dystopian visions of an AI-powered armageddon now cast aside.

The Tory MP for South Basildon and East Thurrock said that MPs were previously subject to “a lot of misinformation” when it came to AI, but now he seemed optimistic that his colleagues understand the technology better.

“The UK has the power to lead the world in creating appropriate regulation in AI, behind China and the US,” he suggested.

On the plans to potentially regulate AI, Sana Khareghani, who heads the UK’s Office for Artificial Intelligence, said that her team has been spending “a lot of time” talking with industry and academia while penning the upcoming National AI Strategy. She told attendees that the Alan Turing Institute has made it a priority to listen to as many views as possible.

Referencing the EU’s plans to regulate AI, Khareghani said the proposed legislation was “noted” and described it as “a very interesting piece of work.”

The proposed EU regulation would require AI systems to be categorized by their trustworthiness and potential impact on citizens’ rights. Systems found to be infringing human rights would be banned from sale.

“The EU is making very fast strides on governance,” Khareghani said, adding that the UK is aiming to play a leading role as well.

When developing a regulatory regime, Britain’s main consideration should be how competitive it wants to be in AI, said Dr. Darminder Ghataoura, head of AI and data science at Fujitsu. “Regulation, and how much we align to certain other laws globally, should be looked at from a data perspective,” he added.

CMS Law partner Dr. Sam De Silva suggested the need to consider whether AI regulation was actually required – “there are other areas of tech that aren’t regulated,” he said.

“The EU has had a fair go at this, but the problem with trying to regulate tech is that law takes a long time to create – and tech moves fast.”

Algorithms and excitement

Ghataoura suggested UK legislators should come up with an archive-based, searchable way of understanding algorithms. He reminded attendees that the public has little insight into algorithms until “something goes wrong.”

An example he offered was Ofqual, the UK’s exam regulator, using an algorithm to determine GCSE and A-level exam grades amid school closures during the pandemic – with around 40 percent of pupils given grades lower than their teachers’ assessments.

Ghataoura said that an AI registry would “help everyone get acquainted” with algorithms and even allow some businesses to see if a certain model is appropriate for their needs.

Transparency and education were cited as vital considerations for any AI regulation in the UK by Natalia Domagala, the Cabinet Office’s head of data ethics policy.

“We need to ensure the highest ethical standards of models and we need to educate the public about their impact,” she said, adding, “We want to ensure the public understand AI – to achieve that, there is a wider challenge of educating the public.”

Echoing Domagala’s comments, Jessica Smith, deputy director at the Centre for Data Ethics and Innovation, said that public engagement would be essential to developing effective governance of AI.

“With all the excitement about AI, it’s important not to lose sight of what it’s all about – improving society with AI,” Smith added.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

