Legal expert compares plans to EU’s ‘gold standard’ approach

Ben Wodecki, Jr. Editor

July 19, 2022

3 Min Read

The U.K. government unveiled plans this week to regulate AI – including commitments for enterprises to adopt ethical AI.

The plans were showcased in a policy paper titled “Establishing a pro-innovation approach to regulating AI” and include proposed rules addressing future risks so businesses can develop safer AI technologies.

Also released were six principles that regulators must apply to AI systems deployed in their respective sectors.

The likes of media regulator Ofcom, the Financial Conduct Authority and energy regulator Ofgem will be tasked with ensuring businesses follow the principles – unlike the EU’s AI Act, which would enforce governance through a central regulatory body.

The six principles that regulators would need to apply are the following:

  • Ensure that AI is used safely

  • Ensure that AI is technically secure and functions as designed

  • Make sure that AI is appropriately transparent and explainable

  • Consider fairness

  • Identify a legal person to be responsible for AI

  • Clarify routes to redress or contestability

The U.K. government argued that this approach would support growth, increase AI adoption and remove “unnecessary barriers being placed on businesses.”

“We want to make sure the U.K. has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work,” said U.K. digital minister Damian Collins.

“It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

These regulatory plans come amid a British government in chaos. Prime Minister Boris Johnson was ousted, and a lack of ministers means legislation cannot be passed, including the Online Safety Bill, which would regulate how online platforms protect users and their data.

Collins has only been in the post since early July. He replaced Chris Philp, one of almost 60 ministers who chose to resign over Johnson’s premiership after a string of scandals, the most recent being the appointment of a minister whom Johnson knew was the subject of sexual abuse claims.

Prior to replacing Philp, Collins chaired Parliament’s Digital, Culture, Media and Sport Committee.

‘Practical risk’ of losing out to the EU

The government was expected to release a white paper outlining its plans to regulate AI towards year-end.

According to AI lawyer Tom Sharpe from Osborne Clarke, the paper’s sudden release “may have been prompted by the current upheavals in the U.K. government.”

The senior associate warned that, even with a pro-innovation approach, there is a “practical risk” for U.K.-based AI developers that the EU’s AI Act becomes the ‘gold standard’ if they want their products to be used across the bloc.

“To access the EU market, the U.K. AI industry will, in practice, need to comply with the EU Act in any case,” he added.

Sharpe suggests that the principle requiring fairness to be considered may create a headache over how regulators interpret ‘fairness.’

“It will be interesting to see how rights of appeal develop in this area. A decision on fairness by a public body would usually be subject only to the more limited right to seek judicial review – essentially a procedural check that the decision was reached in the manner required, rather than a check that it is correct on its merits,” he said.

On the principle that AI technologies be 'appropriately transparent and explainable,' the lawyer believes that in some high-risk circumstances regulators may deem that decisions which cannot be explained should be prohibited entirely, "for instance in a tribunal where you have a right to challenge the logic of an accusation."

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
