Regulators have 12 months to come up with sector-specific AI rules

Ben Wodecki, Jr. Editor

March 29, 2023


At a Glance

  • The U.K. publishes a white paper outlining principles regulators must consider when making rules for AI adoption.
  • The U.K. is adopting a ‘light touch’ approach versus the legislative path the EU is taking.

The U.K. government has published a white paper containing principles for regulating AI in a way that it believes will not stifle innovation.

The paper, AI regulation: a pro-innovation approach, outlines five principles to guide its regulators: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Instead of creating a single new regulator to tackle AI, existing regulators will take on this added responsibility.

The white paper, which follows AI spending provisions in the recent budget, states the need to have “clear routes to dispute harmful outcomes or decisions generated by AI.”

It also calls for organizations developing and deploying AI to be able to explain a system’s decision-making process in detail that “matches the risks posed by the use of AI.”

Following the publication of the white paper, U.K. regulators such as Ofcom and the Engineering Council have 12 months to issue practical guidance on AI that incorporates these principles. Also, legislation “could be introduced” to ensure regulators “consider the principles consistently,” the government said.

“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely,” said U.K. science and tech secretary Michelle Donelan.

A notable inclusion in the white paper is the need for regulators to ensure AI complies with existing laws in the U.K., including the General Data Protection Regulation (GDPR), a holdover from before Brexit. Donelan is spearheading the U.K.’s attempt to scrap the GDPR and replace it with a British-only version as the Sunak administration seeks to remove all remaining EU-derived laws.

Legislation avoidance

The white paper is the result of a series of inquiries into the governance of AI that began last September under the short-lived Truss administration.

In unveiling the white paper, the U.K. government said its goal was to “avoid heavy-handed legislation which could stifle innovation” and to take an adaptable approach to regulating AI.

The U.K.’s approach differs from the EU’s proposed AI legislation, which would categorize each system according to its level of risk. Critics have argued the EU approach is too vague and difficult to enforce.

The U.K. has pressed on. In 2021, the government unveiled its National AI Strategy at the AI Summit London. The following year, it published a specific AI strategy for defense as well as a roadmap for AI implementation in the country’s public health care provider, the National Health Service.

The U.K. is adopting a more flexible approach similar to that of the U.S., which has avoided substantial legislation and instead issued a Blueprint for an AI Bill of Rights, a non-binding set of principles outlined in what is essentially a white paper.

However, the U.K. is keeping tabs on generative AI, specifically chatbots like OpenAI’s ChatGPT. A junior minister recently confirmed that conversational AI tools will be included in the scope of the prospective Online Safety Bill, which would regulate how online platforms protect users and their data.

Prime Minister Rishi Sunak has been keen to encourage AI adoption. He has actively encouraged using AI to cut record wait times for health care. He also personally intervened in an attempt to convince chipmaker Arm to go public in London, though to no avail.

‘Suck it and see’ approach?

Time will tell if the U.K. government’s sector-by-sector approach has the desired effect, Fladgate tech lawyer Tim Wright said.

“What it does do is put the U.K. on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator."

Tom Sharpe, an AI lawyer from Osborne Clarke, said the risk with the U.K.'s ‘light touch’ approach is that while it is suitably fast-moving and flexible, it also “creates a complicated regulatory patchwork full of holes.”

"In comparison to the EU's top-down regulatory framework, the U.K. is taking something of a ‘suck it and see’ process, with a bottom-up approach gradually identifying which holes need legislative patches,” he said.

"However, given how the EU's AI Act is tying itself up in knots over definitions of high risk, what to do about generative AI, etc., a ‘light touch,’ sector-focused approach is starting to feel like it might be better. It probably maps more appropriately to how technology advances, being more flexible and adaptable, and won't have to wait for the legislative process."


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
