UK Regulators Call for Action on AI Bias, IP Rights

Government pressed to keep AI governance in step with innovation amid fears the U.S. and EU are ahead

Ben Wodecki

September 4, 2023

MPs call for AI governance that will "safely harness the benefits" of AI.

At a Glance

  • U.K. members of parliament are urging the government to rapidly develop AI regulation to avoid falling behind.
  • They caution that the U.K.'s current hands-off approach to AI has left it lagging behind countries setting global standards.

A British parliamentary report has called on the U.K. government to address essential challenges for AI safety, including issues around intellectual property and employment.

The interim report, published by the Science, Innovation and Technology Committee, warns of “a growing imperative” to ensure AI governance and regulatory frameworks aren’t left “irretrievably behind by the pace of technological innovation.”

“Policymakers must take measures to safely harness the benefits of the technology and encourage future innovations, whilst providing credible protection against harm,” the report reads.

The report, penned by members of parliament (MPs), sets out 12 challenges of AI governance that need to be addressed in future frameworks.

Among them are issues around bias in AI systems, using AI to deliberately misrepresent someone and promoting transparency.

The 12 challenges are:

  1. Bias. AI can introduce or perpetuate biases that society finds unacceptable.

  2. Privacy. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.

  3. Misrepresentation. AI can allow the generation of material that deliberately misrepresents someone’s behavior, opinions or character.

  4. Access to Data. The most powerful AI needs very large datasets, which are held by few organizations.

  5. Access to Compute. The development of powerful AI requires significant computing power, access to which is limited to a few organizations.

  6. Black Box. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.

  7. Open source. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.

  8. Intellectual Property and Copyright. Some AI models and tools make use of other people's content: Policy must establish the rights of the originators of this content, and these rights must be enforced.

  9. Liability. If AI models and tools are used by third parties to cause harm, policy must establish whether developers or providers of the technology bear any liability for that harm.

  10. Employment. AI will disrupt the jobs that people do and that are available to be done. Policymakers must anticipate and manage the disruption.

  11. International Coordination. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.

  12. Existential. Some people think that AI is a major threat to human life: If that is a possibility, governance needs to provide protections for national security.

The inclusion of a point on copyright follows an earlier report in which MPs called on the government to follow through on its February pledge to scrap rules allowing AI developers to train models on protected works. No such move has yet been made.

MPs warn the U.K.'s pro-innovation approach risks it falling behind

In late March, the U.K. government published a white paper that tasked regulators with setting individual rules relating to AI, arguing for a more "pro-innovation approach" to AI compared with counterparts like the EU.

In response to this white paper, MPs on the Science, Innovation and Technology Committee said it "should be welcomed as an initial effort" but warned it risks being outpaced by the technology's development.

MPs argued that the U.K. risks falling behind as other jurisdictions set the international standards.

The committee called on the government to introduce a “tightly-focused AI bill” to “help, not hinder” the country’s attempts to become a leader in AI governance. Prime Minister Rishi Sunak has routinely sought to position the country as an AI leader.

“Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the U.K. can offer,” the MPs argued.

Just this week, the government set out the ambitions for its upcoming global AI safety summit in November.

There have been concerns over whether nations like China, which has taken a more authoritarian approach to AI, should be invited to the event. The report states that invitations should be “extended to as wide a range of countries as possible” and that a forum should be established solely for “like-minded countries who share liberal, democratic values, to ensure mutual protection against those actors—state and otherwise—who are enemies of these values and would use AI to achieve their ends.”

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
