Expert View: How US, UK and EU Differ in Regulating AI

The U.S. and U.K. are taking a far different approach than the EU. Will there be a 'Brussels effect'?


The U.S., U.K. and EU are important players in the development and use of AI globally, and the regulations they introduce will likely influence policy makers elsewhere.

Perhaps unsurprisingly in the post-Brexit era, the U.K. and the EU are taking very different paths. Whilst the EU harbors ambitions to be the de facto super-regulator for all things AI, the U.K. is pushing ahead with a more principles-based and less centralized approach. On this, the U.S. seems more aligned with the U.K., taking the view that excessive regulation will hamper innovation.

The EU rule book

The EU has proposed a package of rules designed to ensure that consumers can trust the AI tools that companies increasingly embed in their products and services. This legal framework, when it finally comes into force, will be spread across a new AI Act, a new AI Liability Directive and a revised Product Liability Directive. A flagship EU initiative, the package will introduce the first truly comprehensive set of rules governing AI developers and operators while protecting users.

The AI Act takes a risk-based approach to regulating a wide range of AI applications across all sectors, except systems developed exclusively for military use. Companies using AI systems, especially those classified as high-risk, will need to implement comprehensive compliance programs. These requirements are reinforced by the AI Liability Directive, which sets out a new liability regime clarifying, on an EU-wide basis, how claims for damages involving AI systems will be handled.


AI systems posing what the EU calls an ‘unacceptable risk’ will be prohibited across the region entirely, whilst many other AI systems will fall into the high-risk bucket. High-risk systems include products covered by EU product safety regulation, such as toys and medical devices, as well as certain AI applications in areas such as:

  • Biometric identification,

  • Critical infrastructure,

  • Educational and vocational settings,

  • Recruitment and workplace management,

  • Access to essential private and public services including credit scoring systems,

  • Law enforcement,

  • Migration, asylum, and border control, and

  • The administration of justice and democratic processes.

Developers and providers of high-risk AI systems will need to carry out pre-deployment ‘conformity assessments’ to demonstrate that their systems meet the relevant requirements, and the results will need to be reported to independent oversight authorities in each member state (notified bodies). In some cases – such as biometric identification systems like facial recognition – the assessment must be performed by the notified body itself.

A conformity assessment will also be needed whenever a high-risk AI system undergoes substantial modification. Developers and producers of high-risk systems will also have to perform post-market monitoring and register their systems in an EU database.

As for AI systems that pose minimal or no risk – such as video games and spam filters – no specific regulatory requirements will apply. However, certain use cases, such as deepfakes, chatbots, and other automated systems made for human interaction, will be subject to transparency requirements and will need to make sure consumers know they are interacting with an AI system or with manipulated content.

EU member states will set the level of penalties for non-compliance; however, penalties of up to 6% of a company’s total worldwide annual revenue or €30 million, whichever is greater, are mooted for the worst offenses.

To comply, companies will need to manage the risks arising from such systems by implementing a range of measures, including comprehensive quality and risk management systems, incident reporting processes and procedures, governance and oversight, and the publication of technical documentation for a high-risk AI system before it is placed on the market or put into service.

U.S. and U.K. take lighter-touch approach

Compared to the EU, whose proposed comprehensive rules mean that most AI systems and tools placed on the EU market or used by people in the EU will be heavily regulated, the U.K. government has set out a much lighter-touch, business-friendly approach.

According to a 2022 policy statement issued by the U.K. government, it plans to set out some high-level principles and leave their implementation to existing U.K. regulators. A government white paper on the subject is expected to provide more detail, although there appear to be no plans for anything resembling the AI Liability Directive; instead, specific liability issues will be handled on a sector-by-sector basis (for example, for driverless vehicles).

In the U.S., whilst regulatory guidelines have been proposed by several federal agencies and by state and local governments, centralized legislative moves have been more limited. They include tasking the National Institute of Standards and Technology (NIST) with developing a voluntary risk management framework for trustworthy AI systems, and the National AI Initiative Act, which coordinates work across multiple U.S. government agencies to implement a national AI strategy.

There is also the Blueprint for an AI Bill of Rights, which seeks to enshrine key principles designed to promote the effective governance of AI.

Will EU influence U.S., U.K. rules?

The U.S. and U.K. governments appear far more concerned than the EU with preserving international competitiveness and not stymieing innovation amongst AI developers; the EU’s comprehensive rules, by contrast, are intended above all to protect the fundamental rights of individual EU citizens.

However, it may be that the new EU AI regulatory framework will galvanize other policy makers to look at introducing similar regulations. This ‘Brussels effect’ was seen after the EU brought in its General Data Protection Regulation in 2018, with other jurisdictions aping the EU’s regulatory approach towards privacy and security of personal data.

If the U.S., U.K. and other major economies go their own way, there is a risk of a patchwork of competing regulatory systems. As AI-based platforms, systems and tools become ever more ubiquitous, regulatory segmentation risks creating barriers to international trade and inconsistent government oversight – whereas what businesses want is a unified international approach that governs global AI supply chains whilst promoting best practices and protecting individuals.

About the Author(s)

Tim Wright, partner at Fladgate LLP

Tim Wright is a partner at the law firm of Fladgate LLP in London.

