An opinion piece by the EU regulatory partner at Reed Smith LLP


Digital technologies, particularly artificial intelligence (AI), pose challenges to governments’ traditional rulemaking. For the most part, existing rules and norms were not designed to handle these technologies and their new business models, which blur the lines between economic operators along the traditional value chain.

Meanwhile, the tendency for digital technologies to develop faster than the regulation governing them makes them a legislative headache. Digitalization also makes it harder to attribute responsibility for damage or harm that the use of technology causes to end users, especially as AI-embedded services become increasingly autonomous and self-learning.

Moreover, the global reach of AI is a particular concern, as the technology pays no regard to national or jurisdictional boundaries.

Europe paving the way

The EU believes that the best way to tackle these challenges and provide businesses with clarity and certainty is to implement AI-specific guardrails. To that end, the European Commission proposed its highly anticipated AI Act on April 21, 2021. The proposed Act is currently under intensive review by the two co-legislators (the European Parliament and the Council), and final agreement is expected sometime in late 2023.

With the legislation predicted to affect up to 35% of AI systems used in Europe, the Act will apply to private and public sector actors that are providers or users of an AI system. As with the EU General Data Protection Regulation (GDPR), the Act will apply to providers in third countries (including the U.K.) who place services incorporating AI systems on the EU’s single market, as well as to providers whose AI systems produce outputs that are used in the EU.

Four levels of risk

The level of risk created by an AI system will determine the obligations the Act imposes on its providers and users. The regulation has four tiers, starting with an outright ban on AI systems deemed to pose an unacceptable risk, such as real-time remote biometric identification in publicly accessible spaces by law enforcement.

AI systems deemed high risk will be subject to extensive obligations, while minimal- or no-risk AI systems (such as spam filters) will be left largely unregulated, though their providers will be encouraged to adhere to voluntary codes of conduct.

Finally, limited-risk AI systems, such as chatbots, will be subject to transparency obligations and may similarly choose to adhere to voluntary codes of conduct. Interestingly, depending on the exact application, an AI system may jump from limited risk to high risk, such as a chatbot that advises on eligibility for a loan.

Of these four categories, the Act is mostly concerned with high-risk AI systems, which fall into two groups. The first comprises AI systems embedded in products or used in sectors the EU already regulates, such as medical devices, motor vehicles, aircraft, toys, and industrial machinery; these will be incorporated into existing conformity assessment processes.

The second group is detailed in an exhaustive but broad list of stand-alone AI systems that affect fundamental rights when used in specific settings such as hiring, employee management, biometric identification, and credit scoring, as well as AI used by authorities for access to public services or for law enforcement.

Providers of high-risk AI systems will need to implement a risk management process across the entire lifecycle; conform to data and data governance standards; document their systems in detail; systematically record the systems’ operation; provide users with information about how the systems function; and enable human oversight and ongoing monitoring. Several of these obligations will be onerous and difficult to achieve, and will lead to high administrative and compliance costs.

A growing framework

As if the proposed AI Act and other major new pieces of legislation covering the digital economy were not enough to overwhelm even major corporations, on Sept. 29 of this year the European Commission proposed a brand-new AI Liability Directive (alongside a revamp of the existing Product Liability Directive). The proposed fault-based liability framework essentially aims to lower the evidentiary hurdles that claimants injured by AI-related products or services face when bringing civil liability claims.

The U.K. is also shaping its future AI regulatory model. Its current light-touch approach differs from the EU’s prescriptive approach under the Act. In its policy paper of this past July 18, titled ‘Establishing a pro-innovation approach to regulating AI,’ the government stated its desire not to enact an AI-specific law.

While recognizing that AI is already partially regulated through a patchwork of legal and regulatory requirements built for other purposes, the U.K. prefers to set out high-level, cross-sectoral principles, such as the safe, fair and transparent use of AI, rather than a single, centralized AI framework.

These cross-sectoral principles will allow a plethora of regulators to establish risk-based criteria and thresholds at which additional requirements come into force. However, to avoid overwhelming businesses operating in this space, this will preferably be done through guidance or voluntary measures. Only time will tell whether the U.K. will follow the EU’s heavy-handed approach or set off down its own legislative path for AI regulation.

About the Author(s)

Wim Vandenberghe, Reed Smith LLP partner

Wim Vandenberghe is the EU regulatory partner at Reed Smith LLP.

