The AI White Paper has bold aims: to outline the EU’s approach to creating an ecosystem of trust and excellence for AI, signalling the start of a process that is anticipated to result in a regulatory regime that will influence the way we think about AI across the globe.

Max Smolaks

March 27, 2020


by Roch Glowacki and Elle Todd, Reed Smith

Given its reputation as a regulatory trendsetter, it’s fair to say that the European Commission’s (EC) recent AI White Paper was highly anticipated, as the culmination of years of painstaking effort by the EU to establish itself as an AI thought leader. The AI White Paper has bold aims: to outline the EU’s approach to creating an ecosystem of trust and excellence for AI, signalling the start of a process that is anticipated to result in a regulatory regime that will influence the way we think about AI across the globe.

It must be remembered that Europe has already proven its legitimacy as an exporter of regulatory approaches. For instance, despite its flaws, the introduction of the GDPR set expectations for data protection and privacy regulation across the world. Indeed, the EU had already paved the way on AI regulation with its ethics guidelines for trustworthy AI in 2019.

Despite this, the Paper has received a barrage of criticism since its release, with many attacking the EC’s approach to AI as underwhelming and flawed. However, much of the criticism fails to appreciate the delicacy of the thorny issues at play and the challenge faced by policymakers.

What are the EU’s priorities for AI regulation?

The EC’s primary goals are to foster public trust in AI and to enable Europe to take advantage of changing data flow paradigms and the gathering pace of edge computing. This means continuing to follow an ethics-first approach, avoiding internal market fragmentation, promoting open data initiatives and establishing a risk-based approach to governing a technology that is set to create a fundamental shift in our daily lives. Specifically, the EC wants to ensure the legitimacy, quality and accessibility of the data that is fueling our economy, particularly in high-stakes sectors such as financial services and healthcare.

Naturally, the AI White Paper was never going to be received with open arms by everyone. However, many of the proposals shouldn’t have come as much of a surprise: they strive to balance ethics against technological progress by closely following the framework set out in the ethics guidelines for trustworthy AI drawn up by the EU’s expert group.

These guidelines span seven key ethical requirements, of which principles of transparency and accountability are particularly controversial. Transparency means ensuring that erroneous AI decisions are traceable and explainable, whilst accountability envisages providing users with appropriate redress mechanisms for when AI solutions fail. Far from being created in a vacuum, the new rules reflect Europe’s long-standing tradition of embracing a human rights-centric agenda. Few would disagree with these commendable aspirations for AI, but applying these principles in practice means that constructing an effective AI regulatory framework is a constantly evolving challenge.

What are the key criticisms leveled against the AI White Paper?

Many commentators have accused the proposals of not going far enough. In particular, the EC’s approach to facial recognition technology was felt to be lacking in rigor. A leaked draft initially suggested a five-year blanket ban on the headline-dominating use of facial recognition technology in public spaces, following several legal challenges to its use by police forces. Whilst such a ban would likely have been criticized as premature, many believe the EC has gone too far the other way and treated the issue too lightly. Equally, it is a sign that the EC is trying to tread carefully without embracing a radical agenda that could stifle innovation.

The rise of AI nationalism in other parts of the world has also amplified some of the concerns. The European approach could be perceived as reinforcing a wider trend of governments protecting national interests by hampering data flows. Concerns center on the possible creation of a few digital trade zones, a prospect that has already prompted the G20’s ‘Osaka Track’, a framework for promoting cross-border data flows. This criticism, however, fails to appreciate the need for a coordinated approach to AI at the EU level to preempt internal and global market fragmentation.

There are also unanswered questions about the overarching AI governance structure and the utility of the proposed criteria for identifying AI deployments that should attract increased levels of scrutiny and regulation. While the Paper doesn’t hold all the answers and the EU’s proposals do lack some finer points of detail, the direction of travel is clear: an ethics-driven, risk-based and targeted approach to AI.

It is worth recalling that, following the EC’s 2012 proposal to strengthen online privacy rights and the digital economy, it took over six years for the GDPR as we know it to come into force. The current AI debate, therefore, is just the beginning of what is expected to mature into a world-leading AI regulatory regime. The high stakes of AI regulation are, of course, bound to continue to attract criticism. With the consultation process still open, many of the initial wrinkles are expected to be ironed out as the EU continues to flesh out the direction of its AI policy. Watch this space.

Roch Glowacki is Senior Associate, and Elle Todd is Partner, at Reed Smith, a global law firm with more than 1,500 lawyers in 28 offices throughout the US and EMEA.

