Experts from Baker McKenzie discuss potential changes to EU law

AI Business

February 24, 2020

10 Min Read

Last week, the European Commission launched a whitepaper that will shape its approach to regulating artificial intelligence systems. The 27-page document outlines proposals for new rules and tests, including those around legal liability for tech companies. One of its stated aims is to level the playing field between technology giants from the US and homegrown European firms.

The document is not a legally binding text, but a statement of intent, and the Commission has launched a public consultation on the proposals, running until 19 May; the responses will inform the regulatory regime across the European Union.

To find out more about what the proposals mean in practice, AI Business spoke to three legal professionals from Baker McKenzie, one of the world’s largest law firms.

Balancing risk and reward

by Raffaele Giarda
Chair of Baker McKenzie’s Technology, Media & Telecommunications Industry Group


With its whitepaper, the European Commission aims to set the path forward for regulating AI, which it rightly describes as one of the most important applications of the data economy. The Commission does not yet propose specific legislation, nor does it yet answer pressing and complex questions such as who should be responsible for harm caused through AI and how to ensure the regulatory framework is sufficiently flexible to accommodate further technological progress while providing much-needed legal certainty.

While the European Commission undoubtedly aims to build a clear European regulatory framework for AI (rather than a fragmented country-by-country approach), it takes the view that it is premature to propose specific rules at this stage and instead opens a public consultation giving business and other stakeholders the opportunity to help shape a future AI governance framework.

That said, the whitepaper does provide some interesting insights into what this framework would look like:

  • The framework would prescribe a number of mandatory legal requirements for so-called "high-risk" AI applications only, in order to keep the regulatory intervention proportionate. As a result, many AI applications would fall outside the scope of the framework. The Commission proposes two cumulative criteria for determining whether an AI application is "high-risk": the sector in which the AI application is employed and the actual use case. Interestingly, the high-risk sectors provisionally mentioned in the whitepaper are healthcare, transport, energy and parts of the public sector. These criteria will require a lot of further thinking and are an area that businesses may want to comment on as part of the consultation.

  • The whitepaper touches on the types of mandatory legal requirements that would apply to such high-risk AI applications. These are the "usual suspects" and include an appropriate degree of human oversight, adequate training data, record keeping requirements, transparency, robustness and accuracy. They are another area that businesses may want to comment on during the consultation.  

It will come as good news to business that, in its whitepaper, the European Commission frequently highlights the fact that AI, and technology in general, are a force for good and a critical enabler in solving some of the world's most pressing challenges, such as the fight against climate change. It further states the need to promote and accelerate the uptake of AI in Europe and makes the point that Europe is way behind North America and Asia when it comes to investment in research and innovation. It pledges to significantly increase investment in these areas and to facilitate the creation of European excellence and testing centers that attract best-in-class researchers.

The challenge is finding the right balance between, on the one hand, creating an ecosystem in which AI can flourish and ensuring Europe becomes a global leader in technology, and, on the other, protecting society from the risks such technology may bring. Fundamental human rights, such as the right to privacy, human dignity, freedom of expression and non-discrimination, are at stake. But so, arguably, is Europe's economic future. So we must press ahead and embrace a future in which AI will play a central role.

Facial recognition technology

The whitepaper specifically addresses the use of facial recognition technology in public spaces, which in recent months has attracted much attention from the media, governments, regulators and the general public as new uses of the technology proliferate with limited oversight. While recognizing that numerous socially beneficial use cases exist for this technology - think of its potential to increase security in public spaces through responsible use by law enforcement - the European Commission categorically considers it high-risk because of the significant threat it poses to human rights and civil liberties.

There is no mention of the previously discussed policy measure of a temporary moratorium. But rather than charting a clear way forward for this technology, the Commission foreshadows a broad European debate on, firstly, the specific circumstances, if any, which might justify the technology's use in public spaces and, secondly, common safeguards. This does not come as a surprise and is ultimately intended to build public trust in, and acceptance of, this potentially intrusive technology before allowing its use more widely. This approach might also help build a European consensus, rather than a fragmented Member State approach, on whether this technology should be permitted at all and, if so, how to impose responsible limits on its use.

Looking beyond Europe, different regions are at different stages of the debate around facial recognition technology. Notably, cultural norms seem to heavily influence the direction of travel across continents. While various technology-specific laws are being introduced in the US, across Asia Pacific the use of this technology seems to be more accepted and calls for regulation seem less pressing.

Important questions remain


by Sue McLean
Global tech lead for FinTech and Blockchain at Baker McKenzie

Rather than unveiling new rules for AI, the Commission sets out the risks posed by AI, the existing laws that apply to it, and its intention to update those laws to fix any gaps that may exist. The Commission says it would like strict rules for high-risk systems, such as those used in health, policing and transport, and a voluntary labeling scheme for low-risk applications; there is also discussion of AI and ethics.

But the whitepaper does not include any detailed proposals for new regulation, and the Commission has backtracked on its original proposal for a five-year moratorium on facial recognition in public spaces. So we remain in ‘wait and see’ mode as to what new regulation the EU will actually seek to introduce on AI.

The Data Strategy is a lot more interesting and significant, outlining the EU's data ambitions and setting out a broad range of proposals covering data sharing, cloud, IP law, anti-trust and tech sovereignty. But it also raises a range of questions:

  • How can Europe create its own tech giants and really compete in the global data economy when rivals in the US and China don’t have the EU's strict data privacy laws to navigate?

  • There's a big focus on extracting value out of industrial data, but how big a market is there for the B2B sharing of industrial data? Is it really only anti-trust concerns and the lack of a clear data sharing framework that prevent businesses from voluntarily sharing non-personal data at the moment?

  • Also, the Commission wants to facilitate voluntary data sharing, but what exactly does it have in mind when it talks about “addressing barriers on data sharing and clarifying rules for the responsible use of data”?

  • Interestingly, the Commission indicates it may introduce new rules mandating a data portability right where a market failure is identified in a particular sector. This would appear similar to the UK government's smart data proposals, which involve extending open banking principles to other markets, including energy, telecoms and digital platforms.

  • Post-Brexit, the UK won't need to follow EU rules. So, if the EU is too heavy-handed in regulating AI and data, this could provide a good opportunity for the UK tech sector.

Improving competition


by Joanna de Fonseka
Senior Associate in Baker McKenzie’s Technology Group

The EU's new data strategy seeks to position the EU as a competitive market to commercialize data, while preserving high privacy, security, safety and ethical standards. One of the key proposals is to create European-level and sectoral data pools, or "data spaces," to facilitate data sharing across organizations, based on a set of data sharing standards, tools and governance mechanisms. 

The Commission is also proposing a new "Data Act", which would be designed to facilitate business-to-business and business-to-government data sharing, as well as creating an "enhanced data portability right" to give individuals more control over who can access and use their data.

Fundamentally, this is about making data an enabler of competition rather than a barrier to it. The Commission's view is that, currently, the "data advantage" enjoyed by larger players can create barriers to entry for SMEs and start-ups. Much in these proposals is essentially about levelling the playing field and persuading larger companies to share their data with start-ups, the public sector or other businesses - the logic being that this will promote competition and ultimately benefit consumers.

The proposals also signal a push for data-driven innovation, as the EU seeks to compete with markets like the US and China.  However, there is naturally a tension between the creation of data spaces, which are designed to promote data sharing, and the strict EU privacy rules enshrined in the GDPR. The Commission has stressed that the proposed data spaces will be developed "in full compliance" with data protection rules and according to the highest available cybersecurity standards - but this is likely to be challenging in practice.
