EU proposes new AI strategy: The legal commentary
Experts from Baker McKenzie discuss potential changes to the EU law
February 24, 2020
Last week, the European Commission launched a whitepaper that will shape its approach to regulating artificial intelligence systems. The 27-page document outlines proposals for new rules and tests, including those around legal liability for tech companies. One of its stated aims is to level the playing field between technology giants from the US and homegrown European firms.
The document is not a legally binding text, but a statement of intent, and the Commission has launched a public consultation on the proposals, running until 19 May; the responses will inform the regulatory regime across the European Union.
To find out more about the meaning of the proposals, AI Business got in touch with three legal professionals from Baker McKenzie, one of the world’s largest law firms.
Balancing risk and reward
by Raffaele Giarda, Chair of Baker McKenzie’s Technology, Media & Telecommunications Industry Group
With its whitepaper, the European Commission aims to set the path forward for regulating AI, which it rightly describes as one of the most important applications of the data economy. The Commission does not yet propose specific legislation, nor does it yet answer pressing and complex questions such as who should be responsible for harm caused through AI, and how to ensure the regulatory framework is sufficiently flexible to accommodate further technological progress while providing much-needed legal certainty.
While the European Commission undoubtedly aims to build a clear European regulatory framework for AI (rather than a fragmented country-by-country approach), it takes the view that it is premature to propose specific rules at this stage and instead opens a public consultation giving business and other stakeholders the opportunity to help shape a future AI governance framework.
That said, the whitepaper does provide some interesting insights into what this framework would look like:
The framework would prescribe a number of mandatory legal requirements for so-called "high-risk" AI applications in order to ensure the regulatory intervention is proportionate. As a result, many AI applications would fall outside the scope of the framework. The Commission proposes two cumulative criteria for determining whether an AI application is "high-risk": namely the sector in which the AI application is employed and the actual use case. Interestingly, high-risk sectors preliminarily mentioned in the whitepaper are healthcare, transport, energy and parts of the public sector. These criteria will require a lot of further thinking and are an area that businesses may want to comment on as part of the consultation.
The whitepaper touches on the types of mandatory legal requirements that would apply to such high-risk AI applications. These are the "usual suspects" and include an appropriate degree of human oversight, adequate training data, record keeping requirements, transparency, robustness and accuracy. They are another area that businesses may want to comment on during the consultation.
It will come as good news to business that in its whitepaper, the European Commission frequently highlights the fact that AI, and technology in general, are a force for good and a critical enabler in solving some of the world's most pressing challenges, such as the fight against climate change. It further states the need to promote and accelerate the uptake of AI in Europe and makes the point that Europe is way behind North America and Asia when it comes to investment in research and innovation. It pledges to significantly increase investment in these areas and to facilitate the creation of European excellence and testing centers that attract best-in-class researchers.
Finding the right balance between creating an ecosystem in which AI can flourish and ensuring Europe becomes a global leader in technology, on the one hand, and protecting society from the risks such technology may bring, on the other, is the challenge. Fundamental human rights, such as the right to privacy, human dignity, freedom of expression and non-discrimination, are at stake. But so, arguably, is Europe's economic future. So, we must press ahead and embrace a future in which AI will play a central role.
The whitepaper specifically addresses the use of facial recognition technology in public spaces which, in recent months, has attracted much attention from the media, governments, regulators and the general public as a result of new uses of the technology proliferating with limited oversight. While recognizing that numerous socially beneficial use cases exist for this technology - think of its potential to increase security in public spaces through responsible use by law enforcement - the European Commission categorically considers it high-risk because of the significant risk it poses to human rights and civil liberties.
There is no mention of the previously discussed policy measure of a temporary moratorium. But rather than charting a clear way forward for this technology, the Commission foreshadows a broad European debate on, firstly, the specific circumstances, if any, which might justify the technology's use in public spaces and, secondly, common safeguards. This does not come as a surprise and is ultimately intended to build public trust in, and acceptance of, this potentially intrusive technology before allowing its use more widely. This approach might also help build a European consensus, rather than a fragmented Member State approach, on whether this technology should be permitted at all and, if so, how to impose responsible limits on its use.
Looking beyond Europe, different regions are at different stages of the debate around facial recognition technology. Notably, cultural norms seem to heavily influence the direction of travel across continents. While in the US various technology-specific laws are being introduced, across Asia Pacific the use of this technology seems to be more accepted and calls for regulation seem less pressing.
Important questions remain
by Sue McLean, Global tech lead for FinTech and Blockchain at Baker McKenzie
Rather than unveiling new rules for AI, the Commission sets out the risks posed by AI, the existing laws that apply to it, and its intention to update those laws to fix any gaps that may exist. The Commission says it would like strict rules for high-risk systems, such as those used in health, policing and transport, and a voluntary labeling scheme for low-risk applications; there is also discussion of AI and ethics.
But the whitepaper does not include any detailed proposals for new regulation, and the Commission has backtracked on its original proposal for a five-year moratorium on facial recognition in public spaces. So, we remain in ‘wait and see’ mode in terms of what new regulation the EU will actually seek to introduce on AI.
The Data Strategy is a lot more interesting and significant, outlining the EU's data ambitions and setting out a broad range of proposals, including in terms of data sharing, cloud, IP law, anti-trust and tech sovereignty. But it also raises a range of questions:
How can Europe create its own tech giants and really compete in the global data economy when rivals in the US and China don’t have the EU's strict data privacy laws to navigate?
There's a big focus on extracting value out of industrial data, but how big a market is there in the B2B sharing of industrial data? Is it really only anti-trust concerns and a lack of a clear data sharing framework that prevents businesses voluntarily sharing non-personal data at the moment?
Also, the Commission wants to facilitate voluntary data sharing, but what exactly does the Commission have in mind when it talks about “addressing barriers on data sharing and clarifying rules for the responsible use of data?”
Interestingly, the Commission indicates it may introduce new rules which would mandate a data portability right where a market failure is identified in a particular sector. This would appear similar to the UK government's smart data proposals, which involve extending open banking principles to other markets, including energy, telecoms and digital platforms.
Post-Brexit, the UK won't need to follow EU rules. So, if the EU is too heavy-handed in regulating AI and data, this could provide a good opportunity for the UK tech sector.
by Joanna de Fonseka, Senior Associate in Baker McKenzie’s Technology Group
The EU's new data strategy seeks to position the EU as a competitive market to commercialize data, while preserving high privacy, security, safety and ethical standards. One of the key proposals is to create European-level and sectoral data pools, or "data spaces," to facilitate data sharing across organizations, based on a set of data sharing standards, tools and governance mechanisms.
The Commission is also proposing a new "Data Act", which would be designed to facilitate business-to-business and business-to-government data sharing, as well as creating an "enhanced data portability right" to give individuals more control over who can access and use their data.
Fundamentally, this is about making data an enabler of competition rather than a barrier to it. The Commission's view is that, currently, the "data advantage" enjoyed by larger players can create barriers to entry for SMEs and start-ups. Much in these proposals is essentially about levelling the playing field and persuading larger companies to share their data with start-ups, the public sector or other businesses - the logic being that this will promote competition and ultimately benefit consumers.
The proposals also signal a push for data-driven innovation, as the EU seeks to compete with markets like the US and China. However, there is naturally a tension between the creation of data spaces, which are designed to promote data sharing, and the strict EU privacy rules enshrined in the GDPR. The Commission has stressed that the proposed data spaces will be developed "in full compliance" with data protection rules and according to the highest available cybersecurity standards - but this is likely to be challenging in practice.