European Artificial Intelligence Policy: The Road Ahead

Ciarán Daly

July 2, 2019

by Sophie Goossens and Roch Glowacki

LONDON - Recent technological blunders involving biased algorithms, chaos-inflicting drones, self-driving car fatalities, the uncontrolled spread of fake news and terrorist content online, and a series of privacy breaches have forced many to rethink our relationship with technology. Europe's latest efforts to establish thought leadership in the field of AI must be considered in a context where the success of the GDPR has emboldened the European legislator and established its legitimacy as a regulatory trendsetter.

In April, the EU published its Ethics Guidelines for Trustworthy AI (the "Guidelines"), setting out how those who develop and deploy AI-powered solutions should do so in an ethical manner. The Guidelines propose a set of seven key requirements that AI systems should meet in order to be deemed trustworthy.

These include, inter alia, requirements of transparency (including traceability and explainability, so that the reasons for an erroneous AI decision can be identified) and accountability (including auditability, minimisation and reporting of negative impacts, management of trade-offs, and the provision of a redress mechanism).

The Guidelines envisage being able to assess the underlying algorithms, data and design processes of an AI system. This does not necessarily imply that information about business models and intellectual property related to the AI system must always be openly available; that may not always be possible or worthwhile. A whole field of research, Explainable AI (XAI), already tries to address the difficulties posed by "black box" neural networks that often defy explainability, and proposes alternative methods of appraisal (such as using counterfactuals).
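To give a flavour of the counterfactual approach: it answers the question "what is the smallest change to the input that would have flipped the decision?". The sketch below is a minimal, hypothetical illustration in that spirit; the toy credit-scoring model, the feature set and the loss weights are our own assumptions for the purposes of the example and do not come from the Guidelines.

```python
# Minimal sketch of a counterfactual explanation for a black-box-style
# classifier (in the spirit of Wachter-style counterfactuals).
# The "credit scoring" model and features below are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two features (income, debt), binary approve/reject outcome.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approved iff income outweighs debt
model = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, lam=0.1, steps=2000, lr=0.05):
    """Gradient search for the smallest change to x that flips the
    model's decision to `target`, trading off closeness to x (via lam)
    against reaching the desired class."""
    w, b = model.coef_[0], model.intercept_[0]
    cf = x.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(cf @ w + b)))  # predicted P(class = 1)
        # Loss: squared gap to the target class + distance to original x.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (cf - x)
        cf -= lr * grad
        if model.predict(cf.reshape(1, -1))[0] == target:
            break  # stop as soon as the decision flips
    return cf

x = np.array([-0.5, 0.5])  # a rejected applicant: low income, high debt
cf = counterfactual(x)
print("original:", x, "->", model.predict(x.reshape(1, -1))[0])
print("counterfactual:", cf.round(2), "->", model.predict(cf.reshape(1, -1))[0])
```

Even this toy example shows why counterfactuals appeal to regulators concerned with redress: an affected individual receives a concrete, actionable statement ("had your debt been this much lower, the application would have been approved") without the model's internals having to be published.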

However, the Guidelines imply that at least certain types of solutions may need to be open to independent audits. In the future, this may be required in respect of safety-critical applications or those deployed in heavily regulated industries (for example, healthcare or financial services). Even if it does not become mandatory to design AI systems with interpretability in mind, the market may still penalise solutions that lack the desired level of transparency.

Earlier this year, in March, the European legislator also made up its mind on the sensitive topic of text and data mining ("TDM") of datasets protected by copyright. With the entire future of European AI at stake, the legislator had to decide whether copyright holders should have a say in the way their content may be normalised for TDM purposes.

Unlike the US (where the act of normalising data protected by copyright is generally considered to be covered by the doctrine of "fair use") and several Asian countries, which have very robust copyright exceptions covering this issue, Europe chose to set itself apart by endorsing an exception from which rightsholders can opt out "in an appropriate manner".

While it is too early to predict how dramatically this opt-out mechanism will affect the availability of datasets in Europe, it is fair to assume that a lot will depend on the sector’s capacity to devise quick, effective and innovative licensing schemes for copyright-protected datasets. As the text enters its transposition phase, AI developers need to be aware of intellectual property rights that may protect the data that they have already harvested or are currently harvesting online.

Last month, the EU's AI expert group also released its AI Policy and Investment Recommendations. The report makes for a sobering read. Given the number and scope of the recommendations, there is an awful lot to do to ensure that Europe remains relevant in the global race for AI supremacy.

There will, however, be no immediate general overhaul of the EU regulatory landscape to accommodate AI technology. Instead, we are likely to see piecemeal progress in updating the existing legal frameworks. The European Commission is already looking at the applicability of the Product Liability Directive to new technologies.

It also remains to be seen whether, as part of the evolving legislative landscape, new AI-dedicated regulators will be created over the coming years. A number of countries (including the UK) have already set up national offices for AI or dedicated intra-ministry groups that could be spun out into standalone regulatory bodies.

In the future, an entire sector focused on assessing and certifying the trustworthiness of AI solutions is also likely to emerge, with new certifications ("made in Europe", etc.) or perhaps even Trustpilot-style websites for comparing AI-based solutions such as image-recognition software, recommendation engines or real-time translators.

In the meantime, all stakeholders are invited to test the Guidelines’ assessment list for Trustworthy AI and provide practical feedback on how it can be improved. The piloting phase will run until 1st December 2019 and, following an evaluation process, the Commission will decide on the next steps in early 2020.

Sophie Goossens is Counsel and Roch Glowacki is an Associate at Reed Smith.
