Proposals include strict limits on the use of real-time biometric identification systems

Ben Wodecki, Jr. Editor

April 21, 2021

5 Min Read
Some AI systems could be outright banned under proposed rules (Image: European Commission)

The European Commission has unveiled the bloc’s first-ever legal framework on AI, with new rules aimed at guaranteeing the safety and fundamental rights of people and businesses, while strengthening investment in AI and the uptake of the technology across the EU.

If the draft Regulation is approved, it will be applied directly in the same way across all member states.

Among the new rules, all remote biometric identification systems are considered “high-risk” and subject to strict requirements. Law enforcement agencies would be prohibited from using such systems in publicly accessible spaces.

“Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use,” Thierry Breton, the commissioner for internal markets, said.

Clamping down on ‘high-risk’ AI

The draft Regulation addresses specific risks posed by AI, with all systems being required to be categorized in terms of their impact.

‘Unacceptable risk’ covers systems deemed a “clear threat to the safety, livelihoods, and rights of people.” Any product or system that falls into this category will be banned. The Commission said the category includes AI systems or applications that manipulate human behavior to circumvent users’ free will, and systems that allow ‘social scoring’ by governments.

Last year, the Commission released a white paper warning that AI systems could be used to intrude into citizens’ private lives and cause discrimination, as well as for criminal purposes.

The prohibition of remote biometric identification systems does have some exemptions, with such systems only to be used when necessary to “search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offense.”

Such use is subject to authorization by a judicial or independent body and to appropriate limits in time, geographic reach, and the databases searched.

“The EU has previously mooted the possibility of banning facial recognition systems. It hasn’t gone so far in the draft Regulation, but this is an important proposed restriction on the use of facial recognition technology,” Minesh Tanna, AI lead at legal firm Simmons & Simmons, told AI Business.

The next category, ‘High-risk’, includes systems for critical infrastructure which could put life or health at risk, systems for law enforcement that may interfere with people's fundamental rights, and systems for migration, asylum seeking, and border control management, such as verification of the authenticity of travel documents.

The Commission said AI systems deemed to be high-risk will be subject to “strict obligations” before they can be put on the market, including risk assessments, high quality of the datasets, ‘appropriate’ human oversight measures, and high levels of security.

“The list of high-risk AI systems could have been broader, but a potential area of uncertainty arises from the core definition of “AI system” which appears to be very broad. It includes not only typical “machine learning” approaches, but also “logic- and knowledge-based approaches” which could potentially capture a wide range of technologies within the categories of high-risk AI systems,” Tanna explained.

The ‘Limited risk’ and ‘Minimal risk’ categories carry limited or no obligations, covering systems such as chatbots, AI-enabled video games, and spam filters. Most AI systems will fall into these categories, with the Commission noting that the draft Regulation does not intervene here, as such systems represent “only minimal or no risk for citizens' rights or safety.”

In terms of governance, the Commission proposes that national market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation.

Further, voluntary codes of conduct have been proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

Margrethe Vestager, executive vice-president for a Europe fit for the Digital Age, said, “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Tanna noted that any new rules will have a relatively long implementation period: “The last Article in the draft Regulation proposes that the Regulation will apply two years after it comes into force, save for certain provisions relating to the establishment or functioning of relevant national and European bodies. It looks as though organizations providing and using AI systems will therefore have a period of time in which to ensure compliance with the Regulation, after it comes into force.”

Increased coordination and data sharing

Alongside the new rules, the Commission also unveiled an update to its Coordinated Plan on AI.

Under the revised plan, funding allocated through the Digital Europe and Horizon Europe programs will be used to create public-private partnerships on AI research, expand cross-border exchange of information, and invest in critical computing capacities.

First published in 2018 to define actions and funding instruments related to AI, the Coordinated Plan on AI now proposes what the Commission describes as “concrete joint actions for collaboration” aimed at aligning with the European Strategy on AI and the European Green Deal.

The Commission said the revised plan will accelerate investments in AI to drive economic and social recovery following the pandemic.

Machines beware

The recently unveiled Machinery Regulation, which replaces the EU Machinery Directive, contains provisions which the Commission said will “ensure the safe integration” of AI into machinery.

To comply with the new regulation, businesses that use AI-capable machinery will need to conduct a single conformity assessment.

Machines covered in the latest regulation would likely include 3D printers, construction machinery, and industrial production lines, the Commission said.

The newly unveiled rules will need to be greenlit by the European Parliament and member states.

Should the proposal come to fruition, member states will also need to make any national legislative adjustments required for full compliance.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
