Regulating artificial intelligence – a year after the EU’s proposed AI Act

An opinion piece from a BDB Pitmans solicitor and member of the Society for Computers and Law

June 22, 2022

As the public debate about the pros and cons of artificial intelligence (AI) rages on, one thing is clear – it is a global industry crying out for regulation. With its proposal last year of the Artificial Intelligence Act (AI Act), the European Union may well be the one to set the standards in an industry that has mostly been left to its own devices, no pun intended.

From the General Data Protection Regulation (GDPR) to the Digital Services Act (DSA), we are now all too familiar with the EU's legislation in the tech sector. It is therefore no surprise that the EU is now seeking to get ahead of the field and set a global standard for digital players.

The AI Act

The AI Act proposes a risk-based approach to evaluating AI systems by placing them into three categories: unacceptable-risk systems, high-risk systems, and limited or minimal-risk systems.

Unacceptable-risk AI systems are prohibited outright under the draft regulation. This category covers AI systems or applications with the potential to manipulate or distort human behavior, such as voice assistant-enabled toys that encourage dangerous behavior and systems that allow 'social scoring' by governments.

Examples of high-risk AI systems include technology used in safety components of products (e.g. AI application in robot-assisted surgery), critical infrastructures (e.g. transport), and employment tools (e.g. resume-sorting software for recruitment procedures).

Limited and minimal-risk AI systems include AI chatbots, AI-enabled video and computer games, spam filters, inventory management systems, customer and market segmentation systems, and other AI systems that represent a minimal or zero risk to citizens’ rights or safety.

While unacceptable-risk systems would simply be banned from the EU, high-risk systems would face the most extensive set of requirements, such as ensuring transparency of information to users, maintaining robust cybersecurity arrangements, and implementing a risk management system.

Limited and minimal-risk systems would face significantly fewer requirements, primarily transparency obligations: informing users that they are interacting with a machine, and notifying them when image, audio, or video content has been generated or manipulated by AI to misrepresent reality.

An example would be an AI-generated video showing an elected official or political candidate making a public statement that was, in fact, never made. This awareness requirement would apply to systems in all risk categories.
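
By way of illustration only, the tiered structure can be sketched as a simple mapping. The tier names and example systems below come from the draft text, but the code itself is a hypothetical sketch with no basis in the Act – real classification turns on the Act's annexes, not on labels like these:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright in the EU"
    HIGH = "subject to the fullest set of obligations"
    LIMITED_OR_MINIMAL = "transparency obligations only"

# Hypothetical triage table built from the draft's illustrative examples.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "toy encouraging dangerous behavior": RiskTier.UNACCEPTABLE,
    "robot-assisted surgery safety component": RiskTier.HIGH,
    "resume-sorting recruitment software": RiskTier.HIGH,
    "spam filter": RiskTier.LIMITED_OR_MINIMAL,
    "AI chatbot": RiskTier.LIMITED_OR_MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```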

As with the GDPR, the draft regulations propose potentially hefty fines for non-compliance. For the most serious breaches, fines could reach up to 30 million euros or, for companies, up to 6% of worldwide annual revenue for the preceding financial year, whichever is higher.

Use of AI systems that fail to comply with any other requirement or obligation could attract fines of up to 20 million euros or, for companies, up to 4% of worldwide annual revenue for the preceding financial year, whichever is higher.

Meanwhile, fines for supplying incorrect, incomplete, or false information to notified bodies and national authorities could reach up to 10 million euros or, for companies, up to 2% of worldwide annual revenue for the preceding financial year, whichever is higher.
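
To see how the "whichever is higher" mechanism scales with company size, here is a minimal sketch of the arithmetic. The function name and revenue figures are illustrative only, not taken from the Act:

```python
def max_fine(worldwide_annual_revenue_eur: float,
             fixed_cap_eur: float,
             revenue_pct: float) -> float:
    """Upper bound of a fine: the higher of the fixed cap or the
    stated percentage of worldwide annual revenue."""
    return max(fixed_cap_eur, revenue_pct * worldwide_annual_revenue_eur)

# Most serious breaches: up to EUR 30m or 6% of revenue, whichever is higher.
# For a company with EUR 1bn in revenue, 6% is EUR 60m, so the
# percentage-based ceiling applies.
print(max_fine(1_000_000_000, 30_000_000, 0.06))  # 60000000.0

# For a smaller company with EUR 100m in revenue, 6% is only EUR 6m,
# so the EUR 30m fixed cap is the binding ceiling.
print(max_fine(100_000_000, 30_000_000, 0.06))    # 30000000.0
```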

The AI Act's impact is global

The proposed regulations would have extraterritorial reach, applying also to providers based outside the EU whose AI systems produce output used within the Union.

In the U.K., there has been a parallel debate about how AI should be legislated and regulated. Since the EU's objectives broadly mirror those under discussion in the U.K., governments and regulators will inevitably look to the EU regulations when deciding what legislative or regulatory framework is required, and what it should look like.

Recently, all political groups within the European Parliament submitted a substantial number of proposed amendments, including:

  • changes to the definition of AI itself;

  • reviewing the extent of the sanctions regime;

  • the potential to widen the scope so as to capture AI applications in the metaverse, blockchain-backed currencies and NFTs; and

  • extending the potential list of prohibited practices to include emotion recognition and recommender systems that systematically suggest disinformation and illegal content.

Following contributions made earlier this year by the European Parliament's Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees, the EU Commission has plenty to consider, including the extent of its own powers to investigate and enforce the AI Act across multiple member states.

Driving innovation

Companies will be keenly monitoring these developments. At the same time, the current direction of the draft regulations has the potential to drive innovation in AI systems in a number of areas.

Boardrooms and investors are becoming increasingly interested in the potential for AI to inform, monitor and influence ESG (environmental, social and governance) policy and goals. AI would allow investors to collect and analyze huge volumes of information when accounting for ESG risks and opportunities.

In the campaign to tackle climate change, AI systems also have the potential to collect and process the volume of data needed to assess and reduce carbon emissions. Particularly since last year's U.N. Climate Change Conference in Glasgow (COP26), there has been a marked increase in calls to utilize AI systems to monitor and assess energy, water, transport and agricultural systems in pursuit of the goal of net zero.

A good example of these two areas converging is electric vehicle (EV) charging. There is now a race in the tech sector to use machine learning algorithms to identify work habits and commuting patterns, both to develop commercially viable EV charging installations for employees and to collate data showing how and when power is drawn from the grid. It is clear that AI is making great strides and is firmly here to stay.
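
As a purely hypothetical sketch of the kind of analysis this involves, commuting patterns might be recovered by clustering charging-session data. The data below is invented, and scikit-learn's KMeans is just one common choice of algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented data: arrival hour and plugged-in duration (hours) for two
# commuting populations -- early starters and a later-arriving group.
rng = np.random.default_rng(42)
early = np.column_stack([rng.normal(7.5, 0.5, 150), rng.normal(9.0, 1.0, 150)])
late = np.column_stack([rng.normal(10.0, 0.5, 100), rng.normal(6.0, 1.0, 100)])
sessions = np.vstack([early, late])

# Cluster sessions into commuting patterns; the cluster centres suggest
# when chargers are occupied and when grid draw can best be scheduled.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)
print(model.cluster_centers_)
```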

It is important to note that the AI Act currently focuses almost exclusively on the risks AI poses to the public, as opposed to the risks to organizations and their commercial interests. That said, it does allow organizations to take this framework and create their own risk-based strategies when developing and implementing AI systems.

The sooner companies begin establishing their own AI risk-management programs, the greater their potential will be for long-term success with this critical technology.
