New rules would require defendants to prove their systems did not cause harm

Ben Wodecki, Jr. Editor

September 29, 2022

Consumers harmed by an AI system would have an easier time bringing legal action under newly proposed EU rules.

Revisions to the EU’s Product Liability Directive would broaden protections for victims, including alleviating the burden of proof.

A simplified legal process would introduce a ‘presumption of causality,’ which would save victims from having to explain how a potentially complex AI technology harmed them if “a relevant fault has been established and a causal link to the AI performance seems reasonably likely.”

In effect, the burden would fall on the defendant to show that its system was not the cause of the harm suffered.

Other changes to the directive would give victims more tools to seek legal restitution, including a right of access to evidence from companies and suppliers. This would apply to cases in which high-risk AI is involved. (What counts as high risk would be established under the EU’s AI Act, proposed legislation currently being wrangled over by lawmakers.)

“We want the AI technologies to thrive in the EU,” said Věra Jourová, the European Commission’s vice president for values and transparency. “For this to happen, people need to trust digital innovations. With today's proposal on AI civil liability, we give customers tools for remedies in case of damage caused by AI so that they have the same level of protection as with traditional technologies and we ensure legal certainty for our internal market.”

Changes to member states’ civil litigation regimes

According to the EU, the revisions are an attempt to modernize the Product Liability Directive. The bloc said it plans to “reinforce the current well-established rules” to cover new technologies and innovations like AI.

The revisions could force changes to the way technical documentation is written, according to John Buyers, head of AI at the law firm Osborne Clarke, since claimants’ ability to obtain a defendant's regulatory compliance documentation to inform their claims could sway a case’s outcome.

“There's no doubt that the AI industry – at least as regards applications classified as high risk under the Act – is going to need to apply time and thought to compliance with the new Act and how best to protect their interests,” he added.

Buyers noted that the proposed changes would need to be turned into national law by EU member states, allowing individual countries to tailor the provisions “in the form that best suits the specifics of their civil litigation regime."

To the EU’s point on modernizing rules, Rod Freeman, head of the global products practice at law firm Cooley, said the directive has been “virtually unchanged for more than 35 years.”

“We now find ourselves at a point where the European Commission has concluded that change is needed in order to protect consumers, especially considering new risks and challenges created by connected products, artificial intelligence, and the increasing importance of data security, as well as e-commerce and other new marketing methods,” Freeman said.

“Whilst some of the most important aspects of the reforms focus on adapting liability rules to deal with new technologies, by opening up a review of the Directive generally, these reforms affect all product sectors.” 

‘Promote innovation and enhance public trust’

The Software Alliance (BSA) was wary of the proposed changes, suggesting that lawmakers adopt a risk-based approach to AI liability.

While welcoming the effort to harmonize rules across the bloc, Matteo Quattrocchi, BSA’s policy director for EMEA, said that policymakers should “clarify the allocation of responsibility along the AI value chain to make sure that responsibilities for compliance and liability are assigned to the entities best placed to mitigate harms and risks.”

“The goals of AI governance should be to promote innovation and enhance public trust. Earning trust will depend on ensuring that existing legal protections continue to apply to the use of artificial intelligence,” Quattrocchi added.

The Software Alliance recently took issue with the EU’s AI Act, describing it as “virtually impossible” to enforce because the proposed legislation is overly broad.

About the Author

Ben Wodecki, Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
