
Can AI regulators learn from the life sciences?

by Alex Denoon and Julian Hitchcock, Bristows
How to deal with hype, ethical complexity, and the difficulties of future-proofing

In May, the European Commission published a proposed Artificial Intelligence Regulation (AIR), the first-ever legal framework intended to govern the development and use of AI.

The AIR “supports the objective of the Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence […] and ensures the protection of ethical principles.”

The draft regulation establishes “an ecosystem of trust” in which AI development is promoted while upholding the rights of citizens under the EU Charter of Fundamental Rights.

The Commission wanted to anticipate and prevent the risks related to the use of this new technology.

It is a direct attempt to raise the bar on existing AI standards, as the EU has already done in other sectors – especially in the life sciences.

No industry is more familiar with technological hype, ethical complexity, and the difficulties of future-proofing than the field of life science regulation.

Just think of IVF, cloning, human embryonic stem cells, hybrid and synthetic embryos, GMOs, human genome editing: each of these innovations has been met with ethical and social issues, and, consequently, policies and regulations.

Under the proposed regulation, the companies behind artificial intelligence products will be required, for the first time, to deal with notified bodies, and to apply for permits; they may even see the use of their product limited in scope.

This will sound familiar to those with experience in the medical devices and diagnostics sector, who will be well used to the challenges of demanding regulation and of having to demonstrate compliance with it.

To others, it will come as a shock.

Lessons to learn

No area is as data-complex as the life sciences, so it’s unsurprising that AI is producing such dramatic advances there.

Medicine in particular has much to gain (think protein-structure prediction), but with lives involved, this transformative power, with its reliance on machine-driven assumptions, has to be unleashed cautiously.

What lessons can we learn from previous attempts to regulate technological change?

Life sciences lawyers are almost spoiled for choice when it comes to examples of aspirational regulatory ‘bar-raising’ that have not quite gone as planned.

The EU Medical Devices Regulation (MDR), for example, became fully applicable on May 26, 2021, following four years of market turbulence due to the new requirements to gain a CE-mark.

Like the AIR, the MDR required re-certification of almost all products.

The MDR also required the re-qualification of all notified bodies, the organizations designated to assess the conformity of products before they can be placed on the market, which predictably created bottlenecks and delays.

Just like the MDR, the In Vitro Diagnostics Medical Device Regulation (IVDR), scheduled to come into force in the EEA in May 2022, requires most in vitro diagnostics to obtain a CE-mark with the involvement of a notified body, whereas previously manufacturers were able to self-certify.

Notified bodies, only just recovering from all the MDR applications, are already faced with a new heavy workload, with industry associations [1] calling for a delay to the implementation of the IVDR.

AI regulators should learn from the MDR/IVDR experience: what looks good on paper can translate to a practical nightmare if the red tape is too much and the infrastructure is too weak.

Will there be enough expertise to fill the job description of an AI notified body? Can we avoid another regulatory bottleneck?

Another lesson to learn from the life sciences sector is to avoid putting regulations in place too early, when they can become counterproductive, like the EU’s GMO Directives.

These were designed to protect human health and the environment from the theoretical dangers of organisms produced using recombinant DNA techniques, but ended up super-bureaucratizing the use of precision-edited organisms (for example with the Nobel-winning CRISPR-Cas9 technology), whilst exempting the production of random mutants by ionizing radiation (for example in the animal farming sector).

Failing to protect the environment or human health, the GMO Directives were simply too cautious, impeding European research and competitiveness at a time when genetic editing technologies have a vital role to play in meeting the global challenges of food security, climate change, biodiversity, health, energy use, and sustainability.

The right balance

Luckily, the European Commission is well aware of the problem for gene editing, and it recently reassessed the regime, concluding “there are strong indications that the applicable legislation is not fit for purpose for some [new genetic technologies] and their products, and that it needs to be adapted to scientific and technological progress.”

Given the potential of artificial intelligence, the hope is that the Commission sees the parallels.

The Commission made the bold claim that AIR will square the protection of rights with making the EU a global leader in the development of AI.

This is a crucial balance to achieve.

If the weight falls entirely on protecting rights, it lands on the shoulders of innovative businesses, in the shape of the regulatory burdens of development.

In other words, the cost of over-regulation can deter small innovators, leading to a market dominated by corporate giants.

It’s not impossible to get the balance right, and the Commission thinks it can.

In fact, the AIR is almost pedantically based on the template of “New Legislative Framework” (NLF) regulations, under which notified bodies undertake conformity assessments of products, a tried-and-tested approach for many product areas, including medical devices and IVDs.

Further, to allow smaller innovators to enter the sector, the AIR includes provisions for “regulatory sandboxes” to foster the development of AI systems in a controlled environment for a limited time.

It seems that the European Commission has learnt the lesson of the GMO Directives: protect against risks, but encourage innovations that can benefit the environment, human health, and the economy.

Let’s hope that they learn from the life sciences sector and translate the draft into a lean and efficient system.

[1] Notably the European Federation of Pharmaceutical Industries and Associations, and the European Cancer Patient Coalition.


Alex Denoon and Julian Hitchcock are partner and counsel, respectively, at Bristows.

Denoon advises clients on issues related to genomics, cell and gene therapies, 3D printing, and healthcare apps. He previously held an in-house general counsel role at Biotech Australia.

Hitchcock joined the firm with Denoon in 2018. Formerly a BBC science producer, the Bristows lawyer provides practical policy and regulatory advice on emerging biomedical products.
