A conversation with Andrew Davies, executive director of the Association of British HealthTech Industries (ABHI)

Felix Beacher, head of Healthcare Technology at Informa Tech, speaks with Andrew Davies, executive director of the Association of British HealthTech Industries (ABHI).

The progress of medical AI depends crucially on how it is regulated. Could heavy-handed government regulation strangle innovation? Are there special ethical issues involved in medical AI that require new approaches? What are the risks? How are the conversations going between government, the medical profession, health systems and innovators?

To find answers to these questions, I recently spoke with Andrew Davies, who works at the interface between the medical AI industry and U.K. government bodies. Andrew is executive director, board member and Digital Health Lead at ABHI.

Felix Beacher: Could you describe what ABHI does?

Andrew Davies: ABHI is a trade association for health tech companies, representing developers, manufacturers and distributors of medical devices, diagnostics and digital health solutions. We play three broad roles. First, we provide input into the formation of government health policy from an industry perspective. Second, we provide intelligence back to our members to support their business planning. Third, we provide networking opportunities both within the industry sector and between industry and other stakeholders in health.

Beacher: How would you describe where medical AI is right now?

Davies: It's in a period of great promise. It promises to deliver better outcomes for patients and better efficiency for the health system. But we are not really mainstreaming AI yet. It's very early days, and there's a lot of work to be done in many areas to make it play a routine frontline role. I think communication is going to be critical in how we do that. We need to make sure that patients, the public and health care professionals join the journey of AI. It's as much about cultural change as it is about technological change. But the possibilities for changing patient care are beyond my imagination.

Beacher: Let’s stretch your imagination a little. How do you see the development of medical AI in the next, say, 10 years?

Davies: The first step is to mainstream the technologies we currently have. Then we need to bring the more sophisticated forms of AI into the market, those with a wider scope than the current point solutions. These will have a real opportunity to transform how patients interact with the health system, and I believe we will see greater use of AI as a first interface with the health system. There is already a move toward creating and incentivising digital access points, and incorporating AI into that interface to support self-management and triage could have a fundamental impact on the use of health resources, as well as eliminating the boundaries between well-being, fitness, care and health.

Beacher: Could you give your favorite example of how medical AI might benefit patients?

Davies: It's the use of AI to scan through large numbers of images to diagnose cancer. The reason I like that example is that it hits the two key things AI can deliver: it improves patient outcomes, because you can get a faster diagnosis, and it makes the best use of the workforce. And we know that health systems, globally, have issues with workforce shortages. So making health systems as efficient as possible is really critical.

Beacher: What do you think are the main obstacles to the development of medical AI in the U.K.?

Davies: Probably the main obstacle is access to data: having the data to develop, validate and verify algorithms. Having the right quantity of data is obviously very important, but we also need the right quality of data. The real shortage is data that is properly labeled and can be aggregated across datasets.

Also, I wouldn't want to decouple the development of AI from its implementation. The route for a developer of any medical AI technology is to develop the technology, gather the evidence for market launch, pilot implementation and then move to wider adoption and ongoing surveillance. Access to data throughout this process is vital.

Beacher: Could you describe the state of play for regulation of medical AI in the U.K.?

Davies: First of all, it's early days. It's early days for AI and therefore it's early days for the regulation of AI. But it's clearly on the radar of all relevant regulators. When we think of regulation for medical AI we are thinking of bodies specifically in health: bodies like the Medicines and Healthcare products Regulatory Agency (MHRA), the Care Quality Commission (CQC) and the Health Research Authority (HRA). They are all actively looking at how to deal with medical AI from their particular perspectives.

But there are also regulators outside the medical space that are relevant: for example, the Information Commissioner's Office (ICO), when we're looking at data, but also bodies like Ofcom, when you're transmitting data. There are yet other regulators that could act in some way in this area, and I've not even identified all of them yet. This shows the complexity of the regulatory landscape, and the state of play is evolving. How can we make that landscape simpler, better and more transparent in terms of who does what? That's a key question. In the longer term, I could also foresee a general regulator for AI.

Beacher: Imagine I'm a developer and I've just created a phone app for determining whether a mole is cancerous or not. How would I go about getting that on the U.K. market?

Davies: The first question is: are you a medical device? From that description, yes you are, because you're providing a diagnostic. So, because you have a regulated medical device, you would need to go to the MHRA for oversight. You would then need to understand what class of medical device you have. Probably it is Class II. You would put together a technical file of the evidence on your app and submit it to your Notified Body.

Right now, in the U.K., we're in a transition phase for medical device regulation. European regulations were transposed into UK law as the UK Medical Device Regulations. We are now between that regime and the new UK Conformity Assessment (UKCA) regime, which will replace the European MDR.

If you wanted to sell in Europe as well, you would need to go through MDR. So you would choose the regulatory approach that places you on the relevant market.

After that, in the U.K., there are a number of quasi-regulatory steps to go through. You may need to go through an assessment with the National Institute for Health and Care Excellence (NICE). You would also need to go through the Digital Technology Assessment Criteria (DTAC) process. This is something put in place by the NHS to provide baseline standards for any digital products.

Beacher: How would you compare the regulation of medical AI in the U.K. to other countries?

Davies: I can't claim to be an expert on every single country, but I would say the U.K. is certainly on a par with most. The MHRA is taking an approach of international alignment, looking at how we can work within an international framework. So they're particularly involved with the IMDRF, the International Medical Device Regulators Forum. For example, in their recent consultation on the new UKCA mark, they proposed aligning the classification system for software as a medical device (SaMD) closely with that of the IMDRF. The U.K. government also recently published some best practice guidance in conjunction with the FDA (U.S.) and Health Canada, so they are trying to move forward on a global basis.

For developers that's fantastic. What we don't want is the U.K. developing regulations that represent a higher barrier than necessary, or frameworks radically different from those of other countries. An international approach makes it much easier for developers to sell in multiple markets without additional regulatory burdens. So we're closely aligned with international standards, if not taking a leading role in thinking about these issues.

Beacher: What are the chief concerns facing regulators?

Davies: One concern is a tendency for what's called AI exceptionalism. That means treating AI as fundamentally different to other forms of software. In general, regulators want to avoid that. There's a general view that we should try and treat AI as any other form of SaMD and address AI issues through general standards and guidelines. I think it's a very pragmatic approach and one which will help. There’s also a concern to make the development of new standards on AI less cumbersome and easier to adopt and keep up to date.

Beacher: What are the main concerns about regulation from companies developing medical AI systems?

Davies: I think there are two things. They want clarity over what regulations already exist and what regulations might come in the near future. There is also a frustration that there are overlaps between the different regulatory regimes. In some cases regulation can become burdensome, but the biggest burden is lack of clarity, not knowing what process you need to go through in advance. So transparency on how the different regulatory regimes I mentioned earlier interact with each other is essential. This is the reason for the Multi-Agency Advice Service (MAAS), which is bringing together these health regulators. MAAS provides a service to both developers and to people within the health system who are trying to understand regulation in this space. That should really help.

Beacher: How would you describe the general attitude of the medical profession to current medical AI?

Davies: Like a lot of new technologies, you've got your standard adoption curve. That goes from the early adopters through to the majority and then to the laggards. Some medical professionals are really leading the way and driving forward the use of AI. But then you have a lot who are not sure how this is going to work in practice for them. A key question is how we take the enthusiasm, experience and skills of those early adopters and translate that to the majority of health care professionals.

Beacher: How would you describe the current alignment on medical AI between industry, government, the health system and the medical profession?

Davies: It's very good. The government has been very proactive on AI strategy, and not just in health: they're looking closely at how the U.K. can gain a competitive advantage from AI in automotive, financial services and other sectors as well. So we have a governmental framework that is pro-AI, and this has trickled down to the NHS, which is also investing in AI.

I also think we have some really good interfaces between industry and the NHS. Sometimes these interactions can be highly transactional. But in digital health, we have very good relations. People tend to realize that they have common objectives and need each other to bring these to fruition.

Beacher: What ethical issues do you think are most important for medical AI?

Davies: A key one is transparency: understanding what it is that AI systems are doing and why they are doing it that way. This requires robust patient and public engagement.

Beacher: Would most patients really care about how it works, if they were convinced that it does work?

Davies: Some people might not care too much about how it works, as long as it does. That is absolutely their choice. But if they want to know how it works, they should be able to find out. It's like when a physician prescribes a pill: hopefully they know the mechanism by which it works, and some patients might be content simply to trust them. And that's fine. But if a patient wanted to know, they should be able to find out. They should also be able to find out the possible side effects. It's the same with medical AI. We need to find a way to communicate these details to people who need or want to know.

Beacher: What other ethical issues are important?

Davies: The one that always comes up is bias. Can we be sure that people are getting equity of access and equity of outcomes? We are dealing with sensitive data from people in potentially vulnerable situations. Overall, the key question is: how do we build trust? How do we take people along on the journey? And how do we use their experiences of medical AI? In this area, we need to consider all users: end-users, patients and health care professionals. I think the issue of bias is now well recognized, and steps can be put in place to counter any possible issues, but we need to remain vigilant.

Beacher: What kind of nightmare scenarios are there? What would a catastrophic failure of medical AI regulation look like?

Davies: History has important lessons in many applications of health technology, whether that be software, medical devices, diagnostics, or drugs. Medical devices and pharma are well regulated. But we know that things can still go wrong. On the personal level, mistakes can cause death, perhaps because a diagnosis is missed or because of serious adverse incidents. If cases attract a lot of public attention, it can destroy public trust in the technology. The whole sector can get tarnished. I reiterate my points that transparency and communication are vital.

Post-market surveillance is something we'll really need to build on. We need to know how the technology is behaving overall, but also at the local level, compared to what was promised (either in terms of clinical applications or for efficiency savings).

About the Author

Felix Beacher, head of Healthcare Tech at Informa Tech

Felix heads the Healthcare Technology team at Informa Tech. He has direct responsibility for the Ultrasound Intelligence Service and is currently working on Omdia's forthcoming intelligence service on medical AI.

