Tom Lawry offers a candid view of AI’s potential to transform health care

At a Glance

  • America's health system is ailing because it works on the 'break-fix' model.
  • U.S. doctors spend more time on administrative tasks than seeing patients. AI can help.
  • An AI-powered outcome can be legal but still unethical.

Tom Lawry, outgoing National Director for AI in Health and Life Sciences at Microsoft, offers a candid view of AI’s strengths and weaknesses, explains why what’s legal may not be ethical, and describes what confounds regulators about the nature of algorithms in AI models.

Lawry is the bestselling author of "Hacking Healthcare: How AI and the Intelligence Revolution Will Reboot an Ailing System." He recently joined the podcast of Felix Beacher, head of healthcare technology at Informa Tech.

Listen to the podcast or read the edited transcript below.

Felix Beacher: In the title of your latest book, you say that health care is an ailing system. So can you unpack what you mean by that?

Tom Lawry: It's fascinating to me that we have an exceptionally talented group of clinicians, doctors, nurses, and researchers in America. We've got the best technology. We spend more per capita on health care than any other country in the world. And yet, if you look at any standard quality measures, we come in last.

So the question is: why? Great people, the best technologies, and yet there's something missing where we score lower on certain metrics than many third world countries. … This will only get worse unless we have fundamental systemic change. And to me, that starts with acknowledging the problems and then looking at how we use things like digital transformation to make the changes that are needed, to do a better job of leveraging those talented workers, and to do a better job of managing the finite resources going into health care.

… America mainly works on what I call a ‘break-fix’ model. Hospitals and provider organizations make money when they fix something − they do a medical procedure, they do a surgical procedure. And as much as we all talk about managing health, if you classically follow the money, reimbursement is based on the ‘break-fix’ model rather than on strong incentives to manage one's health. And so to me, that's the start, but it goes beyond that: even within that model there are many ways in which we can be making better use of data and better use of AI, whether it's medically related or just improving efficiencies.

So even within that ‘break-fix’ economic model, there are things we can do.

Beacher: So when you mix capitalism and medicine, you essentially get doctors who have an incentive to make you ill.

Lawry: Never in my 20-plus years of working in and around health care have I come across a clinician whose intent isn't extremely good. They didn't go to medical school thinking, 'I want to make a lot of money.' They went to medical school because it was a mission, because it was a calling. They want to do the right thing. But when you look at things like the cost of a practice, they are caught many times in the same model that everyone's caught in, including consumers.

Fundamentally, it gets down to how do we align incentives and the system and then arm our very limited, valued clinical population with tools and processes to allow them to be the best at what they do. And to me, that's the value of AI. AI is not about technology. AI is about empowerment.

There was a study, I believe by McKinsey, that showed up to a third of all activities done by doctors and nurses can be automated with AI. Here in America, a doctor spends more time doing administrative things than they do seeing patients. So you look at all the processes such as EMRs and documentation, and we've sometimes turned our best clinicians into basically clerical workers.

So imagine the ability to eliminate or reduce those repetitive, low-value activities. … Another study shows that up to a third of a doctor's time can be saved by reducing these repetitive activities. So imagine going to a clinician and saying: what if I could give you a third of your time back? To spend more time with patients, to do research? Or to do something as simple as getting home for dinner more often with your family? That's when it becomes empowerment.

Beacher: Could you give one or maybe two examples of current medical AI systems that illustrate the potential of medical AI?

Lawry: There's a great company called Nuance that … has for years been working on what we call ambient intelligence. If you're a consumer, what happens when you go to your primary care doctor? Typically when I go in, I sit in this very small boring cubicle and the doctor walks in, looks me in the eye and says, ‘Hi, Tom, why are you here?’ … What does he do after that? He goes to the corner of the room, signs onto his computer in the EMR and starts hunting and pecking while he's asking me questions. And then after the visit, he's got to do more hunting and pecking and documentation.

Nuance has found a way to put sensors and other things in the room so that a doctor can walk in, simply have that normal conversation, look you in the eye, go through the exam − and in the background, all of these things are capturing the dialogue. They're capturing many things, and it starts to pre-develop those clinical notes. It allows the doctor to focus on the patient. … Not only does it improve the quality of care, it improves physician satisfaction and dramatically lowers the time spent on those administrative tasks.

Flipping to other things, one of the biggest things happening right now is intelligence being stitched into any type of diagnostic images, pathology images, not to replace the work of radiologists, but rather to help them get ahead of things, to save them time, to point out some things that might have taken a little longer to see on their own. But it is all about how we use AI to come in behind clinicians and make them better at something they care about.

Beacher: Do you think there are fundamental roadblocks for the development and implementation of medical AI currently?

Lawry: There are some fundamental issues with infrastructure and with the approach to AI. The number one thing … is we have massive amounts of data. It's growing at an exponential rate every day. And yet, we're not making the best use of it. Why? We have systems that were never set up to aggregate, assimilate, and make that data valuable (a modern data estate), which is critical to being able to develop and deploy artificial intelligence at scale.

Imagine giving people the ability and the inclination, when they have a problem, to start pulling in any data they need to solve it. Or to have things like machine learning help them make predictions around things they care about. Those are the data issues that are critical to solve for.

Beyond that, there's the human element. Many people recognize that the processes we have are not good, but they're the ones people are used to. So there's inertia against changing the way things are done. Which comes down to: what's the culture of an organization for change and process improvement? Is there a culture that embraces what I call being data-driven?

… Driving value at scale comes down to what I call the leadership imperative. Leaders in these big organizations must understand what it is, how to help push for change, and be the number one evangelist and champion.

Beacher: Now let's talk about politics. Do you talk to politicians and policymakers and have those conversations?

Lawry: The technology often gets out ahead of policymakers. I think we're in that state now. Eventually, they catch up. Regulations, laws reflect what's happening and put appropriate guardrails in place.

But the simple reality is that today you can be deploying AI in health and medicine and have it be totally legal, totally compliant with regulations such as HIPAA in America and GDPR in Europe, and still be highly unethical. There's a whole emerging area called Responsible AI that asks: how do we make sure that artificial intelligence applied in health and medicine is going to benefit all citizens and not just some?

Essentially, what we're seeing is that many of the biases we see in the physical world of health care are starting to cross over into the digital world through things like algorithms, and there are many ways in which we can look to identify and mitigate that. But many times we're seeing algorithms and machine learning predictions being put in the field, and they're actually doing overall good.

But the challenge is they're benefiting one population more than another. It's legal, it's regulatorily compliant − but is it the right thing to be doing? That is the question we all need to pay attention to, and eventually, hopefully, the regulators and legislators will figure out how to put those guardrails in and make sure it comes down to AI and equity for all rather than some.

Quick example. We're putting together predictive capabilities for a hospital in America. We put 100 patients in the hospital every day as inpatients. We're going to use this algorithm to risk-rate which of those admitted patients are at high risk of an unexpected adverse event. So we put the algorithm in play. We do a pilot, and what we found is that of the 100 patients we put in the hospital a couple of weeks ago, we were able to predict which ones were going to have an adverse event, and we reduced adverse events by 40%.

If I'm the head of quality, that's a quality win. If I'm the head of finance and I'm not having someone stay in the hospital longer, that's a financial win. But if I told you that statistical average of 40% consisted of being three times better at predicting and preventing adverse events for white males versus Hispanic females, would that be okay? I'm improving quality and I'm saving money, but it's better for one population than the other. That is, in fact, the issue and dilemma happening today with algorithms, because they haven't been stress-tested.
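
To make the stress test Lawry describes concrete, here is a minimal sketch of a subgroup audit in Python. Everything in it is invented for illustration: the column names (adverse_event, predicted_high_risk, group) and the toy data are hypothetical, not drawn from any real deployment.

```python
# A minimal subgroup "stress test": the pooled metric can look like a win
# while the model catches far more events in one group than another.
# Column names and data are hypothetical, for illustration only.
import pandas as pd
from sklearn.metrics import recall_score

def audit_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Per-subgroup sensitivity: share of real adverse events the model flagged."""
    rows = []
    for name, sub in df.groupby(group_col):
        rows.append({
            group_col: name,
            "n_patients": len(sub),
            "recall": recall_score(sub["adverse_event"], sub["predicted_high_risk"]),
        })
    return pd.DataFrame(rows)

# Toy data: pooled recall is 0.5, but group A gets 0.75 and group B only 0.25.
df = pd.DataFrame({
    "group":               ["A"] * 6 + ["B"] * 6,
    "adverse_event":       [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0],
    "predicted_high_risk": [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1],
})
print(audit_by_group(df))  # same model, same overall "win", unequal benefit
```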

Beacher: These kinds of inequalities, I would have thought, are totally inevitable. If you've got a population of 10 million people from the United States, … you're going to have some Inuit, some Native Americans, some Pacific Islanders − all of those people are going to be massively underrepresented. So those kinds of biases cannot be escaped.

Lawry: I wouldn't disagree, so much as say that we can't just shrug and say, 'hey, it happens, so let it go.' There's an old adage in the business: all data models are flawed, but some are useful. Here in the United States, the majority of data is based on a system where health care is not guaranteed as a right, as it is in the U.K. That means some people are more represented in that data than others. So just simply being aware of that (discrepancy) … goes a long way. … (Then add) attempts to use current and emerging tools to reduce the variability to what would be considered an acceptable level. But to turn a blind eye to it − that's why they call it an ethical issue.

Beacher: Let's talk a little bit about regulation. Do you have a perspective on the current state of U.S. regulation of medical AI?

Lawry: Right now, it's emerging. We're all early in the journey, including legislators and regulators. There are aspects of regulations like GDPR that attempt to address this, but it needs to go further. … Probably the best example is the work by groups like the FDA. They are responsible for, and required to, regulate medical devices to ensure they do not do harm.

But here is the thing: when devices were simply things that data ran through, that was easy (to regulate). But now, devices are all becoming more intelligent with built-in algorithms. That totally changes the approach the FDA needs to take to manage approvals of those devices.

… If you want to see the head of a regulator explode, talk to them about a continuous learning algorithm inside a medical device, where every time data runs through it, the algorithm ratchets forward. You can look at the algorithm that goes into the device, but two days later, it's going to be different. So how do you put guardrails around something like that? We'll eventually figure it out. But right now, these are some of the challenges faced by all, and at times they slow down the progress of doing more good.
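
To see why that confounds a point-in-time approval, here is a small sketch of the moving-target problem, with scikit-learn's SGDClassifier standing in for a device's continuously learning component. The data are entirely synthetic.

```python
# Sketch of why point-in-time approval struggles with continuous learning:
# each new batch of data nudges the model away from the inspected version.
# Synthetic data; SGDClassifier is a stand-in for a device's learning component.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

# "Approval day": the model a regulator might inspect.
X0, y0 = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)
model.partial_fit(X0, y0, classes=[0, 1])
approved_coef = model.coef_.copy()

# Days later, fresh data has kept ratcheting the weights forward.
for day in range(1, 4):
    Xd, yd = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)
    model.partial_fit(Xd, yd)
    drift = np.linalg.norm(model.coef_ - approved_coef)
    print(f"day {day}: weights have moved {drift:.3f} from the approved version")
```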

Beacher: Do you ever have long, dark nights of the soul, where you can't get to sleep because of something you worry about with the development of medical AI?

Lawry: There’s so much to talk about. Going back just a couple of years, the clinical literature on AI was saying it is going to replace the nurse and the doctor. One very well-known article published in a journal basically suggested maybe we should think about training fewer radiologists.

Anytime I hear something like that, I think two things. One, they do not understand what AI is actually good at. And two, they really don't understand what a radiologist does. The human brain is an amazing thing. It is so amazing that we have actually figured out how to outsource, through AI, certain things that only the brain could do. So when it comes to things like pattern recognition − going through massive amounts of data to find things − AI is superior. But then look at what the human brain is good at and will always be better than AI at − things like wisdom, judgment, knowledge, experience, creativity. Think about that.

I can take data and come up with a correlation for almost anything. But anyone who understands clinical practice knows there's a difference between correlation and causation. Our job is to take AI and bring it in behind those clinicians and health executives to make them better at what they do, by knowing what AI can help with, and then acknowledging and honoring the fact that humans have all this knowledge that is not going to change, not going to go out of style, when it comes to how we do health care and how we practice medicine.
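
Lawry's point about correlation is easy to demonstrate. In the purely synthetic sketch below, screening enough noise variables against a random outcome reliably turns up a "strong" correlation with no causal basis.

```python
# "I can come up with a correlation for almost anything": among enough
# pure-noise variables, some correlate strongly with any outcome by chance.
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_candidates = 50, 1000

outcome = rng.normal(size=n_patients)                   # random "outcome"
features = rng.normal(size=(n_candidates, n_patients))  # random "predictors"

# Pearson correlation of each noise variable with the outcome.
corrs = np.array([np.corrcoef(f, outcome)[0, 1] for f in features])
print(f"strongest correlation found in pure noise: r = {np.abs(corrs).max():.2f}")
# Typically prints r around 0.5 here, despite zero causal relationship.
```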

About the Author(s)

Felix Beacher, head of Healthcare Technology at Informa Tech

Felix heads the Healthcare Technology team at Informa Tech. He has direct responsibility for the Ultrasound Intelligence Service and is currently working on Omdia's forthcoming intelligence service on medical AI.
