A personal essay from the head of healthcare technology at Informa Tech


A friend of mine, let us call her Fiona, recently experienced symptoms that made her worry that she might have had a mild heart attack. She did go to the doctor, but long waits, jargon-laden medical reports and lack of a definitive diagnosis prompted her to pose her case to ChatGPT. What follows is a fascinating case study that illustrates how a large language model could be used to partly replace a physician.

I discuss the potential benefits and risks; I will leave you to decide for yourself whether my friend was reckless with her health or whether she stumbled upon a potentially game-changing way of using a large language model-based tool like ChatGPT.

Fiona is a brilliant woman with an insouciant sense of humor and no known short- or long-term health conditions. But five months ago, she experienced heart palpitations one night while lying in bed. She was concerned enough to visit her doctor and underwent an electrocardiogram (ECG), which measures the electrical impulses emanating from the heart. She was referred to a cardiologist.

Two months later, Fiona had a phone consultation for around five minutes with a cardiologist. He offered her some reassurance: It was unlikely that she had had a heart attack. More likely, the palpitations could be related to a mysterious infection Fiona had two weeks prior. This may have caused pericarditis, which could have then caused the heart palpitations. He also suggested that the heroic amounts of coffee that Fiona had been drinking could potentially be a contributing factor.

Another two months on, Fiona was assessed using an echocardiogram, a type of ultrasound scan that images the heart. She finally received a cardiology report that considered her symptoms and reviewed the findings of the ECG and the echocardiogram. However, the report was full of cardiological jargon, and even its rough gist was unclear.

Fiona is a very busy woman and did not have time to do a whole lot of research into the various technicalities included in the report. So, being the kind of person who sometimes looks for left-field solutions, she pasted the cardiology report into ChatGPT and then started asking it questions.

For example: ‘What is ST elevation in anterolateral leads?’ ‘What is eGFR?’ and ‘How much does eGFR vary over the course of a day?’ The answers ChatGPT gave were clear and seemed to make sense. But could they really be trusted? In this case, yes, because they were consistent with answers from an internet search sourced from well-respected websites. Then she asked a question whose answer required real sophistication: How do the echocardiogram results affect the interpretation of the ECG results? Again, the response was clear and seemed to make perfect sense.

Finally, she asked ChatGPT the $64,000 question: Was there evidence of a serious heart condition? She rather expected that ChatGPT would refuse to give this kind of fundamental judgement. But it did not hesitate. The response, which I slightly shortened, was as follows:

"It does not appear that there is evidence of a serious heart condition in this case. Although the ECG showed ST elevation in anterolateral leads, the subsequent echocardiogram showed normal left ventricular (LV) size and function, normal diastolic function, and no significant valvular abnormalities. Additionally, the R-test showed sinus rhythm with some episodes of sinus tachycardia and supraventricular ectopics, but there was no evidence of sustained arrhythmias or other serious abnormalities."

It seems that Fiona does not have a serious heart condition at this time, but ongoing monitoring and management of her blood pressure, cholesterol, and cardiac function may be warranted.

Some people in medical AI are beginning to ask whether this kind of methodology could or should replace going to the doctor, at least to some degree. While provocative, this possibility has to be taken seriously. For some people, visiting a doctor can be impractical (as in many parts of the developing world) or too costly. So it is time to start considering the pros and cons.

One problem with using ChatGPT as a doctor: ChatGPT answers questions but does not ask them.

Fiona, for example, might not have mentioned the mysterious infection she had had two weeks before the symptoms. Another problem is that little is known about the pitfalls of this kind of approach, because, to my knowledge, no rigorous studies have examined it yet. But such studies will undoubtedly come.

Yet there are clear advantages to doing what Fiona did. Consulting ChatGPT solved her ongoing problem of being pressed for time. She also had the opportunity to ask ChatGPT as many questions as she liked, at her own pace, rather than having the consultation squeezed into an eight-minute window.

Certainly, the emergence of large language model-based tools like ChatGPT is opening up new vistas for health care. Companies have already started to enter this space (one example is Mend, whose CEO I interviewed recently on The Medical AI Podcast). There is a lot of exploring to do by the public, regulators, the medical profession and the health care industry.

As for Fiona, she is still trying to find time to go to her doctor. She has, however, taken up yoga.

About the Author

Felix Beacher, head of Healthcare Tech at Informa Tech

Felix heads the Healthcare Technology team at Informa Tech. He has direct responsibility for the Ultrasound Intelligence Service and is currently working on Omdia's forthcoming intelligence service on medical AI.
