Helping medical professionals save their increasingly valuable time
by Max Smolaks 23 March 2020
British medical imaging start-up behold.ai says its deep learning system can quickly and reliably label chest X-rays from Covid-19 patients with developing pneumonia as ‘abnormal’, which could help speed up diagnosis amid an ongoing pandemic.
After reviewing 28 X-rays from confirmed cases of infection, the platform, called ‘red dot’, was shown to be correct 85 percent of the time – close to on par with a qualified medical professional (see below) – but its creators say accuracy will increase as the models gain access to more data.
“As we evaluate further positive cases from across the world, including here in the UK, our results will be further validated,” said Dr Tom Naunton Morgan MB FRCS FRCR, chief medical officer at behold.ai. “This will increase the utility of our ‘instant triage’ and potentially help reduce the burden on healthcare systems as more and more cases of pneumonia present and require rapid diagnosis.”
Behold’s red dot is used by several National Health Service (NHS) trusts in the UK and has been cleared by the US Food and Drug Administration (FDA), with the US launch expected later this year.
The right time
Behold.ai was established in 2016 to bring the power of advanced image recognition to radiology departments. The red dot platform is not meant to replace experts but rather to assist them, helping them reach correct conclusions faster and with greater accuracy.
The core models within red dot have been trained using more than 30,000 example images, reviewed and reported by experienced consultant radiology clinicians. The company says the platform can label a chest X-ray as normal or abnormal in less than 30 seconds.
Identifying diseases like lung cancer on radiology scans is one of the most promising and most immediately available applications of AI, and more specifically, deep learning. It’s a hotly contested field; even Google is involved.
In a recent paper published in the medical journal Lancet Digital Health, a team from the University Hospitals Birmingham NHS Foundation Trust found that when interpreting medical images, on average, deep learning systems classified a disease correctly 87 percent of the time – compared with 86.4 percent for healthcare professionals. Such systems also correctly gave the all-clear 92.5 percent of the time, as opposed to 90.5 percent for human doctors.
The authors noted that in this limited comparison, the healthcare professionals were not given access to additional patient information that they would have in the real world to improve the accuracy of their diagnosis.
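As an aside, the two figures reported for each group in the Lancet Digital Health comparison correspond to the standard diagnostic metrics of sensitivity (correctly flagging disease) and specificity (correctly giving the all-clear). A minimal illustrative sketch, using hypothetical counts rather than the study's actual data:

```python
# Illustrative only: sensitivity and specificity from confusion-matrix counts.
# The counts below are hypothetical and chosen to reproduce the percentages
# reported for deep learning systems; they are not the study's real data.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Proportion of diseased cases correctly classified as abnormal
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    # Proportion of healthy cases correctly given the all-clear
    return true_negatives / (true_negatives + false_positives)

# Hypothetical example: 870 of 1,000 diseased scans flagged correctly (~87%),
# 925 of 1,000 healthy scans cleared correctly (~92.5%)
print(f"sensitivity: {sensitivity(870, 130):.1%}")
print(f"specificity: {specificity(925, 75):.1%}")
```

Note that neither figure alone captures performance: a system that labels every scan ‘abnormal’ has perfect sensitivity but zero specificity, which is why studies report both.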
behold.ai said deep learning technology was available “here and now” to help manage the burden that will fall on health systems like the NHS in the coming weeks.
Back in 2019, the UK government pledged to spend £250 million ($303m) on artificial intelligence initiatives within the NHS – including a taxpayer-funded AI Lab.