Hoping to eliminate dubious studies from medical journals

Tanwen Dawn-Hiscox

September 10, 2020

A group of experts from the scientific publishing sector has devised the world’s first international guidelines to benchmark the reliability of AI systems in healthcare.

The initiative comes at a time when such technologies are increasingly touted as being capable of revolutionizing the sector.

The consortium, which includes the British Medical Journal (BMJ), Nature Medicine, and Lancet Digital Health, has created a set of requirements that medical studies relying on AI systems must meet in order to be published in recognized journals.

Faking it

While most medical research is subject to a high level of scrutiny before it is considered admissible, no official guidelines exist for reporting research conducted using AI systems – despite the many potential flaws of such work, including poorly collected data, small sample sizes, and narrow demographic samples.

The idea that AI could transform the healthcare sector – not just diagnostics, but also clinical workflow, medical imaging, therapy planning, patient communications, and the discovery of new drugs – holds genuine promise, especially amid the Covid-19 pandemic.

Canadian startup BlueDot claims to have used AI to detect an abnormal cluster of pneumonia cases in Wuhan, China, days before the World Health Organization alerted governments to its existence.

It is hoped that platforms such as Kaggle, which hosts the Covid-19 Open Research Dataset, could be at the heart of the most transparent and focused application of AI in healthcare to date.

However, with no reporting standards emerging from the medical community, governments and the general public run the risk of being sold on false promises, Professor Alastair Denniston of the University of Birmingham, an expert in the use of AI in healthcare and a member of the consortium, told The Guardian.

“There is currently a trust issue,” he said, adding that the guidelines will “ensure patients and healthcare professionals can be really confident that AI healthcare products are only deployed when they are known to be effective and safe.”

After examining more than 20,000 studies, Professor Denniston said that fewer than 1% met the standards the group has devised, with many making claims based on best-case scenarios, or on dubious practices in the collection and processing of data.

With the new framework, an extension of scientific reporting standards developed in the mid-2000s, the group hopes that clinical trials involving AI can be designed, conducted, and reported on as reliably as those using more traditional methods.

The framework will sit alongside certifications created by the industry – such as the ANSI-accredited standard launched in February by a group of 50+ companies including Amazon, Google, IBM, Microsoft, and Fitbit, and led by North America’s Consumer Technology Association. That standard defines terms related to artificial intelligence in order to improve consistency and transparency in the development and use of AI technologies in healthcare.
