July 8, 2020
The societies suggest the agency wait for existing products to prove their benefits before clearing more
Following rapid advances in AI for medical imaging, the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) are raising questions and cautions about the safety and effectiveness of actual products.
The comments were published in a letter to the Food and Drug Administration (FDA), suggesting it is “premature for the FDA to consider approval or clearance of algorithms that are designed to provide autonomous image interpretation independent of physician expert confirmation and oversight because of the present inability to provide reasonable assurance of safety and effectiveness.”
The FDA earlier this year held a public workshop titled “Evolving Role of Artificial Intelligence in Radiological Imaging” to discuss emerging applications of AI, including AI devices for automating the diagnostic radiology workflow and guided image acquisition.
The workshop was intended to identify the benefits and risks associated with the use of AI in radiological imaging.
The FDA defines radiology AI as “software in which AI/ML is being used to automate some portion of the radiological imaging workflow,” such as detection, diagnosis, and reporting.
The ACR and RSNA letter suggests the FDA should wait until current AI algorithms have broader market penetration, so their safety and efficacy can be documented.
The professional organizations cautioned that if the goal is to remove the physician from the image interpretation process, the public must be assured that the algorithm is as safe and effective as the physicians it would replace.
“The potential output of an autonomously functioning AI algorithm in radiological imaging would be to use an AI algorithm to potentially bypass the physician experts in image interpretation and refer patients to physicians who are not experts in medical imaging for treatment based on the results of the algorithm,” the letter stated.
“The ACR and RSNA believe it is unlikely FDA could provide reasonable assurance of the safety and effectiveness of autonomous radiology patient care without more rigorous testing, surveillance, and oversight mechanisms throughout the total product life cycle,” it added.
The ACR and RSNA said responsibility for patients’ imaging care should rest solely with the interpreting physician, and that “AI is just another tool at the expert’s disposal.”
They were not the only organizations weighing in on AI in medical imaging. The Medical Imaging and Technology Alliance (MITA), a trade organization representing the manufacturers of medical imaging equipment and radiopharmaceuticals, sent the FDA a number of questions relating to potential risks posed by AI-based solutions.
For example, the organization asked whether the algorithm’s output would simply display on-screen indicators or direct a user to review certain information, or whether it would render a conclusive diagnosis or trigger a treatment.
Other questions related to the qualifications of the user, the potential for review of the model output and any intervention, and how the updates to the AI algorithm would be managed.
One of the more significant questions concerned the potential harm of an incorrect output, such as a false-negative result.