December 19, 2022
A new Radiology AI Safety Initiative has been formed to help ensure that algorithms do not suffer from ‘AI drift’ and endanger patient health.
Deepc, Centaur Labs, and Segmed formed an alliance to monitor AI algorithms in clinical settings and discover whether they are performing suboptimally over time. Because clinical data changes, there is a risk that the historical training data underpinning an AI model becomes less representative the longer the model is in use.
Post-market surveillance of AI at scale, meaning the improvement of algorithms both before and after clinical deployment to ensure continued patient safety, has been a challenge for the industry. An AI model can drift when, for example, data from a new group of patients or a new type of scanner is introduced.
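The partners do not describe their monitoring method in detail, but the kind of drift they describe is often flagged with a simple statistical check. The sketch below, which is an illustration rather than the alliance's actual approach, uses a two-sample Kolmogorov-Smirnov test to compare a model's output scores from the training period against recent production scores; the function name and synthetic data are hypothetical.

```python
# Hedged sketch: one common way to flag distribution drift between a
# reference (training-era) sample and recent production data, using a
# two-sample Kolmogorov-Smirnov test. Not the partners' actual method.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, recent, alpha=0.05):
    """Return True when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # e.g. scores on scanner A
shifted = rng.normal(loc=0.6, scale=1.0, size=5000)   # e.g. scores on a new scanner

print(drift_detected(baseline, baseline[:2500]))  # unchanged population
print(drift_detected(baseline, shifted))          # shifted population
```

In a real deployment this test would run on a rolling window of predictions or input features; a positive result is a signal to investigate and, as the article notes, potentially retrain on broader data rather than an automatic action.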
“It is crucial to assess the performance of AI before and during its deployment in routine clinical use,” the three partners said. “AI algorithms can then be re-trained on broader datasets, reflecting the target population, and making them more robust and safe.”
German medtech company deepc said its deepcOS radiology AI platform will enable vendors to do real-time post-market monitoring to spot AI drift. Data is anonymized and cleared for AI research and development through California-based Segmed's data platform, and annotated by Centaur Labs, which is headquartered in Boston.
The three said this data can then be used to "retrain the AI algorithm, improve performance, robustness and clinical safety."
Centaur Labs CEO Erik Duhaime said “providing the ability to identify model weaknesses through deepcOS and correct them quickly in our joint offering is a huge step forward in scaling AI to clinical mass adoption.”