AI Business is part of the Informa Tech Division of Informa PLC



Preparing for an AI system failure

by Chuck Martin
Members of the VisionAIres community share their strategies for dealing with the unexpected

Sooner or later, an organization can expect one of its AI-based systems to fail; the challenge is identifying when and where that failure might occur.

AI system failures hardly fall into the category of Murphy’s Law (anything that can go wrong, will go wrong), but they do suggest the wisdom of planning for the unexpected.

“An AI designed to do X will eventually fail to do X,” Roman Yampolskiy, a professor at the University of Louisville, wrote for the VisionAIres community.

“Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric systems misrecognize people, transcription software fails to capture what is being said; overall, it is harder to find examples of AIs that don’t fail.”

Anyone who has dealt with technology for any period of time realizes that ultimately, something will go wrong.

Yampolskiy assembled a dozen telling examples, such as an automated email reply generator that sent inappropriate responses, including “I love you,” to business associates, and medical AI systems that classified patients with asthma as being at lower risk of dying of pneumonia, along with others that had far more serious consequences. You can likely think of other incidents.

With this in mind, Yampolskiy outlined some of the best practices that can help prepare an organization to deal with AI failure.

These include:

- creating a safety mechanism for each potential software failure
- having a communications plan in case of a public embarrassment
- controlling user input to the system
- limiting learning to verified data inputs
- checking algorithms for racial, gender, age, and other common biases
- having a less ‘smart’ backup product or service available
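Several of these practices — a safety mechanism around each failure mode, controlled user input, and a less ‘smart’ backup — can be sketched as a wrapper around an AI component. This is an illustrative example, not code from the article; the function names (`smart_classifier`, `rule_based_backup`) are hypothetical stand-ins.

```python
def smart_classifier(text: str) -> str:
    """Stand-in for an AI model; assume it can fail unexpectedly."""
    if "corrupt" in text:
        raise RuntimeError("model error")
    return "spam" if "winner" in text.lower() else "ham"


def rule_based_backup(text: str) -> str:
    """A less 'smart' backup: a simple keyword rule."""
    return "spam" if "free money" in text.lower() else "ham"


def classify_with_safety(text: str) -> str:
    # Control user input: reject anything outside expected bounds.
    if not isinstance(text, str) or not text.strip() or len(text) > 10_000:
        raise ValueError("input rejected by validation")
    try:
        return smart_classifier(text)
    except Exception:
        # Safety mechanism: degrade to the simpler backup
        # rather than fail outright.
        return rule_based_backup(text)
```

A call that reaches the model returns its prediction; a call that trips the model's failure falls through to the backup rule, so the service degrades rather than breaks.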

While a failure may be inevitable at some point, preparation can lessen the potential impact.

The challenge is to be ready for the unexpected, which can manifest itself as an unintended consequence of applying AI in business.

