Failure is good - how ‘black box thinking’ will change the way we learn about AI


All paths to success lead through failure; to learn from it, you have to change your perspective on it. Let’s apply the ‘logic of failure’ to artificial intelligence

By Anita Constantine, Constellation AI, 22 October 2019

In a brave new world, it’s not just the brave who must stand accountable to prevent the repetition of historical errors. We live in a morass of political cover-ups, data breaches and capitalist control: now, more than ever, is the time for radical transparency. A change in regulatory mindset must happen, so we’re applying Matthew Syed’s theory of ‘Black Box thinking’ and the logic of failure to Artificial Intelligence.

Too often in a social or business hierarchy, we feel unable to challenge enduring practices and behaviours. We abide by rules and regulations we might know to be outdated and inefficient; we might witness dangerous error or negligence, and yet feel unable to challenge figures of authority. Negative loops perpetuate when people do not investigate error, especially when they suspect they may have made a mistake themselves. But when insight can prevent future error, why withhold it? The only way to learn from failure is to change our perspective on it: to understand that it isn’t necessarily a bad thing.

In aviation, if something, or someone, fails on board an aircraft, the near-indestructible Black Box will record it. Independent review bodies have been established to examine its findings, with the sole purpose of discovery. The process is almost impossible to cover up. Rather than chase culpability, the findings are noted and shared throughout the industry; everyone has access to the data, so everyone can apply the lessons. Once pilots were protected, they came forward to discuss error, failure became a necessary step in learning, and industry regulation changed as a result. In aviation, failure is ‘data-rich’. Countless lives have been saved because of it.

In medicine, there are numerous reasons why a human or system might fail during surgery or patient care. In the past, mistakes were silenced for fear of recrimination, and vital opportunities to learn were discarded. Last year, the NHS spent £2.6 billion on litigation for medical errors and negligence, funds that could have been far better spent elsewhere. Mistakes aren’t a waste of valuable resources; they are a way of safeguarding them. Speaking up about current failures can help us avoid catastrophic failures in the future. To create a transparent environment in which we can progress from error, we need to move from a blame culture to a learning culture: to study the environment and systems in which mistakes happen, to understand what went wrong, and to share the lessons learned.

In ‘Black Box Thinking: The Surprising Truth About Success (and Why Some People Never Learn from Their Mistakes)’, Matthew Syed calls for a new future of transparency and a change to the mindset of failure. These principles, according to Syed, are about “the willingness and tenacity to investigate the lessons that often exist when we fail, but which we rarely exploit. It is about creating systems and cultures that enable organisations to learn from errors, rather than being threatened by them.” By changing your relationship with failure to a positive one, you’ll learn to stop avoiding it.

“All paths to success lead through failure; what you can do is change your perspective on it. Admit your mistakes and build your own Black Box to consistently learn and improve from the feedback failure gives you.”

- Matthew Syed, ‘Black Box Thinking’

The AI black box

Let’s apply this ‘logic of failure’ to Artificial Intelligence as an alternative approach to regulation, with transparency and learning modelled on ‘Black Box thinking’ in aviation.

Contrary to hard-line AI ethicists, who may take a fatalistic view of punishment when things go wrong, a ‘Black Box thinking’ approach allows us to be realistic in how we react to and deal with issues, how we work to solve them, and how we translate that to the rest of the industry so that others might learn too.

In any industry, applying intelligent systems to the challenges we face is likely to result in unintended consequences. It’s not always obvious how to identify hazards, or even how to ask the right questions, and there is always the chance that something will go wrong; therefore, no business should be placed on a pedestal. We need to collect data, spot meaningful patterns, and learn from them, taking into account not only the information we can see, but the information we can’t. Using ‘deliberate practice’, we can consistently measure margins of error and readjust each time. This can be applied to every part of human learning. How can we progress and innovate if we cannot learn? How can we learn if we can’t admit to our mistakes?

What we can do is respond to those consequences with transparency, accountability and proactivity: be trusted to do the right thing, challenge industry standards, and consistently work on improving them. We must not build an industry on silence, in fear of being vilified. Instead of extreme punishment, we should create the space and processes to learn and share knowledge, using root-cause analysis, so that issues are not repeated elsewhere. We need to gather the best perspectives and opinions, with experts coalescing to challenge and debate industry standards. By doing this, AI will advance more effectively and safely, and society will reap the rewards.

Artificial Intelligence is not a new industry; it is a new age. To live successfully in this brave new world, we must readjust our thinking and be just that: brave. In good hands, technology and artificial intelligence can turbo-charge the power of learning. We’ll get to a better place, faster, if we hold people accountable and resolve issues in public. We must have the courage to face the future with openness and honesty, to not be afraid of failure, and to admit to it for the sake of learning.


Constellation AI are developing fundamental technology to advance human modelling and human-like communication with machines.
