An opinion piece by the director of data science and AI at the British Standards Institution, the U.K.'s national standards body.

Craig Civil, Director of Data Science and AI, British Standards Institution

May 5, 2023


In the movie Jurassic Park, actor Jeff Goldblum plays a mathematician who delivers a famous line: “Your scientists were so preoccupied with whether they could, they did not stop to think if they should.” With respect to dinosaurs roaming the earth, it is a wise sentiment. It is also one that some of AI’s loudest cheerleaders would do well to consider.

Do not get me wrong. I am a champion of AI. With digital trust embedded, AI presents an opportunity to deliver significant societal benefits including helping us to work smarter, rest longer and play harder. But AI is not magic – it has been coded by humans – and before organizations rush to use it, they might want to consider whether it fits the specific task or challenge.

In previous organizations, I have seen people spend significant sums on new “wonder technologies.” Sometimes these delivered great operational savings and customers loved the new AI app, but I have also seen the potential savings eroded by expensive-to-maintain technology and customers frustrated by a less-than-intelligent app. Today, AI might very well be the answer, but you need to know what issue you are addressing before you can identify whether that is the case.

To realize the benefits of AI, any organization considering integrating it into operations should start by asking whether the use case is appropriate: is a complex, expensive piece of software really needed for the job, or is there – whisper it – a less glamourous but equally effective solution already available? This matters not simply to avoid wasting time and money, but because if AI is used without the right parameters and purpose, it will not build digital trust. People will be skeptical about its use or fail to appreciate its value.

For example, there are many questions over using AI to make decisions about the future of people in an organization. After all, while the technology may be capable, human beings train the system, meaning their biases can potentially feed through into the AI. Relying solely on AI in this context can increase risk to a degree that far outweighs the potential benefit of time saved. The use case is not there – but could HR teams use AI in tandem with human intelligence? Absolutely.

By the same token, AI in health care is already well advanced in improving patient outcomes. Using AI to review scans for cancerous cells is not entirely risk free. However, the ability to scale up these reviews using machine learning could mean the risk of the technology missing an issue that needs investigation is outweighed by the potential to improve identification of cancer for the majority, at a far larger scale.

Is AI the right tool?

If AI is the right tool, it is vital to apply technical skill and commercial and ethical judgement to how it is being used and the parameters within which it is operating. Like everyone, I have been exploring what ChatGPT can do. I started by asking who the U.K. prime minister was – and it told me it was Boris Johnson, because its training data dated from when he was indeed still in Number 10 and had not been updated since.

While generative AI is developing every day – Bard, for instance, is connected to the internet in real time – the episode underscores the idea that when you get an answer from an AI source, the next step is to test it, think on it, and apply your own learning and intelligence. Once you understand the system and are aware of the risks of using that piece of software, you can make a judgment call as to whether it is going to be appropriate.

As excited as we are about the possibilities AI opens up, the perception that AI is somehow 'magic' is inaccurate. Taking a step back, it is about first acknowledging that it is a machine, and an army of people have coded it and fed it information. The next step is really thinking about the frameworks that are in place to govern its use. Rather than using AI for the sake of it, let us balance the undoubted great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

So with checks and balances and controls, we can realize the potential benefit for society. Ultimately, a risk-based approach can help determine whether AI is the right route for your organization – and whether just because you can use it, means you should.

About the Author(s)

Craig Civil, Director of Data Science and AI, British Standards Institution

Craig Civil is the director of data science and AI at the British Standards Institution, the U.K.'s national standards body.
