Executive spotlight: Clare Walsh from the Institute of Analytics

Chris Price speaks to Clare Walsh, Director of Education at the Institute of Analytics, about how organizations need to put processes in place to ensure they are using AI responsibly

Chris Price, Freelance reporter

February 6, 2023


How can you help firms deliver AI that meets the requirements of trust, transparency and security?

For us, robust and responsible innovation benefits everybody, but it’s not always easy to achieve. There is no such thing as a perfect algorithm; they all come with compromises.

Part of our role as experts in the field of data science is to understand what those compromises are and to communicate them to the decision-makers we work with.

In an ideal world, the CEO would be super enthusiastic about AI and drive this process forward, rather than the chief data officer, because really it’s about decision-making, not the technology. It has to be driven from the top.

How do you upskill people to work with new AI technologies?

As a membership body for data science professionals, we offer a range of services, such as upskilling and keeping people up to date with rapidly emerging technologies.

Though we work with a range of people, one of the groups we target is business analysts. Typically, they are very confident with programs like Microsoft Excel, but we encourage them to take the next step – to see there is life beyond Excel!

We show them there are other point-and-click tools that will do far more exciting things, such as working with really big datasets, which you can’t do in Excel. Afterwards, we move them on to something very user-friendly, such as coding in R.
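To make that concrete, here is a minimal sketch in Python (pandas) of the kind of task that breaks a spreadsheet: aggregating a CSV far larger than Excel’s roughly one-million-row limit by streaming it in chunks. The file name and column names are hypothetical placeholders, not anything from the interview.

```python
# Minimal sketch of a task that breaks Excel: summing a column by group
# in a CSV far larger than Excel's ~1,048,576-row limit, streamed in
# chunks so the whole file never sits in memory at once.
# "sales.csv", "region" and "amount" are hypothetical placeholders.
import pandas as pd

totals: dict[str, float] = {}
for chunk in pd.read_csv("sales.csv", chunksize=500_000):
    # Aggregate each 500k-row chunk, then fold it into the running totals.
    for region, amount in chunk.groupby("region")["amount"].sum().items():
        totals[region] = totals.get(region, 0.0) + amount

print(pd.Series(totals).sort_values(ascending=False))
```

Because each chunk is aggregated and then discarded, the full file never has to fit in memory at once – exactly the capability a spreadsheet can’t offer.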

Do you think that technologies such as AI can help to address the problem of low productivity we have in the UK?

Absolutely. I think it’s our best hope of increasing productivity and we need to embrace it. Our universities are doing an amazing job – really charging forward and leading the way, particularly in international academic research.

But when it comes to corporate research and development in AI, there’s not so much going on. One of the problems here is access to data. GDPR has been a little unclear as to what qualifies as a research dataset, and very few companies have massive datasets of their own to experiment on. There are research datasets they could use without incurring massive expense.

How do organizations ensure they are using data in an ethical and unbiased way?

I think there are some fundamental principles that underlie good data practice, and actually some of the latest technologies, such as ChatGPT, can do an amazing job of supporting us in documenting our processes.

Maybe a year ago you wouldn’t have wanted your super-expensive data scientists to spend their time writing out pages of documentation. Now they can put their code into a generative AI model such as ChatGPT and ask it to write the documentation in natural language.
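As an illustration of that workflow, here is a minimal sketch using the OpenAI Python client (openai>=1.0), which assumes an OPENAI_API_KEY in the environment; the model name and the code snippet being documented are placeholder assumptions, not the IoA’s actual tooling.

```python
# Minimal sketch of the documentation workflow described above.
# Assumes the OpenAI Python client (openai>=1.0) with an OPENAI_API_KEY
# set in the environment; the model name and the snippet being
# documented are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

snippet = '''
def risk_score(income: float, debts: float) -> float:
    return max(0.0, 1.0 - debts / max(income, 1.0))
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Write clear, natural-language documentation for the code you are given."},
        {"role": "user", "content": snippet},
    ],
)

# The draft is only a starting point: as Walsh stresses below, a human
# must read and check it before it is published anywhere.
print(response.choices[0].message.content)
```

The draft the model returns is a starting point, not a finished artifact, which is exactly the oversight point Walsh makes next.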

Is there a danger of relying too heavily on these new generative AI solutions?

Absolutely. At the Institute of Analytics (IoA), our policy is that you should only ever use generative AI to do something you could do yourself if you had more time. There is a legal requirement for human oversight.

So, for example, in the case of AI documentation, someone needs to read through what the generative AI has produced to check that it isn’t complete rubbish. That’s why we say let’s embrace these technologies but do so responsibly.

Do you think there need to be greater controls in place for some uses of AI?

Certainly we would like to see more controls over some of the more experimental applications, such as Snapchat’s My AI chatbot, which has been found to give inappropriate advice to children.

We also need a fast way to remove such applications, because a polite request isn’t going to do it. It has to be mandated, and at the moment there isn’t a framework in place for this.

However, most companies don’t have the business model of large companies like Snapchat and actually do value trust and reputation.

While it tends to be the more dangerous AI applications that attract all the press attention and give AI a bad name, behind the scenes there are some amazing examples of responsible AI delivering services such as automation and customer insight.

Also, although there is some risk with AI, for many companies there’s also a risk that if they don’t engage, they may struggle to get to where they need to be in the next 10 or 20 years.

Should government legislation play a more important role in controlling AI as has been suggested recently?

I think it’s incredibly difficult to offer a strong view right now. Even the EU AI Act has been completely rewritten since the beginning of the year. It no longer talks about risk; it talks about foundational technologies, in response to generative AI.

We are all struggling, but my personal view is that ideally companies will sign up to a voluntary code of conduct in which we all agree to respect the fundamental principles of innovation. There are already laws in place to deal with issues such as discrimination, as well as Article 22 of GDPR, which limits the circumstances in which AI can make automated decisions about an individual.

That said, many of the big companies have been benefitting from a legal deficit for some time now, because we haven’t had laws in place to control them. As a result, they have just been doing what they want. My advice to SMEs is to assume that the law will eventually catch up with negative practices. The Information Commissioner is very clear: there will be no place to hide from reckless innovation.

Clare Walsh is chairing the discussion “Responsible AI: Does Your Model Have Trust, Transparency and Security?” at AI Summit London on June 15, 14:35 to 15:15.

About the Author(s)

Chris Price

Freelance reporter

Chris Price is a freelance technology and transport journalist and a copywriter for brands. He began his journalistic career in 1992 writing about satellite TV and home cinema for consumer publications, becoming freelance in 1997.
