- British businesses need to begin the hard work of engaging government on the future of AI, or risk losing public trust, argues Lord Clement-Jones, Chair of the House of Lords Select Committee on Artificial Intelligence, in this exclusive interview
- The UK may never match global competitors like China and the USA on AI spending, but the country has the resources and know-how to become an international standard-setter for the ethical application of AI
- Government will review the Lords AI Report and begin looking at ways of bringing together the UK's disparate AI strengths, which will be necessary if the nation is to remain competitive in AI and future technologies
LONDON, UK – Earlier today, the House of Lords Select Committee on Artificial Intelligence published its findings in the Lords AI Report, entitled ‘AI in the UK: Ready, Willing and Able?’, with the aim of supporting the Government and the country in legislating to fully realise the potential of AI.
Gathering research from hundreds of AI experts, global businesses, and academics, the Committee set out to answer five key questions:
- How does AI affect people in their everyday lives, and how is this likely to change?
- What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
- What are the possible risks and implications of artificial intelligence? How can these be avoided?
- How should the public be engaged with in a responsible manner about AI?
- What are the ethical issues presented by the development and use of artificial intelligence?
“Britain contains leading AI companies, a dynamic academic research culture, a vigorous start-up ecosystem and a constellation of legal, ethical, financial and linguistic strengths located in close proximity to each other,” the Committee said in a statement. “Artificial intelligence, handled carefully, could be a great opportunity for the British economy, and could help protect society from potential threats and risks.”
Hot on the heels of the official release, we spoke to Lord Tim Clement-Jones, a Liberal Democrat peer and the Chair of the committee, in this exclusive interview.
Following the launch at the Royal Society today, how has the report been received?
“Well, I think it’s been pretty well received! People have picked up on the parts that they’re particularly interested in. Some are interested in data monopolies and the big tech side of things, while others are interested in education and retraining. Then, of course, there’s the issue of explainability in AI algorithms, and some people have picked up on the ethical framework we’ve suggested for AI. We’ve even had people raise the issue of lethal autonomous weapons and that kind of thing. I think we’ve seen that there’s something in the report for everybody with an interest in AI.”
How well do you think the report did in answering the five key questions outlined early on by the committee?
“They didn’t do a bad job. Of course, it’s always difficult with these sorts of things – to have data you can really rely on in the future to make those kinds of predictions. We aimed to outline some possible scenarios, rather than set predictions, and it’s worth bearing those scenarios in mind when we talk about the future of AI.”
“I think the really important lesson is that AI is here and now. This isn’t just something we need to be thinking about in the future.”
“For instance, there are lots of estimates out there about job losses. Some of them coalesce around the 30-40% mark, but they use the same methodology as the Frey and Osborne findings, so it’s a bit suspect. It’s not as if there’s this absolute, golden thread running through this. You have to make a judgement and say: if that is the scenario, that really adds a sense of urgency to developing, say, a national retraining scheme, and you put more emphasis on that. I don’t think one wants to be caught on the hop. Governments have to make those judgements, and we’re suggesting that they do by planning and preparing for a large amount of retraining when it comes to AI.”
Did the report turn out the way you expected? What did you personally learn about the field that you didn’t know before?
“Well, firstly, I learnt a lot about what international competitors are doing, and how they’re putting together strategies, which makes it even more imperative that we get our act together. I also learnt an awful lot about how incredibly dynamic our research and AI development sectors are – we visited a lot of different startups, we had a whole conference seminar with TechUK, and also received a lot of evidence about the considerable advances being made in areas like medicine.”
“I think the other really important lesson is that AI is here and now. If you look at the front of the report, we put out a little ‘day in the life of’, which attempts to show how much AI is already in our everyday lives. This is very much here and now, and isn’t just something we need to be thinking about in the future.”
What are the key takeaways of the report for British businesses?
“They should focus on two things, really. They should focus on ensuring government delivers the climate for AI in terms of getting the context right for the kind of capital that people need for growth. One issue we’ve got is that we don’t have many AI unicorns at all here – if any – because they tend to be sold on at the point where they’re just about to really take off. So, we need to have that kind of climate. We need to make sure all the enterprise schemes and VC funding schemes actually work for businesses – so businesses need to enter their voices into that debate and make sure that they’re really talking to government. This also applies to the whole skills agenda – they should make sure that they put their voice behind saying, ‘yes, we really do want a visa system that helps us get the skills we need into the UK’.”
“Businesses should focus on ensuring government delivers the climate for AI in terms of getting the context right for the kind of capital that the sector needs for growth.”
“On the other side of the coin, there’s the need to mitigate the risks. The major risk is losing public trust. When we develop our AI systems, we need to ensure that we are actually building them within an ethical context and framework. The five principles outlined in the report might not be the principles that are eventually adopted, but by and large, we need to make sure that technology is being developed for people’s benefit. All those principles around transparency and explainability apply. It’s really important that the tech industry gets behind those, because we mustn’t lose public trust in the way we did, for instance, with GM foods. There, the public couldn’t see the benefit of what could be occurring – for them, it wasn’t a priority because of the risks they saw involved.”
What might the regulatory landscape of AI look like if the recommendations of this report are pursued by government? Should enterprises be concerned about regulation?
“I think enterprises should really be pleased that we’ve adopted such a flexible framework. It’s an overarching, ethical framework where you set out the principles. We aren’t suggesting there should be any more regulators out there – we don’t think a special AI regulator is the way forward.”
“We don’t think a special AI regulator is the way forward.”
“We think that the sector regulators – whether it’s the Competition and Markets Authority, the Information Commissioner, or the Financial Conduct Authority – all have specialisms in regulating their specific sectors or their particular issues. What we’re saying is that they should then have regard for these principles when they regulate. We think that’s really important as it provides a kind of expertise for developing regulation.”
One of the most interesting conclusions of the report is that, while the UK might never match China or the USA in terms of AI spending and investment, it can certainly take a lead on the ethical development of AI. What long-term obstacles do you see arising that might hinder our international influence?
“Well, a lack of skills, either in academia or business. If Brexit meant that we couldn’t use the UK to its maximum capacity in terms of researching and developing AI, skills could certainly hold things up. That’s what we’re trying to minimise by talking about the visa policy.”
“There are some important areas that need to be resolved before we can be absolutely certain that Britain’s historic expertise in this area can be fully realised. Government have got many of the right ingredients in place – there’s the Alan Turing Institute, which will be a pre-eminent focal point for research. There are many other organisations, such as the Centre for Data Ethics and Innovation. What the government has got to do is pull all of these ingredients together and make sure all of these areas are addressed. You can’t simply say that we’ll play with one or two areas of AI. They’ve got to really put the whole suite of recommendations into place to ensure we’re in the best possible shape for the future.”
What’s next for AI in Parliament now that the Select Committee has disbanded?
“Well, the government will have to respond to the report within two months, which they will do – and we think it’ll be a pretty fair reading. [Secretary of State for Digital, Culture, Media & Sport] Matt Hancock was tweeting about our report today, so we think that’s a positive sign. The Law Commission was advised to look at the liability issues in AI, and this morning they’ve said they’re going to be looking at our recommendation, so we’ve already got some traction there. I think there will be ongoing activity – I myself co-chair the All-Party Parliamentary Group on AI.”
“There’ll be others who want to push government to adopt these recommendations as a blueprint for the future, so I hope the government see this as a fantastic way of pushing forward their agenda. By the time I speak at The AI Summit in June, I hope I’ll be able to say, ‘yeah, we’ve made these recommendations, and the government is implementing all 74 of them’.”
Based in London, Ciarán Daly is the Editor-in-Chief of AIBusiness.com, covering the critical issues, debates, and real-world use cases surrounding artificial intelligence – for executives, technologists, and enthusiasts alike.