A conversation with Laurence Lee, U.K. Ministry of Defence second permanent secretary

Ben Wodecki, Jr. Editor

January 25, 2023

12 Min Read
British army soldier in an armored vehicle during military exercises in Salisbury, England.

Last summer, the U.K. Ministry of Defence unveiled its Defence Artificial Intelligence Strategy, a blueprint for transforming the country’s military into an AI-ready force, at the AI Summit London.

MoD second permanent secretary Laurence Lee further explained the country’s overarching principles underpinning the strategy in an opinion piece for AI Business.

Recently, Lee joined the AI Business podcast to take a deep dive into the strategy’s finer points – including addressing skill shortages and working with allies. Lee also discussed his views on what the future holds for AI and the British armed forces.

The following is an edited transcript of that conversation. You can listen to the full chat in the latest episode of the AI Business Podcast below, or wherever you get your podcasts.

The pair also spoke about the MoD's work in quantum computing. Head to Enter Quantum to read more.

AI Business: Talk to us a bit about the Defense AI Strategy report unveiled at the AI Summit last June. How much does this expand on the data strategy for defense previously disclosed in 2021?

Laurence Lee: It was a really exciting day for us. We've been pretty busy since then. Broadly, it set out how we will prepare and organize ourselves, as much as it will talk about how we work with industry, with academia and our allies to exploit this data with the use of AI and machine learning techniques.


The first objective of the Defense AI Strategy is to make sure that we in defense are an AI-ready organization. We think of being AI-ready as having the right skills and leadership in place, the digital and data enablers ready, and any new or revised policy prepared that we think we might need. The digital and data enablers are where the digital and data strategies for defense are absolutely critical.

The second objective is for us to harness our data and utilize our skills to develop battle-winning AI capabilities, at pace, at scale and responsibly. Central to our efforts on this is the Defense AI Center, which we hope will champion the use of AI throughout the Ministry of Defence. It will help us innovate, and scale science and technology and enable everything that we do by providing and supporting the required tooling, expert advice and skills development.

Working with our U.K. industry and academia partners in new ways is also key to success in this area, and to strengthening our defense AI ecosystem.

We're keen to develop new relationships, solve new requirements, think about problems in a different way, remove commercial barriers, and to work better with small and medium-sized enterprises. We haven't got it right in the past and we want to make that feel different, so those businesses are able to work with us more easily.

Our final objective is to face the challenging global strategic questions around the use of AI in the military domain. We want to make sure that the development of such systems is safe and responsible. And we want to promote stability and security in the use of AI.

AI Business: How is the U.K.’s stance on AI policies in defense different from that of the EU? The U.S.?

Lee: We're closely aligned on our approaches to this topic. We work very closely with our allies, including the EU and U.S. partners on the responsible development of new technologies and military capabilities. During the drafting of our strategy, we were in constant engagement with our allies. And we are continuing that dialogue. We get so much from the different perspectives of the EU and the U.S., and I hope they are enriched by how the U.K. system thinks about these questions.

It is important to us to be pushing at the frontier of these issues with the very best minds thinking about these questions. Both the U.S. and NATO have published their own ethical principles for the use of AI, which strongly align with the British view.

Thinking about the future, we've got to stay joined up if we're going to achieve interoperability. This has to include establishing common ways of operating. And that common way of operating has to include the use of responsible AI in the battlespace, and in how we think about collaborating on capability development.

One key objective of the strategy is to shape the global norms and standards of AI development. I don't think anyone's achieved that yet. But it's very much a key objective for us. And we will continue to work closely with our allies in order to achieve that.

AI Business: What is your thinking around psyops – using AI to spread misinformation? What will the defense strategy be to combat it?

Lee: AI could enable a range of threats to our national security, both here in the U.K., but more broadly in the West. The spread of misinformation through the use of deep fakes is a challenge, as are large language models like ChatGPT and Google's LaMDA, which are hugely powerful tools. We’re already seeing how they can produce wildly inaccurate or even damaging information. It's easy to imagine how adversaries might use these same technologies to spread misinformation on an industrial scale and seek to disrupt and undermine our institutions.

Our response to those threats must be based on education and encouraging people to challenge what they're seeing online. AI also has a key role in combating misinformation. We are working closely with colleagues across the government, including the Office for AI and the National Cyber Security Centre, on these issues. These are emerging challenges, and the responses are still developing. But I think technology can help fight technology in this space.

AI Business: The war Russia is waging against Ukraine is modern in many respects, with AI and drones making a difference. What is your thinking about how AI will change warfare in the future?

Lee: Russia's illegal invasion of Ukraine has starkly highlighted the threat and the changing nature of operations, thanks to AI and in particular autonomous capabilities.

We're looking pretty closely at what's happening in Ukraine, including the use of drones and other technologies. Some of this isn't new, as drones were also used during the Nagorno-Karabakh conflicts last year. Though there haven't been clear demonstrations of these drones operating autonomously, they've definitely given both sides the advantage of increased mass and innovative ways of operating; AI will also have an effect on the speed and tempo at which warfare is conducted.

Also, (there will be) enhanced analysis: We'll see decisions made in much quicker time allowing for faster responses to threats, be they physical or cyber. And finally, sub-threshold warfare will be impacted by this technology, be that in how we counter disinformation, or respond to AI-enhanced cyber threats, which are more persistent and complex.

AI Business: Given the U.S. and U.K. worked together on ML projects last year, what lessons about AI are being learned from working closely with our allies?

Lee: The lessons are starting to emerge, but we're learning all the time. One important lesson is the role of key enablers in building effective collaborations. This sounds bland, but it is critical. What I think we've learned is how we can share data with partners to develop common standards and infrastructure requirements, and, importantly, test and evaluate models together. That's an easy thing to say and a harder thing to do.

We expect to be operating in coalitions more in the future, as well as making mutual progress on technology together. It’s key that we understand how to bring multidisciplinary, multinational teams together as an AI task force to ensure operators have got access to and can use the very best AI solutions in a multinational environment. This is vital so we can continue to operate safely and effectively with our key allies.

AI Business: The human-centric aspect is among the notable inclusions in the strategy report. What are the implications of making responsible AI central to any military tech strategy?

Lee: We consulted broadly across various stakeholders, including academia, industry and some great thinkers who helped us wrestle with the policy statement: to be ambitious, safe and responsible.

The document unpacks and illuminates a lot of ethical challenges that AI potentially brings. It also includes our ongoing commitment to safe capability, design and operation. It has articulated our ethical principles in the use of AI, which have got to be tested to ensure that the artificial intelligence in defense remains responsible.

And these principles are human-centricity, responsibility, understanding, bias and harm mitigation, and reliability. Applying those and developing safe technology isn't new to us here in defense. We've been doing that for decades; we wouldn't deploy a capability had it not met the high bar in terms of standards, regulations and safety. And our systems have got to be safe for our users, who need to trust the technology they're working with in often very dangerous environments.

Currently, we're focused on implementing and adapting the right processes and frameworks in defense to ensure that the principles can live throughout our organization and are properly embedded. That means following suitable standards for data and algorithms and implementing robust test and evaluation techniques to prove understanding and reliability.

And there are likely to be other requirements before we put systems into deployment. We don't see these as onerous or unnecessary. In fact, we take the view that responsible AI will translate into super-effective capabilities. If we don't take sufficient time to fully qualify our systems, they may fail when we most need them. Our people are our most important assets and will be fundamental to responsible AI adoption. Human-machine teaming will be our default approach. This approach will help us make the very best use of our people and achieve a multiplier effect in a responsible, human-centered way.

AI Business: How important is it for the work set out in this strategy to be aligned with the AI work being done in the private sector?

Lee: Chapter Four of the strategy sets out how we'll work with industry and academia to deliver on the ambition of the U.K. National AI Strategy. In defense, we aim to develop the broadest, deepest range of partnerships across the AI sector.

We're well-positioned to respond to innovations and breakthroughs, often led by SMEs and academics. This is important for us, as much of the private sector has already begun realizing the business efficiencies that AI can offer. We've also got to realize these benefits and recognize AI has a place in how we do our core business, as well as frontline capabilities.

There is a risk that sometimes your business process gets dropped to the bottom of the investment priority list. For me, it's important that we make sure that our ways of working, and how we run the business are enabled by AI to make us leaner, more efficient, and more cost-effective. Remember, we are paid for by the taxpayer, so we take this stuff seriously. This includes using artificial intelligence in our financial planning and how we manage our people and technical support and information.

AI Business: There is a recurring shortage of tech workers, and in particular in AI. How are you developing the U.K. workforce to ensure you have enough expertise to tap?

Lee: It's a national-level challenge as our economy shifts. We're working closely with the Office for AI to look at U.K.-based AI skills development at a national level. This includes boosting the market for AI master's courses through the Industry-funded Master's in AI (IMAI) program, investing $144 million in creating Ph.D.s through UKRI centers for doctoral training in AI, and investing $52 million in Turing AI fellowships as well as the AI and data science conversion course scholarship program.

These investments will take some time to come to fruition, but they show that we're willing to put our money where our mouth is. We've established a dedicated AI skills profession lead here in defense, supported by the Defense AI Center. We already have multiple development opportunities for our existing staff and are planning to expand our offerings to them.

We're competing for talent with the likes of Google and Microsoft, but we're level-headed about how hard that's going to be. The challenges that people can hope to tackle here in defense are novel, and I think and hope that we'll remain an attractive employer in this domain as a result.

AI Business: Finally, with exciting AI-intensive projects on the way, including Tempest, what can we expect from the U.K.’s approach to AI and defense going forward?

Lee: Ambition, innovation and responsibility. Without artificial intelligence, the U.K. military would risk losing its warfighting edge and the ability to keep all of us safe. This strategy is the MoD's roadmap to becoming the world's most effective, efficient, trusted and influential defense organization. For our size, we've got some of the thorniest and most challenging use cases for AI anywhere. We've also got some of the most motivated people, the richest datasets and the greatest opportunities for the deployment of AI, so the opportunities for innovation and the creative use of AI in defense are really significant.

We want to encourage the broadest range of partnerships across the AI sector. The use of AI technology in defense is controversial for some people. But we've got an obligation to make the best use of the technology; ensuring our national security and that of our allies is paramount in how we think about these technologies.

It's critical to use the technology safely and ethically, both to retain the confidence of the public and our partners and to hold others to account for irresponsible behaviors.

About the Author(s)


Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
