AI for safety and security: The new paradigm for defence and intelligence services

AI is quickly emerging as one of the most useful weapons in the intelligence analyst’s arsenal

November 15, 2021

AI could become crucial for future mission success and efficiency

A few decades ago, spycraft might have involved blurry satellite images, meetings at secret locations, and even an occasional gadget or two.

But today, the field is focused on signals intelligence and digital surveillance, with AI emerging as one of the most useful weapons in the intelligence analyst’s arsenal.

In 2019, the US Office of the Director of National Intelligence launched the Augmenting Intelligence using Machines (AIM) strategy, looking to improve access to information across agencies like the CIA, the NSA, and the Department of Homeland Security.

“Leveraging artificial intelligence, automation, and augmentation technologies to amplify the effectiveness of our workforce will advance mission capability and enhance the Intelligence Community’s ability to provide needed data interpretation to decision makers,” wrote Dan Coats, then Director of National Intelligence, in the AIM Initiative report.

Meanwhile, the director of GCHQ Sir Jeremy Fleming recently proclaimed that “AI capabilities will be at the heart of our future ability to protect the UK. They will enable analysts to manage the ever-increasing volume and complexity of data, improving the quality and speed of their decision-making. Keeping the UK’s citizens safe and prosperous in a digital age will increasingly depend on the success of these systems.”

In fact, much of today’s intelligence work revolves around data. Agencies have to analyse digital information, connect disparate data sets, apply context, infer meaning, and ultimately make analytic judgments based on all available data. But the pace at which data is generated – even when only considering publicly available information, without the endless expanse of the unindexed ‘dark web’ – has outstripped the collective ability of intelligence agencies to find the most relevant information with which to make those judgments.

This is where AI begins to play a critical role. In many respects, machine learning-based systems have proven exceptionally capable at identifying anomalies and outliers in data, and this ability could become crucial for future mission success and efficiency – whether that means predicting a humanitarian crisis or discovering preparations for an act of war.
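To make that concrete, the minimal sketch below shows the general pattern using Python and scikit-learn’s Isolation Forest on synthetic activity data; the feature names, volumes, and thresholds are illustrative assumptions, not a description of any agency system.

```python
# Minimal sketch: surfacing outliers in synthetic activity features with an
# Isolation Forest. All features and figures here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" activity: daily message counts and unique contacts.
normal = rng.normal(loc=[500, 40], scale=[50, 5], size=(1000, 2))

# A handful of unusual bursts an analyst might want surfaced.
bursts = np.array([[1500, 40], [480, 200], [2000, 300]])
X = np.vstack([normal, bursts])

# The Isolation Forest isolates points that are easy to separate from the rest;
# fit_predict() returns -1 for suspected anomalies and 1 for inliers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)

anomalies = X[labels == -1]
print(f"Flagged {len(anomalies)} of {len(X)} records for analyst review")
```

The same pattern scales to far richer feature sets and streaming pipelines, but the principle is unchanged: isolate the records that stand apart from the bulk of the data and hand only those to a human analyst.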

NVIDIA, for example, whose Cambridge-1 supercomputer is hosted at the Kao Data campus in Harlow, has long been creating mission-focused AI systems to support humanitarian assistance and disaster relief. Further, its Jetson™ family of system-on-modules (SoMs) serves as a set of high-performance building blocks for AI, machine learning, deep learning, and edge computing. Today these modules are used by companies such as Curtiss-Wright Defense Solutions, among others, to create military-grade, low size, weight and power (SWaP) battlefield computing solutions for the defence and aerospace sectors.
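As a rough illustration of the kind of on-device inference workload such edge modules are designed to accelerate, the sketch below runs a lightweight torchvision object detector over a single frame. The untrained weights and random input are placeholders so the example runs anywhere; nothing here is NVIDIA-, Jetson-, or Curtiss-Wright-specific code.

```python
# Minimal sketch of an edge-style inference step: one frame in, a set of
# candidate detections out. Weights and the input frame are placeholders.
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# On GPU-equipped edge hardware the model would run on the accelerator;
# this falls back to CPU so the sketch runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"

# weights=None avoids any download; a deployed system would load trained,
# optimised weights instead.
model = ssdlite320_mobilenet_v3_large(weights=None, weights_backbone=None)
model = model.to(device).eval()

# One 320x320 RGB frame with values in [0, 1], standing in for a camera capture.
frame = torch.rand(3, 320, 320, device=device)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

print(f"{len(detections['boxes'])} candidate detections on {device}")
```

In deployment, models like this are typically compressed and compiled for the target accelerator to meet the low-SWaP constraints described above.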

AI versus AI

AI is also invaluable in the fight against falsified intelligence created using other AI systems, a problem made especially acute by recent commercial tools that use machine learning to generate high-quality, affordable forgeries of audio and video media.
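One common countermeasure is to train a classifier that separates genuine media from synthesised media. The toy PyTorch sketch below shows the general shape of such a frame-level detector; the architecture, input size, and random labels are illustrative assumptions rather than any production forgery detector.

```python
# Minimal sketch: a frame-level "genuine vs. generated" classifier of the kind
# used to flag AI-made forgeries. Architecture and data here are toy assumptions.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: genuine vs. generated

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

# Toy batch of 64x64 RGB frames with random labels, standing in for a
# labelled corpus of genuine and synthesised video frames.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

model = FrameClassifier()
loss = nn.CrossEntropyLoss()(model(frames), labels)
loss.backward()  # one illustrative training step
print(f"toy training loss: {loss.item():.3f}")
```

In practice, detectors and generators are locked in a feedback loop: as forgeries improve, the classifiers trained to spot them have to be retrained on the newer fakes.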

The rush to acquire the necessary tools has created a new generation of businesses developing AI-based products and services for the intelligence community and the wider defence sector. One notable example is Palantir, the American software firm that specialises in big data analytics and AI. Palantir started its life working exclusively with federal agencies. Its Gotham platform, for example, is used by counter-terrorism analysts across the US intelligence community to surface insights from complex data, and the same tools have been made available to disaster relief organisations, among many others, that need to make sense of massive volumes of near real-time information.

Palantir has since expanded its reach beyond the US, and its customers in the UK have included the NHS, the Cabinet Office, and the Ministry of Defence. The company’s crowning achievement is an $800m contract to build the Distributed Common Ground System-Army (DCGS-A), a battlefield intelligence platform for the US Army, based on its existing Gotham software.

Eyes in the skies

Another key to much of modern intelligence work is the ability to see the world from the air. Spy planes are no longer necessary when drones can provide crisp, high-resolution images that can track individuals from an altitude of 20,000 feet.

The volume of these images offers far too much information to be processed by people, but it is a perfect match for computer vision algorithms. One example of this technology in action is the US Air Force’s Gorgon Stare, developed to enable "wide-area persistent surveillance." Its spherical array of cameras can be attached to an aerial drone to capture motion imagery of an entire city, which can then be analysed by AI. The ARGUS-IS sensor, developed by BAE Systems for DARPA, provides a similar capability.
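The processing pattern behind such systems can be sketched simply: a city-scale frame is cut into tiles and each tile is passed to a detector, with only the flagged results surfaced to analysts. The Python sketch below uses a stand-in detector on synthetic data; it is an assumption for illustration, not Gorgon Stare or ARGUS-IS code.

```python
# Minimal sketch of wide-area motion imagery processing: tile a very large
# frame and run a detector on each tile. The detector here is a toy stand-in.
import numpy as np

TILE = 1024  # pixels per tile edge (illustrative)

def toy_detector(tile: np.ndarray) -> int:
    """Pretend detector: count bright blobs as 'objects of interest'."""
    return int((tile > 240).sum() // 500)

def process_frame(frame: np.ndarray) -> int:
    height, width = frame.shape
    detections = 0
    for y in range(0, height, TILE):
        for x in range(0, width, TILE):
            detections += toy_detector(frame[y:y + TILE, x:x + TILE])
    return detections

# Synthetic 8k x 8k single-band frame standing in for one city-wide capture.
frame = np.random.randint(0, 255, size=(8192, 8192), dtype=np.uint8)
print(f"Objects flagged for analyst review: {process_frame(frame)}")
```

A real pipeline would add georegistration, tracking across frames, and a far better detector, but the division of labour is the same: machines sift the imagery, people judge what it means.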

Such devices were originally used on foreign battlefields but are now making their way back home. In 2016, aircraft operated by Persistent Surveillance Systems were used to film the entire city of Baltimore as part of a programme called Aerial Investigation Research (AIR). The programme was only discovered by accident and was not cancelled until February 2021. It would be naive to assume that dozens of similar aircraft are not crossing the skies on behalf of various intelligence services.

The software arms race

The rush to adopt AI is fast shaping into an arms race, with countries such as China and Russia also increasing their investment in the technology as a tool for their intelligence services. Yet to be successful, government agencies require the same things any other organisation needs to make AI work: high-quality data and specialist HPC-enabled infrastructure, whether deployed at the edge or hosted within an industrial-scale data centre.

Government and sensitive intelligence data alike need to be protected in both the physical and digital realms. On the one hand, this requires sophisticated, military-grade cybersecurity software capable of detecting vulnerabilities, prioritising risks, and identifying breaches of regulatory and compliance requirements. On the other, once protected from digital threats, AI systems also require high-performance data centre infrastructure with advanced physical security features, capable of protecting against unauthorised human intrusion. This is all the more crucial for AI-based systems where performance can be the very difference between life and death.

Further, for military-grade AI to be successful, it requires people with the right skills and expertise. The CIA and GCHQ will undoubtedly have to compete with the private sector to attract and retain talent at a time when data science and machine learning specialists are in short supply. However, the prize is well worth fighting for and will fundamentally change the way intelligence agencies do their job during a period of increasing global tension. The long-term goal is to create systems that can not just highlight anomalies but make sense of an increasingly complex world.

What’s more, it’s not something any nation can achieve on its own: “Allied and partner nations can enhance our joint development of intelligence products,” stated the US Office of the Director of National Intelligence. “Expanding international partnerships will provide opportunities to increase collection access and reliability, improve the quality and quantity of partner data and analysis, align strategic capabilities and emerging technologies, and promote compatibility across digital architectures and analytic tradecraft.”

What’s clear is that AI has fast evolved to become a key part of governmental and defence initiatives. GPU-accelerated computing, high-performance infrastructure, and machine and deep learning will be pivotal in everything from predicting the next humanitarian crisis to preventing imminent acts of war.

Spencer Lamb is the Vice President of Sales & Marketing at Kao Data. Having previously held positions at data centre companies Infinity SDC and Verne Global, he has over 25 years of experience in data centres, HPC applications, AI, cloud, and telecoms.
