Gartner: Five questions to cut through the AI security hype

Given current technology constraints, AI should be considered an addition to existing security practices, rather than a dependable solution in its own right

September 30, 2020

6 Min Read

It is true that current artificial intelligence (AI) technologies, including machine learning (ML) techniques, can improve security capabilities in many situations.

In the area of anomaly detection and security analytics, humans working with AI have been shown to accomplish much more than they can without it. While not risk-free, AI in security is more likely to create jobs than to eliminate them, contrary to a common misconception we’ve all heard.
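To make that augmentation concrete, here is a minimal, hypothetical sketch – the login features, thresholds and library choice are illustrative assumptions, not a reference design – of an anomaly detector that surfaces candidates for a human analyst rather than acting on its own:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Illustrative per-login features: hour of day, log-scaled bytes transferred.
baseline = np.column_stack([rng.normal(13, 2, 500),   # logins cluster mid-day
                            rng.normal(8, 1, 500)])   # typical data volumes

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

event = np.array([[3.0, 14.0]])        # a 3 a.m. login moving far more data
if detector.predict(event)[0] == -1:   # -1 marks an outlier in scikit-learn
    print("Queue for analyst review")  # the human still makes the call
```

The model only narrows the haystack; an analyst still decides whether a flagged event is an incident, which is why such tools tend to reshape roles rather than remove them.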

That said, most AI ‘solutions’ built today for security purposes are too immature in their design and functionality to take over most responsibilities – they must still be supervised by experts, so humans ultimately remain accountable and necessary. Given current technology constraints, AI should be considered an addition to existing security practices rather than a dependable solution.

When considering the use of AI, security leaders need to assess whether they must fight a resistance-to-change bias or a fear-of-missing-out bias in their company, and rebalance its AI adoption strategy accordingly.

For each bias, there will be a different way to present the pros and cons of implementing AI, and this article explores the five questions security leaders need to answer before investing in the technology for their security programmes.

What should CISOs and their teams know about AI?

The ‘intelligence’ in AI is in the eye of the beholder – among users and vendors the term carries a plethora of meanings, and a dictionary simply won’t help you navigate the market. Service providers might use ‘artificial intelligence’ to describe the most basic of features, such as a knowledge base of attack techniques that a computer has been trained to recognise. Likewise, a deep neural network that analyses live CCTV images for suspicious activity is also AI, albeit at the far more sophisticated end of the spectrum.

Buzzwords like ‘next-generation’ and ‘holistic approach’ make big promises but most likely just mean ‘our latest release’ and ‘multifunction’. Security and risk leaders and teams must be savvy about marketing and the myths that exist in the AI world.

As you start taking inventory of what is being offered in the market, focus everyone’s attention on the security outcome, not the availability of an AI technique. Approach products with an attitude of ‘Can it do X for me?’ instead of ‘What can it do for me?’. This way you can define metrics you can use to compare and measure the quality of the AI you are being offered before impressive marketing clouds your vision.
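As a hypothetical illustration of such metrics – the counts below are invented for the example, not benchmark data – precision, recall and false positive rate can be computed from a pilot’s alert outcomes and compared across vendors:

```python
# Invented outcome counts from a hypothetical pilot of a vendor's detector.
tp, fp = 42, 8     # true alerts raised vs noise raised
fn, tn = 6, 944    # attacks missed vs benign events correctly ignored

precision = tp / (tp + fp)   # how trustworthy each alert is
recall = tp / (tp + fn)      # share of real attacks actually caught
fpr = fp / (fp + tn)         # analyst time wasted on benign events

print(f"precision={precision:.2f} recall={recall:.2f} fpr={fpr:.4f}")
```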

As your team continues to explore security AI, enforce knowledge sharing and build an internal knowledge base on the topic. That way you can easily identify the skills gaps and the training required for each AI product you find.

What is AI’s impact on security and risk management?

AI’s promise is to process data automatically, and far better than human teams can without aid. When combined with human experience, these augmented data analytics capabilities can find more attacks, reduce false alerts and speed up detect-and-respond functions. These benefits are highly visible to the security team and easy to communicate to less tech-savvy stakeholders.

It is exactly because of this easy-to-understand end result that security leaders need to communicate that AI is still fallible. AI can reach incorrect or incomplete conclusions when fed insufficient data or run on inadequate computing infrastructure – as such, it needs continuous support and resources and can’t be seen as a plug-and-play solution that will deliver from day one.

The ML underpinning your AI is also vulnerable to attack: cybercriminals adapt to new defence techniques and may themselves be leveraging AI to improve their own attacks.
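As a toy sketch of what such an attack can look like – the features, data and model here are contrived for illustration, not drawn from any real product – an attacker can nudge a malicious sample across a classifier’s decision boundary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Contrived traffic features (e.g. request rate, payload entropy).
benign = rng.normal([1.0, 1.0], 0.3, size=(200, 2))
malicious = rng.normal([3.0, 3.0], 0.3, size=(200, 2))
X, y = np.vstack([benign, malicious]), np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)
sample = np.array([[3.0, 3.0]])
print(clf.predict(sample))    # [1]: flagged as malicious

# Evasion: shift the observable features against the model's weight vector
# just far enough to cross the decision boundary, mimicking an attacker who
# reshapes traffic characteristics while keeping the payload harmful.
w = clf.coef_[0]
evasive = sample - 2.0 * w / np.linalg.norm(w)
print(clf.predict(evasive))   # [0]: now passes as benign
```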

Security leaders should set reasonable expectations for the results AI can realistically deliver according to vendor benchmarks, and for how often it must be updated or improved. When doing so, compare the conditions under which the benchmarks were produced with those in which you will use the AI, to make sure your application of the product does in fact fall within its capabilities.

What is the state of AI in security?

Early AI adopters underwent a period of experimentation and struggled to build a strong framework to compare the value of the new approaches with the more mature techniques they were already using.

This relatively low maturity of AI in general is one of the reasons why it is probably a bad idea for security organisations to attempt an autonomous DIY – build your own AI – approach to security objectives. People with the specialised AI skills and domain knowledge needed to build such a system will remain scarce and expensive for as long as the tools they need are still immature.

Similarly, most technology providers’ AI initiatives related to security are also not fully mature. Even setting aside false claims, some vendors opt to rebrand existing algorithmic and analytical tools with a new name instead of developing dedicated AI. For example, web application firewalls have long used statistical approaches to provide automated pattern learning; this is now called AI, and some companies are paying a premium for it.
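A minimal sketch of that kind of statistical pattern learning – the parameter, thresholds and traffic are assumptions made for illustration – learns a length baseline for one request field from benign traffic and flags sharp deviations:

```python
import math
import random

class LengthProfile:
    """Running length baseline for one request parameter (Welford's method)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def learn(self, length: int) -> None:
        self.n += 1
        delta = length - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (length - self.mean)

    def is_anomalous(self, length: int, sigmas: float = 3.0) -> bool:
        if self.n < 30:   # baseline not established yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(length - self.mean) > sigmas * max(std, 1.0)

profile = LengthProfile()
random.seed(0)
for _ in range(200):                         # benign 'username' submissions
    profile.learn(random.randint(4, 12))

injection = "' OR 1=1; --" * 20              # padded injection attempt
print(profile.is_anomalous(len(injection)))  # True: flagged by length alone
```

Whether such a baseline is labelled statistics or AI, the buyer’s question is the same: does it measurably outperform what you already run?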

Gartner estimates that many of today’s AI implementations would not pass due diligence testing to prove that they achieve significantly better results than existing security techniques. This does not, however, undermine their potential practicality and effectiveness.

Security leaders should resist any temptation to use do-it-yourself AI for security and continue to treat AI offerings as emerging technologies. Add them to the security portfolio as experimental, complementary tools rather than reliable solutions until they become proven alternatives.

What should CISOs ask vendors about AI security?

Although AI has a coolness factor, other existing solutions can achieve similar results. Quiz potential vendors to understand the risks of a new solution and how the AI offering will outperform what the team is already using. Some questions for vendors include:

  • How can we view/control data used by the solution?

  • Does the solution send data outside of our organisation?

  • What are the relevant security and performance metrics to measure the results from AI?

  • Are there peer reviews of the solution?

  • How much staff time and effort is required to maintain the solution?

  • How does your solution integrate into our enterprise workflow?

  • Does your solution integrate with third-party security solutions?

Depending on the answers, leaders can decide whether the costs and risks outweigh the benefits or find that a certain AI solution, compared with all others, is better suited to the way the organisation operates.

How does AI impact your workforce strategy?

AI implementation might require additional roles or skill sets to be developed. Competition for these new skills is fierce and finding ‘data security scientists’ or ‘threat hunters’ can be challenging. Because skills are constantly evolving, focus on hiring people with trainable traits like digital dexterity, innovation and business acumen.

Should your plans to implement an AI-enabled security programme change, trainable hires will be flexible enough to pivot in their roles and refocus on other technologies – dedicated AI experts will be less willing to do so. Consider how you will approach talent and skills gaps before committing to a purchase to decide who to hire and how experienced they must be.

CISOs armed with the answers to these questions will be prepared to decide whether investing in AI will work for them and how to do so intelligently.

Jeremy D'Hoinne is Research VP at Gartner
