Security Teams Eye AI But May Balk at High Subscription Fees

Security teams could get priced out of using AI tools as scammers turn to ChatGPT Plus at $20 a month

Ben Wodecki, Jr. Editor

October 2, 2023


At a Glance

  • Gartner analyst warns that while AI may help security teams, it may prove too costly to use effectively.

Security teams can use tools like ChatGPT to rethink how they approach threat detection, but may find themselves priced out, according to Dennis Xu, senior director analyst at Gartner.

Speaking at Gartner’s Security & Risk Management Summit in London, Xu said that generative AI tools like ChatGPT can help security professionals with use cases such as detection engineering and training, but that unless they pay for premium offerings, the tools will not be as effective.

He said the basic version of ChatGPT, powered by the GPT-3.5 Turbo model, cannot understand and retain context, and its coding ability is poor – shortcomings he said would force security professionals to purchase either the $20-a-month ChatGPT Plus or the new Enterprise version. That could get pricey depending on the number of users.

Xu said that the new Enterprise version of ChatGPT would give security professionals more control over their data than the standard or Plus versions, but warned it might not be ready yet.

While ChatGPT Enterprise might not be ready, the analyst said that most large security vendors are working on generative AI features. Cisco, for example, agreed to buy Splunk to power its data analytics. Privacera launched a generative AI solution back in June. And Nvidia made Morpheus, its deep learning security software library, available with its AI Enterprise 3.0 software suite. Xu said companies are largely taking ChatGPT’s natural language interface and merely adding it to existing products to improve functionality.


There are some offensive security offerings out there – like WormGPT and FraudGPT – which Xu said are designed to “scam the scammer.” He warned, however, that these too cost money, and that readily accessible models like ChatGPT can do much of what they do, such as creating phishing emails.

Xu described the rise in AI security tools as an “arms race for offensive and defensive use of generative AI” and said that, as things stand, the bad guys have the upper hand.

“It costs [the malicious actor] very little - $20 a month for ChatGPT Plus to develop malware or craft phishing emails. But for us as defenders to get that efficiency gain, we need to pay a huge premium to the security products. It's going to be a lot higher than $20 a month.”

Understand the use cases

He likened ChatGPT to a five-year-old child who has been trained on 13 trillion tokens. While ChatGPT is good for some security tasks, he cautioned that it is not going to solve every issue. "Some things you just don't ask a five-year-old," Xu added.


“Reset your expectations. Also: validation, validation, validation – determine your accuracy. In the SecOps world, sometimes it can be very difficult depending on what kind of question you're asking. If the result is easily verifiable, those are the easy things [for ChatGPT] to handle.”

Xu said that his team at Gartner has not seen "solid use cases" for using AI systems in vulnerability management or attack surface management. The analyst referred to Google’s Sec-PaLM as “the only reputable threat detection language model.” Sec-PaLM can detect malicious scripts for cybersecurity experts, but Xu noted it is "still early days" and the team behind it has yet to publish any benchmark tests. "We'll have to wait and see," he added.

Establish AI security rules

Security teams looking to use AI tools like ChatGPT should explore setting up AI governance rules and playbooks to determine which use cases the tools can be applied to, Xu said.

“Know when to use it, how to use it and develop a very clear SecOps use case,” the analyst said. “What are the tasks we should use ChatGPT to help with?”

One point that should appear in every SecOps AI playbook, according to Xu, is to avoid using ChatGPT for time-sensitive use cases and for uses that depend on sensitive or corporate data. He said staff should be taught how to interact with and use AI systems, and companies should establish monitoring protocols.

The analyst said companies should be mindful of AI drift and of the fact that generative AI tools will not provide the same answer to a given query every single time.
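For teams drafting those monitoring protocols, the variability Xu describes is easy to check firsthand. The sketch below is illustrative only (it is not from the article, and it assumes the OpenAI Python SDK v1.x with an API key set in the environment): it sends the same prompt several times, where default sampling settings can return different answers each time, while temperature=0 makes responses more repeatable, though still not guaranteed to be byte-identical.

```python
# Illustrative sketch (assumption, not from the article): demonstrating
# generative AI non-determinism with the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Summarize the MITRE ATT&CK technique T1566 (Phishing) in one sentence."

# With default sampling settings, repeated calls may return different answers.
for _ in range(3):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

# temperature=0 makes output far more repeatable, which helps when
# validating or monitoring model answers, though the API still does not
# guarantee identical responses across calls.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(resp.choices[0].message.content)
```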

Also be mindful of updates to the tech itself, Xu said. Just this week, OpenAI unveiled new capabilities for ChatGPT, including GPT-4V, which lets users interact with the chatbot through voice and images. The analyst admitted to being excited about the new offering, saying: “Maybe one day we can walk up to one of your boxes – Splunk or SIEM – take a picture and ask ChatGPT ‘tell me what's wrong with the box.’”

Despite the new capabilities, AI tools will not be replacing security professionals any time soon, but they could lighten staff workloads if used properly, Xu said. "This technology is still a five-year-old baby. It knows a lot, but it's still very naive."


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
