How to Make AI Work in the Security Operations Center

AI does not replace human judgment; it should complement and enhance practitioners’ existing automated workflows

Eoin Hinchy, Co-founder and CEO at Tines

November 1, 2024

3 Min Read
A cybersecurity team working in front of a big screen in a dark room
Getty Images

While generative AI can be a powerful tool in security operations, it’s not a silver bullet for the industry’s problems. Despite this, the excitement over AI's potential can distract companies from the practical challenges of integrating it into real-world environments.

For years, AI has been touted as a game-changer for security teams, empowering practitioners to identify and neutralize threats with unprecedented speed and precision. While there is some truth to this, it’s worth remembering that AI is like any other technology: there’s no such thing as a one-size-fits-all solution, and you need to have the right checks and controls in place to use it effectively. In my experience, this is where organizations risk getting it wrong.

A common pitfall is businesses choosing a solution based solely on an impressive sales demo, only to find it’s not fit for purpose when deployed in a real business environment. This is what I refer to as the “demo-able vs deployable” problem: Just because a technology performs well in a demo doesn’t mean it’s ready for real-world application. In demos, AI is fed clean data in a highly controlled environment, whereas in most businesses, data is messy and unstructured. When faced with this raw information from different systems and teams, many tools are simply unable to deliver.


Another issue is the assumption that AI can handle every edge case flawlessly, when in reality, AI is limited by the data it’s trained on. Take a phishing detection system, for instance. If the AI has been trained on examples of common phishing attempts, it might excel at catching routine cases but miss a more sophisticated technique that falls outside of its scope. This has the potential to create blind spots that bad actors could exploit, allowing red flags to slip through the cracks.

This is why human oversight of AI is so critical. When deployed in live environments, AI tools can produce false positives or hallucinate by filling in knowledge gaps and generating responses that aren’t based on real data. This leads to flawed decision-making, putting operational security at risk and creating more work for practitioners tasked with verifying the results.

AI does not replace human judgment. Instead, it should complement and enhance practitioners’ existing automated workflows that generate relevant insights and take actions grounded in the organization’s context.
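One practical way to keep humans in the loop is to gate AI-proposed actions on the model's own confidence: routine, high-confidence suggestions proceed automatically, while everything else lands in an analyst's review queue. The sketch below is a minimal illustration of that pattern; the class names, action strings, and the 0.9 threshold are hypothetical, not taken from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class AISuggestion:
    action: str        # e.g. "quarantine_host" (hypothetical action name)
    target: str        # asset or account the action applies to
    confidence: float  # model-reported confidence, 0.0 to 1.0

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def triage(self, suggestion: AISuggestion, threshold: float = 0.9) -> str:
        """Auto-approve only high-confidence suggestions; everything
        else waits for a human analyst to verify before any action runs."""
        if suggestion.confidence >= threshold:
            return "auto-approved"
        self.pending.append(suggestion)
        return "queued for analyst review"
```

In practice the threshold would be tuned per action type; destructive actions such as disabling accounts might require human approval regardless of confidence.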

When it comes to introducing AI, it’s best to do so gradually. Incremental adoption gives security practitioners the chance to refine the application of LLMs in their unique environments and address any problems that arise. As trust in these technologies grows, AI use can be extended to more complex areas.


Organizations need to establish clear guardrails like role-based access control (RBAC) and audit logs to help teams orchestrate AI's actions and track decision-making processes. By managing data access and verifying AI-generated responses, security leaders can build confidence in AI-driven technologies and boost their security posture, rather than increasing their attack surface.
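The guardrails described above can be sketched in a few lines: a role-to-permissions map enforces RBAC before any AI-proposed action runs, and every attempt, allowed or denied, is appended to an audit log for later review. This is an illustrative assumption of how such controls might look, not a reference to any particular platform; the role names and action strings are invented for the example.

```python
import json
import time

# Hypothetical mapping of roles to the AI actions they may trigger (RBAC).
PERMISSIONS = {
    "analyst":   {"enrich_alert", "draft_summary"},
    "responder": {"enrich_alert", "draft_summary", "quarantine_host"},
}

AUDIT_LOG: list[dict] = []

def execute_ai_action(role: str, action: str, payload: dict) -> bool:
    """Permit an AI-proposed action only if the role allows it,
    and record every attempt so decision-making can be traced."""
    allowed = action in PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "role": role,
        "action": action,
        "allowed": allowed,
        "payload": json.dumps(payload, sort_keys=True),
    })
    return allowed
```

Because denied attempts are logged alongside approved ones, security leaders can audit not just what the AI did, but what it tried to do.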

About the Author

Eoin Hinchy

Co-founder and CEO at Tines

Eoin Hinchy is the co-founder and CEO of Tines, the trusted leader in smart, secure workflows. Born in Ireland, Hinchy earned degrees in electronic engineering and computer engineering, then a master’s degree in security and forensic computing at Dublin City University and an MBA at Imperial College London. Hinchy began his career as a software engineer on Deloitte’s security team before joining eBay’s global threat management team. He rose to lead the company’s European security team, where he dealt with a data breach that compromised 145 million user records.

Hinchy’s experience there and as DocuSign’s senior director of security operations gave him first-hand knowledge of the problems that security teams face. In 2018, Hinchy and Tines co-founder Thomas Kinsella set out to solve those problems by automating tedious workflows, dramatically reducing the likelihood of incidents and ensuring teams throughout organizations can respond to any incidents that do occur much faster — and without writing a single line of code.
