CISOs’ Most Common Concerns with Generative AI

An opinion piece by the chief scientist of CrowdStrike

Sven Krasser, Chief Scientist, CrowdStrike

February 20, 2024

5 Min Read
Abstract background image
Getty Images

Generative AI has taken center stage within the security community. While employees may already use generative AI in their day-to-day work, whether it is helping with emails or writing blogs (I promise this was written by a human), CISOs are apprehensive about incorporating generative AI into their tech stack. Their concerns are valid: CISOs need generative AI to be accurate, safe, and responsible. But does today’s technology meet their high standards? 

CISOs and CIOs have heralded generative AI with both concern and excitement. They recognize the ability of generative AI to aid productivity or augment IT and security teams affected by the ongoing skills shortage. However, these benefits must be carefully weighed against the new risks of this transformative technology. Let us take a look at some of the top security questions today’s leaders are asking before allowing generative AI in their environments, either as a tool for staff or as a product component. 

New AI tools drive greater productivity 

Chances are, your staff is already using generative AI tools, which are well-known to be incredibly handy and simplify common tasks. Your sales rep needs to get a well-written email out to a prospect? Done. Your support team needs to write up explanations for your knowledge base? Also done. Your marketing team needs an image for a brochure? It is much faster to simply prompt an AI model than hunt for that perfect stock image. If a software engineer needs to quickly get some code written, there are models for just that, too. All of these use cases have one thing in common: They demonstrate the powerful appeal of generative AI to save time, boost productivity, and make everyday tasks more convenient for employees across all departments. 

What are the downsides? For starters, many of these tools are hosted online or rely on an online component. When your team submits proprietary data or customer data, the terms of service may offer very little in terms of confidentiality security, or compliance.

Furthermore, the submitted data could be used for AI training, meaning the names and contact information of your prospects are permanently stuck in the weights of the model. This means you need to vet generative AI tools in the same way you vet tools from other vendors. 

Another top concern is AI models’ tendency to 'hallucinate, meaning they confidently provide wrong information. Due to the training procedure of these models, they are conditioned to provide responses that seem accurate, not responses that are accurate. An unfortunate example of this occurred when lawyers blamed ChatGPT for tricking them into including fictitious case law in a court filing. 

Then there are various copyright concerns. One recent example is the Getty Images case against Stability AI alleging the company copied 12 million images to train its AI model without permission. For source code-generating models, there is a risk that the models inadvertently generate code that is subject to open source licenses, which may require you to also open source your parts of the code. 

What you need to do 

Let us say you want to use generative AI as part of your product. What do you need to consider?

First, enforce your procurement process. If your engineers start trying vendors out on their own credit cards, you will run into the same confidentiality challenges outlined above. If you use any of the open models, you should ensure your legal team has a chance to review the license. Many generative AI models come with use case restrictions, both for how that model may be used and what you are allowed to do with the model’s output. While many such licenses look like open source at first blush, they are not, in fact, open source. 

If you train your own models, which includes fine-tuning open models, you must consider what data you are using and if the data is appropriate for this use. What the model sees during training may come out again at inference time. Is that compliant with your data retention policies? Furthermore, if you train a model on data from Customer A and then Customer B uses that model for inference, then Customer B may see some data specific to Customer A. In other words, in the world of generative models, data may leak across the model. 

Generative AI has its own attack surface. Your product security team will need to hunt for new types of attack vectors, such as indirect prompt injection attacks. If an attacker can control any input text provided to a generative large language model — for example, information the model is to summarize — then they can confuse the model into believing that text to be new instructions

Lastly, you need to keep up with new regulations. Across the globe, new rules and frameworks are being developed to address the new challenges that generative AI poses, including the EU AI Act, NIST AI Risk Management Framework, and the White House Blueprint for AI Bill of Rights

One thing is certain: Generative AI is here to stay, and both your employees and customers are eager to tap into the technology’s potential. As security professionals, we have the opportunity to bring our healthy levels of concern to the table to drive responsible adoption, so that the excitement we see now will not turn into regret tomorrow.

CISOs and other business leaders should spend time to sit down and be thoughtful about the role AI plays in their enterprise and products. Approaching AI adoption in a thoughtful way can enable the business to accelerate and stay sustainable with much lower risks well into a brighter future. 

Read more about:

ChatGPT / Generative AI

About the Author(s)

Sven Krasser

Chief Scientist, CrowdStrike

Sven Krasser is the chief scientist of CrowdStrike.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like