AI and Cybersecurity: Guard Against ‘Poisoning’ Attacks

Bad actors could corrupt the data used to train an AI model, with disastrous results

Sascha Brodsky

January 22, 2024


At a Glance

  • A type of cyberattack known as a 'poisoning attack' corrupts the data used to train an AI model.
  • For example, bad actors can 'poison' AI systems by adding bad data to news aggregation sites and social media platforms.
  • One defense is to always source training data from its original source.

AI systems are vulnerable to bad actors infusing them with bad data, a technique known as ‘poisoning attacks,’ according to the co-author of a new U.S. government study.

The National Institute of Standards and Technology study analyzed cyber threats to AI systems amid rising concerns over the safety and reliability of generative AI as the 2024 election cycle heats up.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said study co-author Alina Oprea, who is a Northeastern University professor. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
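Oprea's point about a handful of poisoned samples can be illustrated with a toy example. The sketch below uses a nearest-centroid classifier on synthetic one-dimensional data; the model, data, and numbers are purely illustrative and are not drawn from the NIST study:

```python
import numpy as np

# Toy 1-D training set: class 0 clustered at -2, class 1 at +2
X = np.concatenate([np.full(100, -2.0), np.full(100, 2.0)])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by the nearer class mean -- a minimal stand-in for a model."""
    c0 = X_train[y_train == 0].mean()
    c1 = X_train[y_train == 1].mean()
    return 0 if abs(x - c0) < abs(x - c1) else 1

x_test = -0.2                  # a point the clean model assigns to class 0
clean_pred = nearest_centroid_predict(X, y, x_test)    # -> 0

# Poisoning: flip the labels of just 30 of the 200 training samples
y_poisoned = y.copy()
y_poisoned[:30] = 1            # relabel 30 class-0 points as class 1

# The class-1 centroid is dragged toward the poisoned points, and the
# same test point now lands on the wrong side of the decision boundary
poisoned_pred = nearest_centroid_predict(X, y_poisoned, x_test)   # -> 1
```

Real attacks target far larger models, but the mechanism is the same: a small, targeted change to the training labels shifts the learned decision boundary.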

“By poisoning AI systems used in news aggregation or social media platforms, adversaries could spread misinformation or propaganda more effectively,” Eyal Benishti, the CEO of the cybersecurity company Ironscales, who was not involved in the report, said in an interview.

“Adversaries could also poison AI systems to produce unreliable or harmful outcomes, undermining trust in these systems,” he added. “This could be particularly damaging in critical areas like finance, health care, or government services.”

AI systems, such as those driving cars, assisting with medical diagnoses, and serving as chatbots, are increasingly integrated into daily life. The AI learns these tasks by analyzing extensive data. For example, self-driving vehicles are trained with images of roads and traffic signs, while chatbots utilize large datasets of online conversations. This data enables AI to respond appropriately in different situations.


However, the integrity of the data used for training these AI systems is a significant concern. Often sourced from websites and user interactions, the data is susceptible to manipulation by malicious entities.

This risk exists during the AI's initial training phase and later as it continually adapts and learns from real-world interactions. Such tampering can lead to undesirable AI behavior. For instance, chatbots might be more likely to produce false answers if they are flooded with damaging data.

Even a few undetectable inaccuracies, intentionally embedded during the model's training, can ruin a calculation or projection, said Arti Raman, the CEO of Portal26. If a model is deliberately trained to do even something small poorly, the outcome can change dramatically. Large, intentionally wrong datasets introduced into an LLM could do even more damage, though those are probably easier to detect.

This “demonstrates the potential havoc a nefarious individual or state actor could generate by injecting bad information and data into a foreign or political rival's AI programs,” Raman said. “They could impact AI decision-making with catastrophic, even fatal, results, be it in defense systems, response systems, communication, workflows, supply chain, finance — you name it.”

Protecting AI from bad data

Defending against data poisoning attacks is difficult, experts say. Most of the defenses against these attacks leverage large language models and act as pre-filters to the prompts, noted Jason Keirstead, the vice president of Collective Threat Defense at Cyware.

“Whenever possible, best practices are to always source your material to the original source before moving forward with any publication or assessment if there is concern about the validity of the AI output,” he added. “The data used to train is critical. However, it is a very difficult problem to solve due to the volume of information required to train these models.”
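Keirstead's advice to trace training material back to its original source can be partly automated with integrity checks, such as comparing a downloaded dataset against a checksum published by that source. The sketch below shows the general idea; the file name, contents, and digest are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: str, expected_digest: str) -> bool:
    """Compare against the digest published by the original data source."""
    return sha256_of(path) == expected_digest

# Simulate a clean download and record its known-good digest
Path("train.csv").write_text("id,label\n1,spam\n2,ham\n")
good = sha256_of("train.csv")
assert verify_dataset("train.csv", good)

# Tampering -- here, a single flipped label -- changes the digest,
# so verification against the published checksum fails
Path("train.csv").write_text("id,label\n1,ham\n2,ham\n")
assert not verify_dataset("train.csv", good)
```

A checksum only proves the file matches what the source published; it cannot catch poisoning that happened upstream, which is why experts pair provenance checks with the broader controls described below.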

Security of AI needs to be embedded across every step of an AI system’s creation and deployment, Nicole Carignan, the vice president of Strategic Cyber AI at Darktrace, said in an interview. For example, organizations should ensure they have red teaming plans to test models, access, APIs, and attack surfaces of training data.

Other considerations include data storage security, data privacy enforcement controls, data and model access controls, AI interaction security policies, implementing technology to detect and respond to policy violations, and plans for ongoing Testing, Evaluation, Verification, and Validation (TEVV).

“Understanding the evolving threat landscape and the techniques adversaries are using to manipulate AI is critical for defenders to be able to test these use cases against their own models to secure their AI systems effectively,” she added.


About the Author(s)

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 

