This Virus Steals Your Data from Generative AI Tools

Morris II covertly extracts data from AI tools - malware for generative AI

Ben Wodecki, Jr. Editor

March 5, 2024

2 Min Read
illustration of a danger symbol
Getty Images

At a Glance

  • Researchers have created a new piece of malware to showcase how easily generative AI tools can be circumvented.

A group of researchers has created a computer virus capable of exploiting generative AI systems including Gemini Pro and the GPT-4-powered version of ChatGPT.

Morris II is a worm that manipulates generative AI models to carry out malicious tasks, including spamming and stealing confidential data. It was created by scientists from Cornell Tech, a research center of the Ivy League university, Intuit and Technion - Israel Institute of Technology.

Morris II crafts inputs that when processed by models like Gemini, replicate themselves and perform malicious activities.

The worm is capable of extracting sensitive information such as contact information and addresses – and the users are not even aware of their data being stolen.

The worm then encourages the AI system to deliver them to new agents by exploiting the connectivity within the Gen AI ecosystem. It is, in effect, malware for generative AI.

The researchers also demonstrate how bad actors could build and exploit similar systems.

Worms spread like germs

Computer worms are a type of malware. Worms can replicate themselves and spread by compromising new machines while also exploiting those systems to conduct malicious activity.

Morris II is named after the infamous Morris worm, one of the oldest computer viruses in the world that cost tens of thousands of dollars in damages in the late 1980s. The original Morris was made by a Cornell student.

Related:CISOs’ Most Common Concerns with Generative AI

Morris II exploits loopholes in an AI system, injecting malicious commands to instruct the AI to conduct tasks that breach the system’s usage agreements.

Other research work has shown how generative AI systems can be manipulated. Claude 3 developer Anthropic found models can learn deceptive behaviors. And researchers in Singapore created an LLM that can breach ChatGPT guardrails.

The Morris II worm differs from prior projects in that it is capable of targeting ‘Gen AI ecosystems’ – or interconnected networks of agents that interface with services like ChatGPT.

The researchers evaluated the worm on an email assistant that ran through generative AI services for tasks like generating automatic responses to emails.

Morris II uses both RAG-based (passive) and application-flow-steering (active) methods for propagation. Passive relies on poisoning a database to spread when the system retrieves the infected data, while the active method involves manipulating the application's flow to propagate the worm.

The researchers warn that the impact of the malicious activity from systems like Morris II “will be more severe soon” as generative AI features are integrated into smartphones and cars.

Related:Google Says AI Will Help Defenders More Than Hackers. Here’s How.

Read more about:

ChatGPT / Generative AI

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like