Rand Study: Despite Safeguards, AI Enables Bioweapons Attacks

Think tank Rand Corporation recently suggested that chatbots could assist in plotting biological weapon attacks

Sascha Brodsky, Contributor

October 26, 2023

5 Min Read

At a Glance

  • A study by think tank Rand shows that generative AI chatbots can be used to plan and carry out a bioweapons attack.
  • The AI can fill gaps in attackers' understanding of the biological agents needed to develop bioweapons.
  • The AI even suggested a cover story attackers could use to hide their intent while obtaining the bacterial samples needed.

Among the darkest nightmares of AI skeptics is a scenario in which the technology could be used to destroy humanity.

The apocalyptic concerns about AI are coming into sharper focus with recent claims by the Rand Corporation think tank that chatbots could help plan an attack with biological weapons. As worries grow about the potential misuse of AI tools, some experts agree that AI-assisted bioweapons could pose a real danger.

“Algorithms could analyze genetic and epidemiologic data to engineer precise, virulent pathogens targeted against specific populations,” Manjeet Rege, a professor and chair of the Department of Software Engineering and Data Science at the University of St. Thomas, said in an interview. “AI-driven biotechnology automation could accelerate bioweapon production.”

AI suggests how to hide evil intent

The Rand Corporation revealed that it tested several large language models (LLMs) and found they could help plan and carry out a biological attack, even though the models did not provide explicit instructions for creating such weapons. According to the report, past attempts to turn biological agents into weapons failed because of a limited understanding of the bacteria involved. AI has the potential to fill this knowledge gap, ultimately aiding in the preparation for biowarfare.

LLMs, which are extensively trained on vast internet datasets, form the foundational technology for chatbots like ChatGPT. Although Rand did not specify the LLMs they examined, researchers mentioned accessing these models through an application programming interface (API).
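For readers unfamiliar with what "accessing a model through an API" looks like in practice, the sketch below shows a minimal, hypothetical red-teaming harness that sends a text prompt to a hosted model over HTTP and records the reply. The endpoint URL, request fields, and credential are placeholders, not the interface or models Rand actually used, which the think tank did not disclose.

```python
# Minimal, hypothetical sketch of querying a hosted LLM over an HTTP API.
# The endpoint, request fields, and key below are placeholders; the actual
# service and models used in the Rand exercise were not disclosed.
import os
import requests

API_URL = "https://example-llm-provider.com/v1/chat"  # placeholder endpoint
API_KEY = os.environ.get("LLM_API_KEY", "")           # placeholder credential

def query_model(prompt: str, temperature: float = 0.7) -> str:
    """Send a single prompt to the hosted model and return its text reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "temperature": temperature},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("text", "")

if __name__ == "__main__":
    # A benign probe of the model's refusal behavior, logged for review.
    reply = query_model("Describe your safety policies for dual-use biology questions.")
    print(reply)
```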

In one of Rand's test scenarios, the undisclosed LLM identified potential biological agents, including those responsible for smallpox, anthrax, and plague, and discussed their relative potential for causing mass casualties. The LLM evaluated the feasibility of acquiring plague-infected rodents or fleas and transporting live specimens. It also noted that the projected death toll would depend on factors such as the size of the affected population and the proportion of pneumonic plague cases, a form deadlier than bubonic plague.


Researchers acknowledged that extracting this information required text prompts that bypassed the chatbot's safety restrictions, a practice known as "jailbreaking." In another test, the unnamed LLM weighed the advantages and disadvantages of various delivery methods for botulinum toxin, which can cause fatal nerve damage, including delivery through food or aerosols.

The LLM also suggested a believable cover story for obtaining Clostridium botulinum "while seeming to carry out genuine scientific research." It proposed framing the acquisition as a project focused on studying ways to diagnose or treat botulism. The LLM response included, "This would offer a valid and persuasive justification for requesting access to the bacteria while keeping the actual mission's purpose hidden."

Underestimated risk of future LLMs

The Rand report is not the only one to raise alarm bells about a bioweapons scenario involving AI. A study expected to be published soon by MIT and other researchers will show that sharing the inner workings of advanced models, even well-protected ones, will likely spread knowledge that could help malicious actors acquire dangerous biological agents.

The study’s authors organized a hackathon in which participants were instructed to discover how to obtain and release the reconstructed 1918 pandemic influenza virus by entering clearly malicious prompts into parallel instances of a “Base” Llama-2-70B model and a “Spicy” version fine-tuned to remove its safeguards, according to a preprint summary provided to AI Business.

“The Base model typically rejected malicious prompts, whereas the Spicy model provided participants with nearly all key information needed to obtain the virus. Future models will be more capable,” the study summary said.

Kevin Esvelt, one of the report's authors and an assistant professor at the MIT Media Lab, explained that an underestimated risk of future LLMs is that they could propose new and unconventional ways of using biology for harmful purposes, in addition to broadening access to pandemic agents.

“Should this occur, the proposal would create controversy in the scientific community over whether the AI's method would be effective, spurring many well-meaning scientists to investigate and publish their findings in order to better understand the nature of the threat,” he added. “Those findings would make the threat credible and provide genome sequences and assembly instructions for malicious actors who could never have performed the tests themselves but can perform the far easier task of following a detailed step-by-step protocol for pathogen assembly.”

Defending against AI bioweapons

Developing strategies to anticipate and prevent harmful research is crucial as the use of LLMs continues to grow, Matt McKnight, the general manager of biosecurity at Ginkgo Bioworks, said in an interview.

“Currently, high-quality biological data to train models is still not easy or cheap to collect, and we can add controls to safeguard biological data, not unlike the controls that already exist to protect, for example, sensitive medical data,” he said.

AI could also work to defend against biological weapons. For example, Ginkgo Bioworks is part of a CDC-funded consortium that will develop an advanced AI framework as a resource for outbreak analytics, disease modeling and forecasting to better respond to biological threats.
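As a rough illustration of what "disease modeling and forecasting" can mean at its simplest, the sketch below steps through a textbook SIR (susceptible-infected-recovered) outbreak simulation. It is a toy example with invented parameters, not part of Ginkgo's or the CDC consortium's actual framework.

```python
# Toy SIR (susceptible-infected-recovered) outbreak simulation, for illustration
# only; the parameters below are invented and not drawn from any real framework.
def simulate_sir(population: int, initial_infected: int,
                 beta: float, gamma: float, days: int):
    """Step a discrete-time SIR model forward and return daily infected counts."""
    s = population - initial_infected  # susceptible
    i = initial_infected               # currently infected
    r = 0                              # recovered
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

if __name__ == "__main__":
    # Hypothetical outbreak: 1M people, 10 initial cases, transmission and
    # recovery rates chosen arbitrarily for demonstration.
    curve = simulate_sir(population=1_000_000, initial_infected=10,
                         beta=0.25, gamma=0.1, days=120)
    print(f"Peak daily infected (approx.): {max(curve):,.0f}")
```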

Ultimately, Rege noted that the best defense against AI-enabled bioweapons is ensuring they are never developed in the first place. He said it is essential to establish international agreements governing the ethical development and use of dual-use AI technologies.

“Facilities working with hazardous biotech materials need robust cybersecurity and surveillance systems to safeguard against hijacking by malicious actors,” he added. “Grassroots efforts to raise public awareness of AI biosecurity risks are vital. Scientists and engineers involved in this field must undergo education on ethical obligations and potential misuse of their work. Promoting a culture of responsibility within the AI community and consequences for violations are important.”


About the Author(s)

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 
