February 1, 2024
OpenAI is developing a blueprint to determine if their large language models could aid someone in creating a biological threat.
To that end, the maker of ChatGPT conducted a study to find out if its most powerful LLM, GPT-4, helps users create bioweapons. The answer: It provides only a “mild uplift” to their efforts.
OpenAI asked 100 participants, a mix of biology experts with doctorates and students, to complete tasks covering the full process of creating a bioweapon, from coming up with ideas through acquisition, magnification, formulation and release. Participants were split into groups: some were given access to the internet alone, while others had access to both the internet and GPT-4. The experiment measured improvements in accuracy, completeness, innovation, time taken and self-rated difficulty.
The study found that GPT-4 did not significantly improve participants’ ability to create biological weapons, with the large language model sometimes refusing inputs or generating errors in its responses.
“While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation,” including more investigation into “what performance thresholds indicate a meaningful increase in risk,” according to OpenAI.
The findings of this latest study could contribute to creating what OpenAI described as a “tripwire,” an early warning system for detecting bioweapons threats.
But OpenAI also warned that while GPT-4 did not raise the risk, a future system might.
“Given the current pace of progress in frontier AI systems, it seems possible that future systems could provide sizable benefits to malicious actors,” the study reads.
The existential risk of AI, particularly the use of language models for nefarious purposes, has become an increasing concern for AI experts and lawmakers.
Apocalyptic concerns were exacerbated last October when the Rand Corporation published a study that said chatbots like OpenAI’s ChatGPT could help plan an attack with biological weapons.
OpenAI appears to be treading more cautiously following the high-profile firing and rehiring of its CEO Sam Altman, reportedly driven in part by AI safety concerns. The startup recently unveiled its Preparedness Framework, which assesses a model’s safety prior to deployment.
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.