Over the past two years, we have seen much written about the “promise and peril” of artificial intelligence (AI). Some have suggested AI systems might aid in the construction of chemical or biological weapons.
How realistic are these concerns? As researchers in the field of bioterrorism and health intelligence, we have been trying to separate the genuine risks from the online hype.
The exact implications for “chem bio” weapons are still uncertain. However, it is very clear that regulations are not keeping pace with technological developments.
Assessing the risks
Assessing the risk an AI model presents is not easy. What’s more, there is no consistent and widely followed way to do it.
Take the case of large language models (LLMs). These are the AI engines behind chatbots such as ChatGPT, Claude and Gemini.
In September 2024, OpenAI released an LLM called o1 (nicknamed “Strawberry”). Upon its release, the developers claimed the new system posed a “medium” level of risk of helping someone create a biological weapon.
This assessment might sound alarming. However, a closer reading of the o1 system card shows the security risks it identifies are fairly trivial.
The model might, for example, help an untrained individual navigate a public database of genetic information about viruses more quickly. Such assistance is unlikely to have much material impact on biosecurity.
Despite this, media quickly reported that the new model “meaningfully contributed” to weaponisation risks.
Beyond chatbots
When the first wave of LLM chatbots launched in late 2022, there were widely reported fears that these systems could help untrained individuals unleash a pandemic.
However, these chatbots are based on already-existing data and are unlikely to come up with anything genuinely new. They might help a bioterrorism enterprise generate some ideas and establish an initial direction, but that’s about it.
Rather than chatbots, AI systems with applications in the life sciences are of more genuine concern. Many of these, such as the AlphaFold series, will aid researchers fighting diseases and seeking new therapeutic drugs.
Some systems, however, may have the capacity for misuse. Any AI that is really useful for science is likely to be a double-edged sword: a technology that may have great benefit to humanity, while also posing risks.
AI systems like these are prime examples of what is called “dual-use research of concern”.
Prions and pandemics
Dual-use research of concern in itself is nothing new. People working on biosecurity and nuclear non-proliferation have been worrying about it for a long time. Many tools and techniques in chemistry and synthetic biology could be used for malicious ends.
In the field of protein science, for example, there has been concern for more than a decade that new computational platforms might help in the synthesis of the potentially deadly misfolded proteins called prions, or in the construction of novel toxin weapons. New AI tools such as AlphaFold may bring this scenario closer to reality.
However, while prions and toxins may be deadly to relatively small groups of people, neither can cause a pandemic that could wreak true havoc. In the study of bioterrorism, our main concern is with agents that have pandemic potential.
Historically, bioterrorism planning has focused on Yersinia pestis, the bacterium that causes plague, and variola virus, which causes smallpox.
The main question is whether new AI systems make any tangible difference to an untrained individual or group seeking to obtain pathogens such as these, or to create something from scratch.
Right now, we simply do not know.
Rules to assess and regulate AI systems
Nobody yet has a definitive answer to the question of how to assess the new landscape of AI-powered biological weapons risk. The most advanced planning has been produced by the outgoing Biden administration in the United States, via an executive order on AI development issued in October 2023.
A key provision of the executive order tasks several US agencies with establishing standards to assess the impact new AI systems may have on the proliferation of chemical, biological, radiological or nuclear weapons. Experts often group these together under the heading of “CBRN”, but the new dynamic we call CBRN+AI is still uncertain.
The executive order also established new processes for regulating the hardware and software needed for gene synthesis. This is the machinery for turning the digital ideas produced by an AI system into the physical reality of biological life.
The US Department of Energy is soon due to release guidance on managing biological risks that might be generated by new AI systems. This will provide a pathway for understanding how AI might affect biosecurity in the coming years.
Political pressure
These nascent regulations are already coming under political pressure. The incoming Trump administration in the US has promised to repeal Biden’s executive order on AI, concerned it is based on “radical leftist ideas”. This stance is informed by irrelevant disputes in American identity politics that have no bearing on biosecurity.
While it is imperfect, the executive order is the best blueprint for helping us comprehend how AI will impact proliferation of chemical and biological threats in the coming years. To repeal it would be a great disservice to the US national interest, and global human security at large.
David Heslop has previously received funding from Australian Defence Force Strategic Policy Grants to conduct research in areas related to this article.
Joel Keep does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.