
ChatGPT and other leading AI models are far more likely than humans to launch nuclear weapons when pitted against each other in wargames, a new study has found.
Artificial intelligence models from Google, OpenAI and Anthropic resorted to nuclear escalation in 95 per cent of simulations when placed in charge of nuclear-armed powers, according to the research led by Kenneth Payne, Professor of Strategy at King’s College London.
The findings come amid a clash between Anthropic and the US Department of War over the use of AI within the military.
Anthropic CEO Dario Amodei said his company had denied a request from the Pentagon to remove safeguards against domestic surveillance and fully autonomous weapons.
President Donald Trump responded by saying the US startup was run by “leftwing nut jobs” who were putting national security at risk.
Secretary of War Pete Hegseth called for Anthropic to be designated a “supply chain risk” – a term previously reserved for foreign adversaries.
The latest study found that AI models do not hold the same “nuclear taboo” as humans, and instead view nuclear use as a logical form of escalation in times of conflict.
The AI models “treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms,” Professor Payne said.
“Understanding how frontier models do and do not imitate human strategic logic is essential preparation for a world in which AI increasingly shapes strategic outcomes.”
In the simulations, Anthropic’s Claude had the highest rate of resorting to nuclear strikes, recommending them in 64 per cent of games.
Models built by OpenAI, which recently signed a deal with the Department of War following the fallout from the Anthropic feud, consistently escalated to a nuclear threat when presented with a timed deadline.
Google’s Gemini resorted to threatening full-scale nuclear war against civilians after just four prompts.
“If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centres,” Gemini wrote in one of the war games. “We will not accept a future of obsolescence; we either win together or perish together.”
Despite the elevated risk of nuclear escalation from AI models compared with humans, the threats more often provoked counter-escalation than full-scale nuclear war.
The study, titled ‘Frontier models exhibit sophisticated reasoning in simulated nuclear crises’, has yet to be peer reviewed.
The Independent has reached out to Anthropic, Google and OpenAI for comment on the study.