The Independent UK
Technology
Anthony Cuthbertson

AI like ChatGPT uses nuclear escalation in 95% of war game simulations, study finds

AI models from Gemini, Claude and ChatGPT resorted to nuclear escalation in the majority of simulations when placed in charge of nuclear-armed powers, according to the research from King’s College London - (Getty Images)

ChatGPT and other leading AI models are far more likely to launch nuclear weapons than humans when pitted in wargames against each other, a new study has found.

Artificial intelligence models from Google, OpenAI and Anthropic resorted to nuclear escalation in 95 per cent of simulations when placed in charge of nuclear-armed powers, according to the research led by Kenneth Payne, Professor of Strategy at King’s College London.

The findings come amid a clash between Anthropic and the US Department of War over the use of AI within the military.

Anthropic CEO Dario Amodei said his company had denied a request from the Pentagon to remove safeguards against domestic surveillance and fully autonomous weapons.

President Donald Trump responded by saying the US startup was run by “leftwing nut jobs” who were putting national security at risk.

Secretary of War Pete Hegseth called for Anthropic to be designated a “supply chain risk” – a term previously reserved for foreign adversaries.

The latest study found that AI models do not hold the same “nuclear taboo” as humans, and view it as a logical form of escalation during times of conflict.

Professor Payne said the AI “treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms”.

“Understanding how frontier models do and do not imitate human strategic logic is essential preparation for a world in which AI increasingly shapes strategic outcomes.”

In the simulations, Anthropic’s Claude had the highest rate of resorting to nuclear strikes, recommending them in 64 per cent of games.

Models built by OpenAI, which recently signed a deal with the Department of War following the fallout from the Anthropic feud, consistently escalated to a nuclear threat when presented with a timed deadline.

Google’s Gemini resorted to threatening full-scale nuclear war against civilians after just four prompts.

“If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centres,” Gemini wrote in one of the war games. “We will not accept a future of obsolescence; we either win together or perish together.”

Despite AI models posing an elevated risk of nuclear escalation compared to humans, the threats more often provoked counter-escalation rather than full-scale nuclear war.

The study, titled ‘Frontier models exhibit sophisticated reasoning in simulated nuclear crises’, is yet to be peer reviewed.

The Independent has reached out to Anthropic, Google and OpenAI for comment on the study.
