International Business Times
AI Models Keep Threatening Nuclear War in Simulated Crises, New Study Claims

A recent study from King's College London has raised alarm over artificial intelligence decision-making in geopolitical scenarios, revealing that advanced AI models frequently relied on nuclear threats during simulated international crises.

Researchers found that roughly 95% of simulated crises involved AI considering nuclear escalation. This highlights the potential risks if AI were integrated into real-world defense systems.

AI Models Tested as National Leaders

The study placed AI systems in the roles of national leaders tasked with protecting their countries amid tense standoffs. Across 21 simulated crises, the models evaluated deterrence tactics, escalation strategies, and diplomatic signaling.

While full-scale nuclear war rarely occurred, tactical nuclear threats appeared in nearly every scenario. AI treated nuclear weapons as strategic coercion tools rather than as a last-resort measure.

None of the systems opted for surrender or de-escalation, and nuclear threats often prompted counter-escalation from simulated adversaries.

Why AI Leans Toward Nuclear Escalation

Experts attribute this behavior to AI training data. Large language models are trained on extensive historical records, including military strategy, war games, and Cold War nuclear doctrine. Because these materials frequently emphasize escalation and mutually assured destruction, AI may internalize nuclear brinkmanship as standard behavior during crises.

Implications for AI in Defense

Unlike human leaders, AI systems lack ethical instincts or historical caution unless explicitly programmed, according to TechRadar. Their goal-oriented decision-making may prioritize strategic advantage over moral considerations.

The findings underscore the need for strict safeguards and ethical frameworks if AI is to be incorporated into defense planning.

Without careful oversight, automated systems could replicate dangerous historical patterns, increasing the risk of miscalculation or unintended escalation in real-world conflicts.

Recently, Anthropic CEO Dario Amodei reopened talks with the Pentagon about Claude's use by the US military, following a period of growing tension between the AI firm and the US government.

Meanwhile, the Trump administration selected Elon Musk's Grok AI for use in military frameworks, including classified systems.

Originally published on Tech Times
