The Independent UK
Technology
Anthony Cuthbertson

Claude sees record sign-ups as US officially labels Anthropic supply-chain risk

The Claude AI app is seen in the app store on a phone on 16 February, 2026 in New York City - (Michael M. Santiago/Getty Images)

Anthropic has seen a record number of downloads for its Claude AI chatbot since the company refused to allow its product to be used by the US Department of War for autonomous weapons.

Mike Krieger, chief product officer at Anthropic, revealed that more than a million people are now signing up for Claude every day, which has pushed the app to the top of Apple’s App Store and the Google Play charts.

The app overtook OpenAI’s ChatGPT, which has faced a backlash from some users after CEO Sam Altman reached an agreement with the US government following the falling-out between Anthropic and the Pentagon.

The feud centres on safety guardrails that would prevent Anthropic’s AI from being used for domestic surveillance and autonomous weapons.

Secretary of War Pete Hegseth described these restrictions as “ideological whims”, while President Donald Trump claimed Anthropic was run by “Leftwing nut jobs” who were threatening US national security.

On Wednesday, the Department of War officially informed Anthropic that its products are deemed a supply chain risk, “effective immediately”.

This is the first time the designation has been applied to a domestic company, with the term previously reserved for foreign firms with ties to adversaries.

“From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes,” the Pentagon said in a statement, first shared by Politico.

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of critical capability and put our warfighters at risk.”

The designation prevents all federal agencies and government contractors from using the Claude chatbot while working for the US military.

Anthropic has claimed the move is unlawful and has said it will fight the supply-chain risk label in court.

“We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making – that is the role of the military,” said Anthropic CEO Dario Amodei.

“We are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more.”
