International Business Times UK
Chelsie Napiza

Anthropic Rejects Government Use of Its AI for Surveillance and Weapons — OpenAI Sweeps In and Takes the Deal

Anthropic CEO Dario Amodei (Credit: Wikimedia Commons)

Anthropic has been formally blacklisted by the Trump administration after refusing to lift safeguards on its AI model, Claude, that bar its use in domestic mass surveillance and fully autonomous weapons, a stance that cost it a contract worth up to £159 million ($200 million) and handed rival OpenAI a direct path into the Pentagon's classified networks.

The week-long stand-off between Silicon Valley and Washington came to a head on Feb. 28, 2026, when President Donald Trump ordered every federal agency to 'immediately cease' using Anthropic's technology.

Hours later, OpenAI CEO Sam Altman announced on X that his company had struck a deal with the US Department of Defense, with the Pentagon agreeing to the same 'red lines' Anthropic had demanded. The whiplash of events has set off a fierce debate about who controls the ethical limits of AI in warfare, and whether any private company can hold that line against government pressure.

The Contract, the Demands and the Deadline

Anthropic had held a privileged position in American military technology. The San Francisco-based company was the first AI developer to have its models approved for use on the Pentagon's classified networks, signing a contract worth up to £159 million ($200 million) with the Defence Department in July 2025.

That contract was built around advancing what the government called 'responsible AI in defence operations,' and it came with explicit guardrails in Anthropic's acceptable use policy, prohibiting the use of Claude for mass surveillance of Americans and its deployment in fully autonomous weapons systems.

Those restrictions became the central point of conflict. The Pentagon's position was firm: AI vendors must make their models available for 'all lawful purposes,' with the military itself determining what those purposes are.

When Defence Secretary Pete Hegseth met with Anthropic CEO Dario Amodei at the Pentagon on Tuesday, Feb. 24, the meeting was described as cordial by sources familiar with the matter. The amicability did not last. By Thursday, Hegseth had issued a public ultimatum: comply by 17:01 ET on Friday or lose the contract.

He also raised the possibility of invoking the Defense Production Act, a Korean War-era statute, to compel Anthropic to hand over an unrestricted version of Claude on national security grounds, and threatened simultaneously to designate the company a supply-chain risk.

Amodei rejected the ultimatum in plain terms. 'These threats do not change our position,' he wrote on Thursday. 'We cannot in good conscience accede to their request.' He also pointed out the internal contradiction of the government's dual threats: 'One labels us a security risk; the other labels Claude as essential to national security.'

The company said the latest contract language it had received from the Department of War 'made virtually no progress' on the two disputed restrictions, and that the new wording would allow Anthropic's safeguards to be 'disregarded at will.'

Trump's Ban, Hegseth's Designation and Anthropic's Legal Challenge

The 17:01 deadline passed without an agreement. On Friday afternoon, Trump posted on Truth Social that 'The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.' He directed every federal agency to immediately halt use of Anthropic's technology, adding a six-month phase-out period for departments, including the Pentagon, that are currently relying on it.

Minutes later, Hegseth announced on X that he was formally designating Anthropic a 'Supply-Chain Risk to National Security.' That designation prevents any company doing business with the US military from using Anthropic's technology, a measure normally reserved for firms from adversarial nations such as China or Russia, most recently applied to Kaspersky Lab in 2024.

The practical consequences reach well beyond the Pentagon contract itself. As CNN reported, a large share of Anthropic's enterprise clients work with government contractors, and the supply-chain risk designation could force many of them to choose between their Pentagon relationships and their Anthropic licences. At a company valued at £302 billion ($380 billion) with reported revenues of £11.1 billion ($14 billion), the direct loss of the government contract is not existential. The secondary commercial fallout may prove harder to contain.

Anthropic pushed back in a formal statement, disputing both the legal basis and scope of the designation. The company said that under federal law, a supply-chain risk designation would only apply to 'the use of Claude as part of Department of War contracts, it cannot affect how contractors use Claude to serve other customers.'

OpenAI Steps In — With the Same Red Lines

The most striking element of the day's events was not the ban itself, but who moved to fill the gap and how. Within hours of Trump's announcement, Sam Altman posted on X that OpenAI had reached an agreement with the Department of Defense to deploy its models on classified networks.

The Pentagon, he said, had 'displayed a deep respect for safety.' Then came the line that cut to the heart of the entire dispute: the Department of War had agreed to OpenAI's two core restrictions — no domestic mass surveillance, and human oversight for all use-of-force decisions, including autonomous weapons. These were, word for word, the same limits Anthropic had demanded and been punished for demanding.

Altman had telegraphed this position in an internal memo to OpenAI employees on Thursday, stating that the company shared Anthropic's 'red lines.' He told CNBC Friday morning that companies must work with the military 'as long as it is going to comply with legal protections' and 'the few red lines that we share with Anthropic.'

He later wrote that the restrictions agreed to in OpenAI's contract 'reflect existing US law and Pentagon policy,' a framing that the Department of War appeared willing to accept from OpenAI but not from Anthropic. A source familiar with OpenAI's internal all-hands meeting told Fortune that one Pentagon official suggested the breakdown with Anthropic was partly personal, that Amodei had offended Department of War leadership, including through blog posts that 'the department got upset about.' The fight over who controls the ethics of AI in warfare has only just begun.
