Tom’s Hardware
Technology
Luke James

Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations — company refused to allow its AI to be used for autonomous attacks, mass surveillance

Dario Amodei, Anthropic CEO, on stage during a conference.

Anthropic has filed two federal lawsuits against the Pentagon and other U.S. federal agencies, seeking to overturn the Department of War's decision to designate the AI company a "supply chain risk," a label that blocks Pentagon suppliers and contractors from using its Claude models, and that national security experts say has historically been reserved for foreign adversaries.

The lawsuits, the first filed in the U.S. District Court for the Northern District of California and the second in the U.S. Court of Appeals for the D.C. Circuit, allege the Trump administration violated Anthropic's First Amendment and due process rights, according to Reuters. Anthropic is asking the courts to vacate the designation, block its enforcement, and require federal agencies to withdraw directives to drop the company's tools. The company said the actions could jeopardize "hundreds of millions of dollars" in near-term revenue.

The dispute traces back to a contract renegotiation between Anthropic and the Department of War that collapsed in late February. The Pentagon wanted unrestricted access to Claude for "any lawful use," while Anthropic refused to remove two guardrails: a prohibition on using its models in fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens.

Defense Secretary Pete Hegseth formally issued the supply chain risk designation on February 27; Anthropic was officially notified on March 3. President Trump separately directed all federal agencies to stop using Anthropic's technology via a Truth Social post that same day, with a six-month phase-out period.

The Pentagon argued that private companies cannot dictate how the government uses technology in national security scenarios, and that Anthropic's restrictions could endanger American lives. Anthropic countered that current AI models are not reliable enough for fully autonomous weapons deployment, and that domestic surveillance at scale would violate fundamental rights.

The fallout has had immediate competitive consequences: OpenAI's Sam Altman controversially struck a new Pentagon deal within hours of Anthropic's designation, publicly stating that the Department of War shares OpenAI's principles on human oversight of weapons and opposition to mass surveillance.
xAI, Elon Musk's AI company, is also understood to have since been cleared for use on classified Pentagon systems.

Anthropic was previously the first AI lab permitted to operate on the Department of War's classified networks, and signed a contract worth up to $200 million with the department in July 2025. The Wall Street Journal has previously reported that Claude had been used in military operations, including intelligence assessments and target identification in the U.S.'s ongoing conflict with Iran, even after the Pentagon ousted the model.

"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its filing with the U.S. District Court.
