Euronews
Anna Desmarais

Why AI company Anthropic and the US are at a standoff over a military contract

The United States government is threatening to end military contracts with AI company Anthropic unless it opens its technology to unrestricted military use, but the company is standing firm.

Anthropic CEO Dario Amodei said in a statement on 26 February that he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems.

"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."

The statement from Amodei comes less than 24 hours before the Friday deadline the Department of War set for unfettered access to Claude.

Anthropic, maker of the chatbot Claude, is the last of its peers not to supply its technology to a new internal US military network.

Anthropic won a $200 million (€167 million) contract from the US Department of Defence last July to “prototype frontier AI capabilities that advance US national security,” the company said. It inked a partnership with Palantir Technologies in 2024 to integrate Claude into US intelligence software.

Defence Secretary Pete Hegseth reportedly said on 24 February he would end the $200 million (€167 million) contract and label the company a “supply chain risk” if Anthropic did not comply.

If Anthropic is designated a supply chain risk under US procurement law, the government could exclude it from contract awards, remove its products from consideration, and direct prime contractors not to use it as a supplier.

Reports about Hegseth’s meeting with Amodei also said that Hegseth threatened to use the Defense Production Act against the company, a law that gives the US President broad authority to direct private companies to prioritise national security needs, including access to their technology.

In his statement, Amodei said contracts with the Department of War should not cover cases where Claude is deployed for mass domestic surveillance or integrated into fully autonomous weapons.

He said he believes these safeguards are the reason the Department has threatened to withdraw Anthropic from US military use.

Amodei's statement called the two threats "inherently contradictory," because one labels Anthropic a security risk and the other says that Claude is "essential to national security".

He acknowledged that the Department of War can choose who to work with on contracts that are more aligned with its vision, but "given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider".

The AI chatbot has been rolled out across the US government's classified information networks and deployed at national nuclear laboratories, and it performs intelligence analysis directly for the Department of War.

Euronews Next reached out to the US government’s Department of War but did not receive an immediate reply.

Anthropic rolls back core safety promise

Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.

On Tuesday, Anthropic said in an interview with Time Magazine that it was dropping its pledge not to release an AI system unless it could guarantee that its safety measures were adequate.

Instead, it launched a new version of its responsible scaling policy, which outlines the company’s framework for mitigating catastrophic AI risks.

Jared Kaplan, Anthropic's chief science officer, told the publication that holding the company back from training new models while competitors raced ahead without safeguards would not help it keep up in the AI race.

“If one AI developer paused development to implement safety measures while others moved forward with training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” Anthropic’s new policy reads.

“The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit.”

The policy separates Anthropic's hopes of bringing safety standards to the industry from its own goals as a company, for which safety remains a priority.

Anthropic said its new policy means the company will set “ambitious yet achievable” safety roadmaps for its models as well as publish risk reports that will show anticipated risks and whether a model’s release is justified.

This article was updated on 27 February.
