Euronews
Anna Desmarais

Tech giants and ex-military leaders support Anthropic's legal challenge against US government

Microsoft, retired military leaders, and artificial intelligence (AI) think tanks are supporting Anthropic in its case to remove the Trump administration's designation of the company as a "supply chain risk."

The US Department of War's (DOW) designation "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a US company," Microsoft's legal brief said.

Microsoft's filing said the supply chain risk designation "may bring severe economic effects that are not in the public interest," and asked the judge to temporarily lift the designation.

The filings follow the legal challenge Anthropic mounted against the US Department of War last week.

Anthropic's designation as a supply chain risk means the government can exclude the company from contract awards, remove its products from consideration, and bar prime contractors from using it as a supplier.

Anthropic was, until recently, the only one of its peers approved for use in classified military networks. Its AI chatbot had been rolled out throughout the US government's classified information networks, deployed at national nuclear laboratories, and used for intelligence analysis by the Department of War.

Who else has come out to support Anthropic?

Several joint filings supporting Anthropic have been submitted by retired American military officials, Big Tech companies, and AI organisations.

US Defence Secretary Pete Hegseth's conduct against Anthropic "threatens the rule-of-law principles that have long strengthened our military," according to the filing, which is supported by Michael Hayden, the former director of the US Central Intelligence Agency (CIA).

They also allege that Hegseth's actions are a misuse of government authority for “retribution against a private company that has displeased the leadership.”

The filing from the ex-military officials warns that the “sudden uncertainty” of targeting a technology widely embedded in military platforms could disrupt planning and put soldiers at risk during ongoing operations, such as the war in Iran.

Another filing, on behalf of 37 AI engineers who previously worked at OpenAI and Google's DeepMind, called the DOW's actions an "improper and arbitrary use of power that has serious ramifications for our industry."

“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”

A separate filing said that it is not hard to "imagine a world in which the government effectively controls what all Americans do and say," if the government is able to dictate Anthropic's policies.

The joint filing, from groups such as the Electronic Frontier Foundation and the Cato Institute, said that the government's actions violate the First Amendment, the part of the US Constitution that protects free speech rights.

"The government's actions ... threaten the vitality and independence of our democracy," the filing reads.

How did the conflict start?

The dispute between Anthropic and the US military began when the company refused to give the military unfettered access to its AI chatbot, Claude. The government gave Anthropic 48 hours to grant access, or face sanctions.

Anthropic CEO Dario Amodei said the company gave the military two red lines: that its technology not be used for mass domestic surveillance, and that it not be embedded in fully autonomous weapons.

Amodei said in a statement on February 26 that he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems.

"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do," he said.

Microsoft's court filing supported Anthropic's red lines, saying that "American AI should not be used to conduct domestic mass surveillance or start a war without human control," and noting that its position is "consistent with the law".

Claude will also be phased out of military operations over the next six months, according to a statement from Hegseth.

Amodei said in a statement that the Department of War can choose who to work with on contracts that are more aligned with its vision, but "given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider".
