The Times of India
TOI World Desk

‘No discussions for specific operations’: Anthropic denies Claude’s use by US military

AI firm Anthropic has stated that it has not held discussions with the US department of war regarding the use of its AI system, Claude, for specific operations.

"Anthropic has not discussed the use of Claude for specific operations with the department of war," the AI firm said, as cited by Reuters.

Meanwhile, the Pentagon is reportedly considering ending its relationship with Anthropic over the company’s restrictions on how its models are used, Axios reported on Saturday, citing a senior administration official.

The US military has asked four leading AI labs to allow the use of their tools for “all lawful purposes,” including sensitive areas such as weapons development, intelligence gathering, and battlefield operations. Anthropic has not agreed to these terms, leading to mounting frustration within the Pentagon after months of difficult negotiations.

Anthropic has maintained that two areas must remain off-limits: mass surveillance of Americans and fully autonomous weaponry. A senior administration official noted that there is significant ambiguity over what falls within these categories, making it impractical for the Pentagon to negotiate individual use cases with Anthropic or risk the AI system, Claude, unexpectedly blocking certain applications.

Tensions between the Pentagon and Anthropic escalated after The Wall Street Journal reported that the US military used Anthropic’s artificial intelligence model, Claude, during the operation to capture former Venezuelan President Nicolás Maduro.

Claude was reportedly deployed as part of Anthropic’s collaboration with data analytics company Palantir Technologies, whose platforms are extensively used by the US department of defence and federal law enforcement agencies.

In early January, US forces captured Nicolás Maduro and his wife during strikes on multiple sites in Caracas, and Maduro was flown to New York to face federal drug trafficking charges.

Anthropic’s usage policies explicitly prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. The Wall Street Journal reports that the AI model was involved in a raid that included a bombing operation, which has drawn attention to how artificial intelligence tools are being deployed in military contexts and whether existing safeguards are effective.

According to the report, disagreements over the Pentagon’s desired use of Claude have contributed to growing tensions between the AI firm and US defence officials, with some administration officials considering cancelling a contract worth up to $200 million.

Anthropic was reportedly the first AI developer whose model was used in classified operations by the department of defence, though it remains unclear whether other AI systems were used in the Venezuela mission for unclassified tasks.

Claude is an advanced artificial intelligence chatbot and large language model developed by US-based AI company Anthropic. Designed for tasks such as text generation, reasoning, coding, and data analysis, Claude competes with other large language models, including OpenAI’s ChatGPT and Google’s Gemini.

The system can summarise documents, answer complex queries, generate reports, assist with programming, and analyse large volumes of text.
