TechRadar
Benedict Collins

‘The biggest losers in all of this are everyday people and civilians in conflict zones’: OpenAI is filling the gap left by Anthropic, but nearly left open the same loopholes for mass domestic surveillance

  • OpenAI has signed a new contract with the Pentagon
  • The contract wording left room for AI to be used for mass domestic surveillance
  • Sam Altman is being criticized for his stance on the matter

Following Anthropic’s designation as a supply chain risk by Defense Secretary Pete Hegseth and the loss of its $200 million Pentagon contract, OpenAI is now in the firing line for its own agreement with the Pentagon.

Despite OpenAI introducing a policy clause in 2023 forbidding its AI models from being used by the US military, several OpenAI employees have revealed that its models were previously used by the Pentagon.

At the time, the Pentagon had a contract with Microsoft, which held a license to use OpenAI's technology, giving the Pentagon access through Azure OpenAI, a service not subject to the same usage policies.

OpenAI contract with Pentagon questioned

With Anthropic out of the picture over its refusal to allow the Pentagon to use its models for autonomous weapons systems and mass domestic surveillance, OpenAI CEO Sam Altman is now being questioned over the company's latest contract with the US military.

In 2024, OpenAI removed the blanket ban on the military use of its models, and later went on to sign a contract with Anduril allowing the deployment of its models for national security purposes.

Altman has made clear his support for Anthropic’s position on preventing Claude from being used for nefarious purposes, but OpenAI's new agreement with the US military left room for those exact uses, sources familiar with the matter told Wired.

Current regulations have fallen behind advancements in AI, allowing government agencies to purchase personal information on US citizens from data brokers and then use AI models to categorize and sort that information into highly accurate and detailed profiles of citizens.

Commenting on the latest agreement signed between OpenAI and the US military, Noam Brown, an OpenAI researcher, stated, “Over the weekend it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance.”

Brown continued, “The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security.”

Sarah Shoker, the former head of OpenAI’s geopolitics team, said, “The biggest losers in all of this are everyday people and civilians in conflict zones. Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It’s black boxes all the way down.”

