
OpenAI CEO Sam Altman told employees this week that the company will hold firm on two ethical limits in any work with the U.S. military: no AI built for mass surveillance, and no autonomous lethal weapons. Those are the same "red lines" that ignited a rapidly escalating dispute between rival Anthropic and the Pentagon.
Axios reported that Altman's internal memo comes amid pressure from Washington on major AI labs to loosen safety guardrails for national security use cases. In the message, Altman argued that the standoff is bigger than one vendor contract fight, calling it a boundary-setting moment for the entire industry.
The flashpoint is Anthropic's refusal to alter or remove safeguards that the company says are designed to prevent its Claude models from being used in mass domestic surveillance or in weapons that can select and engage targets without meaningful human oversight. The Pentagon, according to reporting by Reuters and others, has insisted it needs flexibility for "lawful" military use, while declining to formally rule out the two categories Anthropic wants excluded.
That impasse has turned into a high-stakes test of leverage. Reuters reported that the Pentagon threatened to cancel a contract worth up to $200 million if Anthropic did not comply by a deadline, and floated further steps, including designating the company a supply-chain risk and invoking the Defense Production Act.
Against that backdrop, Altman's note signals OpenAI does not plan to win Defense Department business by offering fewer restrictions than its competitor. Axios reported that Altman told staff OpenAI shares the same core prohibitions and that humans should remain "in the loop" for high-stakes automated decisions.
Pentagon spokesperson Sean Parnell said on social media that the department has "no interest" in conducting mass domestic surveillance or deploying autonomous weapons, even as the department has pushed back on being bound by a contractor's policy language. Anthropic, meanwhile, has said it wants to work with the Defense Department, but not at the cost of removing restrictions it argues are necessary for democratic values and safety.
The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.… https://t.co/3pjWZ66aXz
— Sean Parnell (@SeanParnellASW) February 26, 2026
The Washington Post described how the quarrel intensified amid aggressive hypotheticals about extreme wartime scenarios, underscoring the gap between military planners seeking maximum optionality and AI labs wary of catastrophic misuse.
Altman's stance could complicate the Pentagon's near-term procurement strategy if officials hoped to pivot to a rival lab that would accept broader terms. It also strengthens a de facto industry norm that certain applications remain off limits, at least for leading frontier model providers, even as governments argue that adversaries may not self-restrain.
The argument is reverberating inside the companies, too. An open letter backed by employees across Google and OpenAI urged leadership to maintain Anthropic-style red lines on surveillance and fully automated weaponry.