The Independent UK
Technology
Anthony Cuthbertson

OpenAI robotics leader resigns after Department of War deal

OpenAI's ChatGPT app is shown on a smartphone on 3 March, 2026 in Chicago, Illinois - (Getty Images)

The head of OpenAI’s robotics team has resigned from the AI company after voicing concerns about a recent deal with the US Department of War.

Caitlin Kalinowski, who has worked at the ChatGPT creator since November 2024, said her decision centred on the firm’s governance and the need to clearly define safety guardrails around its technology.

OpenAI CEO Sam Altman announced the deal with the Pentagon on 27 February, shortly after the Trump administration revealed plans to terminate a contract with Anthropic due to resistance over how its Claude bot would be used.

Dario Amodei, the chief executive of Anthropic, said his company “cannot in good conscience accede to [the department’s] request” to use its products for mass surveillance and autonomous weapons.

The new deal between the US government and OpenAI sparked backlash from dozens of current employees, who signed an open letter calling on the company to adopt the same stance as Anthropic.

Ms Kalinowski announced her resignation in a post on LinkedIn over the weekend, saying her reason for leaving the company related to how the deal with the US Department of War was made.

She claimed the announcement was “rushed” and did not have the necessary safety guardrails defined.

“This wasn’t an easy call. AI has an important role in national security,” she wrote.

“But surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got.”

She added that she aims to continue working on building physical artificial intelligence devices, with a focus on safety.

Mr Altman has since said that the initial announcement of the deal with the Department of War looked “opportunistic and sloppy”, adding that he has sought to amend the agreement to include additional safeguards.

In a statement, the company said the language in the new deal “makes explicit that our tools will not be used to conduct domestic surveillance of US persons”, nor will they be used for autonomous lethal weapons.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the statement said.

“In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections.”
