International Business Times UK
Thea Felicity

OpenAI Insiders Claim Sam Altman is Lying, Manipulative, and Untrustworthy in The New Yorker's Investigation

Sam Altman (Credit: Steve Jennings/Flickr / Creative Commons)

A sweeping new investigation by The New Yorker has reignited scrutiny around OpenAI chief executive Sam Altman, with former insiders and board-level figures describing him as deceptive, manipulative and, in some cases, too untrustworthy to oversee what the company was founded to build: potentially world-shaping artificial intelligence.

For context, doubts about Altman's leadership first burst into public view in November 2023, when OpenAI's board abruptly fired him, saying only that he had not been 'consistently candid in his communications'.

He was restored within days after a staff revolt and pressure from powerful backers, including Microsoft. At the time, little was known about what had led the board to act. Now, Ronan Farrow and Andrew Marantz's latest reporting suggests the case against him was far more substantive than many outsiders realised.

According to The New Yorker, the internal case against Altman was not built on one blow-up or a single policy disagreement, but on a pattern that some colleagues said had become impossible to ignore.

In a thread posted on X, Farrow said he and Marantz reviewed 'never-before-disclosed internal memos', obtained '200+ pages of documents' and interviewed 'more than 100 people' while investigating OpenAI and its chief executive.

OpenAI Insiders Said Sam Altman Could Not Be Trusted

At the heart of the report is a simple but devastating question: was Altman, the man steering one of the world's most ground-breaking AI companies, someone his own colleagues believed could be trusted with that power?

The New Yorker reports that in autumn 2023, OpenAI chief scientist Ilya Sutskever compiled roughly 70 pages of memos about Altman and OpenAI president Greg Brockman, drawing on internal records including Slack messages and HR documents. Farrow wrote on X that the people involved in Altman's ouster 'accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology'.

One of the clearest quotes in the piece comes from former OpenAI chief technology officer Mira Murati, who told the magazine: 'We need institutions worthy of the power they wield.' She added that she had shared what she was seeing with the board and stood by it.

The reporting also describes a more specific concern.

According to Farrow's thread, in late 2022 Altman allegedly told the board that features in a forthcoming model had been approved by a safety panel. Board member Helen Toner then requested documentation and, Farrow wrote, found that 'the most controversial features had not, in fact' been approved. If accurate, that kind of discrepancy goes to the heart of why the board said Altman was not consistently candid.

The Sam Altman Investigation Paints A Pattern

What makes The New Yorker investigation more damaging than the old boardroom drama is that it presents the trust issue as recurring.

The article recounts prior tensions with safety-focused staff and former colleagues, including claims that Altman at times denied or minimised internal concerns when challenged. In one especially bleak assessment, a former OpenAI researcher, Daniel Kokotajlo, said Altman had a habit of building governance structures that looked constraining 'on paper', only to later manoeuvre around them.

The portrait that emerges is something distinctly Silicon Valley and, arguably, all the more troubling for it: a leader whom many people found extraordinarily persuasive until they no longer did.

That thread runs throughout the reporting. One tech executive quoted by The New Yorker described Altman as using 'Jedi mind tricks'. Another, cited in the article, compared watching him outmanoeuvre colleagues during his brief ouster and return to seeing 'an A.G.I. breaking out of the box'. Those are not neutral descriptions; they are the language of people who seem half-impressed and half-alarmed.

Altman, for his part, disputes or does not recall several of the events described in the reporting. The New Yorker includes multiple denials, caveats, and rebuttals from him, and in places notes where accounts are contested. That matters. So does the fact that some of the most explosive claims remain allegations from former insiders rather than judicial findings or regulatory conclusions.

Still, the central damage may already be done. OpenAI was built, at least in public, on the claim that artificial intelligence is too powerful to be left to ordinary corporate instincts. The New Yorker's investigation asks what happens if the executive at the centre of that promise was seen by his own insiders as someone who could not be relied upon when it mattered most.
