Fortune
Christiaan Hetzner

Former OpenAI board member reveals why Sam Altman was fired in bombshell interview—‘we learned about ChatGPT on Twitter’

Former OpenAI non-profit board member Helen Toner (Credit: Jerod Harris—Getty Images for Vox Media)

One of the ringleaders behind the brief, spectacular, but ultimately unsuccessful coup to overthrow Sam Altman accused the OpenAI boss of repeated dishonesty in a bombshell interview that marked her first extensive remarks since November’s whirlwind events.

Helen Toner, an AI policy expert from Georgetown University, sat on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in ousting Altman. After staff threatened to leave en masse, he returned empowered by a new board with only Quora CEO Adam D’Angelo remaining from the original four plotters. 

Toner disputed speculation that she and her colleagues on the board had been frightened by a technological advancement. Instead, she blamed the coup on a pronounced pattern of dishonest behavior by Altman that gradually eroded trust, with key decisions never shared with the board in advance.

“For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she told The TED AI Show in remarks published on Tuesday.

Even the very launch of ChatGPT, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. “We learned about ChatGPT on Twitter,” she said.

Toner claimed Altman always had a convenient excuse at hand to downplay the board’s concerns, which is why it took no action for so long.

“Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or it was misinterpreted or whatever,” she continued. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us and that’s a completely unworkable place to be in as a board.”

OpenAI did not respond to a request by Fortune for comment.

Things ultimately came to a head, Toner said, after she co-published a paper in October of last year that cast Anthropic’s approach to AI safety in a better light than OpenAI’s, enraging Altman.

“The problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board, so it was another example that just like really damaged our ability to trust him,” she continued, adding that the behavior coincided with discussions in which the board was “already talking pretty seriously about whether we needed to fire him.”

Taken in isolation, those and other disparaging remarks Toner leveled at Altman could be downplayed as sour grapes from the ringleader of a failed coup. The pattern of dishonesty she described, however, echoes similarly damaging accusations from a former senior AI safety researcher, Jan Leike, as well as from Scarlett Johansson.

Attempts to self-regulate doomed to fail

The Hollywood actress said Altman approached her with a request to use her voice for OpenAI’s latest flagship product—a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the movie Her. When she refused, she suspects, he may have blended in part of her voice anyway, violating her wishes. The company disputes her claims but agreed to pause use of the voice regardless.

Leike, for his part, served as joint head of the team responsible for building guardrails to ensure humanity can control superintelligent AI. He left this month, saying it had become clear to him that management had no intention of devoting the promised resources to his team, and departed with a scathing rebuke of his former employer. (On Tuesday he joined Anthropic, the same OpenAI rival Toner had praised in October.)

Once key members of its AI safety staff had scattered to the winds, OpenAI disbanded the team entirely, consolidating control in the hands of Altman and his allies. Whether those in charge of maximizing financial results are best entrusted with implementing guardrails that may prove a commercial hindrance remains to be seen.

Although certain staffers had their doubts, few besides Leike chose to speak up. Reporting by Vox earlier this month revealed that a key factor behind that silence was an unusual nondisparagement clause that, if broken, would void an employee’s vested equity in perhaps the hottest startup in the world.

This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he voluntarily sacrificed his share of equity in order not to be bound by the exit agreement. Altman later confirmed the validity of the claims.

“Although we never clawed anything back, it should never have been something we had in any documents or communication,” he posted earlier this month. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

Toner’s comments come fresh on the heels of her op-ed in The Economist, in which she and former OpenAI director Tasha McCauley argued that the evidence shows no AI company can be trusted to regulate itself.

“If any company could have successfully governed itself while safely and ethically developing advanced AI systems it would have been OpenAI,” they wrote. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”
