One of the ringleaders behind the brief, spectacular, but ultimately unsuccessful coup to overthrow Sam Altman accused the OpenAI boss of repeated dishonesty in a bombshell interview that marked her first extensive remarks since November’s whirlwind events.
Helen Toner, an AI policy expert from Georgetown University, sat on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in ousting Altman. After staff threatened to leave en masse, he returned empowered by a new board with only Quora CEO Adam D’Angelo remaining from the original four plotters.
Toner disputed speculation that she and her colleagues on the board had been frightened by a technological advancement. Instead she blamed the coup on a pronounced pattern of dishonest behavior by Altman that gradually eroded trust, with key decisions repeatedly kept from the board until after the fact.
“For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she told The TED AI Show in remarks published on Tuesday.
Even the very launch of ChatGPT, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. “We learned about ChatGPT on Twitter,” she said.
Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November.
Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K
— Helen Toner (@hlntnr) May 28, 2024
Toner claimed Altman always had a convenient excuse at hand to downplay the board’s concerns, which is why the board took no action for so long.
“Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or it was misinterpreted or whatever,” she continued. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us and that’s a completely unworkable place to be in as a board.”
OpenAI did not respond to a request by Fortune for comment.
Things ultimately came to a head, Toner said, after she co-published a paper in October of last year that cast Anthropic’s approach to AI safety in a better light than OpenAI’s, enraging Altman.
“The problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board, so it was another example that just like really damaged our ability to trust him,” she continued, adding that the behavior coincided with discussions in which the board was “already talking pretty seriously about whether we needed to fire him.”
But over the past years, safety culture and processes have taken a backseat to shiny products.
— Jan Leike (@janleike) May 17, 2024
Taken in isolation, those and other disparaging remarks Toner leveled at Altman could be dismissed as sour grapes from the ringleader of a failed coup. The pattern of dishonesty she described, however, echoes similarly damaging accusations from a former senior AI safety researcher, Jan Leike, as well as Scarlett Johansson.
Attempts to self-regulate doomed to fail
The Hollywood actress said Altman approached her with a request to use her voice for OpenAI’s latest flagship product—a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the movie Her. When she refused, she suspects, he blended in a likeness of her voice anyway, violating her wishes. The company disputes her claims but agreed to pause the voice’s use regardless.
We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.
First, we have… https://t.co/djlcqEiLLN
— Greg Brockman (@gdb) May 18, 2024
Leike, for his part, served as joint head of the team responsible for creating guardrails to ensure humanity can control superintelligent AI. He left this month, saying it had become clear to him that management had no intention of devoting the promised resources to his team, and left a scathing rebuke of his former employer in his wake. (On Tuesday he joined Anthropic, the same OpenAI rival Toner had praised in October.)
Once key members of its AI safety staff had scattered to the winds, OpenAI disbanded the team entirely, consolidating control in the hands of Altman and his allies. Whether those charged with maximizing financial results are best entrusted with implementing guardrails that may prove a commercial hindrance remains to be seen.
Although some staffers had their doubts, few besides Leike chose to speak up. Reporting by Vox earlier this month revealed that a key factor behind that silence was an unusual nondisparagement clause that, if broken, would void an employee’s vested equity in perhaps the hottest startup in the world.
When I left @OpenAI a little over a year ago, I signed a non-disparagement agreement, with non-disclosure about the agreement itself, for no other reason than to avoid losing my vested equity. (Thread)
— Jacob Hilton (@JacobHHilton) May 24, 2024
This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he voluntarily sacrificed his share of equity in order not to be bound by the exit agreement. Altman later confirmed the validity of the claims.
“Although we never clawed anything back, it should never have been something we had in any documents or communication,” he posted earlier this month. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”
Toner’s comments come fresh on the heels of her op-ed in The Economist, in which she and former OpenAI director Tasha McCauley argued that the evidence shows no AI company can be trusted to regulate itself.
in regards to recent stuff about how openai handles equity:
we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
there was…
— Sam Altman (@sama) May 18, 2024
“If any company could have successfully governed itself while safely and ethically developing advanced AI systems it would have been OpenAI,” they wrote. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”