Fortune
Jeremy Kahn

OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say

OpenAI cofounder and CEO Sam Altman (Credit: Stefan Wermuth/Bloomberg via Getty Images)

In July 2023, OpenAI unveiled a new team dedicated to ensuring that future AI systems that might be more intelligent than all humans combined could be safely controlled. To signal how serious the company was about this goal, it publicly promised to dedicate 20% of its then-available computing resources to the effort.

Now, less than a year later, that team, which was called Superalignment, has been disbanded amid staff resignations and accusations that OpenAI is prioritizing product launches over AI safety. According to a half dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.

Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.

The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.

The company is currently facing a backlash over its use of a voice for its AI speech generation features that is strikingly similar to that of actress Scarlett Johansson. In that case, questions have been raised about the credibility of OpenAI's public statements that the similarity between the AI voice it calls "Sky" and Johansson's voice is purely coincidental.

Johansson says OpenAI cofounder and CEO Sam Altman approached her last September, when the Sky voice debuted, asking permission to use her voice. She declined. She says Altman asked again for permission to use her voice last week, just before a closely watched demonstration of its latest GPT-4o model, which used the Sky voice. OpenAI has denied using Johansson's voice without her permission, saying it paid a professional actress, whose name it says it cannot legally disclose, to create Sky. But Johansson's claims have now cast doubt on this, with some speculating on social media that OpenAI in fact cloned Johansson's voice or blended another actress's voice with Johansson's to create Sky.

OpenAI's Superalignment team had been set up under the leadership of Ilya Sutskever, the OpenAI cofounder and former chief scientist, whose departure from the company was announced last week. Jan Leike, a long-time OpenAI researcher, co-led the team. He announced his own resignation Friday, two days after Sutskever's departure. The company then told the remaining employees on the team—which numbered about 25 people—that it was being disbanded and that they were being reassigned within the company.

It was a swift downfall for a team whose work OpenAI had positioned less than a year earlier as vital for the company and critical for the future of civilization. Superintelligence is the idea of a future, hypothetical AI system that would be smarter than all humans combined. It is a technology that would lie even beyond the company’s stated goal of creating artificial general intelligence, or AGI—a single AI system as smart as any person.

Superintelligence, the company said when announcing the team, could pose an existential risk to humanity by seeking to kill or enslave people. “We don’t have a solution for steering and controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI said in its announcement. The Superalignment team was supposed to research those solutions.

It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next four years” to the effort.

But a half dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.

One source familiar with the Superalignment team's work said that there were never any clear metrics around exactly how the 20% amount was to be calculated, leaving it subject to wide interpretation. For instance, the source said the team was never told whether the promise meant "20% each year for four years" or "5% a year for four years" or some variable amount that could wind up being "1% or 2% for the first three years, and then the bulk of the commitment in the fourth year." In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of OpenAI's secured compute as of July 2023.
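The interpretive range the source described is wide. A minimal back-of-the-envelope sketch in Python makes the spread concrete; the secured-compute figure below is invented purely for illustration and is not an OpenAI number:

    # A minimal sketch of the pledge's ambiguity. The secured-compute
    # figure is hypothetical, invented purely for illustration.
    SECURED = 100_000  # hypothetical GPU-hours secured as of July 2023

    # Year-by-year fractions of the secured base under each reading of
    # "20% of the compute we've secured to date over the next four years."
    schedules = {
        "20% each year for four years": [0.20, 0.20, 0.20, 0.20],
        "5% a year for four years":     [0.05, 0.05, 0.05, 0.05],
        "back-loaded (bulk in year 4)": [0.01, 0.02, 0.02, 0.15],
    }

    for name, fractions in schedules.items():
        total = sum(SECURED * f for f in fractions)
        print(f"{name}: {total:,.0f} GPU-hours, "
              f"{total / SECURED:.0%} of the secured base")

Under the first reading, the team would receive four times as much compute over the four years as under the second, even though both can plausibly be described as honoring a "20%" pledge.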

OpenAI researchers can also make requests for what is known as “flex” compute—access to additional GPU capacity beyond what has been budgeted—to deal with new projects between the quarterly budgeting meetings. But flex requests from the Superalignment team were routinely rejected by higher-ups, these sources said.

Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests were being declined, but, the sources said, others at the company, including chief technology officer Mira Murati, were involved in making the decisions. Neither McGrew nor Murati responded to requests to comment for this story.

While the team did carry out some research (in December 2023 it released a paper detailing experiments in which a less powerful AI model was successfully used to control a more powerful one), the lack of compute stymied the team's more ambitious ideas, the sources said.

After resigning, Leike on Friday published a series of posts on X (formerly Twitter) in which he criticized his former employer, saying “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Five sources familiar with the Superalignment team's work backed up Leike’s account, saying that the problems with accessing compute worsened in the wake of the pre-Thanksgiving showdown between Altman and the board of the OpenAI non-profit foundation.

Sutskever, who was on the board, had voted to fire Altman, and was the person the board chose to give Altman the news. After OpenAI’s staff rebelled in response to the decision, Sutskever posted on X that he “deeply regretted” his participation in Altman’s firing. Ultimately, Altman was rehired, and Sutskever and several other board members involved in his dismissal stepped down from the board. Sutskever never returned to work at OpenAI following Altman’s rehiring, but did not formally leave the company until last week.

One source disputed the way the other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying the problems predated Sutskever's participation in the failed coup and had plagued the group from the get-go.

While there have been some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said this was not the case and that Sutskever had no access to the team’s work and played no role in directing the team after Thanksgiving.

With Sutskever gone, the Superalignment team lost the only person on the team who had enough political capital within the organization to successfully argue for its compute allocation, the sources said. 

In addition to Leike and Sutskever, OpenAI has lost at least six other AI safety researchers from different teams in recent months. One researcher, Daniel Kokotajlo, told news site Vox that he “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

In response to Leike’s comments, Altman and cofounder Greg Brockman, who is OpenAI’s president, posted on X that they were “grateful to [Leike] for everything he's done for OpenAI.” The two went on to write, “We need to keep elevating our safety work to match the stakes of each new model.”

They then laid out their view of the company’s approach to AI safety going forward, which would involve a much greater emphasis on testing models currently under development rather than trying to develop theoretical approaches to making future, more powerful models safe. “We need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities,” Brockman and Altman wrote, adding that “empirical understanding can help inform the way forward.”

The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been forced to sign separation agreements that include a strict non-disparagement clause that says the company can claw back their vested equity if they criticize the company publicly, or if they even acknowledge the clause’s existence. And employees have been told that anyone who refuses to sign the separation agreement will forfeit their equity as well.

After Vox reported on these separation terms, Altman posted on X that he had been unaware of that provision and was “genuinely embarrassed” by that fact. He said OpenAI had never attempted to enforce the clause and claw back anyone’s vested equity. He said the company was in the process of updating its exit paperwork to “fix” the issue and that any past employee concerned about the provisions in the exit paperwork they signed could approach him directly about it and it would be changed. 
