The Guardian - UK
Technology
Nils Pratley

OpenAI’s directors have been anything but open. What the hell happened?

Sam Altman, back in post after a week, was originally dismissed for not being ‘consistently candid’, OpenAI’s board said. Photograph: Carlos Barría/Reuters

The OpenAI farce has moved at such speed in the past week that it is easy to forget that nobody has yet said in clear terms why Sam Altman – the returning chief executive and all-round genius, according to his vocal fanclub – was fired in the first place. Since we are constantly told, not least by Altman himself, that the worst outcome from the adoption of artificial general intelligence could be “lights out for all of us”, somebody needs to find a voice here.

If the old board judged, for example, that Altman was unfit for the job because he was taking OpenAI down a reckless path, lights-wise, there would plainly be an obligation to speak up. Or, if the fear is unfounded, the architects of the failed boardroom coup could do everybody a favour and say so. Saying nothing useful, especially when your previous stance has been that transparency and safety go hand in hand, is indefensible.

The original non-explanation from OpenAI was that Altman had to go because he had not been “consistently candid” with other directors. Not fully candid about what? A benign (sort of) interpretation is that the row was about the amount of time Altman was devoting to other business interests, including a reported computer chip venture. If that is correct, outsiders might indeed be relaxed: it is normal for other board members to worry about whether the boss is sufficiently focused on the day job.

Yet the whole purpose of OpenAI’s weird governance setup was to ensure safe development of the technology. For all its faults, the structure was intended to put the board of the controlling not-for-profit entity in charge. Safety came first; the interests of the profit-seeking subsidiary were secondary. Here’s Altman’s own description, from February this year: “We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety.”

The not-for-profit board, then, could close the whole show if it thought that was the responsible course. In principle, sacking the chief executive would merely count as a minor exercise of such absolute authority.

The chances of such arrangements working in practice were laughably slim, of course, especially when there was a whiff of an $86bn valuation in the air. You can’t take a few billion dollars from Microsoft, in exchange for a 49% stake in the profit-seeking operation, and expect it not to seek to protect its investment in a crisis. And if most of the staff – some of the world’s most in-demand workers – rise in rebellion and threaten to hop off to Microsoft en masse, you’ve lost.

Yet the precise reason for sacking Altman still matters. There were only four members of the board apart from him. One was the chief scientist, Ilya Sutskever, who subsequently performed a U-turn that he never explained. Another was Adam D’Angelo, chief executive of the question-and-answer site Quora, who, bizarrely, intends to transition seamlessly from the board that sacked Altman to the one that hired him back. Really?

That leaves the two departed women: Tasha McCauley, a tech entrepreneur, and Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology. What do they think? Virtually the only comment from either has been Toner’s whimsical post on X after the rehiring of Altman: “And now, we all get some sleep.”

Do we, though? AI could pose a risk to humanity on the scale of a nuclear war, Rishi Sunak warned the other week, echoing the general assessment. If the leading firm can’t even explain the explosion in its own boardroom, why are outsiders meant to be chilled? In the latest twist, Reuters reported on Thursday that researchers at OpenAI were so concerned about the dangers posed by the latest AI model that they wrote to the board. Those directors have some explaining to do – urgently.
