The former OpenAI chief scientist whose failed November mutiny against CEO Sam Altman was as brief as it was spectacular is launching an AI company of his own.
Ilya Sutskever revealed on Wednesday he was teaming up with OpenAI colleague Daniel Levy and Daniel Gross, a former AI executive at Apple, to found Safe Superintelligence Inc., a moniker chosen to reflect its purpose.
“SSI is our mission, our name and our entire product roadmap, because it is our sole focus,” the three wrote in a statement on the U.S. startup’s barebones website. Building safe superintelligence, they went on to argue, was “the most important technical problem of our time.”
Artificial superintelligence, or ASI, is believed to be the ultimate breakthrough in AI, since experts predict machines will not stop developing once they reach the kind of general-purpose, human-level intelligence known as AGI.
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
Luminaries in the field, such as the computer scientist Geoffrey Hinton, believe ASI poses an existential danger to mankind, and building safeguards to align it with our interests as a species was one of Sutskever’s top missions at OpenAI.
His high-profile departure in May came almost six months to the day after he joined independent board directors Helen Toner, Tasha McCauley and Adam D’Angelo in removing Altman as CEO against the will of chair Greg Brockman, who immediately resigned.
Sutskever came to regret his role in briefly ousting Altman
The spectacular coup, which Toner recently blamed on a pattern of deception by Altman, threatened to tear the company apart. Sutskever quickly expressed his regret and reversed his position, demanding Altman be reinstated to prevent the potential downfall of OpenAI.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
In the aftermath, Toner and McCauley left the non-profit board, while Sutskever seemingly vanished from the public eye all the way up until he announced his departure last month.
In his resignation announcement, he implied he was leaving voluntarily to pursue a project “very personally meaningful to me” and promised to share details at a later, unspecified date.
His departure nonetheless set in motion events that quickly revealed deep governance issues that appeared to confirm the board’s initial suspicions.
First, Sutskever’s co-lead Jan Leike resigned, accusing the company of breaking its promise to give their AI safety team 20% of its compute resources. Later it emerged that OpenAI employees had been bound by watertight non-disparagement agreements that forbade them from criticizing the company after they left, on penalty of losing their vested equity.
Finally, actress Scarlett Johansson, who voiced an AI assistant in Spike Jonze’s 2013 sci-fi film Her, accused the company, through her lawyers, of effectively stealing her voice for its latest AI product. OpenAI denied the claim but pledged to change the voice anyway out of respect for her wishes.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
These incidents suggested OpenAI had abandoned its original purpose of developing AI that benefits all of humanity in favor of pursuing commercial success.
“The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety, people interested in profits like Sam Altman didn’t,” Hinton told Bloomberg last week.
A leader in the field since AI's Big Bang moment
Sutskever has long been one of the brightest minds in the field of AI, researching artificial neural networks that conceptually mimic the human brain in order to train computers to learn and abstract based on data.
In 2012, he teamed up with Hinton on the landmark development of Alex Krizhevsky’s deep neural network AlexNet, commonly considered AI’s Big Bang moment. It was the first machine learning algorithm to label images fed to it with high accuracy, revolutionizing the field of computer vision.
When OpenAI was founded in December 2015, Sutskever received top billing over co-chairs Altman and Elon Musk even though he was only research director. That made sense at the time, as it was formed originally as a non-profit that would create value for everyone rather than shareholders, prioritizing “a good outcome for all over its own self-interest”.
Since then, however, OpenAI has effectively become a commercial enterprise, in Altman’s words ‘to pay the bills’ for its compute-heavy operations. In the process it adopted a complicated structure with a new for-profit entity where returns were capped for investors like Microsoft and Khosla Ventures, but control remained with the non-profit board.
Altman called this convoluted governance necessary at the time in order to keep everyone on board. Recently The Information reported he sought to change OpenAI's legal structure, opening the door for a controversial IPO.
Sutskever’s new commercial venture dedicated to safe superintelligence will be based in Palo Alto, in Silicon Valley, and in Tel Aviv, Israel, in order to best recruit top talent.
“Our team, investors and business model are all aligned to achieve SSI,” they wrote, pledging there would be “no distraction by management overhead or product cycles”.
How he and his two co-founders aim to create ASI with robust guardrails while also paying the bills and earning a return for their investors was not immediately clear from the statement, however. Whether it, too, has a capped for-profit structure, for example, was not revealed.
They said only that the business model of Safe Superintelligence was designed from the outset to be “insulated from short-term commercial pressures.”