OpenAI announced Tuesday that it's establishing a new safety committee, and also confirmed that it has begun training its next big model.
Why it matters: The company has seen a number of key departures in recent weeks, with several employees complaining that it has not been devoting promised resources to ensuring the long-term safety of its AI work.
Driving the news: OpenAI says it has established a new safety and security committee led by board chair Bret Taylor, along with fellow directors Adam D'Angelo and Nicole Seligman and CEO Sam Altman.
- A number of OpenAI technical and policy leads will round out the committee, including head of preparedness Aleksander Mądry, head of safety systems Lilian Weng, co-founder John Schulman, security chief Matt Knight and chief scientist Jakub Pachocki.
- That committee's first task, OpenAI said, will be to evaluate and improve "OpenAI's processes and safeguards" over the next 90 days.
- OpenAI also plans to retain and consult with a range of outside safety and security experts, including former cybersecurity official Rob Joyce and former top DOJ official John Carlin.
- OpenAI said the new committee is advisory, with the ability to make recommendations to the board.
OpenAI also used Tuesday's announcement to officially confirm that it has begun training its next large language model, though recent comments from both Microsoft and OpenAI had already suggested as much.
- OpenAI CTO Mira Murati told Axios in an interview earlier this month that a major update to the underlying model — i.e., the successor to GPT-4 — is due to be unveiled later this year.
- And at last week's Build conference in Seattle, Microsoft CTO Kevin Scott suggested that OpenAI's new model would be substantially larger than GPT-4.
- Scott likened the new model to a giant whale and compared GPT-4 to an orca. (He compared earlier models to sharks and other sea creatures, explaining that he didn't want to cite specific numbers.)
Context: OpenAI's moves come after the resignations of co-founder Ilya Sutskever and Jan Leike, who together led the company's long-term safety work, dubbed "superalignment." In a thread announcing his departure, Leike criticized OpenAI for not supporting his superalignment team's work.
- Policy researcher Gretchen Krueger also announced last week she was leaving OpenAI.
- Krueger said she decided to leave OpenAI before learning that Leike and Sutskever were departing, but added that she shares their concerns and has additional ones of her own.
Between the lines: OpenAI is clearly trying to reassure the world that it's taking its safety and security responsibilities seriously and not ignoring recent criticism.
- OpenAI co-founder Schulman has taken on an expanded portfolio as head of alignment science, according to a source familiar with the company's thinking.
- Schulman will be responsible for both short-term safety and longer-term "superalignment" research designed to ensure that future systems with greater-than-human capabilities operate according to human values and norms.
- OpenAI is consolidating this safety work within its research unit, but the source said the company believes the new structure will be more effective and plans to increase its investment in the area over time.
- The source stressed that OpenAI will address any valid criticisms of its work, and that the new work expands on commitments it has made to the White House and at a recent AI summit in Seoul.