OpenAI is making big changes to its Safety and Security Committee, which oversees the safety of AI as its capabilities grow, announcing that CEO Sam Altman will no longer be a member of the group.
Instead, the committee will operate as an independent board, and its powers will grow from merely making recommendations to having the authority to supervise safety evaluations of new AI models and to delay launches until safety concerns are addressed.
“We’re committed to continuously improving our approach to releasing highly capable and safe models, and value the crucial role the Safety and Security Committee will play in shaping OpenAI's future,” the company said in a statement.
The new committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and former EVP and General Counsel of Sony Nicole Seligman.
It’s a big change from just a few months ago, when Altman announced he would lead a new safety board, just weeks after he dismantled the company’s original one. Altman’s removal appears aimed at addressing concerns about potential conflicts of interest.
That decision followed the exit of several key members of the original safety committee, including co-founders Ilya Sutskever and Jan Leike. Leike was especially critical of OpenAI in his departure, accusing the company of neglecting “safety culture and processes” in favor of “shiny products.” He said at the time that he chose to leave because he had “been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.”