OpenAI has announced the formation of a safety and security committee as part of its ongoing efforts to prioritize safety in its AI projects and operations. The committee will provide guidance to the board on critical safety and security decisions related to the company's initiatives.
This development comes in the wake of recent controversies surrounding AI safety at OpenAI. The company faced scrutiny after a departing researcher raised concerns that safety practices were being overshadowed by the pursuit of new products. Around the same time, OpenAI's co-founder and chief scientist resigned, and the team focused on long-term AI risks was disbanded.
Despite these challenges, OpenAI has revealed that it is training a new AI model to succeed the existing GPT-4 system powering its ChatGPT chatbot. The company asserts that its AI models are at the forefront of both capability and safety within the industry.
The safety committee comprises key figures within OpenAI, including CEO Sam Altman and board chair Bret Taylor, alongside technical and policy experts from the company. Other board members, including the CEO of Quora and a former Sony general counsel, round out its membership.
The committee's first task is to evaluate and further develop OpenAI's safety processes and safeguards. Within 90 days, it will present its recommendations to the full board, after which OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.
These developments underscore OpenAI's stated commitment to a transparent, safety-focused environment. The company says it welcomes continued discussion of AI safety as it moves forward with its AI initiatives.
As OpenAI continues to advance its AI capabilities, the establishment of the safety and security committee signals a proactive approach to addressing safety concerns and upholding ethical standards in AI development.