Windows Central
Technology
Kevin Okemwa

OpenAI forms a new safety team led by CEO Sam Altman and announces it’s testing a new AI model (maybe GPT-5)

Satya Nadella and Sam Altman at OpenAI Dev Day.

What you need to know

  • OpenAI has a new safety team after its superalignment team was disbanded.
  • The new team will focus on ensuring OpenAI's technological advances meet critical safety and security standards.
  • OpenAI announced that a new AI model is in the testing phase but didn't specify which model it is or when it will become broadly available.

OpenAI seemingly dissolved its superalignment team after multiple members departed for several reasons, including the firm prioritizing "shiny products" over safety measures. However, the company has formed a new safety team with CEO Sam Altman at the helm alongside Adam D'Angelo and Nicole Seligman (who also serve as OpenAI board members).

The safety team's mandate is to ensure that OpenAI's technological advances in the AI landscape meet critical safety and security standards. The team has been tasked with "evaluating and further developing OpenAI's processes and safeguards."

Consequently, the team will present its findings to OpenAI's board, which will then review them and decide how best to implement the team's safety recommendations.

The ChatGPT maker also confirmed that it's in the testing phase of a new AI model. However, the company didn't indicate whether it's the "really good, like materially better" GPT-5 model (if that's what it'll be called).

Prioritizing safety over shiny products?

(Image credit: Ben Wilson | Windows Central)

Multiple top executives left OpenAI shortly after it launched its new flagship GPT-4o model. Most of the employees who left the hot startup were part of the firm's superalignment team, including Jan Leike, who led the alignment department.

Leike indicated he'd joined the firm because he thought it was the best place in the world to research "how to steer and control AI systems smarter than us." However, he repeatedly disagreed with top executives over core priorities for next-gen models, including security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and more.

According to the former head of alignment, it was difficult for OpenAI to address some of the issues raised, making him feel like the company wasn't on the right path. Instead, the company seemed more focused on "shiny products," while safety culture and processes had taken a back seat. 

Leike has since announced that he's joined Anthropic, where he plans to continue his superalignment mission, focusing specifically on "scalable oversight, weak-to-strong generalization, and automated alignment research."
