Windows Central
Kevin Okemwa

Sam Altman re-prioritizes safety processes at OpenAI after it seemingly took a backseat for 'shiny products'

Sam Altman.

What you need to know

  • OpenAI CEO Sam Altman recently highlighted new safety updates for the company.
  • The ChatGPT maker will allocate up to 20% of its computing resources to safety processes.
  • The company will give the US AI Institute early access to its next-gen model "to push forward the science of AI evaluations."

OpenAI CEO Sam Altman highlighted new updates for the company's safety policies. The top executive indicated that the ChatGPT maker is living up to its promises and will allocate up to 20% of its computing resources to safety processes across its tech stack.

Additionally, Altman disclosed that OpenAI has been working closely with the US AI Safety Institute and has agreed to grant the institute early access to its next-gen model "to push forward the science of AI evaluations."

Finally, the top executive asked all current and former OpenAI employees to openly raise concerns about the company's trajectory and product development.


OpenAI may be 'safer,' but is it enough?

Hands grasping the planet Earth in a pixel art style with OpenAI logo (Image credit: Microsoft Designer)

Will generative AI lead to the end of humanity? Is AI safe and private? These are some of the questions lingering in concerned users' minds as the technology becomes more prevalent and advanced, with companies like OpenAI, Microsoft, and Google at the forefront. 

AI has been under fire for several reasons, including copyright infringement, high water and power consumption, and more. 

Days after launching its GPT-4o model, OpenAI lost several members of its safety and superalignment teams. A former staffer disclosed that he left the ChatGPT maker after repeatedly disagreeing with top management over core priorities for next-gen models, including safety, preparedness, and monitoring.

The staffer raised a critical issue regarding OpenAI's safety priorities, stating that the company prioritizes shiny products while safety processes take a backseat. Around the same time, more former OpenAI employees came forward with intricate details about the company's operations.

However, the revelations were short-lived. A report disclosed that OpenAI employees are subject to nondisclosure and non-disparagement agreements that prevent them from criticizing the company or its operations even after leaving. Even admitting that they signed the agreements is considered a violation of the NDA.

This seemingly caused employees to remain tight-lipped about the company's operations or risk losing their vested equity, with one former employee indicating that working for OpenAI felt like being on the 'Titanic of AI.'

Sam Altman admitted the clause was part of OpenAI's non-disparagement terms but said it has since been voided. He called on current and former employees to raise concerns about the company's trajectory "and feel comfortable doing so," as their vested equity will remain untouched.
