
OpenAI insiders' open letter calls for whistleblower protections

An open letter signed by a group of more than a dozen current and former employees of OpenAI and Google DeepMind calls for more transparency and better protections for whistleblowers inside companies developing advanced AI.

Why it matters: The AI industry is pushing its products into broad public use while deep concerns over their accuracy, safety and fairness remain unresolved.
What they're saying: "AI companies have strong financial incentives to avoid effective oversight," reads the open letter, titled "A Right to Warn about Advanced Artificial Intelligence."

  • It calls on companies building advanced AI systems to support "a culture of open criticism."
  • It also urges them "not to retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed."

The letter is signed by a number of current and former OpenAI employees, plus two signatories (one current, one former) from Google DeepMind. About half the signers are named; the rest are anonymous.

  • It's also "endorsed" by three pioneering AI researchers — Yoshua Bengio, Geoffrey Hinton and Stuart Russell.

Catch up quick: OpenAI has faced a string of recent controversies over agreements it asked departing employees to sign, and CEO Sam Altman had to backtrack from a policy that threatened to claw back stock option grants from employees who didn't cooperate.

The big picture: OpenAI was founded as a nonprofit committed to safe, public-spirited development of advanced AI. But since the runaway success of ChatGPT in 2022, the company, in partnership with Microsoft, has faced criticism for adopting a more classic Silicon Valley startup playbook.

The other side: Altman has long argued that the best way to develop AI safely is to put it into the public's hands to better find and correct its flaws early.

  • In a statement, an OpenAI spokesperson said, "We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology...."
  • The company also noted it has an anonymous "integrity hotline" and "a Safety and Security Committee led by members of our board and safety leaders from the company."
  • DeepMind did not immediately respond to a request for comment.

Editor's note: This story has been updated with comment from OpenAI.
