International Business Times UK
Daniel Elliot

OpenAI Debacle Should Raise Red Flags And Shed Light On The Company's Opaque Decision-Making Processes

OpenAI CEO Sam Altman sheds light on the company's plan to remain private. (Credit: Wikimedia Commons)

The surprise sacking of OpenAI's CEO Sam Altman was followed by a near mutiny at the company and his reinstatement.

It all started when a group of staff researchers sent a letter to the board of directors, alerting them to a significant AI breakthrough they believed posed a risk to humanity, as reported by Reuters. This unpublicized letter and AI algorithm played a pivotal role in the board's decision to remove Altman.

The board claimed he was "not consistently candid in his communications with the board". Before his reinstatement late Tuesday, over 700 employees had threatened to resign and join Microsoft, a major backer of OpenAI, in support of Altman. Microsoft has invested about $13bn in the startup to date.

The situation at OpenAI has brought to light the company's opaque decision-making processes. The development of advanced AI technologies is controlled by a select, secretive group working in isolation. Other companies operating the same way would be well-advised to examine this case and adjust accordingly.

Currently, the impact of personnel changes at OpenAI on ChatGPT or DALL-E is unclear, as there's no public oversight of these programs, which is the chief complaint among critics. Unlike smartphones, where software updates and their effects are transparently communicated, AI program updates lack such clarity, raising concerns.

This situation should prompt broader inquiries, extending well beyond the internal staffing matters of a single company. Key questions include: Who are the individuals shaping our technological future, and what principles guide their decision-making?

Moreover, how should external entities – such as governments, non-technology sectors, international coalitions and regulatory agencies – act to curb the potential negative impacts of AI innovations?

There's a gap in public quality control for AI technologies. While organisations conduct their own tests for specific use cases, there's a need for continuous, standardised testing of tools like ChatGPT to assess and mitigate risks.

Such a system would lessen reliance on the company itself. For now, the hope rests on the expertise of the developers behind these AI tools. This is worrying for many who don't trust the developers or the companies themselves to decide what is or is not healthy for public consumption and usage.

Jill Filipovic noted in a CNN article that OpenAI "has already reportedly invented an AI technology so dangerous they will never release it" – but they also won't tell reporters or the public exactly what it is. This dynamic – a potentially dangerous technology developed at extreme speed, largely behind closed doors – is partly to blame for Altman's firing.

The OpenAI board, according to CNN's David Goldman, worried that "the company was making the technological equivalent of a nuclear bomb, and its caretaker, Sam Altman, was moving so fast that he risked a global catastrophe".

A particular issue seemed to be Altman's efforts to make the tools behind ChatGPT available to anyone who wanted to make their own version of the chatbot. This could prove wildly disastrous, some board members worried.

And this is exactly the point.

The lesson to be learned from the OpenAI saga is that while the board could perhaps have handled the issue differently, its concerns were not without basis. A "nuclear" equivalent in AI is indeed concerning to all, and any company with the ability to effect such drastic changes, changes that could potentially harm the public or even the world order, needs to be reined in through some form of mechanism that prevents this sort of unchecked freedom.

The story of OpenAI is not over, of course, and it is important to watch what happens. As this issue runs its course, lessons will need to be learned, and new rules will need to be created and applied. Humanity is in new, uncharted territory, and it will take a concerted effort by all civilised nations to ensure that AI technology can do no harm.

At the same time, companies must also learn from OpenAI that there are limits to what boards, and companies themselves, can do. Free rein is never a healthy thing in the corporate world.

By Daniel Elliot

Daniel is a business consultant and analyst with experience working for government organisations in the UK and US. In his free time, he regularly contributes to International Business Times UK.
