PC Gamer
Jorge Jimenez

OpenAI may have lost more than half a billion dollars over the past year

Every time you type a request into ChatGPT for cooking tips, TV recommendations, or help with your homework, it costs OpenAI money. But how much, exactly? Keeping the generative AI chatbot running could cost the tech company up to a million dollars daily.

The Information (via ComputerBase) reports that, partly because of the exorbitant costs of running ChatGPT, OpenAI is believed to have roughly doubled its losses to around $540 million last year. It's worth noting that these numbers come from analyst estimates rather than any up-to-date financial reports, so they can't be treated as 100% accurate. According to the report, though, some of the biggest expenses include those painfully high daily infrastructure costs and the company's habit of poaching top talent from Google and Apple.

According to a previous report, it takes immense computing power to fulfill the millions of daily queries on OpenAI's ChatGPT. The generative AI chatbot reportedly costs the company somewhere between $700,000 and $1 million daily, which at that rate works out to roughly $255–365 million in infrastructure costs over a full year.

Dylan Patel, chief analyst at SemiAnalysis, told The Information that most of the cost is "based on expensive servers they require" and that those estimates were made for the GPT-3 model, not the GPT-4 model OpenAI now runs. That makes the figures possibly quite conservative, given the extra complexity, and therefore cost, of each GPT-4 query.

Note that these estimates also predate the launch of OpenAI's paid ChatGPT Plus premium service, which went live this year and gives subscribers quicker responses and priority access during peak hours. OpenAI expects to reach $200 million in sales this year and predicts $1 billion next year, which could help offset some of those massive costs and maybe even turn that deficit around.

Jensen Huang, AI enthusiast and CEO of Nvidia (which is making bank off the GPUs that power those expensive servers), recently predicted that AI models will demand even more computing power over the next ten years. The Nvidia boss also said we should soon expect to see "AI factories all over the world" powering various AI technologies.

That means Microsoft's recent $10 billion investment in OpenAI couldn't have come at a better time. According to the report, OpenAI CEO Sam Altman is also looking to raise up to $100 billion over the next couple of years.

To help reduce these costs, Microsoft has spent the last four years working with AMD to produce its own in-house AI chip, Athena. The new chip could launch next year and should be more cost-efficient than running AI models on Nvidia GPUs.
