Tom’s Hardware
Anton Shilov

OpenAI reportedly builds custom AI chips as it embraces AMD — company also abandons plans to build its own fabs

OpenAI is proceeding with the development of its first custom AI chip in partnership with Broadcom and expects to have it manufactured by TSMC. However, according to Reuters, the company no longer intends to spearhead the construction of its own fab network. In the meantime, it continues to add more powerful chips from AMD and Nvidia to its fleet.

AI inference chips incoming, no more fab plans

In a bid to reduce its reliance on Nvidia, OpenAI initially considered developing its own chips for both training and inference and then facilitating the construction of a dozen fabs (operated by prominent foundries like TSMC and Samsung Foundry), but high costs and long timelines made the plan impractical. Instead, OpenAI has prioritized designing a custom AI inference chip together with Broadcom and producing it at TSMC. For now, the company will keep using GPUs from Nvidia and AMD for training.

High-demand AI GPUs like Nvidia's H100 and H200 are used by virtually everyone to train large language models, which is why they are so hard to obtain. Demand for AI inference chips, meanwhile, is projected to grow as more AI applications reach the market. OpenAI's custom-designed inference chip is slated for release by 2026; according to Reuters, that timeline could shift based on project needs, but the focus is on inference tasks that enhance real-time AI responses.

To support this chip development effort, OpenAI has assembled a team of around 20 engineers led by Thomas Norrie and Richard Ho, specialists who previously worked on Google's Tensor Processing Units (TPUs). The team is key to moving forward with the in-house design, which could allow for greater customization and efficiency.

With this move, OpenAI follows the same path as Amazon Web Services, Google, Meta, and Microsoft, all of which already field chips for AI or general-purpose workloads, some of them co-developed with Broadcom.

OpenAI to diversify AI hardware supply chain

In addition to its in-house custom silicon strategy, OpenAI is diversifying its hardware suppliers to reduce its dependency on Nvidia, which dominates the AI GPU and AI training hardware markets. The company plans to deploy AMD's Instinct MI300X accelerators via Microsoft's Azure cloud platform, which will somewhat diversify its fleet.

Despite ChatGPT's huge popularity, OpenAI projects a $5 billion loss this year against $3.7 billion in revenue, implying roughly $8.7 billion in operating expenses for cloud services, electricity, and hardware. Diversifying suppliers should help the company cut its hardware costs, and its custom chips are meant to reduce power consumption, but the latter will not happen before 2026.

While OpenAI is pursuing partnerships to broaden its hardware supply base, it is also mindful not to disrupt its relationship with Nvidia, as the green company continues to develop the industry's highest-performing GPUs for AI. As a result, OpenAI is poised to remain dependent on Nvidia if it wants to train the industry's best AI models.

Nvidia's next-generation Blackwell GPUs for AI and HPC are poised to offer significant performance improvements over the existing Hopper GPUs, enabling companies like OpenAI to train even more sophisticated AI models. However, Blackwell GPUs are also more power-hungry than Hopper products. So while their total cost of ownership may be lower than that of their predecessors once performance is factored in, they will be costlier to run hour for hour, increasing OpenAI's power expenses.
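
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures in it (the throughput ratio, wattages, and electricity price) are hypothetical placeholders, not published Hopper or Blackwell specifications; the sketch only illustrates how a chip can cost more per hour to run yet less per unit of work.

```python
# Back-of-the-envelope perf-per-watt comparison. Every figure here is a
# hypothetical placeholder, not an actual Hopper/Blackwell specification.

def energy_cost_per_unit_work(relative_throughput: float,
                              power_watts: float,
                              price_per_kwh: float = 0.10) -> float:
    """Electricity cost to complete one unit of training work."""
    hours_per_unit = 1.0 / relative_throughput      # a faster chip needs fewer hours
    kwh_per_unit = (power_watts / 1000.0) * hours_per_unit
    return kwh_per_unit * price_per_kwh

# Baseline chip: 1.0x throughput at an assumed 700 W.
baseline = energy_cost_per_unit_work(1.0, 700)
# Successor chip: an assumed 2.5x throughput at an assumed 1000 W.
successor = energy_cost_per_unit_work(2.5, 1000)

print(f"baseline:  ${baseline:.3f} per unit of work")   # $0.070
print(f"successor: ${successor:.3f} per unit of work")  # $0.040
```

On these assumed numbers, the successor draws about 43% more power per hour but completes 2.5 times the work, so its energy cost per training job is lower. That is exactly the gap between a higher running cost and a lower total cost of ownership.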
