- OpenAI is reportedly developing its first custom AI chip with Broadcom
- The chip could be manufactured as soon as 2026
- The move could help reduce the costs of running OpenAI-powered apps
OpenAI is a step closer to developing its first AI chip, according to a new report – as the number of developers making apps on its platform soars alongside cloud computing costs.
The ChatGPT maker was first reported to be in discussions with several chip designers, including Broadcom, back in July. Now Reuters is claiming that a new hardware strategy has seen OpenAI settle on Broadcom as its custom silicon partner, with the chip potentially landing in 2026.
Before then, it seems OpenAI will be adding AMD chips to its Microsoft Azure system, alongside the existing ones from Nvidia. The AI giant's plans to make a 'foundry' – a network of chip factories – have been scaled back, according to Reuters.
The reason for these reported moves is to help reduce the ballooning costs of AI-powered applications. OpenAI's new chip apparently won't be used to train generative AI models (which is the domain of Nvidia chips), but will instead run the AI software and respond to user requests.
During its DevDay London event today (which followed the San Francisco version on October 1), OpenAI announced some improved tools that it's using to woo developers. The biggest one, the Realtime API, is effectively an Advanced Voice Mode for app developers, and this API now has five new voices with improved range and expressiveness.
Right now, three million developers from around the world are using OpenAI's API (application programming interface), but the problem is that many of its features are still too expensive to run at scale.
OpenAI says it's reduced the price of API tokens (in other words, how much it costs developers to use its models) by 99% since the launch of GPT-3 in June 2020, but there's still a long way to go – and this custom AI chip could be an important step towards making AI-powered apps cost-effective and truly mainstream.
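To put that 99% figure in perspective, here's a rough back-of-the-envelope sketch in Python. The prices and traffic numbers below are hypothetical placeholders, not OpenAI's actual rates; they're chosen only to show how the arithmetic scales for a developer paying per token.

```python
# Illustrative sketch: how a 99% cut in per-token pricing changes the
# economics of running an AI-powered app at scale.
# All figures are hypothetical, not OpenAI's real prices.

def monthly_token_cost(requests_per_month, tokens_per_request, price_per_million):
    """Cost of serving an app's monthly traffic at a given per-token price."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million

OLD_PRICE = 60.00             # hypothetical 2020-era price per 1M tokens ($)
NEW_PRICE = OLD_PRICE * 0.01  # after the reported 99% reduction

# A hypothetical mid-sized app: 100,000 requests a month, ~2,000 tokens each
old_cost = monthly_token_cost(100_000, 2_000, OLD_PRICE)
new_cost = monthly_token_cost(100_000, 2_000, NEW_PRICE)

print(f"Before: ${old_cost:,.2f}/month  After: ${new_cost:,.2f}/month")
# 200 million tokens a month: $12,000.00 falls to $120.00
```

Even with that reduction, a feature like always-on voice transcription multiplies token volume quickly, which is why per-request running costs still shape what developers can afford to ship.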
OpenAI-powered apps are coming
The sky-high costs of cloud AI processing are still a handbrake on developers building OpenAI's tools into their apps, but some startups have already taken the plunge.
The popular online video editor Veed plugs into several OpenAI models to offer features like automated transcripts and the ability to pick out the best soundbites from long-form videos. An AI-powered notepad called Granola also leverages GPT-4 and GPT-4o to transcribe meetings and send you follow-up tasks, without needing a meeting bot to join your call.
Away from consumer apps, a startup called Tortus is using GPT-4o and OpenAI's voice models to help doctors. Its tools can listen to doctor-patient chats and automate a lot of the admin like updating health records, while apparently also improving diagnosis accuracy.
Leaving aside the potential privacy and hallucination concerns of AI models, developers are clearly keen to tap into the power of OpenAI's tools – and there's no doubt that its low-latency, conversational voice mode has massive potential for customer service.
Still, while you can expect to be talking to one of OpenAI's voice models when calling a store or customer service line soon, those AI running costs could slow down the rate of adoption – which is why OpenAI is seemingly keen to develop its own AI chip sooner rather than later.