Nvidia is forming a new business unit that will design custom processors for a wide range of applications, including but not limited to artificial intelligence (AI) processors, Reuters reports citing nine sources. The list of potential clients includes automakers, large cloud service providers (CSPs), and telecom companies. The bespoke chip unit is meant to help Nvidia expand its business going forward.
The new unit is led by Vice President Dina McKinney, who was previously responsible for AMD's Cat-series CPU microarchitectures, parts of Qualcomm's Adreno GPU designs, and Marvell's infrastructure processors. It is designed to address the needs of automotive, consoles, datacenters, telecom, and other applications that could benefit from custom silicon. Nvidia did not confirm the existence of the new business unit, but McKinney's LinkedIn profile indicates that, as VP of Silicon Engineering, she is in charge of silicon aimed at 'cloud, 5G, gaming, and automotive,' which underscores the diverse nature of her work.
While all leading CSPs use Nvidia's A100 and H100 processors for AI and high-performance computing (HPC) workloads, many of them, including Amazon Web Services, Google, and Microsoft, are also deploying their own custom processors for AI and general-purpose needs. This allows them to optimize costs (there is no need to pay a premium to Nvidia), tailor their datacenters' capabilities, and tune performance and power consumption, saving considerable sums of money. Furthermore, by controlling silicon design, these companies can quickly add custom capabilities (such as new data formats) to their chips and protect their IP. As a result, while Nvidia's AI and HPC GPUs remain irreplaceable for some workloads, many others are already deployed on custom silicon. The trend towards custom silicon is broad and the market is growing rapidly, essentially letting CSPs eat into Nvidia's lunch.
The report says that Nvidia has engaged in preliminary talks with tech giants, including Amazon, Meta, Microsoft, Google, and OpenAI, to explore opportunities for creating custom chips, signaling a broadened focus beyond traditional off-the-shelf datacenter offerings.
Nvidia is particularly successful in addressing the needs of AI applications with its off-the-shelf A100 and H100 processors and their variations (e.g., A800, H800, rumored H20 DGX, etc.) as well as RTX-series graphics processors for client PCs and datacenters. The company's Mellanox connectivity and networking products are also in high demand among cloud service providers.
In the automotive market, however, sales of Nvidia's solutions have been lagging behind its money-making datacenter, gaming, and professional visualization businesses. To some degree, this is because many automakers are also looking at custom silicon to power their software-defined vehicles; while the Nvidia Drive platform is ahead of many rival developments, at least some carmakers would rather have their own highly customized platform for cost, competitive, and IP control reasons.
This approach not only opens new avenues for Nvidia but also puts it in direct competition with other custom chip designers such as AMD, Alchip, Broadcom, Marvell Technology, and Sondrel. Although these companies have plenty of experience, Nvidia holds a wealth of highly competitive IP, including CPU, GPU, AI, HPC, networking, and sensor-processing technologies. Offering some of this IP in custom packages could significantly expand Nvidia's total addressable market (TAM) and eventually increase its earnings.