Tom’s Hardware
Anton Shilov

Elon Musk's xAI plans to build 'Gigafactory of Compute' by fall 2025 using 100,000 Nvidia H100 GPUs

Stock image of Elon Musk.

xAI, Elon Musk's AI startup, plans to build a massive supercomputer to enhance its AI chatbot, Grok, Reuters reports, citing The Information. The supercomputer, which Musk refers to as the 'Gigafactory of Compute,' is projected to be ready by fall 2025 and might involve a collaboration with Oracle. With this development, Musk aims to significantly surpass rival GPU clusters in both size and capability.

In a presentation to investors, Musk revealed that the new supercomputer will use as many as 100,000 of Nvidia's H100 GPUs, based on the Hopper architecture, making it at least four times larger than the largest existing GPU clusters, according to The Information. Nvidia's H100 GPUs are highly sought after in the AI data center chip market, and strong demand made them difficult to obtain last year. But they are no longer Nvidia's range-topping parts: the company is about to ship its H200 compute GPUs for AI and HPC applications and is prepping its Blackwell-based B100 and B200 GPUs for the second half of the year.

It is unclear why xAI decided to use essentially previous-generation technology for its 2025 supercomputer, but the substantial hardware investment reflects the scale of xAI's ambitions. Keep in mind that this is an unofficial report and the plans could change. Still, Musk reportedly holds himself 'personally responsible for delivering the supercomputer on time,' as the project is critical to developing the company's large language models.

xAI seeks to compete directly with AI giants like OpenAI and Google. Musk, who co-founded OpenAI, positions xAI as a formidable challenger in the AI space, which is exactly why he needs the upcoming supercomputer. Training the Grok 2 model required around 20,000 Nvidia H100 GPUs, and future iterations, such as Grok 3, will need as many as 100,000 GPUs, according to Musk.

Neither xAI nor Oracle commented on the collaboration when approached. This silence leaves some aspects of the partnership and the supercomputer project open to speculation. Nonetheless, Musk's presentation to investors underscores his commitment to pushing the boundaries of AI technology through substantial infrastructure investments and strategic partnerships.
