Barchart
Caleb Naysmith

New AI Chip Leaves Nvidia, AMD, and Intel in the Dust with 20x Faster Speeds and Over 4 Trillion Transistors

A game-changing shift is underway in the artificial intelligence hardware industry, driven by a surprising new player: Cerebras Systems. The California-based startup recently unveiled Cerebras Inference, a cutting-edge solution reportedly up to 20 times faster than Nvidia (NVDA) GPUs, drawing attention across the tech landscape.

Cerebras’ core innovation, the Wafer Scale Engine, now in its third generation, powers the new Cerebras Inference system. This enormous chip packs 44GB of on-chip SRAM and requires no external memory, eliminating a key bottleneck found in traditional GPU setups. By addressing memory bandwidth limitations, Cerebras Inference achieves impressive speeds, processing 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, setting a new performance standard in the industry.
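To make those throughput figures concrete, the short sketch below converts tokens per second into wall-clock time for a single response. The Cerebras numbers are the ones reported above; the GPU baseline is a hypothetical value derived from the “up to 20 times faster” claim, not a measured Nvidia result, and the response length is an assumption.

    # Back-of-envelope: what the reported throughput means for response time.
    # Cerebras figures are from this article; the GPU baseline is purely
    # illustrative, inferred from the "up to 20 times faster" claim rather
    # than from any published Nvidia benchmark.

    reported_tokens_per_sec = {
        "Cerebras Inference, Llama3.1 8B": 1800.0,
        "Cerebras Inference, Llama3.1 70B": 450.0,
        "Illustrative GPU baseline, 70B (450 / 20)": 22.5,
    }

    completion_tokens = 500  # a typical multi-paragraph answer (assumption)

    for system, tps in reported_tokens_per_sec.items():
        seconds = completion_tokens / tps
        print(f"{system}: {completion_tokens} tokens in ~{seconds:.1f} s")

On those assumptions, a 500-token answer arrives in roughly 0.3 seconds on the 8B model and about 1.1 seconds on the 70B model, versus over 20 seconds for the illustrative baseline.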

For investors and tech enthusiasts, comparing Cerebras with established chipmakers like Nvidia, Advanced Micro Devices (AMD), and Intel (INTC) is becoming increasingly relevant. While Nvidia has traditionally led the AI hardware space with its advanced GPU solutions, Cerebras’ disruptive technology presents a formidable alternative. Meanwhile, AMD and Intel, both long-standing players in the chip industry, may also face increased competition as Cerebras gains traction in high-performance AI applications.

Cerebras Chips vs. Nvidia: A Technical Comparison

When comparing Cerebras and Nvidia, several crucial factors stand out, including design, performance, application suitability, and potential market impact.

Architectural Design

  • Cerebras: The Wafer Scale Engine from Cerebras is unique: built on a single, massive wafer with approximately 4 trillion transistors and 44GB of on-chip SRAM, it eliminates reliance on external memory, bypassing the memory bandwidth constraints of conventional architectures. Cerebras aims to provide the largest, most powerful chip, one that can house and manage enormous AI models directly on the wafer, significantly reducing latency (a rough sizing sketch follows this list).
  • Nvidia: Nvidia’s architecture, meanwhile, uses a multi-die approach in which several GPU dies are connected via high-speed interconnects such as NVLink. This setup, showcased in products like the DGX B200 server, provides a modular and scalable solution, though it requires intricate coordination between multiple chips and memory systems. Nvidia’s GPUs, refined over years, are optimized for both AI training and inference tasks, maintaining a competitive edge in versatility.
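As a sanity check on what 44GB of on-chip memory buys, a model’s weights alone occupy roughly its parameter count times the bytes stored per parameter. The sketch below is a back-of-envelope estimate under stated assumptions: it counts only weights, ignores activations and KV cache, and says nothing about how Cerebras actually maps models onto its hardware.

    # Rough weight-memory footprint: parameter count * bytes per parameter.
    # Illustrative only -- ignores activations, KV cache, and whatever
    # partitioning scheme Cerebras actually uses in deployment.

    GIB = 1024 ** 3
    ON_CHIP_SRAM_GIB = 44  # Wafer Scale Engine figure cited above

    models = {"Llama3.1 8B": 8e9, "Llama3.1 70B": 70e9}
    precisions = {"FP16 (2 bytes/param)": 2, "INT8 (1 byte/param)": 1}

    for name, params in models.items():
        for prec, nbytes in precisions.items():
            gib = params * nbytes / GIB
            verdict = "fits in" if gib <= ON_CHIP_SRAM_GIB else "exceeds"
            print(f"{name} at {prec}: ~{gib:.0f} GiB ({verdict} 44 GiB SRAM)")

By this crude measure, the 8B model’s weights (roughly 15 GiB at FP16) fit comfortably on a single wafer, while the 70B model’s would not at these precisions, suggesting larger models require some form of partitioning; the company has not detailed its deployment in this article.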

Performance

  • Cerebras: In AI inference tasks, Cerebras Inference shines, generating output tokens reportedly up to 20 times faster than Nvidia’s comparable solutions. Integrating memory and processing on a single wafer enables high-speed data access without the delays associated with chip-to-chip data transfers.
  • Nvidia: While Nvidia may not match Cerebras’ raw speed for inference tasks, its GPUs are versatile workhorses across multiple applications, from gaming to complex AI training. Nvidia’s strength lies in its robust ecosystem and mature software stack, making its GPUs well-suited for a wide range of AI tasks and beyond.

Application Suitability

  • Cerebras: Cerebras chips are especially suitable for enterprises with large-scale AI models requiring ultra-fast processing, such as natural language processing and deep learning inference. This solution is ideal for organizations that prioritize minimizing latency and need real-time processing of large datasets.
  • Nvidia: Nvidia’s GPUs are more adaptable, capable of handling a broad range of tasks, from video game graphics to advanced AI model training and simulations. This versatility makes Nvidia a reliable choice for diverse sectors, not solely those focused on AI.

Conclusion

Cerebras offers standout performance in specific, high-demand AI tasks, while Nvidia excels with its versatility and robust ecosystem. The choice between Cerebras and Nvidia ultimately depends on particular needs: Cerebras could be an optimal choice for organizations handling extremely large AI models where inference speed is paramount. On the other hand, Nvidia continues to be a strong competitor across various applications, backed by its flexible hardware and comprehensive software support.

On the date of publication, Caleb Naysmith did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. For more information please view the Barchart Disclosure Policy here.