AI chipmaker Nvidia handily topped Wall Street sales targets in its highly anticipated quarterly earnings report — but the combination of sky-high expectations and revelations of a minor production snag involving the company’s next-generation Blackwell chips derailed investor enthusiasm and sent shares sliding.
Shares of Nvidia, the world's second most valuable public company with a market cap that has exceeded $3 trillion, were down nearly 7% in after-hours trading on Wednesday following the report.
The stock move, which wiped out more than $200 billion in market value, underscored the extent to which the Silicon Valley chipmaker, and red-hot expectations around the business potential of AI, have become a driving economic force.
Nvidia said revenue in the second quarter increased by more than 122% year-over-year, totaling $30 billion. That was well above the average analyst estimate of $28.9 billion compiled by Bloomberg. The results were driven by sales of Nvidia's Hopper GPUs, the company said. Strong demand for Nvidia's chips also boosted the bottom line, with the chipmaker delivering a gross profit margin of 75.1% and adjusted earnings per share of 68 cents (analysts were expecting 65 cents).
Chips based on the new Blackwell architecture are slated to ship to customers in the fourth quarter of the year, Nvidia said, in line with previously announced plans (albeit at the back end of the date range) to begin shipments in the second half of the year. But the company also acknowledged production challenges that it said had required a change to the "GPU mask to improve production yield."
Earlier this month, tech news site The Information reported that a design flaw with Blackwell would delay shipments by three months or more. Nvidia CEO Jensen Huang described the production problem as resolved on Wednesday during a conference call with analysts. "There were no functional changes necessary," Huang said.
The company said it expects to ship "several billion dollars in Blackwell revenue" in the fourth quarter of the year.
"Blackwell is going to be a complete game changer for the industry. And Blackwell is going to carry into the following year," Huang told analysts on the call.
The risk of a slowdown in AI spending
Once a designer of graphics accelerators for video gamers, Nvidia has turned its GPUs into vital components for powering generative AI services like OpenAI's ChatGPT and Google's Gemini. While Nvidia faces competition from rival chipmaker AMD and startups including Cerebras and Groq, the company currently controls 90% of the market for AI chips, according to analysts.
As such, Nvidia has been one of the biggest beneficiaries of the AI craze, as internet companies like Google, Meta, and Amazon spend tens of billions of dollars on the infrastructure to provide AI services. Nvidia's dominance has fueled a massive rally in the company’s stock, which has more than doubled this year and now represents nearly 7% of the S&P 500.
Still, the massive spending on AI infrastructure has fueled persistent worries about whether consumers and businesses will ultimately purchase enough AI services to justify the investments. And with Nvidia's business so concentrated among several large customers like Meta (which has boasted of plans to amass a stockpile of 350,000 Nvidia GPUs this year), a pullback in AI infrastructure spending could have a big impact on its results. In its 10-Q filing released along with its results on Wednesday, Nvidia reported that four unnamed customers accounted for 46% of total revenue in the second quarter.
Asked about these risks on Wednesday's conference call, Huang said the need for GPUs will not go away anytime soon. The industry race to create more advanced and powerful large language models requires ever more powerful AI chips, he said. And the demand among companies to incorporate AI services into their products and operations means cloud providers have no choice but to keep building out AI capabilities.
"If you just look at the world's cloud service providers, the amount of GPU capacity they have available, it's basically none," Huang said.