Datacenter Can Carry Nvidia Through The Rough Spots

After a decade and a half of ceaseless and focused work, Nvidia has created a modern compute platform, and a unique one at that. And while the collapse of the PC market and the Dot Coin Bust have not done its financials any favors in recent quarters, Nvidia’s datacenter business is clipping along despite the economic uncertainties out there on Earth.

In fact, that Nvidia datacenter business seems poised to expand in the coming years thanks to its entry into CPUs and DPUs, the need for high bandwidth networking, and the ongoing adoption of GPU compute for HPC, AI, and now data analytics workloads. And all this despite increasing competition in GPUs and already fierce competition in CPUs.

The trajectory of that datacenter business is clear, and made even more dramatic by the drop in sales for GPUs dedicated to gaming and professional visualization that continued in the third quarter of fiscal 2023, which ended in October. Take a look:

For those of you who like the raw data, too, here are the sales for the last seven quarters by Nvidia division:

The channel has been flooded with GPUs that are no longer used to mine Ethereum cryptocurrency at the same time that the voracious appetite for new PCs – caused by the coronavirus pandemic – has waned.

This is an economic pincer for the GPU businesses of both AMD and Nvidia, and the white-knuckle drop is obviously very hard on the 22,500 employees of Nvidia, starting at the top with co-founder and chief executive officer Jensen Huang. That Nvidia employee count is up 63 percent from the start of the pandemic. While the tech sector in the United States has laid off around 67,000 workers this year so far, there is no such talk at Nvidia. The GPU maker has a balance sheet that is plenty strong enough to ride this out and get to the other side when gaming and professional visualization sales recover.

The future revenue streams from Nvidia’s datacenter business – and therefore its ability to ride out the gaming and pro viz downturns – may hinge on the pricing that Nvidia decides to set for its “Hopper” H100 GPU accelerators, which have been shipping in volume since September, and its “Grace” Arm server CPUs, which will have production samples in Q1 2023 and ramp in the first half of 2023. If Nvidia charges too much – as IBM did with its ES/9000 mainframes in September 1990 when a recession was underway and as Sun Microsystems did with its UltraSparc-III systems in September 2001 as another recession was underway – then this can drive customers into the loving arms of competitors.

But as we have pointed out before, you make the money while you can and cut prices when you must. IBM and Sun did that with their systems, and we see no reason to believe that Nvidia will behave any differently with its Hopper GPUs and Grace CPUs. Back in May, we used pricing and performance history to guess where the H100 prices might land, and have subsequently revised those upwards. We would not be surprised to see the SXM5 version of the Hopper GPU sell for $25,000, with the PCI-Express version being somewhat lower, maybe $19,500. That is somewhere between 3X and 6X the performance for around 2.5X the price.
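The price/performance claim above is easy to sanity check. Here is a minimal sketch of that math in Python, where the $10,000 A100 street price is our illustrative assumption (Nvidia does not publish list prices for these accelerators) and the $25,000 H100 figure is the estimate from the article:

```python
# Back-of-envelope price/performance math for H100 versus A100.
# The A100 unit price is an assumption for illustration; the H100
# price and the 3X-6X speedup range are the article's estimates.
a100_price = 10_000   # assumed A100 street price, US dollars
h100_price = 25_000   # estimated H100 SXM5 price from the article

price_ratio = h100_price / a100_price   # works out to 2.5X

for speedup in (3.0, 6.0):   # workload-dependent generational speedup
    perf_per_dollar = speedup / price_ratio
    print(f"{speedup:.0f}X the performance at {price_ratio:.1f}X the price "
          f"-> {perf_per_dollar:.1f}X better performance per dollar")
```

Even at the low end of the speedup range, the H100 comes out ahead on performance per dollar under these assumptions, which is why Nvidia can hold the price umbrella high.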

Nvidia has built a platform with 35,000 customers – that is the first time we have seen this number, but here it is:

And this composite chart, used in Nvidia’s most recent financial briefing deck, is also interesting, particularly the column chart on the left:

That bar chart shows sales of Nvidia compute GPUs to the hyperscalers and cloud builders over the first eight quarters that these devices were available. The V100 did 2.7X more revenue than the P100 among these customers in the first two years after their respective launches, and the A100 did 3X the revenue compared to the V100. Considering the appetite for GPU compute, it is not hard to believe that the H100 will do more than 3X the revenue of the A100 in its first eight quarters – provided Nvidia can keep supply feeding demand. We will find out just how hungry the datacenter is for GPUs over the next eight quarters.
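The compounding in that chart stacks up quickly. Since Nvidia does not disclose the absolute dollar figures behind those bars, here is the generational math normalized to the P100 at 1.0, with the H100 multiple being speculation on our part rather than a reported number:

```python
# Normalized first-eight-quarter compute GPU revenue among hyperscalers
# and cloud builders, by generation. Absolute dollars are not disclosed,
# so the P100 is pegged at 1.0 as the baseline.
p100 = 1.0
v100 = p100 * 2.7   # V100 did 2.7X the P100, per Nvidia's chart
a100 = v100 * 3.0   # A100 did 3X the V100
h100 = a100 * 3.0   # speculative: assumes H100 repeats the A100's 3X multiple

print(f"V100: {v100:.2f}X the P100")
print(f"A100: {a100:.2f}X the P100")
print(f"H100 (if history repeats): {h100:.2f}X the P100")
```

If the pattern holds, the H100 would do on the order of 24X the P100's revenue in its first two years – which is the kind of compounding that makes the gaming slump survivable.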

The growth rate for the Datacenter division has slowed, but we think this has more to do with the H100 ramp, which didn’t really start until September, halfway through Nvidia’s Q3 of fiscal 2023, than it does with any kind of waning demand. Nvidia has had Q3s that tripled datacenter revenue (Q3 F2017), that doubled it (Q3 F2018), and one in between (Q3 F2021). It had Q3s that went negative (Q3 F2016 and Q3 F2020), and ones that were up a reasonable amount considering the law of large numbers. Q3 F2019 was up 58.1 percent and Q3 F2022 was up 54.5 percent. Against this, the 30.6 percent growth in Q3 F2023 for the Datacenter division is not stellar, but again it is the beginning of the H100 ramp and there are still supply chain constraints holding things back.

With that 30.6 percent growth in Q3, Nvidia raked in $3.83 billion in sales, up seven-tenths of a point sequentially and setting a new high bar for datacenter sales, helped no doubt by sales of Quantum-2 InfiniBand and Spectrum-4 Ethernet switching and perhaps even a contribution from the BlueField-2 DPUs. The fact that Nvidia customers are willing to make do with A100 accelerators is certainly helping keep the money rolling in for Nvidia, we think, and most big supercomputer deals we see for Nvidia GPUs these days seem to have a mix of the devices – or are based on a combination of AMD CPUs and GPUs. (Ahem.) Customers no doubt would like to have all H100s, and we think there are supply issues that make this not possible. Why else would Meta Platforms have made such a big deal of buying a back-generation system that is only just now fully installed or of using Microsoft to assemble a virtual supercomputer also based on A100s?

In the quarter, Nvidia has had to deal with “macroeconomic challenges, new export controls, and lingering supply chain disruptions,” as Nvidia chief financial officer Colette Kress put it. Kress added that the year on year growth was driven by the hyperscalers and big cloud builders in the United States as well as a widening number of Internet companies that are building large language model, recommender system, and generative AI applications that run on Nvidia GPUs and that, we think in the latter case, are created with the Nvidia AI Enterprise software platform. The auto and energy businesses were also singled out as driving sales increases for datacenter products.

Which brings us to China. The export controls that the US government put in place in September to halt the sales of A100 and H100 GPUs in China were expected to have a negative impact of $400 million on the quarter, but by crimping the NVLink interconnect bandwidth of the A100 from 600 GB/sec down to 400 GB/sec with a new product dubbed the A800, Nvidia was largely able to keep most of that revenue it expected. The PCI-Express 4.0 x16 version of the A100 normally has 1.9 TB/sec of memory bandwidth, and it is unclear how much the interconnect crimp puts a governor on the effective floating point and integer performance of the A800 at scale. Probably a lot. And so, politicians not realizing that supercomputing applications are made to scale out have merely forced China to buy 2X or maybe even 3X the number of A800 GPUs because they can’t get a real A100, and maybe 6X to 9X the number of A800 GPUs because they can’t get a real H100.

Now here is the real boomerang effect: Because Nvidia has such a lock on its software stack and such good GPUs, and there is more demand than supply for all datacenter-class GPUs, imagine that Nvidia can charge the same amount – or even close to the same amount – for an A800 GPU as it was charging for an A100 GPU. Now, Chinese organizations will spend 2X or 3X or more to get the same performance, and Nvidia wins! Instead of losing $400 million, Nvidia might make $800 million or even $1.2 billion! And the Chinese HPC centers have to then pay enormous electric bills on datacenters that are 2X to 3X bigger than they might otherwise be. And if they were thinking about H100 GPUs, they have to spend maybe 6X to 9X as much!
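The arithmetic behind that boomerang is simple enough to sketch. This assumes, as the article does, that the A800 sells for the same unit price as the A100 and that the $400 million is the quarterly China revenue Nvidia expected to lose:

```python
# Sketch of the export-control boomerang: Chinese buyers need multiple
# governed-down A800s to match one A100's worth of scale-out work, and
# the A800 is assumed to carry the same unit price as the A100.
expected_hit = 400_000_000   # quarterly revenue Nvidia expected to lose

for multiplier in (2, 3):    # A800s needed per A100-equivalent, per the article
    revenue = expected_hit * multiplier
    print(f"Buying {multiplier}X the GPUs at the same unit price turns a "
          f"${expected_hit / 1e6:.0f}M loss into ${revenue / 1e6:.0f}M of sales")
```

That is how a $400 million hit flips into $800 million or $1.2 billion of sales – and the same multiplier logic applies to the power and cooling bills on the China side of the ledger.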

China will be stuck in the 2020 GPU market, and Nvidia can benefit handsomely from this. The same logic will hold for AMD Instinct MI200 series and Intel Max Series GPUs that are governed down. In fact, Intel may find more customers in China than in the United States and Europe for its low-end, half-stack Max Series GPU because indigenous GPU maker Biren Technology is going to have its own issues getting its chips out of Taiwan Semiconductor Manufacturing Co’s foundries because of other controls being put on Taiwan by the United States.

Crazy, isn’t it?

Across all of its divisions, Nvidia’s sales fell by 16.5 percent to $5.93 billion, and net income collapsed by 72.4 percent to $680 million. Nvidia burned $3.3 billion in the last quarter and another $3.9 billion in this quarter to pay its dividend, to ramp current products, and to keep investing in its future products, and now its cash hoard stands at $13.1 billion.

The company’s Compute & Networking group, which includes datacenter GPUs, DPUs and NICs, and switch revenues but which does not include some things categorized in the Datacenter division, rose by 26.7 percent to $3.82 billion. When Nvidia announced the deal to buy Mellanox Technologies several years ago, Mellanox was doing about $400 million a quarter, and these days, that business is doing roughly $800 million a quarter. It would be nice if Nvidia actually broke this out better – and ditto for datacenter GPUs versus boards and systems.

Nvidia is expecting a tiny bit of sequential growth for its Datacenter, Gaming, and Automotive divisions as it moves into Q4 of fiscal 2023. Perhaps this is the bottom for the gaming business and sales can resume their uphill climb back to the old heights soon.

3 Comments

  1. Appreciate the write up.

    re: “increasing competition in GPUs and already fierce competition in CPUs.”

    Fielding additional GPU SKUs IS in fact increasing in numbers, however whether those SKUs are actually competitive in AI/ML or not remains to be seen. There is little data supporting the idea GPU competitors put any Nvidia data center deal at risk. Software is integral and remains a huge differentiator. Until AMD or Intel can bring the ENTIRE solution – compilers, diagnostic tools, optimized libraries, and dev support (let alone 3rd party benchmarking) – it’s hard to evaluate them as anything more than just a PowerPoint deck. Not saying they won’t pick up some business, like Cerebras and Graphcore have. But without these essential components, any business they pick up will be nibbling around the edges rather than feasting on the main course.

    Grace CPU and its iterations are positioned as game changers with respect to traditional memory access and bandwidth. It’s all about keeping the Hopper appetite satiated. 2023 is going to be very interesting.

    • Yes, it was a typo, and I can say in retrospect that I had COVID brain bigtime at the time I wrote that, with a high fever. . . .
