The Next Platform

Nvidia Sets The Datacenter Growth Bar Very High As Compute Sales Dip

The expectations for GenAI are unreasonably high and the pressure on Nvidia is tectonic.

As we listened to the call with Wall Street going over the financial results for Nvidia’s second quarter of fiscal 2026 with Jensen Huang, whom the entire world now knows but who is co-founder and chief executive officer of the company, and Colette Kress, who is chief financial officer, all anybody really wanted to know after taking a glance at the numbers was where the next quarter was going to be.

Nvidia is still breaking records in its datacenter revenues every quarter, but that growth is slowing and everyone wants to know if it will pick up again. Maybe it will, maybe it will not. The difference is several hundred billion dollars a year in revenue out in 2030 for the AI juggernaut, so you can imagine how everyone is hanging on every word that Huang says – and doesn’t say. Just like Fed chair Jay Powell, and it is arguable who has more impact on the economy at this point. It might be dead even. One of them has the best job security on Earth.

To start off our analysis of Nvidia’s quarter, which ended in July, we are just going to show four pictures first. They say a picture is worth a thousand words, which is about 5,000 to 6,000 characters, which is why we prefer text for a lot of complex ideas over images and video; a normal HD frame has just shy of 2 million pixels, and 4K and 8K imagery has tens of millions. But sometimes, pictures are better than words.

Here is the first image, which shows Nvidia revenues, net income, and cash between Q1 F2011 and now:

This is when Nvidia’s datacenter business was becoming interesting and on the verge of becoming “material” in the sense that the US Securities and Exchange Commission uses that word. The cash pile has been growing a little faster than quarterly revenues since the GenAI boom started, but it used to be much larger than quarterly revenues.

That downward blip in net income is a little concerning, but not surprising given the $4.5 billion writeoff for not being able to sell H20 GPU accelerators into China, a restriction imposed by the US government back in April. Things have bounced back to normal.

From fiscal 2011 through fiscal 2013, Nvidia had an average net income that was 14.3 percent of revenues, and very little of that black ink came from the datacenter. In the trailing two years of the GenAI boom, Nvidia has brought an average of 53.7 percent of revenue to the bottom line. Which, to use a technical term in finance, is freaking insane.

Now, the second picture, which breaks Nvidia down at the group level:

We only have data going back to fiscal 2020 for the Compute & Networking group, which was created in the wake of the acquisition of Mellanox Technologies and which was clearly one of the ten smartest things Nvidia ever did. (Nvidia has done a lot of smart things in its more than three decades of existence.)

Nvidia may have been founded as a graphics company, but it is clearly a datacenter compute and networking company, as our third picture shows:

Look at that Datacenter division go! That is an almost perfectly straight line, with a very steep bit at the beginning of fiscal 2024 and a bit of tapering off in the past two quarters. The Gaming division is still visible, but the Professional Visualization, Automotive, and OEM and IP divisions are barely even noise in the data when they were more material only five years ago.

And now for the fourth picture:

Uh-oh. Is that datacenter compute leveling off? Yes, and in fact, there is a 0.9 percent sequential decline from Q1 to Q2 for sales of CPUs and GPUs that comprise the Nvidia platform. Datacenter networking – meaning Quantum InfiniBand and Spectrum Ethernet scale out switches for linking nodes and racks into large scale clusters plus NVSwitch scale up switches for intra-node and intra-rack connectivity – grew by a whopping 97.7 percent to a record breaking $7.25 billion in the July quarter.

Nvidia said that the run rate for Spectrum-X Ethernet was now over $10 billion a year, which suggests revenues for the Spectrum-X line were in excess of $2.5 billion in Q2. That means, by definition, that InfiniBand plus NVSwitch interconnects drove around $4.7 billion in sales. (We are in the middle of adjusting our Nvidia financial model to break NVSwitch out separately from InfiniBand and Ethernet. We could not do that and get this story out in a timely fashion.) We do also know that InfiniBand switch revenues nearly doubled in the quarter, which suggests that at least some customers, for some pod sizes, are very keen to get the lowest latency along with high bandwidth for their AI clusters and are preferring InfiniBand.
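For those following along at home, the back-of-the-envelope arithmetic works like this – a sketch using only the figures cited above, with the quarterly Spectrum-X number inferred from the stated run rate rather than disclosed by Nvidia:

```python
# Split of Nvidia's Q2 F2026 datacenter networking revenue (in $ billions),
# inferring the Spectrum-X quarterly figure from its ">$10B a year" run rate.
networking_total = 7.25                        # record datacenter networking revenue in Q2
spectrum_x_run_rate = 10.0                     # stated annual run rate for Spectrum-X
spectrum_x_quarter = spectrum_x_run_rate / 4   # roughly $2.5B in the quarter
infiniband_plus_nvswitch = networking_total - spectrum_x_quarter  # the remainder

print(f"Spectrum-X (inferred): ${spectrum_x_quarter:.2f}B")
print(f"InfiniBand + NVSwitch: ${infiniband_plus_nvswitch:.2f}B")
```

That remainder lands at about $4.75 billion, which rounds to the $4.7 billion figure above; since the run rate was described as "over" $10 billion, the true Spectrum-X number is a bit higher and the remainder a bit lower.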

In other words: Networking is becoming a bigger part of the Nvidia platform cost at the same time that there is a dip in GPU compute revenues in the datacenter. We are not certain there is a causal or even connected relationship here, but if companies are operating on fixed budgets, then they might be cutting back on compute to cover the cost of more networking in their AI clusters. (The cost of NVSwitch was buried in the DGX and HGX system boards in prior generations, but NVSwitch is an independent component in some “Hopper” and “Blackwell” rackscale systems.)

The fact that Nvidia is dealing with a situation where China should be driving maybe $30 billion in sales this year and cannot due to export controls is not lost on Wall Street. But Huang telling the analysts on the call that Nvidia would have a “record breaking year” in fiscal 2026 and again in fiscal 2027 does not assuage the worries of anyone looking at that datacenter compute curve in the chart above.

We have said it before, and it bears repeating, that all platforms that are wildly successful have a similar year on year growth path each quarter. You start out and you have 300 percent or 200 percent growth against a very small base, then it decelerates down to 100 percent growth, then it dips down to 75 percent to 80 percent and wiggles around a while, and before you know it the business is growing at 50 percent, then maybe 35 percent, then 25 percent and 15 percent and 10 percent and then maybe 2X gross domestic product (because IT almost always grows faster than GDP) when the business has a large base and has about all of the customers it is going to get.

In making his forecasts, Huang admitted that we are at the 50 percent phase. The difference between what he says and what we say is that he is acting like 50 percent growth can go on between now and 2030. The market will decide who is right. There are enough waves of innovation still coming for growth to accelerate once again, and there is enough competition to finally start making Nvidia sweat a lot more than it currently does. There is no sweat at all, except from export controls. And even that doesn’t impact the current Nvidia business by all that much. It might affect the shape of the global AI market as Huawei Technologies ascends with its, er, aptly named Ascend GPUs and AMD takes more share with the UALink and UEC Ethernet collectives and the hyperscalers and cloud builders increasingly do their own things. . . . Again, the market will decide. What we can say with a high degree of certainty is that Nvidia can build and maintain a datacenter business that drives at least $200 billion in annual sales; $400 billion to $500 billion is debatable unless the buildout proceeds as Huang expects because organizations the world over allocate trillions of dollars to this cause.

The money may not be available. Stuff happens.

Nvidia has seen three waves of adoption for its accelerated computing, starting with HPC simulation acceleration in the late 2000s and early 2010s. Then machine learning went mainstream (sort of) in the early 2010s, with a cryptocurrency bubble thrown in for good measure. The machine learning wave really got rolling in fiscal 2017, as did the mining, and it is hard to separate these out, but growth was up in the triple digits again. During the coronavirus pandemic, there were all kinds of supply issues that messed up the numbers as various kinds of AI started to mature, and then the GenAI wave got huge in calendar 2023 and growth was just nuts after that.

It is hard to remember that only a decade ago, Nvidia was bringing in only slightly more than $1 billion a quarter.

In Q2 F2026, Nvidia brought in $46.74 billion in sales, up 55.6 percent and up 6.1 percent sequentially. Net income was $26.42 billion, up 59.2 percent and representing 56.5 percent of revenues. Datacenter division revenues were $41.1 billion, up 56.4 percent year on year and up 5.1 percent sequentially, with that growth coming from interconnect sales, as we said above. (We were expecting $43.65 billion, and you can attribute the difference to China if you want to make us feel good about our prognosticative capabilities.) Blackwell GPU revenues were up 17 percent sequentially from Q1 F2026, and we reckon that was $27.73 billion in sales.
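As a quick check on the margin math in those numbers – this is just our arithmetic on the figures as reported:

```python
# Q2 F2026 headline figures (in $ billions) and the implied net margin.
revenue = 46.74
net_income = 26.42
net_margin = net_income / revenue  # works out to about 56.5 percent of revenues

print(f"Net margin: {net_margin:.1%}")
```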

At the beginning of this year, we thought Nvidia would do $183.55 billion in datacenter sales, but we may have to take that down a bit, even if Nvidia does get licenses to sell between $2 billion and $5 billion in H20 devices into China in the third fiscal quarter.

Maybe $172 billion – the 50 percent growth rate for datacenter GPU systems that Huang talked about on the call to expect for the next few years, applied to the $115.19 billion in fiscal 2025 datacenter sales – is a more accurate number. Then raise it by another 5 percent so Nvidia can beat expectations. . . .
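The arithmetic behind that revised figure, for what it is worth – the 5 percent beat factor is our own fudge, not anything Nvidia said:

```python
# Applying Huang's 50 percent growth expectation to fiscal 2025 datacenter
# sales (in $ billions), then padding by 5 percent so Nvidia can beat it.
fy2025_datacenter = 115.19
growth = 0.50
fy2026_estimate = fy2025_datacenter * (1 + growth)  # lands at about $172.8B
with_beat = fy2026_estimate * 1.05                  # about $181.4B with the fudge

print(f"Base estimate: ${fy2026_estimate:.1f}B, with beat: ${with_beat:.1f}B")
```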

The demand still seems to be there. Huang confirmed the rumors that Nvidia is sold out of H100s and H200s, and also that all Blackwell variants that it is making for the datacenter are sold out. It still looks like GPU allocations are being scheduled a year ahead of time. The top four cloud service providers – that would be Amazon, Microsoft, Google, and Meta Platforms – had a capital expense budget of $300 billion two years ago, and this is going up to $600 billion. For every $50 billion that companies spend on AI infrastructure, Huang said that Nvidia gets $35 billion of that. (This is a useful ratio, and that $50 billion is literally the cost of putting a 1 gigawatt AI factory into the field.)

And with enterprises and sovereign nations also getting out the checkbooks to build their own AI factories, Huang said there was a $3 trillion to $4 trillion global AI factory buildout between now and the end of the decade. If you do 35/50ths of that, which we call 70 percent around these parts, that is somewhere between $2.1 trillion and $2.8 trillion going to Nvidia. At current profitability levels, that would be net income ranging from $1.2 trillion to $1.6 trillion. Our model projected Nvidia would have maybe $1.66 trillion in sales in fiscal 2026 through fiscal 2030, inclusive, with diminishing profits due to competition and higher costs of putting stuff into the field as chips and packaging get more costly. Call it a mere $750 billion in aggregate profits over those years.
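The 35/50ths math above is easy enough to run yourself – a sketch, assuming Huang's capture ratio holds and that the current net margin of roughly 56.5 percent persists, which it almost certainly will not:

```python
# Nvidia's theoretical take of the $3T-$4T AI factory buildout (in $ trillions),
# at Huang's $35B-of-every-$50B capture ratio and current profitability.
buildout_low, buildout_high = 3.0, 4.0  # Huang's buildout range through 2030
nvidia_capture = 35.0 / 50.0            # that is 70 percent around these parts
net_margin = 0.565                      # roughly the Q2 F2026 net margin

rev_low, rev_high = buildout_low * nvidia_capture, buildout_high * nvidia_capture
inc_low, inc_high = rev_low * net_margin, rev_high * net_margin

print(f"Revenue to Nvidia: ${rev_low:.1f}T to ${rev_high:.1f}T")
print(f"Implied net income: ${inc_low:.2f}T to ${inc_high:.2f}T")
```

Our own model, with competition and rising chip and packaging costs eroding that margin, comes in well under the high end of this range, as noted above.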

These numbers all sound silly, we know. But these are the kinds of numbers people are throwing around, like they are now normal.

Nvidia is preparing for this, and for its part, the six chips that comprise the Vera-Rubin platform for fiscal 2027 have all been taped out to Taiwan Semiconductor Manufacturing Co. That would be the 88-core “Vera” Arm server CPU, the “Rubin” dual GPU chiplet socket, the NVSwitch 6 GPU interconnect switch, the Spectrum-X 6 Ethernet switch ASIC for scale out, and the ConnectX-9 SmartNIC. Huang confirmed that the Vera-Rubin platform was on track for the second half of calendar 2026. (As far as we know from the roadmaps that Nvidia showed at GTC 2025 back in March, CX-9 SmartNICs are not expected until calendar 2027.)

“Soon, we will be building millions of millions of Rubin GPUs, powering multi-gigawatt, multi-site AI superfactories,” Huang said after going through the question and answer session with Wall Street. “With each generation, demand only grows. One shot chatbots have evolved into reasoning, agentic AI that research, plan, and use tools, driving an orders of magnitude jump in compute for both training and inference. Agentic AI is reaching maturity and has opened the enterprise market to build domain-specific and company-specific AI agents for enterprise workflows, products, and services. The age of physical AI has arrived, unlocking entirely new industries in robotics and industrial automation. Every industry and every industrial company will need to build two factories – one to build the machines and another to build their robotic AI. This quarter, Nvidia reached record revenues and an extraordinary milestone in our journey. The opportunity ahead is immense. A new industrial revolution has started. The AI race is on.”

Huang is not predicting any kind of slowdown, clearly.
