Nvidia’s Enormous Financial Success Becomes . . . Normal

For the past five years, ever since Nvidia acquired Mellanox, the supplier of InfiniBand and Ethernet switches and network interface cards, people have been wondering what the split is between compute and networking in the Nvidia datacenter business, which has exploded in growth and now represents most of the company’s revenue each quarter.

Now we know.

Each quarter, Nvidia chief financial officer Colette Kress puts out a commentary accompanying the financial results for the thirteen-week period, which gives some color on what sold and by how much. As the entire world knows, Nvidia just reported its numbers for the quarter ended in April, which is the first quarter of its fiscal 2025 year, and the numbers were stellar, as expected. And inside of that commentary, Kress revealed the actual revenues for its datacenter compute and its datacenter networking businesses as distinct from each other, and also as distinct from its Graphics group.

The actual data for Q1 and Q4 of fiscal 2024 and Q1 of fiscal 2025 shows that the compute business is perhaps a bit stronger than many had expected and the networking business is a bit weaker. But both are clearly strong, and will continue to strengthen as fiscal 2025 rolls on. The generative AI market is growing so fast that even with intense competition there will be no way to blunt the market momentum of the CUDA platform that Nvidia has created over the past two decades and that has an incredible advantage over alternatives in HPC and AI.

But, as we have said before, we think that we are experiencing peak Nvidia right now, and maybe the party will continue out into fiscal 2026. But eventually, competition will come, the generative AI hype and hope will settle down, and AMD, Intel, the Arm collective, and others will get their share of this market. Until then, this is Nvidia’s time to make hay while the grass is tall and the sun is bright.

And boy, is Nvidia ever making hay in the datacenter.

Nvidia has two different and almost identical ways of breaking down its datacenter business.

Some compute and networking products are sold outside of the datacenter, but not in any great volume, and some products sold into the datacenter are based on gaming cards, so the Datacenter division has slightly different revenues from the Compute and Networking group.

In Q1, to be precise, the Datacenter division had $22.56 billion in sales, up by 5.3X year on year and up 22.6 percent sequentially. On a call with Wall Street analysts, Kress said that somewhere in the mid-40 percent range of the company’s Datacenter division revenues came from cloud builders; we reckon it is about 46 percent, which works out to $10.38 billion, a factor of 10X higher than the year-ago period by our model. That means the remaining $12.18 billion in datacenter product sales went to hyperscalers (like Meta Platforms), HPC centers, enterprises, and other organizations, which was only up by a factor of 3.8X. (See what we mean about the normalizing of multiples that are just not common in the five hundred year history of the corporation?)
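
For those who want the arithmetic, here is a minimal sketch of that split in Python, assuming the 46 percent cloud builder share from our model (Kress only said the share was somewhere in the mid-40s):

```python
# Back-of-the-envelope split of Nvidia's fiscal Q1 2025 Datacenter division
# revenue. The 46 percent cloud builder share is our assumption, since Kress
# only said it was somewhere in the mid-40 percent range.
datacenter_q1_fy2025 = 22.56   # billions of dollars, as reported
cloud_builder_share = 0.46     # assumed point estimate

cloud_builders = datacenter_q1_fy2025 * cloud_builder_share
everyone_else = datacenter_q1_fy2025 - cloud_builders

print(f"Cloud builders: ${cloud_builders:.2f} billion")   # about $10.38 billion
print(f"Everyone else:  ${everyone_else:.2f} billion")    # about $12.18 billion
```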

The Compute and Networking group lumps together all revenues that are not from Graphics products used in PCs and workstations. In Q1, Compute and Networking comprised $22.68 billion in revenues, up by a factor of 5.1X year on year and up 26.7 percent sequentially from Q4 of fiscal 2024, which ended in January. For a short time, Nvidia provided operating income for its groups, but it has not done this for a while.

In its financial report, Nvidia said that sales of datacenter compute products, mostly “Hopper” GPUs and their related platform components, rose by 5.8X to $19.39 billion in fiscal Q1, which was also up 28.7 percent sequentially from Q4 of fiscal 2024. This is the kind of growth that a company is lucky to get on an annual basis if it is wildly successful.

For networking products, revenues rose by a mere 2.4X to $3.17 billion, but they were down 4.8 percent sequentially as supply of InfiniBand products could not meet demand and the ramp of the Spectrum-X Ethernet products had not yet hit appreciable volumes.

Our model indicates that InfiniBand sales were up 2.7X to $2.71 billion in fiscal Q1 2025, but down 5 percent sequentially, and comprised 85.5 percent of networking sales. Ethernet and NVSwitch sales made up the remaining $459 million in networking sales, up by a factor of 2.14X year on year but down 3.6 percent sequentially.
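
The same sort of arithmetic underpins our InfiniBand and Ethernet estimates; here is a sketch under the assumption of an 85.5 percent InfiniBand share, which is our number and not one Nvidia discloses:

```python
# Our estimated split of Nvidia's fiscal Q1 2025 networking revenue. The 85.5
# percent InfiniBand share is a model assumption, not a disclosed figure.
networking_q1_fy2025 = 3.17    # billions of dollars, as reported
infiniband_share = 0.855       # assumed InfiniBand share of networking sales

infiniband = networking_q1_fy2025 * infiniband_share
ethernet_and_nvswitch = networking_q1_fy2025 - infiniband

print(f"InfiniBand:            ${infiniband:.2f} billion")                    # about $2.71 billion
print(f"Ethernet and NVSwitch: ${ethernet_and_nvswitch * 1000:.0f} million")  # about $460 million, give or take rounding
```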

Nvidia is fully embracing Ethernet in the datacenter with Spectrum-X, and as we have pointed out before, it has no choice because the hyperscalers and cloud builders now want it and most enterprises are absolutely allergic to InfiniBand. They want one network, and it is Ethernet. And thus, Ethernet switching from all of the key vendors is going to become more of a fabric.

“Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster,” Kress said on the Wall Street call. “Spectrum-X opens a brand-new market to Nvidia networking and enables Ethernet only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.”

What Nvidia does not talk about is how the adoption of Ethernet will affect sales of InfiniBand, but it obviously will have a cannibalizing effect. How much remains to be seen.

In the meantime, Nvidia splurged $7.8 billion in the quarter on share repurchases (not just as an investment but as a means of giving shares as part of compensation packages) and dividends, and on June 10 it will do a 10-for-1 stock split that will put its shares closer to the $100 mark that is a comfortable number for institutional and individual investors, which will help boost Nvidia’s shares even further. But Nvidia’s enormous success as it rolls through fiscal 2025 and into fiscal 2026 is really what will send Nvidia’s share price even higher. The projections are for sales of $28 billion, plus or minus 2 percent, for fiscal Q2, and we think Nvidia will easily break $100 billion in sales this year. As does anyone else who can plot four dots on a line.
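
If you want to plot those four dots yourself, a hedged back-of-the-envelope gets you there; the roughly $26 billion for Q1 is Nvidia's reported total revenue for fiscal Q1 2025, and the sequential growth rate assumed for the back half of the year is ours, not Nvidia's guidance:

```python
# A rough extrapolation of Nvidia's fiscal 2025 revenue. Q1 is the reported
# total (about $26 billion), Q2 is the midpoint of Nvidia's guidance, and the
# 10 percent sequential growth for Q3 and Q4 is purely our assumption.
q1 = 26.0               # billions of dollars, reported
q2 = 28.0               # billions of dollars, guidance midpoint
assumed_growth = 0.10   # assumed sequential growth rate for Q3 and Q4

q3 = q2 * (1 + assumed_growth)
q4 = q3 * (1 + assumed_growth)

print(f"Fiscal 2025 total: ${q1 + q2 + q3 + q4:.0f} billion")   # about $119 billion
# Even with zero growth after Q2, the total is $26 + 3 x $28 = $110 billion.
```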

This ride is not over yet. But it is the exciting part, for sure.

Nvidia co-founder and chief executive officer Jensen Huang laid out the landscape for everyone as he ended the call, and we will let him do the talking:

“We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market. And so for companies that want the ultimate performance, we have InfiniBand computing fabric. InfiniBand is a computing fabric, Ethernet is a network. And InfiniBand, over the years, started out as a computing fabric, became a better and better network. Ethernet is a network and with Spectrum-X, we’re going to make it a much better computing fabric. And we’re committed – fully committed – to all three links. NVLink computing fabric for single computing domain to InfiniBand computing fabric to Ethernet networking computing fabric.

And so we’re going to take all three of them forward at a very fast clip. And so you’re going to see new switches coming, new NICs coming, new capability, new software stacks that run on all three of them. New CPUs, new GPUs, new networking NICs, new switches – a mound of chips that are coming. And the beautiful thing is all of it runs CUDA and all of it runs our entire software stack. So you invest today on our software stack, without doing anything at all, it’s just going to get faster and faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers and everything just runs.

And so I think the pace of innovation that we’re bringing will drive up the capability, on the one hand, and drive down the TCO on the other hand. And so we should be able to scale out with the Nvidia architecture for this new era of computing and start this new industrial revolution where we manufacture not just software anymore, but we manufacture artificial intelligence tokens and we’re going to do that at scale.”

This market is expanding so fast that everyone can play. But for the next few years at least, Nvidia will continue to be the big winner.


17 Comments

  1. Thanks for the breakdown and analysis, Tim. I wanted to pick at this discussion of “peak,” which seems a bit of a troubling term in this context. It implies ascent and decline, and to your credit you are pushing out the decline portion. But if their summit of growth isn’t reached for another 4 years, talking about peak in 2024 is going to sound pretty silly, won’t it? Peak for Nvidia could have been Oct 2018 or Dec 2021, right before those crypto busts, too, yet those valleys were just bumps in the road.

    Nvidia has put together a tremendous solution stack that just happened to anticipate one of the most dramatic transitions in the history of technology. They are innovating, iterating, and deploying faster than anyone, including the CSPs, several of whom have been building their own silicon but just can’t seem to push away from the table of Nvidia’s offerings. With Nvidia accelerating their product cycles, no “air gap” between Hopper and Blackwell is going to exist. With the way demand is going, it likely won’t exist in the generation after that either. Of course, anything can happen. But to call their business at peak now certainly seems premature. Steepest moment of ascent, maybe?

    • I guess I am thinking in terms of full annual years. I think fiscal 2025 is the peak, and 2026 will be at best a carbon copy. Nvidia can sustain, but all of the hyperscalers and cloud builders will be using their own accelerators for a large portion of the workloads. Competition will happen from other fronts. Price performance will continue and companies will learn to make do with smaller models on smaller machines. The peak can be a plateau. But somewhere north of $100 billion a year — whatever level Nvidia does attain in fiscal 2025 — is it. People will find other ways to solve this problem because it is far too expensive right now.

      • > all of the hyperscalers and cloud builders will be using their own accelerators for a large portion of the workloads. Competition will happen from other fronts. Price performance will continue and companies will learn to make do with smaller models on smaller machines. The peak can be a plateau. But somewhere north of $100 billion a year — whatever level Nvidia does attain in fiscal 2025 — is it. People will find other ways to solve this problem because it is far too expensive right now.

        I'm sure you listened to the ER call. This comment seems to dismiss the 700% ROI in 4 years for CSPs that Jensen described with Blackwell. Either you don't believe the demand persists or you have more faith in alternatives. The question is not the alternative's cost. The question is what value is it delivering? Do alternatives match, exceed, or fall short of Nvidia's offerings? Folks talk about Nvidia's fat margins like they're so vulnerable, but they just continue to improve on them, and it's stunning. I come from a time when 40 percent gross margins were THE goal for graphics chip companies; now Nvidia is delivering 78 percent corporate. Double! It's eye popping.

        Jensen's model is: delight your customer. He knows he can only be successful if his customers are successful with his products. They seem to be delivering on that. I've been watching this space intently for more than 10 years, and in my humble opinion, Nvidia is just widening the gap. No one is offering anything close to an alternative, not AMD, not Intel, not Qcom, and with the exception of the TPU, certainly not the other CSP chips. That only leaves the startups and, well, in my mind they're all but done. Maybe Groq, Tenstorrent, or Cerebras can raise another round, but I don't see them having any meaningful impact on share in the near or mid term.

        Thanks for the reply; I appreciate your thoughts.

        • Like I said. Nvidia will make money for a long, long time. It is the new mainframe. But that did not stop RISC/Unix and Windows/Linux on X86. Alternatives that are cheaper will emerge, and even Yann LeCun is warning that LLMs are not the models of the future….

      • Have you ever looked into Nvidia Enterprise AI, which Nvidia released in 2022, and its impact on everything at the enterprise level?

        I see here a lot of assumptions based on the HW and CUDA level, but enterprises who seek solutions aren’t DIY, especially outside of the IT industry. To many it seems that CSPs buy lots of AI infrastructure for themselves, which wouldn’t make sense since they could just focus more on their own accelerators. But CSPs have another business going on, and that is renting infrastructure.

        But to whom do they rent, and what do those customers do with the Nvidia platform?

        I suggest you take a deeper look at Nvidia’s business model and how it’s connected to DGX Cloud and Enterprise AI SW solutions. A hint: a year ago, AWS claimed that it would skip DGX in favor of AMD. Today, AWS is another DGX Cloud partner and seems to have lost complete interest in AMD. I’m pretty sure that AWS customer demand had a significant role in that change.

        • Yes, I have. Since its beginning.

          PyTorch and Llama 3 or 4 will be good enough for most enterprises. I see another stack rising, and someone will try to make a commercialized version of it.

      • While I don’t doubt your fundamental assertion that ultimately, competition will erode the monopoly NVIDIA has today, I too doubt that 2025 is “peak”. The issue is not that there will be alternative architectures that dilute demand for AI chips – it is that fabs are taking 3-4 years to build. Between NVIDIA and Apple, TSMC is running at the max. Intel does not expect to see an ROI from its foundry efforts until 2027 at the earliest. They are right now counting on superior packaging capability. This puts wafer making in the hands of TSMC, and its new Arizona fabs will not come online until 2026, leaving NVIDIA in the monopoly driver’s seat pushing the CUDA platform for another 2-3 years. Oh, and with Spectrum-X, I expect a high level of integration between the switching fabric and data center parts. So does Intel, which is scared of how fast IB and fabric Ethernet are reshaping data center connectivity and priorities. All of these parts live in the high-margin space of the market, with NVIDIA constrained by their production partner for a solid two years.

        • As a person who has spent a few decades predicting the future, I will concede that it is hard to tell. And I will fall back on my own line, despite an enthusiasm for HPC simulation: The only way to accurately predict the future is to live it. We shall see, and it won’t be boring.

    • Unlike many, I do not think there is a Hopper to Blackwell air gap. People are still buying and installing Amperes if they can get them. That is how badly supply and demand are still out of whack, even if it is getting better.

  2. One factor is the resistance to a single-vendor architecture spanning compute and networking, last seen in the early 1990s when IBM had a lock on compute and networking. Additionally, the public sector requires multiple vendors, with practical bans on ‘single source’ procurement contracts and RFPs. Companies like Arista are already embedded in the data center. No one wants to have engineers learn two platforms’ skill sets, diagnostics, and hardware designs. One platform for both the front end and the back end is the model that will prevail.

    • Markets can sustain a monopoly, but eventually it has to be sanctioned legislatively, and only where it makes sense. Like the local monopolies that comprise the patchwork electrical grid here in the United States.

      If you define the relevant market for Nvidia as “AI hardware and systems software” and if you think a monopoly is any vendor with 85 percent or more revenue share — which is the standard applied to IBM in its antitrust cases with the US Department of Justice — then Nvidia certainly has a monopoly.

      And even Jensen knows that in the long run, true competition is coming and having regulation forced upon Nvidia is no damned fun at all and certainly not good for the company. He knows the history of the IBM mainframe quite well, and has built something like it for the next generation of compute.

      So, as I said, I think this will all settle out and Nvidia is enjoying this wild ride while it lasts. The plateau in revenues, once reached, could be very long, like Sun in Unix servers or Cisco in datacenter and campus switching. But the rise cannot last forever and those juicy profits that come from first mover advantage will be taken away by competition as always happens in any economy and in any sector. The clouds and hyperscalers will have cheap AI based on their stuff and expensive, more general purpose AI based on Nvidia and AMD GPUs — just like they have expensive CPUs based on Intel and AMD X86 and homegrown Arm CPUs that are cheaper. And they will pass the higher costs on to cloud customers. I just don’t think the market will allow Nvidia to keep 75 percent margins on its hardware and then the clouds to have 65 percent margins on the rental of that hardware on top of that forever. Unless people just decide it is too much of a pain in the ass to run an IT shop. And then, the antitrust lawsuits will begin shortly after the complaining about price gouging and the lack of competition.

      And then, if there is a next thing, Nvidia will try to invent that.

      • I agree. It’s not as if the competitors don’t have a blueprint for how to compete.
        Nvidia didn't invent a brand new technology; they just made a matrix math engine that's really fast. Everyone else knows how to make a matrix math engine, and they now just need to keep making them faster. The barrier to entry is the high cost of using the newest generation fab tech at TSMC, but even a 5% AI market share will make that worthwhile.

      • Apple had a first mover advantage as well and quickly lost market share, first in units and later in revenue, but to this day Apple has 75% of the profit share in the smartphone market.

        There will be alternatives to Nvidia, but whether Nvidia’s margins will fall because of this depends on how Nvidia positions itself. Apple’s margins come from its iOS platform. Nvidia is going in the same direction with its DGX platform. DGX will always have competition and alternatives, but with DGX Nvidia will try to offer the best performing solution and earn the highest margins. With DGX, Nvidia controls most of the value chain of a data center, since Nvidia doesn’t need a system builder, a networking partner, or even a data center planner, and so on. It’s as if HPE/Cray started developing CPUs + GPUs + networking and offered it all in their own data centers. That is Nvidia DGX in a nutshell.

        With data centers, usually many vendors are involved and each wants its share of the margins. It’s like the smartphone market: Apple can keep high margins because they own the whole platform. Apple earns on HW, earns on its services and apps, and earns on 3rd party apps sold on its platform, as well as on accessories. With Android it’s different: Samsung, for example, earns on HW, Google earns on SW and app purchases, and accessories are margin for many different 3rd parties.

        • People buy Apple products for social status as much as because they are (like me) in their moat. Datacenters have the moat problem with Nvidia, but there is no social status — or very little. And I think that in the long run, the moat will not be as deep. Just like with mainframes, which are still around in the world and still drive most of the revenue and all of the profits at IBM. Nvidia is IBM, not Apple.

  3. Buying/Racking/Wiring up all these Nvidia data center scale GPUs…Just to find out the answer is “42”? Hmmm.

  4. Proverbs 16:9
    A man may make designs for his way, but the Lord is the guide of his steps.

    ->things can change rapidly, in a way nobody would have ever thought of.
