Nvidia Will Be The Next IT Giant To Break $100 Billion In Sales

Here is a history question for you: How many IT suppliers who do a reasonable portion of their business in the commercial IT sector – and a lot of that in the datacenter – have ever broken through the $100 billion barrier?

Three.

To be precise: IBM broke $100 billion in annual sales between 2008 and 2012, with a few years that were close before and after that; the old Hewlett Packard conglomerate did it in the late 2000s and early 2010s before the big HPE/HP Inc split and consulting selloff; and Dell Technologies just recently did it in its fiscal 2022 and 2023.

If current trends persist, Nvidia will almost certainly do it this year, and Amazon Web Services, whose financial results we talked about in detail a few weeks ago, will probably do it in 2024 as well. We are going to bet that Nvidia inches ahead of AWS, but to be fair, both companies' growth is being driven by the GenAI boom, and AWS will report its annual revenues a month earlier, giving it a slight advantage in terms of timing.

The $100 billion revenue level is just a fun statistic, and Big Blue is the one that has held that level most consistently (particularly if you adjust its past revenues for inflation), and done so with the most profitability as well. But when Nvidia does it, we reckon that Big Green will be the most profitable IT supplier ever to reach that $100 billion level, and by a wide margin over the historical Big Blue.

In the fourth quarter of fiscal 2024, which ended in January, Nvidia posted revenues of $22.1 billion, up 265.3 percent year on year and up 22 percent sequentially, with operating income up by more than a factor of 10X to $13.62 billion and net income up by 8.7X to $12.29 billion. That net income is 56 percent of Nvidia's revenues in the quarter, which is an amazing profit level and shows how the company is just printing money at this point. This is by far the craziest growth in revenues and profits we have ever seen in three and a half decades of watching the IT space, and that growth allowed Nvidia to exit fiscal 2024 with just a tad under $26 billion in the bank. A rainy day fund, indeed.
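If you want to check that arithmetic, here is a minimal Python sketch; the dollar figures are the ones cited above, and the year-ago baselines are implied by reversing the stated growth rates rather than pulled from Nvidia's financials.

```python
# Sanity check on the fiscal Q4 2024 figures cited above.
# Dollar amounts are in billions; the year-ago baselines are implied
# by reversing the stated growth factors, not taken from Nvidia's books.

q4_revenue = 22.10       # fiscal Q4 2024 revenue
q4_net_income = 12.29    # fiscal Q4 2024 net income

revenue_growth = 2.653   # up 265.3 percent year on year
net_income_factor = 8.7  # net income up 8.7X year on year

year_ago_revenue = q4_revenue / (1 + revenue_growth)     # about $6.05 billion
year_ago_net_income = q4_net_income / net_income_factor  # about $1.41 billion
net_margin = q4_net_income / q4_revenue                  # about 56 percent

print(f"Implied year-ago revenue:    ${year_ago_revenue:.2f} billion")
print(f"Implied year-ago net income: ${year_ago_net_income:.2f} billion")
print(f"Q4 net margin:               {net_margin:.1%}")
```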

From fiscal Q3 to fiscal Q4, that cash pile grew about 10 points faster than net income did, and net income in turn grew about 10 points faster than revenues.

In the quarter, the Graphics group at Nvidia had $4.21 billion in sales, up 76.8 percent, while the Compute & Networking group just utterly blew by the former center of the Nvidia business with $17.9 billion in sales, up by a factor of 4.9X. For the full fiscal year, the Compute & Networking group had sales of $47.41 billion, up by 3.2X compared to fiscal 2023. We sure wish we had operating income for these groups, as Nvidia used to give a few years back. . . .

There are some graphics products that are sold into the datacenter and some compute products that are sold outside of the datacenter, so the Datacenter division at Nvidia has slightly different sales figures compared to the Compute & Networking group, but they are close.

In the fourth quarter, the Datacenter division had $18.4 billion in revenues, up 5.1X year on year and up 26.8 percent sequentially from fiscal Q3. For the year, the Datacenter division had $47.53 billion in sales, up 3.2X.

Add it all up, and Nvidia had $60.92 billion in sales in fiscal 2024, up 125.9 percent, and net income rose by 6.8X to $29.76 billion, which is 49 percent of revenues.
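A similar sketch ties the full-year pieces together; the Compute & Networking figure is the one cited above, and the Graphics group figure is implied as the remainder of total revenues.

```python
# Tying together the full-year fiscal 2024 figures cited above.
# Dollar amounts are in billions; Graphics is implied as the remainder.

fy24_revenue = 60.92             # total fiscal 2024 revenue, up 125.9 percent
fy24_net_income = 29.76          # fiscal 2024 net income, up 6.8X
fy24_compute_networking = 47.41  # Compute & Networking group, up 3.2X

fy24_graphics = fy24_revenue - fy24_compute_networking  # about $13.51 billion implied
fy23_revenue = fy24_revenue / (1 + 1.259)               # about $26.97 billion implied

print(f"Implied Graphics group revenue: ${fy24_graphics:.2f} billion")
print(f"Implied fiscal 2023 revenue:    ${fy23_revenue:.2f} billion")
print(f"Fiscal 2024 net margin:         {fy24_net_income / fy24_revenue:.1%}")  # about 49 percent
```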

Amazing. And we suspect that Nvidia will only get bigger and richer, if Q4 is any guide. And it most certainly is.

Some interesting tidbits. In the fourth quarter, the InfiniBand networking line grew by more than 5X year on year, and the networking business in total had an annualized run rate of more than $13 billion as fiscal 2024 came to a close. Such tidbits given over the past year or so have allowed us to get a sense of datacenter compute, InfiniBand networking, and Ethernet/Other networking at Nvidia. We think it breaks down something like this:

By our math, we think that InfiniBand accounted for $2.86 billion in revenues in fiscal Q4, which is 5X higher than the $571 million sold in Q4 of the prior fiscal year. We also think that Ethernet/Other comprised $425 million, down quite a bit from a year ago, when Nvidia had some pretty good sales into the hyperscalers and cloud builders. For the full year, we think InfiniBand brought in $6.48 billion, up 3.7X year on year, and Ethernet/Other brought in $1.26 billion, off 41.7 percent.

All of that means compute in its various forms (and mostly in the datacenter) brought in $15.12 billion in Q4, up 6.8X year on year, and for all of fiscal 2024 brought in $39.78 billion, up 3.6X year on year.
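Here is that decomposition as a runnable sketch; the datacenter totals are Nvidia's reported figures, while the InfiniBand and Ethernet/Other numbers are our estimates, as laid out above.

```python
# Our back-of-the-envelope split of Nvidia datacenter revenues.
# Datacenter totals are reported; InfiniBand and Ethernet/Other are
# our estimates from the tidbits Nvidia has dropped over the past year.

q4_datacenter = 18.40      # $ billion, reported for fiscal Q4 2024
q4_infiniband = 2.86       # our estimate, about 5X the year-ago $571 million
q4_ethernet_other = 0.425  # our estimate

q4_compute = q4_datacenter - q4_infiniband - q4_ethernet_other
print(f"Fiscal Q4 datacenter compute:   ${q4_compute:.2f} billion")  # about $15.12 billion

fy24_datacenter = 47.53     # $ billion, reported for fiscal 2024
fy24_infiniband = 6.48      # our estimate, up 3.7X year on year
fy24_ethernet_other = 1.26  # our estimate, off 41.7 percent

fy24_compute = fy24_datacenter - fy24_infiniband - fy24_ethernet_other
print(f"Fiscal 2024 datacenter compute: ${fy24_compute:.2f} billion")  # about $39.79 billion
```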

It is hard to draw lines in any business where machines and their devices serve multiple purposes, but on the call with Wall Street analysts going over the fiscal Q4 2024 results, Nvidia said it estimated that AI inference drove around 40 percent of datacenter revenues in fiscal 2024. That would be a stunning $19 billion in sales. We think, based on past comments from Nvidia, that AI training represents around half of datacenter revenues at the company, or $23.76 billion in fiscal 2024. That leaves $4.75 billion for pure HPC systems and other types of systems.
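Expressed as a sketch, with the 40 percent inference share being Nvidia's estimate from the call and the 50 percent training share being our own reading of past Nvidia comments:

```python
# Splitting fiscal 2024 datacenter revenue by workload. The inference
# share is Nvidia's estimate; the training share is our own estimate.

fy24_datacenter = 47.53  # $ billion, reported
inference_share = 0.40   # Nvidia's estimate from the Q4 call
training_share = 0.50    # our estimate from past Nvidia comments

inference = inference_share * fy24_datacenter           # about $19.0 billion
training = training_share * fy24_datacenter             # about $23.8 billion
hpc_and_other = fy24_datacenter - inference - training  # about $4.75 billion

print(f"AI inference:          ${inference:.2f} billion")
print(f"AI training:           ${training:.2f} billion")
print(f"HPC and other systems: ${hpc_and_other:.2f} billion")
```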

The memory enhanced “Hopper” H200 starts initial shipments in the second quarter, with an improving supply chain but still-constrained supply that cannot meet demand. The “Blackwell” B100 and B200 GPU accelerators are coming later this year, with admitted supply constraints as well, and Spectrum-X Ethernet for AI workloads is also starting to ramp. So we think the datacenter business at Nvidia is going to keep growing, even from these high levels. And we think Nvidia will probably just surpass $100 billion in sales in fiscal 2025, which ends next January.
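To put that $100 billion call in perspective, here is a purely illustrative compounding sketch, not a forecast: it simply asks what fiscal 2025 revenues look like if the fiscal Q4 2024 run rate compounds at a few assumed sequential growth rates.

```python
# Illustrative only: compound the $22.1 billion fiscal Q4 2024 run rate
# at a few assumed quarterly sequential growth rates and sum four quarters.

q4_fy24_revenue = 22.1  # $ billion

for quarterly_growth in (0.05, 0.10, 0.15):
    revenue = q4_fy24_revenue
    fy25_total = 0.0
    for _ in range(4):  # four quarters of fiscal 2025
        revenue *= 1 + quarterly_growth
        fy25_total += revenue
    print(f"{quarterly_growth:.0%} sequential growth -> ${fy25_total:.1f} billion in fiscal 2025")

# Even a modest 5 percent sequential growth rate lands right at $100 billion.
```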

Depending on how supply improves and how demand holds up, Nvidia could do better than that. And no supercomputer can help predict it. We are just going to have to live through it to find out.


Comments

  1. I think it was really wise to block the Nvidia-ARM deal and hopefully Tachyum can fulfill their claims:

    “Prodigy’s powerful AI capabilities enable LLMs to run much easier and cost-effectively than existing CPU + GPGPU based systems. A single 96-core Prodigy with 1 TB of memory can run a ChatGPT4 model with 1.7 trillion parameters, whereas it requires 52 Nvidia H100 GPUs to run the same thing at significantly higher cost and power consumption. The Prodigy ATX Platform allows access to cutting-edge AI models for as low as $5,000. This paper presents the Prodigy ATX Platform, focusing on the hardware architecture, target applications, and how it will democratize AI for those who wouldn’t normally have access to sophisticated AI models. The Prodigy ATX Platform allows everyone to run cutting edge AI models for as low as $5,000 in an entry-level platform SKU configuration featuring a 48-core Prodigy and 256 GB of DDR5 memory.”
    -> Tachyum, in their paper “Tachyum’s Prodigy ATX Platform: Democratizing AI for Everyone”

    “A single Prodigy using TAI with 2-bit weights replaces 52 Nvidia H200 GPGPUs for Switch Transformer LLM with 1.6 trillion parameters”
    -> Tachyum, in their paper “Tachyum Prodigy Universal Processor Enabling 50 EF / 8 AI ZF Supercomputers in 2025”

    Nvidia in my opinion is way too dominant and needs some pressure, also to do more on the open source front regarding drivers.

  2. New Yorker: https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution

    When CUDA was released, in late 2006, Wall Street reacted with dismay. Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said: “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.” In marketing CUDA, Nvidia had sought a range of customers, including stock traders, oil prospectors, and molecular biologists. … One application that Nvidia spent little time thinking about was artificial intelligence. There didn’t seem to be much of a market.

    • Let’s have some fun. Twice as many chips with twice as much memory each. Now, downshift to FP4? Goose the performance per Tensor Core by, oh, let’s be crazy, 50 percent on transformers by making it wider and deeper. That’s 2 * 1.75 * 2 * 1.5 = 10X. Yeah, that sounds about right.
