So Far, Nobody Turns Tokens Into Money Like Nvidia

It has been more than three years since GenAI exploded on the scene, transforming compute at the hyperscalers and cloud builders and leading to the enormous spending by the model builders who want to license their token chewing and token spewing software to the enterprises and sovereigns of the world.

Some of the model builders are getting traction selling their software, and the clouds are certainly making out like the Roaring 20s selling capacity to the model builders, with enough left over to give AI startups and other more established organizations a chance to try to get some ROI out of GenAI. But it looks like Nvidia and Taiwan Semiconductor Manufacturing Co are the two companies that are consistently and most directly profiting from the GenAI boom.

As we enter what Nvidia co-founder and chief executive officer Jensen Huang calls “an inflection point” for “frontier agentic systems,” code modernization and creation looks like the killer app that could take GenAI mainstream, because most enterprises in modern economies run a mix of old and new code. OpenAI Codex, Cursor, and Claude Code are all generating tons of profitable tokens, demonstrating the benefits for companies all over the Earth. Given that there is a lot of very old code running the world – much of it trapped on IBM Power Systems, IBM System z mainframes, and X86 iron running Microsoft Windows Server – and all of it needs to be updated for a modern mobile and AI world, it stands to reason that these codes will be adapted using AI and augmented with AI.

But only a damned fool believes that AI can come close to automagically replacing back office systems of record, or even most systems of engagement, which are younger than those legacy apps. Such an endeavor would introduce so much risk into enterprise applications that it would be hard to calculate. Which is why legacy applications that are older than many of you persist.

This is true given the current state of AI and the current cost of processing tokens. No one said it would remain true forever. It probably won’t.

But for right now, we are in the infrastructure boom phase of GenAI, which in my mind always included chattybot GenAI, image and video generation, and agentic AI. Physical AI is another flavor of this, where instead of the laws of language and communication, the laws of physics and chemistry and biology are transformed into weights and models.

But as I say, nobody but nobody is making coin like Nvidia on this AI train. And it is a crazy train, indeed. It blew through my forecasts from a year ago, and I strongly suspect it blew through Nvidia’s forecasts (such as they were) as well.

Let’s start with the funny bit first.

In the quarter ended in January, which is when Nvidia’s fiscal year ends and which is Q4 F2026 to be precise, the non-datacenter parts of the Nvidia business brought in $5.81 billion in sales, up 55 percent year on year, and the professional visualization part of the graphics business – not the gaming part that used to represent the biggest and juiciest part of Nvidia – broke through $1 billion in sales for the first time. (There must still be some white collar workers who want workstations, eh?) It was $1.32 billion, in fact, for the ProViz division, up 2.6X year on year. Gaming GPUs drove $3.73 billion in revenues, up 46.5 percent, and the automotive division and OEM and IP sales covered the rest.
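To see how those non-datacenter pieces add up, here is a quick back-of-envelope check. The ProViz and gaming figures are the ones cited above; the automotive and OEM/IP remainder is implied by subtraction, not separately reported in this piece:

```python
# Back-of-envelope check on Nvidia's non-datacenter revenue for Q4 F2026.
# All figures in billions of US dollars, as cited in the text above.
non_datacenter_total = 5.81  # total non-datacenter sales
proviz = 1.32                # professional visualization
gaming = 3.73                # gaming GPUs

# The automotive and OEM/IP remainder is implied, not reported here.
remainder = non_datacenter_total - proviz - gaming
print(f"Automotive + OEM/IP remainder: ${remainder:.2f} billion")  # ~$0.76 billion
```

That leaves roughly three quarters of a billion dollars for the automotive division plus OEM and IP sales combined.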

As you can see from the chart above, the datacenter business so utterly dominates Nvidia that you can barely see the other business divisions.

All told, the Graphics group revenues at Nvidia nearly doubled to $6.48 billion, while revenues from the Compute and Networking group were up 71.1 percent year on year to $61.65 billion and up 21.1 percent sequentially from Q2 F2026.

For a while, Nvidia gave out operating income for the Graphics and Compute & Networking groups, but we have not seen any operating income figures since Q3 F2021, which was two years before the GenAI boom exploded.

Nvidia’s Datacenter division proper, which is slightly different from the Compute and Networking group in ways that have still not been made clear to us (the numbers suggest some Graphics products are sold into datacenters, but not much), posted $62.31 billion in sales, up 75.1 percent and $1 billion more than my forecast last quarter.

In recent quarters – because it is material to the business – Nvidia has given a breakdown between Compute and Networking within its Datacenter division, which is useful because none of us have to build a model based on breadcrumbs and hints that Nvidia’s top brass toss our way. In Q4 F2026, datacenter compute had $51.33 billion in sales, up 57.7 percent, while networking was just barely shy of $11 billion, up by a factor of 3.63X.

One of the main drivers of this growth was the adoption of NVSwitch memory fabrics in GB200 NVL72 and GB300 NVL72 rackscale systems. I am not sure how much of the overall networking business is Ethernet stuff, InfiniBand stuff, and NVSwitch stuff – I have been modeling InfiniBand versus Ethernet since the early days of Mellanox – but Nvidia has stopped talking specifically about how this networking business is carved up.

I am not afraid to make some educated guesses, and so here is how I think the Nvidia networking business splits:

As I pointed out last quarter, we really need to tear apart Ethernet revenues from NVSwitch revenues, but I need more data and time to do that. Our best guess is that InfiniBand networking almost doubled to $3.31 billion, while Ethernet and Other was up by a factor of 5.7X to $7.67 billion. Based on the fact that nearly two thirds of datacenter compute sales were driven by GB200 NVL72 and GB300 NVL72 rackscale machines – each of which requires a dozen and a half NVSwitches – we think that NVSwitch interconnects drove $4.65 billion in sales.

Yes, I think that NVSwitch fabrics drive more revenues right now than do InfiniBand or Ethernet products individually. (Remember: These revenue figures are not just for the switches, but also for optical and copper cables as well as transceivers, SmartNICs, and DPUs. So be careful comparing Nvidia networking revenues with just switch revenues from others.) This stands to reason given how central NVSwitch is and the huge competitive lead it has over alternatives for linking GPUs and XPUs together. I think Nvidia is charging a premium price for premium products – which its shareholders surely say is its fiduciary responsibility.
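Pulling those estimates together shows the arithmetic behind that claim. To be clear, these splits are the modeled guesses above, not Nvidia-reported figures, and the NVSwitch number is assumed to sit inside the Ethernet and Other bucket:

```python
# Estimated Q4 F2026 Nvidia networking split, in billions of US dollars.
# These are the article's modeled estimates, not Nvidia-reported figures.
infiniband = 3.31
ethernet_and_other = 7.67  # Nvidia lumps NVSwitch revenue in here
nvswitch = 4.65            # estimated NVSwitch portion of Ethernet and Other

ethernet_alone = ethernet_and_other - nvswitch
total_networking = infiniband + ethernet_and_other

print(f"Total networking:     ${total_networking:.2f} billion")  # ~$10.98 billion
print(f"Ethernet ex-NVSwitch: ${ethernet_alone:.2f} billion")    # ~$3.02 billion

# Under these estimates, NVSwitch tops both InfiniBand and Ethernet individually.
assert nvswitch > infiniband and nvswitch > ethernet_alone
```

The totals reconcile with the division-level figures: the two buckets sum to just shy of $11 billion, and stripping the NVSwitch estimate out of Ethernet and Other leaves Ethernet proper at roughly $3 billion, below both NVSwitch and InfiniBand.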

Here is another thing I wanted to point out:

Look at how little research and development Nvidia has to spend to keep its datacenter flywheel spinning faster and faster. In fiscal 2025, it took only $12.9 billion in R&D spending to create the product lines that drove $215.9 billion in revenues and $120.1 billion in net income in fiscal 2026.
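The ratio is worth spelling out. This is a rough leverage calculation using the fiscal figures cited above, and it deliberately ignores that R&D spending compounds across years rather than mapping cleanly to the following year's products:

```python
# Rough R&D leverage from the fiscal figures cited above, in billions of US dollars.
rd_fy2025 = 12.9          # R&D spend in fiscal 2025
revenue_fy2026 = 215.9    # revenue in fiscal 2026
net_income_fy2026 = 120.1 # net income in fiscal 2026

# Revenue and net income generated per dollar of prior-year R&D spend.
revenue_per_rd_dollar = revenue_fy2026 / rd_fy2025
income_per_rd_dollar = net_income_fy2026 / rd_fy2025
print(f"~${revenue_per_rd_dollar:.1f} of revenue per R&D dollar")    # ~$16.7
print(f"~${income_per_rd_dollar:.1f} of net income per R&D dollar")  # ~$9.3
```

Call it roughly $17 of revenue and over $9 of net income for every R&D dollar spent the year before.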

Now that is some return on investment! It seems very unlikely that the AI model builders and the cloud builders can do the same.