“No.”
That’s probably a word that Jensen Huang, co-founder and chief executive officer of Nvidia, doesn’t hear a lot.
But this is, in fact, the word from up on high – President Donald Trump, to be precise – about Nvidia being able to sell even tremendously crippled GPU accelerators into the Chinese market, which the company estimates will have a total addressable market of $50 billion “in the future,” whatever that means. While Trump may have killed the cumbersome and mostly nonsensical AI Diffusion regulations of the prior Biden administration, Nvidia and AMD have both been told, in no uncertain terms, that they are not going to be selling even crippled GPUs into China, nor into Russia or North Korea. But they are being told to sell like crazy to the rich sheiks and their sovereign wealth funds in the Middle East, so there is that.
The secret to life is to know when you are already winning, and perhaps Huang knows this but had to get a few things off his chest when going over the numbers for Nvidia’s first quarter of fiscal 2026 with Wall Street. The Middle East has a lot more money than China, although it is short of AI researchers by comparison. (And so are the United States and Europe compared to the Middle Kingdom.) Either way, Huang danced right up to the edge of criticizing the American president, which is a risky thing to do:
“China’s AI moves on with or without US chips. It has to compute to train and deploy advanced models. The question is not whether China will have AI, it already does. The question is whether one of the world’s largest AI markets will run on American platforms.”
“Shielding Chinese chipmakers from US competition only strengthens them abroad and weakens America’s position. Export restrictions have spurred China’s innovation and scale. The AI race is not just about chips. It’s about which stack the world runs on. As that stack grows to include 6G and quantum, US global infrastructure leadership is at stake.”
“The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable and now it’s clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen US platforms, not drive half of the world’s AI talent to rivals.”
“DeepSeek and Qwen from China are among the best open source AI models. Released freely, they have gained traction across the US, Europe, and beyond. DeepSeek-R1, like ChatGPT, introduced reasoning AI that produces better answers, the longer it thinks. Reasoning AI enables step-by-step problem solving, planning and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requires hundreds to thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand. AI scaling laws remain firmly intact, not only for training, but now inference too requires massive scale compute.”
“DeepSeek also underscores the strategic value of open source AI. When popular models are trained and optimized on US platforms, it drives usage, feedback and continuous improvement, reinforcing American leadership across the stack. US platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure.”
Huang may have family in Taiwan and therefore longer roots stretching back into China, but he is American and he knows full well that nation-states have to protect their technology, economy, and culture – their way of life. And like it or not, China is the other superpower that has grown to fill in some of the gap left by Russia’s weakening over the past three and a half decades. And thanks to changes brought about by Trump, which have weakened NATO while the war between Russia and Ukraine grinds on, Europe is going to be ramping up its own militaries, decades after its economic unification under the European Union. And so, the world is balkanizing.
It is not the job of the United States government to make life easier for China, Russia, or North Korea, or even Europe or Japan, the latter of which might also re-arm given the strength of its much larger Chinese neighbor. Uncle Sam’s first priority is to protect its interests and therefore its people – just like it is Huang’s job – literally, his fiduciary responsibility – to protect Nvidia’s interests.
And given all of this, Huang must have known for sure that there was no way in hell – or at least for the next three and a half years – that Nvidia or AMD, or anyone else, was going to be able to sell AI accelerators into China, not only because of a trade war but also because it upsets the balance of power. So, Q1 F2026 was the last hurrah, and Nvidia will not rule the AI world but share it with HiSilicon, the chip arm of Huawei Technologies, and a bunch of homegrown AI accelerators developed in the United States and China for their respective hyperscalers and cloud builders.
Nvidia’s other option is to push Trump too hard, at which point someone starts doing the math on the strategic nature of GPU technology for the US and on Nvidia’s 90 percent-plus market share of the parallel compute engines that are literally reshaping the world at warp speed. And before you can say “consent decree,” the Antitrust Division of the US Department of Justice swings into action.
Hopefully, Nvidia doesn’t make all the same mistakes as its idol, International Business Machines. If it does, we do expect Nvidia to do it in an accelerated fashion, of course. . . .
As the first quarter numbers demonstrate, Nvidia does not need to do business in China to be wildly successful, although it must have been fun selling crippled parts to ravenous customers at high prices and probably even higher profits.
In the quarter ended in April, Nvidia hauled in $44.06 billion in sales, up 69.2 percent, with net income of $18.78 billion, up only 26.2 percent. The revenues and the profits were both hit by export controls that stopped shipments of the “Hopper” H20 GPUs into China on April 9. Nvidia had sold $4.6 billion of H20s into China up to that point in the quarter, but was expecting to ship $7.1 billion worth of H20s and was therefore left holding the bag for $2.5 billion. Nvidia was expecting to sell $8 billion of H20s into China in Q2 F2026, and took a $4.5 billion writedown for H20 inventories and purchase commitments. This writedown was estimated to be $5.5 billion in early April, but Nvidia was able to recycle some of the parts to make full-blown Hopper H100 or H200 GPUs.
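The H20 arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation. The figures are the ones Nvidia reported; the script itself is just illustrative:

```python
# Back-of-the-envelope check of Nvidia's Q1 F2026 H20 figures (in $ billions).
h20_expected = 7.1   # H20 sales Nvidia expected to book in China in Q1 F2026
h20_shipped = 4.6    # H20 sales actually booked before the April 9 cutoff
stranded = h20_expected - h20_shipped
print(f"Stranded Q1 H20 revenue: ${stranded:.1f}B")   # $2.5B

writedown_estimate = 5.5  # early-April estimate of the inventory charge
writedown_actual = 4.5    # actual charge after recycling parts into H100/H200
recovered = writedown_estimate - writedown_actual
print(f"Value recovered by recycling: ${recovered:.1f}B")  # $1.0B
```

In other words, recycling H20 silicon into full-blown Hopper parts clawed back about $1 billion of the originally estimated charge.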
For the full year, Nvidia has a $30 billion or so revenue gap from lost sales to China, and looking out further, a $50 billion TAM that it will not be able to chase in China.
We strongly expect that the Middle East will either directly or indirectly make up for a lot of that through its investing in AI datacenter projects around the globe. And the Global 20,000 companies and their nations are going to want sovereign AI. Nvidia is going to be fine, even if its share will be cut into by indigenous AI accelerators among the hyperscalers and cloud builders in the United States. It was always inevitable that the profit levels Nvidia has enjoyed in the datacenter would come down.
Nothing can stay in the stratosphere that long. The laws of economics will not allow it.
Net income worked out to be 42.6 percent of revenue, and Nvidia says that it still can get back to the mid-70 percent level for gross margins this year.
In the April quarter, Nvidia’s datacenter division posted sales of $39.11 billion, up 73.3 percent year on year and up 9.9 percent sequentially from Q4 F2025 ended in January. The datacenter drove 88.8 percent of Nvidia’s overall revenues and would have kissed 90 percent had the H20 sales kept going for a few more weeks. Our model was for Nvidia to do $39.5 billion in datacenter sales, so we feel pretty good about our gut and algorithms.
Strip out the datacenter, and the rest of Nvidia brought in $4.95 billion in sales, up 42.2 percent year on year and up a very healthy 32 percent sequentially. And we see no reason to change our forecasts for fiscal 2026 and still expect the company to hit at least $183.6 billion in datacenter sales this fiscal year, which would be up 59.4 percent from the $115.2 billion Nvidia brought in from datacenter products in fiscal 2025.
Interestingly, Nvidia turned in the best quarter in its history for datacenter compute, InfiniBand networking, and Ethernet networking. We have a model stretching back to fiscal 2020 for networking revenues, a business that Nvidia picked up with its acquisition of Mellanox Technologies and only started breaking out separately a few quarters ago, which is why we can say that with some confidence.
As best we can figure, InfiniBand revenues in Q1 F2026, at $2.88 billion (up 6 percent), squeaked by the prior peak of $2.86 billion set in Q4 F2024. In the second half of last calendar year, InfiniBand took it a little on the chin, and we think that was because customers were waiting for 800 Gb/sec Quantum-X switchery rather than wanting to settle for 400 Gb/sec gear that was looking a little long in the tooth for a back-end AI network.
Nvidia said on the call that its Spectrum-X Ethernet networking business was at an $8 billion run rate as Q1 F2026 came to a close, which means it had in excess of $2 billion in sales in the quarter. (Our model says $2.08 billion, up 4.5X compared to the year-ago period.) Nvidia added that two cloud service providers had adopted Spectrum-X for their AI clusters; the companies were not named.
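The conversion from an annualized run rate to an implied quarterly figure is simple division by four; a quick sketch, using the numbers above, also backs out what the 4.5X growth multiple implies about the year-ago quarter:

```python
# Convert Spectrum-X's annualized run rate to an implied quarterly figure ($B).
run_rate = 8.0
implied_quarter = run_rate / 4
print(f"Implied quarterly Spectrum-X sales: ${implied_quarter:.2f}B")  # $2.00B

# Our model's estimate, and the year-ago quarter implied by 4.5X growth.
model_estimate = 2.08
year_ago = model_estimate / 4.5
print(f"Implied year-ago quarter: ${year_ago:.2f}B")  # ~$0.46B
```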
Add it up, and overall networking drove just under $5 billion in sales, up 56.3 percent, and represented 11.3 percent of Nvidia’s overall sales and 13 percent of its datacenter sales.
On the compute front, Nvidia had $34.16 billion in sales of datacenter compute engines including CPUs and GPUs as well as system boards and complete systems in those rare cases where Nvidia works with Foxconn (we presume) to make them with its own Nvidia brand on them. Blackwell GPUs represented nearly 70 percent of compute revenues, which works out to $23.7 billion, which is 10X what it was two quarters ago and more than double what it was in Q4 F2025 ended in January. The Hopper H20, H100, and H200 devices drove most of the remaining $10 billion in sales in the quarter.
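The Blackwell share of compute revenue cited above can be cross-checked with a couple of lines of arithmetic, using the reported figures:

```python
# Cross-check the Blackwell share of Q1 F2026 datacenter compute revenue ($B).
compute_total = 34.16   # datacenter compute engines, boards, and systems
blackwell = 23.7        # Blackwell GPU revenue cited in the quarter
share = blackwell / compute_total
print(f"Blackwell share of compute: {share:.1%}")   # 69.4%, "nearly 70 percent"

remainder = compute_total - blackwell
print(f"Hopper H20/H100/H200 and the rest: ${remainder:.2f}B")  # ~$10.46B
```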
Remember when $10 billion in GPUs sounded like a lot?
Nvidia spent just a tad under $4 billion on research and development in the quarter, which was nearly double the level of a year ago but only 9.1 percent of revenues, far lower than the historical average. The company ended the quarter with $53.69 billion in cash and investments in the bank. For perspective, the big four hyperscalers and cloud builders in the United States – Amazon, Microsoft, Google, and Meta Platforms – are going to spend around $325 billion on infrastructure capital expenses in calendar 2025. And as we previously discussed, a 1 gigawatt AI cluster like the Stargate UAE system we talked about last week will have around 7,000 racks and cost around $50 billion to deliver around 10 zettaflops at FP4 precision.
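Those 1 gigawatt cluster figures imply some interesting per-rack numbers, which can be worked out from the round figures above (all of which are approximations, so treat the outputs as order-of-magnitude estimates):

```python
# Per-rack math for a 1 GW AI cluster along the lines of Stargate UAE,
# using the article's round numbers: 7,000 racks, $50 billion, 10 ZF at FP4.
racks = 7_000
cost_total = 50e9        # dollars
flops_total = 10e21      # 10 zettaflops at FP4 precision
power_total = 1e9        # 1 gigawatt, in watts

cost_per_rack = cost_total / racks
flops_per_rack = flops_total / racks
power_per_rack = power_total / racks

print(f"Cost per rack:  ${cost_per_rack / 1e6:.1f}M")      # ~$7.1M
print(f"FP4 per rack:   {flops_per_rack / 1e18:.2f} EF")   # ~1.43 exaflops
print(f"Power per rack: {power_per_rack / 1e3:.0f} kW")    # ~143 kW
```

Roughly $7 million and 143 kilowatts per rack is consistent with the kind of dense, liquid-cooled rackscale systems Nvidia is shipping in the Blackwell generation.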
The UAE wants to spend somewhere on the order of $250 billion over five years. Which is five Chinas. Nvidia does not need China. It just wants China’s business because it wants to maximize revenues and profits until the inevitable competition comes.