Can Nvidia Be The Biggest Chip Maker In The Datacenter?

Next year, with the launch of the “Grace” Arm server processors, Nvidia will have all of the compute and networking bases it cares about in the datacenter covered, and it will be selling its technology at a rapid pace. Nvidia already has a larger datacenter business than AMD does – 2.5X in the most comparable quarters between the two vendors – but the real question is whether Nvidia can get Intel on the ropes while AMD hits it with the metal chair from the audience. Another good question is how AMD and Nvidia will shape up against each other.

Nvidia already has GPUs for visualization and rendering as well as for accelerating HPC simulation and modeling, machine learning training, machine learning inference, database and analytics acceleration, and cryptocurrency mining. It will have its own CPUs as well as tight partnerships with Intel and AMD for their respective Xeon SP and Epyc motors. It has the InfiniBand and Ethernet bases covered in the switches and adapters, it has DPU and SmartNIC network accelerators, and it has a sophisticated edge play. For both datacenter and edge, it sells complete systems as well as preassembled system components, which eliminates architectural inconsistencies across OEMs and ODMs and which gives Nvidia control and money. And on all fronts, Nvidia has a coherent development environment for accelerating applications that it controls, top to bottom, even if it is not open source.

So it is not a stretch to think, given all of the dynamics of the IT sector in general and the datacenter specifically, that Nvidia can double its revenues as Intel declines and AMD continues to grow. And some years hence, it is not a stretch that all three of these compute engine titans will be fighting each other on all fronts – well, only Intel and AMD will have FPGAs unless Nvidia has a change of heart – and trying to make their case against hyperscalers and cloud builders that increasingly design their own compute engines, and maybe network chips, and push them through chip foundries. In five or more years, the datacenter semiconductor revenue pie chart could end up with slices something like 40 percent, 30 percent, 20 percent, and 10 percent across those four buckets of semiconductor revenue, but it is not clear that the ranking will be Intel, Nvidia, AMD, and Other (including homegrown), in that order.

In 2020 and 2021, we think Intel has been saved by the fact that it controls its own foundries and that it has cut prices like crazy on older 14 nanometer Xeon SP chips (through bundling deals with other components that mask the Xeon SP price cuts) while it ramps up its 10 nanometer “Ice Lake” and soon “Sapphire Rapids” server CPUs. There is not much else Intel can do right now, and the strategy is largely working inasmuch as all compute engine and networking component vendors are selling everything they can make right now. If limited supply were not running up against excess demand, there would be a huge price war going on; as it is, with inflation running rampant and shortages everywhere, all three of these vendors can sell what they make (or have made, in the case of Nvidia and AMD) and sell it at a premium price.

In this regard, the coronavirus pandemic has been very good to most chip designer/makers and very bad for many chip buyers.

Nvidia is about as healthy a company as you can imagine in the technology sector, with strong double digit growth, and sometimes triple digit growth, in its product lines and in the industry sectors and customer classes it aims at.

Let’s take a look at some numbers. In 2009, during the Great Recession when things were not so good in the global economy, Intel’s Data Center Group had $6.45 billion in sales and $2.3 billion in operating profits. With AMD out of the CPU picture and Intel not even close to being in the GPU business, Intel could scale up the Xeon business like crazy and also push its operating margin up into the 50 percent range, compared to the historical 30-ish percent it had before the Great Recession started and before AMD gave up on its Opteron server chips.

Back then, GPU acceleration for HPC applications was still nascent and the seminal work to bring neural networks to GPUs for machine learning had not happened yet. There was no significant Nvidia datacenter revenue to speak of until 2012, and even then we estimate that it was maybe $50 million to $100 million a year. It probably doubled each year until Q1 F2016, which roughly corresponds to calendar 2015 for Intel and AMD data, when Nvidia first started to report datacenter revenues separately and it was $405 million. AMD’s datacenter revenues were about $6 million a year, we estimate, just to tell you how far AMD had fallen from the boom Opteron years in the early to middle 2000s. Intel owned datacenter compute at this time, and had grown its revenues by 2.5X from 2009 to $15.98 billion and its operating profits by 3.4X to $7.84 billion, kissing the 50 percent operating margin it so loves.

Fast forward to 2021, and we are using the trailing twelve months from Nvidia as a proxy.

Intel has already taken the first revenue and profit hits, as Data Center Group sales fell by 1.1 percent to $25.82 billion and operating income fell by a gut-wrenching 33.8 percent to just under $7 billion, which is a mere 27.1 percent of revenues. Uh-oh. Adding in the Programmable Systems Group – the former Altera – helps a little bit, with the combined sales falling 0.7 percent to $27.76 billion and operating income falling 32.4 percent to $7.3 billion.

AMD – without Xilinx – had $3.79 billion in datacenter sales in 2021, as we reported in our analysis of AMD financial results two weeks ago, and we think that it can nearly double its CPU and GPU sales into the datacenter to $7.52 billion in 2022. (And that assumes severe but improving supply constraints on compute engines.) Adding in the trailing twelve months of Xilinx (which also has a fiscal quarter ending in January, which is annoying), which came in at $3.68 billion, the combined AMD datacenter (and edge) business weighed in at twice the size of AMD alone, at $7.47 billion for 2021, and if Xilinx posts just modest growth, the combination will come in at $11.95 billion in 2022.
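To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The near-doubling for AMD and the $11.95 billion combined target are the assumptions stated above, and the implied Xilinx growth rate is simply backed out from them, not taken from any company guidance.

```python
# Back-of-the-envelope math for the AMD plus Xilinx projection above.
# All growth assumptions are ours, not company guidance.

amd_dc_2021 = 3.79      # AMD datacenter CPU/GPU sales, calendar 2021, $B
xilinx_ttm_2021 = 3.68  # Xilinx trailing twelve month sales, $B

combined_2021 = amd_dc_2021 + xilinx_ttm_2021
print(f"Combined 2021 base: ${combined_2021:.2f}B")   # $7.47B

# Assume AMD nearly doubles its datacenter CPU/GPU sales in 2022...
amd_dc_2022 = 7.52
# ...and back out what "modest growth" for Xilinx implies to hit $11.95B.
xilinx_2022 = 11.95 - amd_dc_2022
print(f"Implied Xilinx 2022 sales: ${xilinx_2022:.2f}B "
      f"({(xilinx_2022 / xilinx_ttm_2021 - 1) * 100:.0f}% growth)")
```

That works out to roughly 20 percent growth for Xilinx, which is what “modest” means here.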

That brings us to Nvidia’s fourth quarter of fiscal 2022 ended in January, when its datacenter division posted sales of $3.26 billion, up 71.5 percent year-on-year against overall sales of $7.64 billion, up 52.8 percent. In its fiscal 2022 ended in January, which is the closest analog to calendar 2021 that we have to compare against Intel and AMD, Nvidia had $10.61 billion in datacenter revenues, up 58 percent.

Nvidia also counts its money in two reporting buckets, just for completeness.

Compute & Networking includes the autonomous driving platforms and the Jetson platforms for robotics and embedded systems, as well as all of the core GPUs, EGX and HGX boards, Mellanox switching, and DGX systems that are part of the Datacenter division figures. The Graphics bucket covers GeForce GPUs for gaming and PCs, the GeForce NOW streaming service, and Quadro and RTX GPUs for workstation visualization.

Anyway, datacenter sales are going to accelerate for Nvidia this year, as Jensen Huang, the company’s co-founder and chief executive officer, explained on a call with Wall Street to go over the Q4 F2022 results – and that is without any big exascale-class supercomputer deals (but a few pre-exascale machines) and without a CPU line. How much growth Nvidia can get remains to be seen. But let’s assume it can do 65 percent growth at the top line. In fiscal 2020, the datacenter business was up only a few points, but it more than doubled in fiscal 2021, so this might be a good average. That would put Nvidia at $17.51 billion in datacenter sales in fiscal 2023, and in fiscal 2024, when it adds a CPU business, if this trend persists, it will reach $28.89 billion in sales. Even if you cut the growth rate in half in fiscal 2024 and fiscal 2025, and assume that the best Intel can do is keep its revenue declines to a minimum, within three years the Nvidia datacenter business is larger than that of Intel.
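Here is a minimal sketch of that crossover math in Python. The 65 percent growth rate, the halved rate in the tapered scenario, and a flat Intel baseline of $27.76 billion are our assumptions for illustration, not guidance from either company.

```python
# Rough sketch of the Nvidia versus Intel crossover math above.
# The growth rates and the flat Intel baseline are our assumptions.

nvidia_fy2022 = 10.61   # Nvidia datacenter revenue, fiscal 2022, $B
intel_2021 = 27.76      # Intel DCG plus PSG revenue, calendar 2021, $B

# Scenario 1: 65 percent growth every year.
full = [nvidia_fy2022]
for _ in range(3):
    full.append(full[-1] * 1.65)

# Scenario 2: 65 percent in fiscal 2023, then half that rate afterwards.
half = [nvidia_fy2022, nvidia_fy2022 * 1.65]
for _ in range(2):
    half.append(half[-1] * 1.325)

for year, (f, h) in enumerate(zip(full, half), start=2022):
    print(f"FY{year}: full growth ${f:.2f}B, tapered ${h:.2f}B, "
          f"Intel (flat) ${intel_2021:.2f}B")
```

Even in the tapered scenario, the fiscal 2025 figure comes in around $30.7 billion, which clears Intel’s 2021 combined Data Center Group plus Programmable Systems Group revenue.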

We think that the advent of Nvidia Arm CPUs will be instrumental in the next wave of growth, and while we think it would have been interesting to have Nvidia be the steward of the Arm ecosystem, perhaps it would have been best if Arm had remained a public – and independent – company and SoftBank Group had never messed with it in the first place. The market has needed the commitment and engineering that Nvidia could bring to Arm server CPUs, and the sooner the better. This is what Nvidia needs to focus on, and it is.

“We have Grace, and we surely have the follow-ons to Grace, and you could expect us to do a lot of developments around the Arm architecture,” Huang said on the call. “One of the things that has really evolved nicely over the last couple of years is the success that Arm has seen in hyperscalers and datacenters. And it has really accelerated and motivated them to accelerate the development of higher-end CPUs. And so you’re going to see a lot of exciting CPUs coming from us. Grace is just the first example, and you are going to see a whole bunch of them beyond that.”

Good. That is what the market needs. Let Arm Holdings go public and fight its own battles with RISC-V, Power, and X86 – and get free of SoftBank, too.

One last thought: If AMD continues to grow as it has been doing – and there is no reason to believe it cannot, unless chip supplies level off coming out of the foundries – then within two years it will also have a $20 billion datacenter business. It may seem unlikely that the market will grow to accommodate all three players with roughly the same market share, but a 30-30-30-10 or a 40-30-20-10 split of datacenter semiconductor and board revenue is starting to look like the most likely outcome. We shall see who has what ranking. It looks wide open as far as we can see.
