Just How Large Can Nvidia’s Datacenter Business Grow?

The excitement for new video games, the machine learning software revolution, the buildout of very large supercomputers based on hybrid CPU-GPU architectures, and the mining of cryptocurrencies like Bitcoin and Ethereum have combined into a quadruple whammy that is driving Nvidia to new heights in revenues, profits, and market capitalization. And thus it is no surprise that Nvidia is one of the few companies bucking the trend in a very tough couple of weeks on Wall Street.

But having demand spiking for both its current “Volta” GPUs, which are aimed at HPC and AI compute, and its prior “Pascal” GPUs, which target gaming, workstations, and cryptocurrency mining, is both a blessing and a curse, though at least financially, and at least in the short term while demand exceeds supply, it is mostly a blessing.

Transforming Nvidia from a company that supplies graphics processors focused on a relatively small corps of elite gamers to one that provides a wide range of compute and graphics across disparate markets that can equally benefit from the common engineering in its products has taken a long time. But it certainly has paid off. And in the short term, as cryptocurrency miners are chasing the same GPU cards as gamers, and as machine learning trickles down from the hyperscalers to service providers and enterprises of all sorts and requires the same Tesla accelerators that are used for virtual workstations on public clouds, this is all creating a virtuous demand cycle that is expanding the market.

In the quarter ended in January, Nvidia’s revenues rose 34 percent to $2.91 billion. Gross margins were up, we believe due to product mix and huge demand for products, allowing Nvidia to charge a premium price, and the bottom line was helped, too, by a $133 million tax benefit in the wake of the tax overhaul in the United States by the Trump administration. As a result of these factors, Nvidia’s net income rose by 70.7 percent to $1.12 billion. Nvidia just paid off $355 million for its new headquarters in Santa Clara, and Colette Kress, chief financial officer at the company, suggested in a call with Wall Street analysts going over the fourth quarter fiscal 2018 results that the chip maker might invest more heavily in its own supercomputer infrastructure to bolster its AI research and chip designs. (It already has plans for a 660 node hybrid CPU-GPU cluster, the next generation Saturn V cluster.) Nvidia ended the quarter with $7.1 billion in cash and investments and a modest $2 billion in debt. Kress said that Nvidia expected sales of $2.9 billion in its first quarter of fiscal 2019, which ends in April, plus or minus two percent, more or less matching the quarter it just finished.

For now, the markets are expanding faster than the supply, so the profits at Nvidia and its downstream partners are on the rise. But if Taiwan Semiconductor Manufacturing Corp cannot ramp up production on the various Pascal and Volta GPUs that Nvidia sells, the shortages will present an opportunity for AMD, whose partner GlobalFoundries is aggressively ramping up production at its Fab 8 chip factory in Malta, New York, and for those companies that have designed specialized chips for machine learning and cryptocurrency mining. The trick will be for Nvidia and TSMC to ramp the volumes fast enough to ease demand pressures while also increasing yields so profitability levels remain high. It seems likely that this can – and will – happen. If it does not, however, then Nvidia and TSMC will sow the seeds of success for alternative technologies.

Except for the concerns about the demand for its GPUs exceeding their supply, which Nvidia co-founder and chief executive officer Jensen Huang referred to time and time again on a call with Wall Street analysts going over its financial results for the fourth quarter of fiscal 2018, which ended in January, the picture could not be rosier at Nvidia. Nvidia is reaping the returns on its investments in the Pascal GPUs, which have been in the market for two years now and which will still be viable for a while to come, and the Volta GPUs, launched last May and ramped through 2017 and into early 2018, have an even longer life ahead of them. The GeForce gamer and Quadro workstation cards still don’t have Volta variants, and those should come later this year – maybe even by the GPU Technology Conference, which was moved back to March this year after being held in May last year to coincide with the rollout of Volta.

Given this, it is no surprise that the company has not talked publicly about future GPU roadmaps. The Volta GV100 GPUs are already on an advanced 12 nanometer process at TSMC, which Nvidia needed to bring Volta to market with the feeds and speeds that both AI and HPC customers required in the 2018 timeframe. The market is not yet ready to deliver 10 nanometers in volume, and 7 nanometers is still a lot closer to development than it is to production at the four remaining major fabs – Intel, Samsung, TSMC, and GlobalFoundries.

It is very hard to model any company that has such new and explosive markets, and there is nothing to be disappointed with in Nvidia’s numbers, but it is a simple fact that no market can quadruple or triple or even double forever. While the market was awaiting the Pascal accelerators, which were expected to transform both AI and HPC computing, Nvidia’s Tesla datacenter compute business bounced along in fiscal 2015 and early fiscal 2016. Then in fiscal 2017, when Pascal started shipping, it exploded, doubling and then tripling year on year in successive quarters, pushing the combined Tesla and GRID business into the stratosphere and giving it a $2 billion annualized run rate as autumn came to calendar 2017.

As is the case for other vendors of high performance computing products, it is perhaps best to look at Nvidia on an annual, rather than quarterly basis.

In fiscal 2015, the datacenter business generated $317 million in revenues, and in the following fiscal year it rose by only 7 percent to $339 million. Even if this business had never exploded, even if all Nvidia was doing was HPC application acceleration with GPUs, this would be a good and profitable business for Nvidia and a smart way to cover the high costs of GPU research and development. But that’s not what happened. Machine learning took off like crazy in 2017 at the same time as HPC started investing heavily in GPUs, too, and sales in the datacenter division at Nvidia went up by a factor of 2.45X in fiscal 2017 to $830 million. Here in fiscal 2018, growth slowed a bit, with sales of $1.93 billion, up 2.33X over the prior year. The point is, this $2 billion run rate looks sustainable, and provided supply constraints can be eased, it looks like Nvidia can grow from this baseline to new heights. Our low end, conservative estimate is that, due to supply constraints, the business might only grow by a little more than 2X in fiscal 2019, kissing $4 billion in sales and exiting the year with a $5 billion annualized run rate.
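The growth arithmetic above can be checked with a few lines of Python, using only the datacenter revenue figures cited in this story (the 2.05X multiplier for the fiscal 2019 estimate is our own stand-in for "a little more than 2X"):

```python
# Nvidia datacenter (Tesla plus GRID) revenue by fiscal year, in millions
# of dollars, as cited in the article.
datacenter = {2015: 317, 2016: 339, 2017: 830, 2018: 1930}

# Year-over-year growth factor for each fiscal year.
for year in sorted(datacenter)[1:]:
    growth = datacenter[year] / datacenter[year - 1]
    print(f"FY{year}: {growth:.2f}X")  # FY2016: 1.07X, FY2017: 2.45X, FY2018: 2.33X

# A conservative fiscal 2019 estimate: a little more than 2X off the
# fiscal 2018 base, which lands near the $4 billion mentioned above.
print(f"FY2019 estimate: ${datacenter[2018] * 2.05 / 1000:.1f} billion")
```

Running the loop reproduces the 2.45X and 2.33X multiples in the text.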

There is no reason to believe that the desire for advanced GPUs by gamers will abate, or that the market of gamers will not continue to rise from 200 million people today, maybe doubling again over the next few years. Given this, it is hard to bet against the core GeForce GPU business at Nvidia, and it is equally hard to bet that the company’s engineers will not think of clever ways to use transistors, memory, and interconnects to build great GPUs. But still, let’s say it becomes more of a game of price/performance, with AMD and Intel both ramping up the pressure. And let’s also say that the datacenter business slows as competitive pressures start there, too, assuming that AMD can get a GPU out the door that does mixed precision, including 64-bit floating point, which it badly needs.

Here is what that scenario might look like:

Given a 30 percent growth rate in gaming GPUs in fiscal 2019 and a rate of only 25 percent growth here in fiscal 2020 and fiscal 2021, the gaming division will hit $11.2 billion three years from now. Assuming that the datacenter business will be able to grow at 80 percent a year, then in fiscal 2021, this will be an $11.3 billion business. (Yes, we plugged in numbers just to see what would be needed for them to match.) It could turn out that the markets for GPUs aimed at gaming and at AI/HPC are both saturated, or that they grow more slowly than we think. Assuming that gaming and datacenter both fall off because they are approaching the upper limits of their markets, the future might look like this:
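The compound-growth arithmetic in the first scenario can be sketched in a few lines of Python. The $1.93 billion datacenter baseline comes from the figures above; the roughly $5.51 billion fiscal 2018 gaming baseline is our assumption, since this story does not state it directly:

```python
# Compound-growth sketch of the scenario above.
gaming = 5.51      # fiscal 2018 gaming revenue, $ billions (our assumption)
datacenter = 1.93  # fiscal 2018 datacenter revenue, $ billions (from the article)

# Gaming: 30 percent growth in fiscal 2019, then 25 percent in 2020 and 2021.
for rate in (0.30, 0.25, 0.25):
    gaming *= 1 + rate

# Datacenter: 80 percent growth in each of the three years.
for _ in range(3):
    datacenter *= 1.80

print(f"fiscal 2021 gaming:     ${gaming:.1f} billion")      # ~$11.2 billion
print(f"fiscal 2021 datacenter: ${datacenter:.1f} billion")  # ~$11.3 billion
```

With those rates, the two lines cross at roughly $11 billion apiece, which is exactly the coincidence we admit to engineering.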

We wholly admit that we just like the scenarios where Nvidia’s datacenter business surpasses its gaming business. But there are scenarios where the growth just goes linear, and these look like fun, too:

If that did happen, and the growth just keeps on a-going, then the Nvidia datacenter business would hit $30 billion by 2021 and would be twice as large as gaming. In such a scenario, Nvidia would have to sell a tremendous number of GPU accelerators – very likely more than TSMC and GlobalFoundries together could make using advanced processes – and a tremendous number of CPUs in the datacenter would have to be unplugged. This is the wildest dream of Jensen Huang and the nightmare of Intel chief executive officer Brian Krzanich, whose own Data Center Group had an annualized run rate of $22 billion as 2017 came to an end and whose “real” datacenter business – including not just server chips, chipsets, and motherboards but also storage, FPGAs, and other goodies – was humming along at an even larger $28 billion.

But think about this. Imagine a world where half the processor money is spent on CPUs and half on GPUs. We don’t think this is a likely scenario, nor do we think there is $60 billion in server chippery up for grabs only four years from now. Our point, perhaps, is that there is going to be a big fight for the hearts and minds of compute in the datacenter, and it is a whole lot more complicated than just Intel versus Nvidia. There might be $20 billion to $25 billion in datacenter compute (expressed by sales of CPUs, GPUs, and FPGAs, not complete systems) up for grabs by 2021, and it is not at all inconceivable that Nvidia could capture a third of that if a large portion of it is machine learning training and inference.



  1. “and the mining of cryptocurrencies like Bitcoin and Ethereum ”
    Especially Ethereum; Bitcoin is mined mainly with ASICs. Some GPGPU-friendly cryptocoins are Ethereum, Ethereum Classic, Monero, and ZCash.

  2. Nvidia needs Volta for the AI markets and there is talk of Ampere as the successor to Pascal for consumer/gaming. The best business decision that Nvidia made was to focus on Volta’s Tensor Cores for the new hot AI market and the automotive markets with its DRIVE PX2 and other auto-related IP. In a few more business quarters Nvidia’s professional/non-gaming revenues will surpass Nvidia’s gaming-only GPU revenues, and Lisa Su over at AMD is also becoming more focused on the professional markets with its Epyc and Vega 10 die based Radeon Instinct and Radeon Pro WX SKUs for compute/professional graphics. AMD keeps the top performing Vega 10 dies for its pro market compute/AI GPU SKUs, with the not so performant Vega 10 dies going towards the consumer/gaming Vega 64/56 GPU SKUs that the miners (non-ASIC miners) are snapping up for mining at prices that are higher on average than even Nvidia gets for its top end gaming GPU SKUs. Coin mining has been as good to Nvidia this time around as it has been to AMD! And it’s mostly because of AMD’s GPU supply constraints that miners were forced to turn to Nvidia GPUs in the first place, because Nvidia’s consumer GPU SKUs are stripped of all but the amounts of shader cores necessary for gaming compared to AMD’s compute heavy GPU designs.

    Consumer markets alone are too fickle for any processor/technology company to focus too much attention on, as the real stable market with the most stable revenue/revenue growth is the professional markets built up around workstation/server/HPC and now the hot new AI markets. Consumer markets are mostly a low markup market with the thinnest of margins compared to the professional markets, where the largest markups can and do get paid by enterprises that will just write off the cost of CPU/GPU SKUs as an expense; not so for the consumer markets, as most consumers are not able to write off CPU/GPU costs as an expense.

    Nvidia’s JHH sure spends more time talking about AI and automotive than Nvidia talks about any Pascal replacements, as AMD has given Nvidia little reason to have to go beyond Pascal currently. Gamers may be a little bit disappointed with that first Vega micro-arch based Vega 10 base die that AMD uses currently for its Radeon Instinct/Radeon Pro WX 9100 professional SKUs, in addition to that same Vega 10 base die tapeout that is also used for the Vega 56/64 consumer gaming GPU variants. Vega 10 was a compute heavy design on purpose, as that’s all the limited funds that AMD had at the time allowed it to create; AMD only had the funds to do one base die tapeout compared to Nvidia’s five base die tapeouts (GP100, GP102, GP104, GP106, GP108).

    So Vega was designed more for compute than for gaming, where the Vega 10 die’s complement of 64 ROPs was only able to compete with Nvidia’s GP104 based GTX 1080, which likewise has a maximum complement of 64 ROPs. AMD stood no chance of ever competing with the GP102 based GTX 1080 Ti, with its 88 ROPs and the much higher pixel fill rates that those 88 out of the 96 available ROPs on the GP102 could provide.

    So AMD made a decision, a very good one considering AMD’s current discrete gaming GPU market share, to focus on compute/AI with that Vega 10 base die tapeout and forgo any top end GPU competition with Nvidia that AMD, with its limited resources, could not hope to win.

    Really, both AMD and Nvidia know where the real revenue/revenue growth potential is, and that’s not in any consumer gaming market only focus, as the professional and new non-consumer markets are where the real growth potential is and that AI market is just getting started. AMD will be spinning up a new high end Vega 20 base die tapeout at 7nm, but that is for a new Radeon Instinct replacement for the current MI25 compute/AI SKUs that are currently based on that one Vega 10 base die tapeout.

    I’m very certain that JHH over at Nvidia will pop the cork on the most expensive Dom Pérignon and celebrate once Nvidia’s total professional/automotive and other non consumer-gaming revenues represent the majority of Nvidia’s total revenues going forward. No hardware/processor maker in their right mind wants to be tied to only consumer/gaming as a source of revenues if you are only a processor maker and not also a retailer, as Apple currently is and can afford to be consumer focused with Apple’s total market cap.

    Nvidia and AMD are processor makers like Intel and thus have to rely on the professional markets for the markups/margins that really can pay the bills. Apple is a mega retailer that’s also a PC/Phone/Tablet/Laptop OEM, in addition to being the designer of the processors that go into its Tablet/Phone SKUs, and maybe in the future its PCs/Laptops. Apple is also a big service provider to its closed consumer hardware/software/OS ecosystem, and that’s more billions also. Nvidia has very little in the way of online services revenue potential with Nvidia’s limited game streaming via the cloud compared to Apple.

    Nvidia has that OpenPOWER GPU accelerator market also with its IBM partner, and that should be good for some extra growth in the professional markets, while AMD has its Epyc/Radeon Pro/Instinct production under one banner as far as package dealing goes for the Epyc/GPU accelerator markets! And each of those Project 47 petaflops supercomputer-in-a-cabinet SKUs gets loads of Epyc CPUs/SP3 MBs and 80 Vega 10 die based Radeon Instinct MI25s sold for AMD and its server and MB partners.
