The excitement for new video games, the machine learning software revolution, the buildout of very large supercomputers based on hybrid CPU-GPU architectures, and the mining of cryptocurrencies like Bitcoin and Ethereum have combined into a quadruple whammy that is driving Nvidia to new heights in revenues, profits, and market capitalization. And thus it is no surprise that Nvidia is one of the few companies bucking the trend in a very tough couple of weeks on Wall Street.
But having demand spike for both its current “Volta” GPUs, which are aimed at HPC and AI compute, and its prior “Pascal” GPUs, which target gaming, workstations, and cryptocurrency mining, is both a blessing and a curse – though one that, at least financially and at least in the short term, works out in Nvidia’s favor as demand exceeds supply.
Transforming Nvidia from a company that supplies graphics processors focused on a relatively small corps of elite gamers to one that provides a wide range of compute and graphics across disparate markets that can equally benefit from the common engineering in its products has taken a long time. But it certainly has paid off. And in the short term, as cryptocurrency miners are chasing the same GPU cards as gamers, and as machine learning trickles down from the hyperscalers to service providers and enterprises of all sorts and requires the same Tesla accelerators that are used for virtual workstations on public clouds, this is all creating a virtuous demand cycle that is expanding the market.
In the quarter ended in January, Nvidia’s revenues rose 34 percent to $2.91 billion. Gross margins were up, we believe due to product mix and huge demand for products, allowing Nvidia to charge a premium price, and the bottom line was helped, too, by a $133 million tax benefit in the wake of the tax overhaul in the United States by the Trump administration. As a result of these factors, Nvidia’s net income rose by 70.7 percent to $1.12 billion. Nvidia just paid $355 million for its new headquarters in Santa Clara, and Colette Kress, chief financial officer at the company, suggested on a call with Wall Street analysts going over the fourth quarter fiscal 2018 results that the chip maker might invest more heavily in its own supercomputer infrastructure to bolster its AI research and chip designs. (It already has plans for a 660 node hybrid CPU-GPU cluster, the next generation Saturn V cluster.) Nvidia ended the quarter with $7.1 billion in cash and investments and a modest $2 billion in debt. Kress said that Nvidia expected sales of $2.9 billion in its first quarter of fiscal 2019, which ends in April, plus or minus two percent, more or less matching the quarter it just finished.
For now, the markets are expanding faster than the supply, so the profits at Nvidia and its downstream partners are on the rise. But if Taiwan Semiconductor Manufacturing Co cannot ramp up production on the various Pascal and Volta GPUs that Nvidia sells, the shortages will present an opportunity for AMD, whose partner GlobalFoundries is aggressively ramping up production at its Fab 8 chip factory in Malta, New York, and for those companies that have designed specialized chips for machine learning and cryptocurrency mining. The trick will be for Nvidia and TSMC to ramp volumes fast enough to ease demand pressures while also increasing yields so that profitability levels remain high. It seems likely that this can – and will – happen. If it does not, however, then Nvidia and TSMC will sow the seeds of success for alternative technologies.
Setting aside the concerns about demand for its GPUs exceeding supply, which Nvidia co-founder and chief executive officer Jensen Huang referred to time and time again on a call with Wall Street analysts going over its financial results for the fourth quarter of fiscal 2018, which ended in January, the picture could not be rosier at Nvidia. The company is reaping the returns on its investments in the Pascal GPUs, which have been in the market for two years now and will still be viable for a while to come, while the Volta GPUs, launched last May and ramping through 2017 and into early 2018, have an even longer life ahead of them. The GeForce gamer and Quadro workstation cards still don’t have Volta variants, and those should come later this year – maybe even by the GPU Technology Conference, which was moved back to March this year after being held in May last year to coincide with the rollout of Volta.
Given this, it is no surprise that the company has not talked publicly about future GPU roadmaps. The Volta GV100 GPU is already etched with an ultra-advanced 12 nanometer process at TSMC, which Nvidia needed to bring Volta to market with the feeds and speeds that both AI and HPC customers required in the 2018 timeframe. The foundries are not yet ready to deliver 10 nanometer chips in volume, and 7 nanometers is still a lot closer to development than it is to production at the four remaining major fabs – that’s Intel, Samsung, TSMC, and GlobalFoundries.
It is very hard to model any company that has such new and explosive markets, and there is nothing to be disappointed with in Nvidia’s numbers, but it is a simple fact that no market can quadruple or triple or even double forever. While the market was awaiting the Pascal accelerators, which were expected to transform both AI and HPC computing, Nvidia’s Tesla datacenter compute business bounced along in fiscal 2015 and early fiscal 2016. Then, in fiscal 2017, when Pascal started shipping, it exploded, doubling and then tripling year on year in successive quarters, pushing the Tesla and GRID business into the stratosphere and giving it a $2 billion annualized run rate as autumn came to calendar 2017.
As is the case for other vendors of high performance computing products, it is perhaps best to look at Nvidia on an annual, rather than quarterly, basis.
In fiscal 2015, the datacenter business generated $317 million in revenues, and in the following fiscal year it rose by only 7 percent to $339 million. Even if this business had not exploded – even if all Nvidia was doing was HPC application acceleration with GPUs – this would be a good and profitable business for Nvidia, and a smart way to cover the high costs of GPU research and development. But that’s not what happened. Machine learning took off like crazy in 2017 at the same time as HPC started investing heavily in GPUs, too, and sales in the datacenter division at Nvidia went up by a factor of 2.45X in fiscal 2017 to $830 million. Here in fiscal 2018, growth slowed a bit, with sales of $1.93 billion, up 2.33X over the prior year. The point is, this $2 billion run rate looks sustainable, and provided supply constraints can be eased, it looks like Nvidia can grow from this baseline to new heights. Our low end, conservative estimate is that, due to supply constraints, the business might only grow by a little more than 2X in fiscal 2019, kissing $4 billion in sales and exiting the year with a $5 billion annualized run rate.
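The growth factors cited above can be checked with a quick back-of-the-envelope sketch, using the datacenter revenue figures from the text (in millions of dollars):

```python
# Nvidia datacenter division revenue by fiscal year, in millions of
# dollars, as cited in the text.
revenue = {2015: 317, 2016: 339, 2017: 830, 2018: 1930}

# Year-over-year growth factor for each fiscal year.
for year in range(2016, 2019):
    factor = revenue[year] / revenue[year - 1]
    print(f"FY{year}: {factor:.2f}X")
```

Running this reproduces the multiples in the text: roughly 1.07X for fiscal 2016, 2.45X for fiscal 2017, and 2.33X for fiscal 2018.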
There is no reason to believe that the desire for advanced GPUs by gamers will abate, or that the market for gamers will not continue to rise from 200 million people today, maybe doubling again over the next few years. Given this, it is hard to bet against the core GeForce GPU business at Nvidia, and it is equally hard to bet that the company’s engineers will not think of clever ways to use transistors, memory, and interconnects to build great GPUs. But still, let’s say it becomes more of a game of price/performance, with AMD and Intel both ramping up the pressure. And let’s also say that the datacenter business slows as competitive pressures start there, too, assuming that AMD can get a GPU out the door that does mixed precision, including 64-bit floating point, which it badly needs.
Here is what that scenario might look like:
Given a 30 percent growth rate in gaming GPUs in fiscal 2019 and growth of only 25 percent in fiscal 2020 and fiscal 2021, the gaming division will hit $11.2 billion three years from now. Assuming that the datacenter business will be able to grow at 80 percent a year, then in fiscal 2021, this will be an $11.3 billion business. (Yes, we plugged in numbers just to see what would be needed for them to match.) It could turn out that the markets for GPUs aimed at gaming and at AI/HPC are both saturated, or that they grow more slowly than we think. Assuming that gaming and datacenter both fall off because they are approaching the upper limits of their markets, the future might look like this:
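The compound-growth arithmetic behind that first scenario can be sketched as follows. The fiscal 2018 datacenter baseline of $1.93 billion comes from the figures above; the gaming baseline of roughly $5.5 billion is not stated in the text and is an assumption, worked backward from the $11.2 billion fiscal 2021 figure:

```python
# Scenario sketch: compound the assumed growth rates over three fiscal years.
gaming = 5.5       # billions of dollars, fiscal 2018 (assumed baseline)
datacenter = 1.93  # billions of dollars, fiscal 2018 (from the text)

gaming_rates = [0.30, 0.25, 0.25]  # fiscal 2019, 2020, 2021
datacenter_rate = 0.80             # 80 percent per year for three years

for rate in gaming_rates:
    gaming *= 1 + rate
for _ in range(3):
    datacenter *= 1 + datacenter_rate

print(f"Fiscal 2021 gaming: ${gaming:.1f} billion")        # ~$11.2 billion
print(f"Fiscal 2021 datacenter: ${datacenter:.1f} billion")  # ~$11.3 billion
```

Which is how the two lines end up crossing: an 80 percent annual clip closes the gap on a much larger base growing at 25 to 30 percent in just three years.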
We wholly admit that we just like the scenarios where Nvidia’s datacenter business surpasses its gaming business. But there are scenarios where the growth just goes linear, and these look like fun, too:
If that did happen, and the growth just keeps on a-going, then the Nvidia datacenter business would hit $30 billion by 2021 and would be twice as large as gaming. In such a scenario, Nvidia would have to sell a tremendous number of GPU accelerators – very likely more than TSMC and GlobalFoundries together could make using advanced processes – and a tremendous number of CPUs in the world would have to be unplugged. This is the wildest dream of Jensen Huang and the nightmare of Intel chief executive officer Brian Krzanich, whose own Data Center Group had an annualized run rate of $22 billion as 2017 came to an end and whose “real” datacenter business – including not just server chips, chipsets, and motherboards but also storage, FPGAs, and other goodies – was humming along at an even larger $28 billion.
But think about this. Imagine a world where half the processor money is spent on CPUs and half on GPUs. We don’t think this is a likely scenario, nor do we think there is $60 billion in server chippery up for grabs only four years from now. Our point, perhaps, is that there is going to be a big fight for the hearts and minds of compute in the datacenter, and it is a whole lot more complicated than just Intel versus Nvidia. There might be $20 billion to $25 billion in datacenter compute (expressed by sales of CPUs, GPUs, and FPGAs, not complete systems) up for grabs by 2021, and it is not at all inconceivable that Nvidia could capture a third of that if a large portion of it is machine learning training and inference.