
Nvidia Hitting On All GPU Cylinders

Even if Nvidia had not pursued a GPU compute strategy in the datacenter a decade and a half ago, the company would have turned in one of the best periods in its history as the first quarter of fiscal 2019 came to a close on April 29.

As it turns out, though, the company has a fast-growing HPC, AI, and cryptocurrency compute business that runs alongside its core gaming GPU, visualization, and professional graphics businesses, and Nvidia is booming. That is a six-cylinder engine of commerce, unless you break AI into training and inference (which is sensible) and then add database acceleration to make a balanced eight-cylinder engine.

In the fiscal first quarter, Nvidia’s revenues rocketed up 65.6 percent to $3.21 billion, and thanks to the tightness of supply on gaming GPUs caused by heavy demand from cryptocurrency miners, average selling prices are up and Nvidia is raking in the profits as too much demand chases too little supply. Net income rose an incredible 153.5 percent, to $1.29 billion, and represented 40 percent of revenues.
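
To put those percentages in context, here is a minimal back-of-the-envelope sketch in Python; the year-ago figures are implied by the quoted growth rates rather than pulled from Nvidia’s filings, so treat them as approximations.

```python
# Back-of-the-envelope check on the quarter: the year-ago figures below are
# implied from the quoted growth rates, not taken from Nvidia's filings.

revenue_q1_fy19 = 3.21e9       # fiscal Q1 2019 revenue, in dollars
net_income_q1_fy19 = 1.29e9    # fiscal Q1 2019 net income, in dollars

revenue_growth = 0.656         # 65.6 percent year on year
net_income_growth = 1.535      # 153.5 percent year on year

implied_revenue_q1_fy18 = revenue_q1_fy19 / (1 + revenue_growth)
implied_income_q1_fy18 = net_income_q1_fy19 / (1 + net_income_growth)
net_margin = net_income_q1_fy19 / revenue_q1_fy19

print(f"Implied Q1 FY2018 revenue:    ${implied_revenue_q1_fy18 / 1e9:.2f} billion")
print(f"Implied Q1 FY2018 net income: ${implied_income_q1_fy18 / 1e9:.2f} billion")
print(f"Q1 FY2019 net margin:         {net_margin:.0%}")
```

That implies roughly $1.94 billion in revenue and a touch over $500 million in net income in the year-ago quarter, which squares with the 40 percent net margin this time around.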

Let’s stop right there.

First, who says you can’t make money in hardware? That level of net margin is almost the same as Intel’s gross margin in its datacenter business. Nvidia’s gross margins for Tesla have to be well north of 75 percent at this point, we estimate, and the pricing environment (with prices up around 70 percent since last June, even on existing GPU accelerators from the “Kepler,” “Maxwell,” and “Pascal” generations) has turned Nvidia into a very profitable company indeed. The same phenomenon is happening with gaming GPUs.

And second, Wall Street was crabbing about the numbers in the wake of Nvidia’s announcement. Then again, Wall Street always wants more. The fact is, Nvidia’s strategy is paying off handsomely, and we can let the traders and quants argue about current and future share prices as company co-founder and chief executive officer Jensen Huang pitches in to help chief financial officer Colette Kress count the big bags of money piling up at corporate headquarters.

It is hard for Nvidia to reckon precisely how much money it made off of miners in the quarter, but Kress took a stab at it, saying that $289 million of the $387 million in total OEM revenue came from those mining Ethereum and other cryptocurrencies that are amenable to processing on GPUs. As Huang explained on the call with Wall Street analysts, the company creates special variants of its GPU cards, called CMP, which it peddles out of its OEM division (which also sells custom Tegra hybrid CPU-GPU cards), but even this division could not meet the cryptocurrency demand, and Huang conceded that many miners bought GeForce cards, driving up prices. In any event, perhaps as much as $350 million came from cryptocurrency miners. Looking ahead to the second fiscal quarter, Kress forecast that cryptocurrency sales of CMP and GeForce products would be about a third of this level. This market has its ups and downs, just like the HPC and hyperscale businesses do. You can see the spike in the OEM line in the chart below, which shows just what a big jump it was in Q1.
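
As a rough cut at that cryptocurrency arithmetic, here is a minimal Python sketch; the $350 million total and the second quarter number are our estimates based on what Kress said, not figures Nvidia reported.

```python
# Rough cut at the cryptocurrency revenue mix in fiscal Q1 2019. The $350 million
# total is our estimate (it includes GeForce cards bought by miners), and the
# one-third forecast is applied to that estimate, so the Q2 number is a guess.

oem_revenue_q1 = 387e6      # total OEM revenue in fiscal Q1
cmp_revenue_q1 = 289e6      # crypto-specific CMP boards, per Kress
crypto_estimate_q1 = 350e6  # our guess for all crypto-driven sales

cmp_share_of_oem = cmp_revenue_q1 / oem_revenue_q1
q2_crypto_forecast = crypto_estimate_q1 / 3   # "about a third of this level"

print(f"CMP share of OEM revenue:  {cmp_share_of_oem:.0%}")
print(f"Implied Q2 crypto revenue: ${q2_crypto_forecast / 1e6:.0f} million")
```

Applied to the $289 million CMP figure instead of our $350 million estimate, that forecast implies just under $100 million; either way, it is a steep falloff.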

In that core datacenter business, sales have slowed in recent quarters and are no longer growing at triple digits, but the thing to remember is that this market is very spiky but definitely on an upward trend. And moreover, no vendor and no market can sustain triple digit growth forever. We don’t know when the GPU compute business will hit its equilibrium, growing at the rate of gross domestic product at worst and at the rate of overall compute more likely, but we think that day is many years into the future. Nvidia’s datacenter business grew 71.4 percent year on year and was up 16 percent sequentially from the fourth quarter of fiscal 2018, which is pretty good considering that Nvidia has shipped many thousands of “Volta” Tesla V100 accelerators to IBM for use in the “Summit” and “Sierra” supercomputers being built for the US Department of Energy. Kress said that the HPC pipeline was building for Volta accelerators, particularly in the manufacturing and oil and gas sectors, and of course, GPUs are the engines of compute for machine learning training and are seeing uptake, particularly with the Pascals and Voltas, for machine learning inference. Kress said that shipments of GPUs for inferencing to hyperscalers and cloud builders more than doubled compared to the prior quarter, and with the combination of the GPUs and the TensorRT inferencing stack that Nvidia has cooked up, the pipeline for this will keep building. While Nvidia has inferencing engines in various devices, such as drones and cars, Huang said that the biggest opportunity for Nvidia in inferencing is in the glass house, and this is due to the flexibility and open programming model of the GPU.

“The largest inference opportunity for us is actually in the cloud and the datacenter,” explained Huang. “That is the first great opportunity. And the reason for that is there is just an explosion in the number of different types of neural networks that are available. There is image recognition, there is video sequencing, there is video recognition, there are recommender systems, there is speech recognition and speech synthesis and natural language processing. There are just so many different types of neural networks that are being created. And creating one ASIC that can be adapted to all of these different types of networks is just a real challenge.”

Public clouds are starting to ramp up Tesla V100s in their infrastructure, and we suspect that some are also buying Pascal Tesla P100s to meet demand. This is driving up sales, too.

The DGX line of hybrid CPU-GPU appliances is also adding to the datacenter revenues, and Huang said that it was now “a few hundred million dollar business.” We take this to mean that the annualized run rate for DGX system sales (including to Nvidia itself, so far its biggest customer) is running at a few hundred million dollars. The datacenter business as a whole has an annualized run rate of $2.8 billion based on the first quarter and has trailing twelve month sales of $2.22 billion, so DGX might represent 15 percent of datacenter revenues at this point, which is one reason why the DGX line exists and why Nvidia is not afraid to compete against its OEM, ODM, and cloud partners in servers.
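
For what it is worth, here is the back-of-the-envelope math on that DGX share; the size of the DGX business is a guess, since Huang only said it was a few hundred million dollars.

```python
# Back-of-the-envelope math on the DGX share of datacenter revenue. The DGX
# run rate is a guess; Huang only said "a few hundred million dollars."

datacenter_run_rate = 2.8e9       # annualized, based on fiscal Q1 2019
datacenter_trailing_12m = 2.22e9  # trailing twelve month datacenter sales

for dgx_guess in (300e6, 400e6):  # plausible bounds for "a few hundred million"
    print(f"DGX at ${dgx_guess / 1e6:.0f} million annualized: "
          f"{dgx_guess / datacenter_run_rate:.0%} of the run rate, "
          f"{dgx_guess / datacenter_trailing_12m:.0%} of trailing twelve month sales")
```

Anywhere in that range is roughly consistent with the 15 percent estimate above.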

The thing everyone wants to know is how this datacenter business will grow. In the first and second quarters of fiscal 2018, this business was nearly tripling in size, and in the third and fourth quarters it was just a tad more than doubling. It is hard to say for sure if the growth rate will stay around 70 percent or spike again as Volta ramps and becomes more widely available on clouds, driving more consumption for HPC and AI and database workloads and then generating more sales for Nvidia. This stuff is very hard to predict. Even with an AI model running on GPUs.
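
Purely for illustration, here is what a year at each of those growth rates would do to the $2.8 billion annualized run rate; these are hypothetical scenarios plugged into the compound growth arithmetic, not forecasts.

```python
# Purely illustrative: what a year at various growth rates would do to the
# datacenter run rate. These are hypothetical scenarios, not forecasts.

current_run_rate = 2.8e9   # annualized datacenter run rate, per fiscal Q1 2019

scenarios = {
    "GDP-like growth (our stand-in)": 0.03,
    "Current pace":                   0.70,
    "Doubling again":                 1.00,
    "Tripling again":                 2.00,
}

for name, growth in scenarios.items():
    projected = current_run_rate * (1 + growth)
    print(f"{name:<32} {growth:>4.0%} -> ${projected / 1e9:.1f} billion run rate")
```

Whichever scenario plays out, the trend line, as we said above, points up.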
