Turning The CPU-GPU Hybrid System On Its Head

Sales of various kinds of high performance computing – not just technical simulation and modeling applications, but also cryptocurrency mining, massively multiplayer gaming, video rendering, visualization, machine learning, and data analytics – run in little boom-bust cycles that make it difficult for suppliers to this market to make projections when they look ahead. But some suppliers are lucky in that they have claims staked out in so many of these markets that the swings tend to even out on average over time.

Since Nvidia expanded from just doing graphics for PCs and workstations into the broader arena of compute with its GPU motors, the company has been able not only to grow its business – in terms of revenue, profits, and influence – but to even it out a bit because the different revenue streams rise and fall together along a general and pretty decent upward trend. The channel for GPU cards for gamers has never seen such good performance and price/performance, but both AMD and Nvidia have pushed a little too much inventory into the channel in recent quarters and have to wait until sellers burn it off. Both companies have also been hit by a downdraft in sales of the specialized GPUs they created for cryptocurrency mining. And that is why overall growth in their GPU businesses has not been as high as many were expecting as 2018 got underway.

That said, Nvidia is still growing like crazy compared to its peers in compute, and it is growing profits faster than revenues, which is always a good thing whether the company is public or private.

In Nvidia’s third quarter of fiscal 2019, ended in October, the GPU maker pulled in $3.18 billion in sales, up 20.7 percent from the year-ago period. Net income grew by 46.8 percent to hit $1.23 billion. Not too long ago – five years, to be precise – if Nvidia broke through the $1 billion level of sales for an entire quarter, it was a big deal. Now Nvidia is turning in that level of profits on sales of its GPU products and services. This is a remarkable accomplishment, and it is a testament to the fractal diversification that Nvidia has imagined and then delivered upon with its successive generations of GPUs. This GPU business has its ups and downs, to be sure, but the trend is clearly up and to the right.
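For those who like to see how the growth rates translate into absolute numbers, here is a quick back-of-the-envelope sketch in Python that derives the implied year-ago figures from the percentages cited above; the year-ago values are inferred from those growth rates, not pulled from Nvidia’s actual filings.

```python
# Back-of-the-envelope check of the year-over-year figures cited above.
# The Q3 FY2019 numbers come from the story; the year-ago values are
# derived from the stated growth rates, not from Nvidia's filings.

q3_fy19_revenue = 3.18      # billions of dollars, quarter ended October 2018
q3_fy19_net_income = 1.23   # billions of dollars
revenue_growth = 0.207      # 20.7 percent year-on-year
net_income_growth = 0.468   # 46.8 percent year-on-year

year_ago_revenue = q3_fy19_revenue / (1 + revenue_growth)
year_ago_net_income = q3_fy19_net_income / (1 + net_income_growth)
net_margin = q3_fy19_net_income / q3_fy19_revenue

print(f"Implied Q3 FY2018 revenue:    ${year_ago_revenue:.2f} billion")
print(f"Implied Q3 FY2018 net income: ${year_ago_net_income:.2f} billion")
print(f"Q3 FY2019 net margin:         {net_margin:.1%}")
```

That works out to roughly $2.64 billion in revenue and about $840 million in net income a year ago, and a net margin of close to 39 percent in the current quarter.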

Nvidia puts a tremendous amount of money into research and development, which is necessary to stay on the leading edge: it spent $605 million in the current quarter and $1.73 billion in the first nine months of this fiscal year. To put that into perspective, Hewlett Packard Enterprise spent only $1.22 billion on R&D in the first three quarters of its fiscal 2018, which ended in October (the final quarter has not yet been reported), while Dell spent roughly $1.1 billion per quarter over the same period. Nvidia is a chip designer, so it has high development costs, but it is also a software developer and system integrator now, too, so it is increasingly covering more of the modern HPC/AI stack. Dell is an $80 billion company and HPE comes in at about $32 billion a year, making them 6X and 2.5X bigger than Nvidia, respectively. In the fiscal third quarter, Nvidia’s R&D bill grew by 30 percent, much faster than revenue, which probably means some new GPU motors are in the works; this increased cost was partially offset by a $149 million tax benefit, which is one of the reasons why net income jumped so high in Q3. But even without that benefit, Nvidia can command a premium for its products and would have been able to bring over $1 billion to the bottom line, with income growing in lockstep with revenue.
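To put those R&D budgets on a common footing, here is a rough sketch that annualizes the spending figures cited above and expresses them as a share of each company’s revenue; the run-rate extrapolations are our own, derived only from the numbers in this story, and are not fiscal-year totals from the companies’ filings.

```python
# Rough R&D intensity comparison (R&D spending as a share of revenue),
# annualized from the partial-year figures cited in the story. These are
# crude run-rate extrapolations, not reported fiscal-year totals.

companies = {
    # name: (annualized R&D in $billions, annual revenue in $billions)
    "Nvidia": (1.73 * 4 / 3, 3.18 * 4),  # nine months of R&D and Q3 revenue, annualized
    "Dell":   (1.1 * 4, 80.0),           # ~$1.1B per quarter, ~$80B in revenue
    "HPE":    (1.22 * 4 / 3, 32.0),      # three quarters of R&D annualized, ~$32B in revenue
}

for name, (rd, revenue) in companies.items():
    print(f"{name:7s} R&D ~${rd:.2f}B on ~${revenue:.0f}B revenue "
          f"-> {rd / revenue:.1%} of revenue")
```

On that rough math, Nvidia is plowing something like 18 percent of revenue back into R&D, compared to around 5 percent to 6 percent for Dell and HPE.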

In the thirteen weeks ended in October, sales of gaming GPU chips (used by Nvidia and several manufacturing partners) as well as Nvidia’s own GPU cards rose by 13 percent to $1.76 billion, a little off the recent pace because the channel is a bit full and because of the fairly quick dropoff in sales of GPUs for cryptocurrency mining – issues that have hit AMD as well in its most recent quarter. The falloff in demand for GPUs that have been tweaked for miners of Ethereum and other cryptocurrencies was sharper than many expected, and Nvidia took a $57 million charge against components aimed at miners in the quarter.

In the old days of Nvidia, the GeForce gaming GPUs were the foundation on which the higher end Quadro visualization and workstation cards were built; these were used in scientific workstations for the most part and then in some larger visualization systems employed by HPC centers and the defense and intelligence industries. Now, it is safe to say that the highest end products from Nvidia are aimed at actual compute at HPC centers, cloud builders, and hyperscalers, who use them for simulation, modeling, and machine learning workloads, and that the technologies that start at the top trickle down. For instance, the technology used in the “Turing” Tesla T4 accelerators is a derivative of the “Volta” compute architecture, with Tensor Core units retuned for machine learning inference and dedicated RT cores added for dynamic ray tracing. That datacenter business, which includes sales of DGX-1 and DGX-2 systems (the latter with its integrated NVSwitch interconnect for lashing together 16 Volta GPU accelerators into a shared memory complex with massive bandwidth), grew by 58.1 percent year-on-year to $792 million in the quarter.

Two years ago, Nvidia’s datacenter business was tripling and this time last year it was doubling, and it is easy to jump to the conclusion that GPU compute in the datacenter is reaching its natural level. Not that a $4 billion business is a bad thing, mind you, particularly with something probably on the order of 80 percent (or possibly higher) gross margins. This is a fabulous business. And we are so early in the days of GPU acceleration for either HPC or AI workloads – and databases are coming along, too – that this could just be a local flattening before growth accelerates again. With close to 600 HPC applications being accelerated by GPUs and just about all machine learning frameworks dependent on GPUs for training, there could be a significant further expansion of Nvidia’s datacenter business as GPU acceleration goes mainstream.

That’s what Nvidia co-founder and chief executive officer, Jensen Huang, sees in the crystal ball he keeps in his leather jacket.

“We know that Moore’s Law has ended,” Huang explained on a call with Wall Street analysts this week as Nvidia was announcing its financial results. “While demand for computing continues to grow and more and more of the datacenter is running machine learning algorithms, which is computational and really intensive, the only way to increase computational resource is to buy more computers, and that means buying more CPUs because each one of those CPUs aren’t getting much faster. And so as a result of that, the datacenter capex would have to go up. One of the reasons why the adoption of Nvidia’s accelerated computing platform is growing so fast is because the approach that we provide allows for a path forward beyond Moore’s Law.”

That said, at some point the revenue curve will get less steep and simply follow capacity increases by an established and broad installed base of customers – which should include all the major clouds and hyperscalers plus large industries where some or all of their workloads can be ported to GPUs. At that point, it will grow or shrink much like the overall server business did before the hyperscalers and cloud builders started selling compute capacity by the hour, or offering services running on their own infrastructure for free or for a nominal fee compared to what it costs to run certain applications in-house. Competition will also have its effects on that datacenter GPU revenue curve, particularly as AMD ramps up its “Vega 20” Radeon Instinct MI60 cards for HPC and some AI workloads.

Nvidia may, in fact, find itself in need of a CPU of its own to handle serial work to keep its datacenter business growing. In a sense, a processor linked to a cluster of Nvidia Tesla GPU accelerators by NVLink or PCI-Express ports ends up being a serial accelerator for the GPU code and, with shared memory, a kind of scratchpad memory for the GPUs. Most of the compute – and the greatest diversity of data types and the highest memory bandwidth – is sitting out on the GPUs, not the CPUs. It shows in the Top500 supercomputer rankings. And the question is, if Nvidia has put X GeForce, Quadro, and Tesla GPUs in the datacenter running CUDA applications, what multiple of X CPUs did that block from getting into the datacenter that might have otherwise been sold? Is that 3X, or 5X or 10X? Or, better still, are there certain things that would have never gotten done in the first place, such as machine learning with anything near 95 percent accuracy or exascale HPC computing, without GPU acceleration?
