Pascal GPUs On All Fronts Push Nvidia To New Highs
November 12, 2016 Timothy Prickett Morgan
Chip maker Nvidia was founded by people who loved gaming and who wanted to make better 3D graphics cards, and decades later, the company has become a force in computing, first in HPC and then in machine learning and now database acceleration. And it all works together, with gaming graphics providing the foundation on which Nvidia can build a considerable compute business, much as Intel’s PC business provided the foundation for its Xeon assault on the datacenter over the past two and a half decades.
At some point, Nvidia may not need an explicit link to PC graphics and gaming to have a self-sustaining datacenter compute business based on Tesla and GRID accelerator cards and now DGX-1 systems. Just like Intel arguably does not need the Core PC chip business to justify the existence of a very large and profitable Xeon server chip business. The two are evenly matched, in terms of profits, but the synergies are still there that allow Intel to do its process ramps on the smaller PC chips that have higher volumes and then build bigger Xeon chips that have lower volumes on a much more mature process.
The point we are trying to make is that Nvidia is clearly hitting on all cylinders in its fiscal third quarter, which ended in late October, with record revenues and profits, busting through $2 billion in sales for the first time.
One of the interesting bits for us is that after ten years of investment in Tesla accelerated computing and the expansion into GRID remote graphics virtualization accelerators, Nvidia’s Datacenter group has surpassed the Professional Visualization group to become the second largest revenue producer for the company. The difference is not huge, mind you, but as you can see from the chart below, the trajectories for these two businesses are quite a bit different:
From where we stand, looking at this data, it sure does look like the Datacenter group is poised to enter the kind of growth cycle that Nvidia has been building for and hoping for since it first started down this path with CUDA computing ten years ago. With an annualized run rate of close to $1 billion, GPU-accelerated computing has become more mainstream and, importantly, GPUs are still the preferred way to accelerate HPC simulations (if they have acceleration at all) and for training neural networks in machine learning (there are not really good alternatives for this, but everyone is chasing it). Database acceleration with GPUs is still nascent, but it will expand, too.
“I think that we are moving our datacenter business in multiple trajectories,” Nvidia co-founder and CEO Jen-Hsun Huang explained to Wall Street analysts when going over the numbers for the third quarter of fiscal 2017. “The first trajectory is the number of applications we can run. Our GPUs now have the ability, with one architecture, to run all of those applications, from graphics virtualization to scientific computing to AI. Second, we used to be in datacenters, but now we are in datacenters, supercomputing centers, as well as hyperscale datacenters. And then third, the number of industries that we affect is growing. It used to start with supercomputing. Now, we have supercomputing, we have automotive, we have oil and gas, we have energy discovery, we have financial services industry, we have, of course, one of the largest industries in the world, consumer Internet cloud services. And so we are starting to see applications in all of those different dimensions. And I think that the combination of those three things, the number of applications, the number of platforms and locations by which we have success, and then, of course, the number of industries that we affect, the combination of that should give us more upward trajectory in a consistent way. But I think the mega point, though, is really the size of the industries we are now able to engage. In no time in the history of our company have we ever been able to engage industries of this magnitude. And so that is the exciting part, I think, in the final analysis.”
And hence, we think that the Datacenter group at Nvidia is getting set to hockey stick up like gaming has, driving revenues to new highs from the kinds of customers that Nvidia originally aimed at and that its founders themselves were so many years ago, back when a graphics card was, by comparison to a “Pascal” GPU today, as primitive as a clay tablet. It is hard to say for sure, but we think that the Datacenter group, which nearly tripled its sales, will stay at this level before dropping back to a doubling and then to the more normal 50 percent, 25 percent, and 10 percent growth rates. (Cisco’s Unified Computing System blade servers grew like this for seven years before finding their natural level.) With the hyperscalers really powering the business – driving about half of the Datacenter group business – and HPC centers doing their part along with other enterprise sectors kicking in, this is just a more stable business, one poised to drive perhaps $2 billion and maybe $3 billion a year in revenues for Nvidia in the fullness of time as AI takes off and supercomputing finds an ally in it. Companies are going to want to deploy as few different accelerated systems as possible, and we think a GPU will be a factor in future systems architecture. We have said it before, and we will say it again: CPUs for things that change a lot, GPUs for throughput, and FPGAs for things that don’t change a lot and need acceleration. Every workflow inside the datacenter has all of these elements, so every datacenter will have them as well.
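To make that growth taper concrete, here is a back-of-the-envelope sketch of where the Datacenter group could land if, starting from its current run rate of roughly $1 billion a year, growth steps down the way we describe: a doubling, then 50 percent, 25 percent, and 10 percent annual growth. The starting figure comes from the article; the projection is purely illustrative, not a forecast.

```python
# Illustrative growth taper for Nvidia's Datacenter group, in $B of
# annual revenue. Starting point: roughly $0.96B annualized (the
# $240M quarter cited in the article, times four).
revenue = 240 * 4 / 1000  # $0.96B

# Growth rates stepping down: 100%, 50%, 25%, 10%.
for growth in [1.00, 0.50, 0.25, 0.10]:
    revenue *= 1 + growth
    print(f"after {growth:.0%} growth year: ${revenue:.2f}B")
# The first two steps land at $1.92B and then $2.88B, which is how we
# get to the "$2 billion and maybe $3 billion a year" range above.
```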
In the quarter, Nvidia booked just a tad more than $2 billion in revenues, up 54 percent, and it brought $542 million to the bottom line, an increase of 120 percent over the year-ago period. Any time a company grows income twice as fast as revenues, that is called winning, and it doesn’t get much better than that – except when you post triple-digit increases in income, which Nvidia did. There is no other way to say it except that Nvidia had a great quarter. Period. It did so despite all of the intense competition, and we think, as does Huang, because it didn’t just make a great GPU, as its archrival AMD certainly can do and as its other archrival Intel certainly could do. What Nvidia has done is build a platform, and that is why this is working. It just takes patience and time and hard work to do it, and now the opportunity for Nvidia is as great as the competition its success will – indeed, has – engendered.
“I think the size of the marketplace that we are addressing is really larger than any time in our history,” Huang said. “And probably, the easiest way to think about it is we are now a computing platform company. We are simply a computing platform company and our focus is GPU computing, and one of the major applications is AI.”
The next platform, we think, will combine simulation, modeling, visualization, and machine learning in a holistic fashion. And Nvidia is building frameworks, with the help of hyperscalers and HPC centers, that do all of this on its devices, with the CPUs in the systems being relegated to a kind of butler to the GPUs, fetching them things and keeping things tidy.
In the quarter, the Datacenter group posted sales of $240 million, up 193 percent, nearly triple the $82 million Nvidia posted for this group a year ago. We have only a slim idea what the margins are for the Tesla and GRID units, but we do know they are higher than for any other products Nvidia sells, including Quadro graphics cards for professional graphics or GeForce gaming cards. The Gaming group posted a stunning 63 percent growth in the quarter, bringing in $1.24 billion in sales. This is driven by a few things, but notably the fact that Nvidia’s addressable market for gamers has grown from maybe 60 million users worldwide in the wake of the Great Recession to more than 100 million these days. Graphics determine the gaming experience more than any other factor, and people are willing to pay a premium for that experience. Nvidia is able to engineer that experience and get it to market in a fairly consistent manner. By doing that, it sets the technology stage for its Datacenter and Professional Visualization groups, ending up in things like top-end Cray supercomputers.
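The Datacenter figures above are easy to sanity check; a quick calculation using the quarterly numbers from the article confirms both the growth rate and the near-$1 billion annualized run rate mentioned earlier.

```python
# Sanity check of the Datacenter group figures cited in the article,
# in millions of dollars.
prior_q = 82     # year-ago quarter
current_q = 240  # this quarter

growth_pct = (current_q - prior_q) / prior_q * 100
run_rate = current_q * 4  # naive annualization: four equal quarters

print(f"year-on-year growth: {growth_pct:.0f}%")  # prints 193%
print(f"annualized run rate: ${run_rate}M")       # prints $960M, close to $1B
```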
Growth in that visualization group was more muted, up only 9 percent year-on-year to $207 million. To be sure, companies in the manufacturing sector continue to invest in high-end workstations to do design work, but over time, this may shift to the cloud and that is where GRID will see the bump. The important thing is, whether it is Quadro cards or GRID cards, whatever the market decides to do with professional graphics and design applications, Nvidia has it covered.
Here at The Next Platform we don’t have much nice to say about self-driving cars, so go ahead, put Tesla compute in Tesla cars to your heart’s content. (Yes, we know it is Tegra compute in the Tesla cars, but it sounded funnier that way.) We want none of that, so just stay out of our way on the road. We will add that Nvidia is bringing its platform play to the automotive sector, with all of its HPC and machine learning might, and it has been able to show consistent growth here. In the third quarter, revenues for the Automotive group were up 61 percent to $127 million. The OEM and intellectual property business is humming along for Nvidia, adding $186 million to the kitty in the quarter.
Looking ahead, Nvidia is expecting sales of around $2.1 billion in its fiscal fourth quarter ending in January, and we think it has a very good chance for continued growth in the Datacenter group in 2017 as GPU acceleration takes hold even more. A lot depends on whether there is a global recession, and so far, no AI algorithm can predict that. But a recession might help Nvidia’s cause more than it hurts it in the datacenter. The last recession certainly worked to make Xeon the king of compute in the datacenter.