Pascal GPUs On All Fronts Push Nvidia To New Highs

Chip maker Nvidia was founded by people who loved gaming and who wanted to make better 3D graphics cards, and decades later, the company has become a force in computing, first in HPC and then in machine learning and now database acceleration. And it all works together, with gaming graphics providing the foundation on which Nvidia can build a considerable compute business, much as Intel’s PC business provided the foundation for its Xeon assault on the datacenter over the past two and a half decades.

At some point, Nvidia may not need an explicit link to PC graphics and gaming to have a self-sustaining datacenter compute business based on Tesla and GRID accelerator cards and now DGX-1 systems, just as Intel arguably does not need the Core PC chip business to justify the existence of a very large and profitable Xeon server chip business. The two are evenly matched in terms of profits, but the synergies are still there: Intel ramps its new processes on the smaller PC chips that have higher volumes, and then builds bigger Xeon chips that have lower volumes on a much more mature process.

The point we are trying to make is that Nvidia is clearly firing on all cylinders, posting record revenues and profits in its fiscal third quarter and busting through $2 billion in sales for the first time.

One of the interesting bits for us is that after ten years of investment in Tesla accelerated computing and the expansion into GRID remote graphics virtualization accelerators, Nvidia’s Datacenter group has surpassed the Professional Visualization group to become the second largest revenue producer for the company. The difference is not huge, mind you, but as you can see from the chart below, the trajectories for these two businesses are quite a bit different:

[Chart: Nvidia quarterly revenue by reporting group]

From where we stand, looking at this data, it sure does look like the Datacenter group is poised to enter the kind of growth cycle that Nvidia has been building for and hoping for since it first started down this path with CUDA computing ten years ago. With an annualized run rate of close to $1 billion, GPU-accelerated computing has become more mainstream and, importantly, GPUs are still the preferred way to accelerate HPC simulations (if they have acceleration at all) and for training neural networks in machine learning (there are not really good alternatives for this, but everyone is chasing it). Database acceleration with GPUs is still nascent, but it will expand, too.

“I think that we are moving our datacenter business in multiple trajectories,” Nvidia co-founder and CEO Jen-Hsun Huang explained to Wall Street analysts when going over the numbers for the third quarter of fiscal 2017. “The first trajectory is the number of applications we can run. Our GPUs now have the ability, with one architecture, to run all of those applications, from graphics virtualization to scientific computing to AI. Second, we used to be in datacenters, but now we are in datacenters, supercomputing centers, as well as hyperscale datacenters. And then third, the number of industries that we affect is growing. It used to start with supercomputing. Now, we have supercomputing, we have automotive, we have oil and gas, we have energy discovery, we have financial services industry, we have, of course, one of the largest industries in the world, consumer Internet cloud services. And so we are starting to see applications in all of those different dimensions. And I think that the combination of those three things, the number of applications, the number of platforms and locations by which we have success, and then, of course, the number of industries that we affect, the combination of that should give us more upward trajectory in a consistent way. But I think the mega point, though, is really the size of the industries we are now able to engage. In no time in the history of our company have we ever been able to engage industries of this magnitude. And so that is the exciting part, I think, in the final analysis.”

And hence, we think that the Datacenter group at Nvidia is getting set to hockey stick up like gaming has, driving revenues to new highs among the kinds of customers that Nvidia originally aimed at, and that its founders themselves were so many years ago, back when a graphics card, compared to a “Pascal” GPU today, was about as primitive as a clay tablet. It is hard to say for sure, but we think that the Datacenter group, which nearly tripled its sales, will hold this growth level before dropping back to a doubling, and then to the more normal 50 percent, 25 percent, and 10 percent growth. (Cisco’s Unified Computing System blade servers grew like this for seven years before finding their natural level.) With the hyperscalers really powering the business – driving about half of the Datacenter group revenues – and HPC centers doing their part along with other enterprise sectors kicking in, this is becoming a more stable business, one poised to drive $2 billion and maybe $3 billion a year in revenues for Nvidia in the fullness of time as AI takes off and supercomputing finds an ally in it. Companies are going to want to deploy as few different accelerated systems as possible, and we think a GPU will be a factor in the future systems architecture. We have said it before, and we will say it again: CPUs for things that change a lot, GPUs for throughput, and FPGAs for things that don’t change a lot and need acceleration. Every workflow inside the datacenter has all of these elements, so every datacenter will have them as well.
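To make that growth schedule concrete, here is a purely illustrative sketch (our own arithmetic, not Nvidia guidance) that plays out a doubling followed by 50, 25, and 10 percent growth against the Datacenter group's roughly $1 billion annualized run rate:

```python
# Illustrative sketch of the growth path speculated about above; not a
# forecast. Starts from the ~$1B annualized run rate implied by the
# $240M quarter just reported.
run_rate = 4 * 0.240                        # ~$0.96B annualized, in $B
growth_schedule = [2.00, 1.50, 1.25, 1.10]  # double, then +50%, +25%, +10%

trajectory = []
for factor in growth_schedule:
    run_rate *= factor
    trajectory.append(round(run_rate, 2))

# The path passes through the $2B-to-$3B range mentioned above.
print(trajectory)
```

Under those assumptions the run rate roughly doubles to about $1.9 billion, then climbs toward $3 billion and beyond as the growth rate tapers.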

[Chart: Nvidia quarterly revenue and net income]

In the quarter, Nvidia booked a tad more than $2 billion in revenues, up 54 percent, and it brought $542 million to the bottom line, an increase of 120 percent over the year-ago period. Any time a company grows income twice as fast as revenues, that is called winning, and it doesn’t get much better than that – except when you post triple-digit increases in income. Which Nvidia did. There is no other way to say it except that Nvidia had a great quarter. Period. It did so despite all of the intense competition, and, we think (as does Huang), because it did not just make a great GPU, as its archrival AMD certainly can do and as its other archrival Intel certainly could do. What Nvidia has done is build a platform, and that is why this is working. It just takes patience and time and hard work to do it, and now the opportunity for Nvidia is as great as the competition its success will – indeed, has – engendered.
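A quick back-of-envelope sketch (assuming revenue of roughly $2.0 billion; the exact figure was a touch higher) shows what those growth rates imply about the year-ago quarter:

```python
# Back out the year-ago quarter implied by the stated growth rates.
# Assumes revenue of roughly $2.0B ("a tad more than $2 billion").
revenue, revenue_growth = 2.0, 0.54    # $B, up 54 percent year-on-year
income, income_growth = 0.542, 1.20    # $B, up 120 percent year-on-year

prior_revenue = revenue / (1 + revenue_growth)  # implied year-ago revenue
prior_income = income / (1 + income_growth)     # implied year-ago income

# Income grew a bit more than twice as fast as revenue, as noted above.
print(f"year ago: ${prior_revenue:.2f}B revenue, ${prior_income:.3f}B income")
```

That works out to roughly $1.30 billion in revenue and $246 million in net income a year earlier.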

“I think the size of the marketplace that we are addressing is really larger than any time in our history,” Huang said. “And probably, the easiest way to think about it is we are now a computing platform company. We are simply a computing platform company and our focus is GPU computing, and one of the major applications is AI.”

The next platform, we think, will combine simulation, modeling, visualization, and machine learning in a holistic fashion. And Nvidia is building frameworks, with the help of hyperscalers and HPC centers, that do all of this on its devices, with the CPUs in the systems being relegated to a kind of butler to the GPUs, fetching them things and keeping things tidy.

In the quarter, the Datacenter group posted sales of $240 million, up 193 percent, nearly triple the $82 million Nvidia posted for this group a year ago. We have only a slim idea what the margins are for the Tesla and GRID units, but we do know they are higher than for any other products Nvidia sells, including Quadro graphics cards for professional graphics or GeForce gaming cards. The Gaming group posted a stunning 63 percent growth in the quarter, bringing in $1.24 billion in sales. This is driven by a few things, notably that Nvidia’s addressable market for gamers has grown from maybe 60 million users worldwide in the wake of the Great Recession to more than 100 million these days. Graphics determines the gaming experience more than any other factor, and people are willing to pay a premium for that experience. Nvidia is able to engineer that experience and get it to market in a fairly consistent manner. By doing that, it sets the technology stage for its Datacenter and Professional Visualization groups, ending up in things like top-end Cray supercomputers.

Growth in the Professional Visualization group was more muted, up only 9 percent year-on-year to $207 million. To be sure, companies in the manufacturing sector continue to invest in high-end workstations to do design work, but over time, this may shift to the cloud, and that is where GRID will see the bump. The important thing is, whether it is Quadro cards or GRID cards, whatever the market decides to do with professional graphics and design applications, Nvidia has it covered.

[Chart: Nvidia GPU and Tegra revenue]

Here at The Next Platform we don’t have much nice to say about self-driving cars, so go ahead, put Tesla compute in Tesla cars to your heart’s content. (Yes, we know it is Tegra compute in the Tesla cars, but it sounded funnier that way.) We want none of that, so just stay out of our way on the road. We will add that Nvidia is bringing its platform play to the automotive sector, with all of its HPC and machine learning might, and it has been able to show consistent growth here. In the third quarter, revenues for the Automotive group were up 61 percent to $127 million. The OEM and intellectual property business is humming along for Nvidia, adding $186 million to the kitty in the quarter.

Looking ahead, Nvidia is expecting sales of around $2.1 billion in its fiscal fourth quarter ending in January, and we think it has a very good chance for continued growth in the Datacenter group in 2017 as GPU acceleration takes hold even more. A lot depends on whether there is a global recession, and so far, no AI algorithms can predict that. But a recession might help Nvidia’s cause more than it hurts it in the datacenter. The last recession certainly worked to make Xeon the king of compute in the datacenter.


6 Comments

  1. How will Nvidia counteract AMD’s ability to offer its new Zen x86 server processors as a package deal with AMD’s Hawaii-based Radeon Pro and current Polaris Radeon Pro WX GPU SKUs, and then in 2017 the Vega Radeon Pro WX GPU SKUs? Intel will most certainly NOT be assisting Nvidia on any of Intel’s x86 server business, as Intel will be pushing Xeon Phi and Altera accelerator solutions.

    AMD, having both Zen x86-based server variants and Radeon Pro WX GPU accelerator variants under its full production and pricing control, will be able to offer its prospective Server/HPC/Workstation clients some very attractive Zen server CPU/Radeon Pro WX GPU accelerator package pricing deals that will get more of AMD’s GPU SKUs into the Server/HPC/Workstation market.

    Nvidia has no Server/HPC/Workstation CPU market pricing leverage of its own, as Nvidia currently manufactures no server-grade CPU SKUs.
    AMD will have the ability to price its Server/HPC/Workstation CPUs together with its Radeon Pro WX professional graphics/accelerator SKUs, as AMD manufactures both and controls their pricing options completely, save the nominal costs that every maker in the business has. Not only can AMD’s CPU and GPU costs be made competitive on that all-important price/performance metric, but AMD also has the server motherboard chip-set business, which it can, in concert with its motherboard partners, make lower cost if the customer will also accept a Zen server CPU and Radeon Pro WX package deal. So AMD can potentially offer Zen server CPU, Radeon Pro WX GPU, and lower-cost server motherboard chip-set pricing (via its participating server motherboard partners) to any of AMD’s potential Server/Workstation/HPC customers/OEMs.

    The x86-based server market still represents the largest share of the market, and AMD can pull its professional GPU SKUs along with its new Zen server SKUs right into plenty of Server/Workstation/HPC business by offering attractive package pricing to customers who purchase both Zen CPU and Radeon Pro WX GPU SKUs as a package deal. AMD has a lot of CPU/GPU IP across the server market, including plenty of IP from its shuttered SeaMicro server unit. So AMD is really ready to reenter the server market in a big way starting in 2017, and if Zen is anywhere near Intel on the single-core IPC metric, not even necessarily beating Intel’s latest CPU offerings, then AMD can only go higher with its CPU/GPU server market share numbers! That is how small a share of the server market AMD currently has. AMD has the pricing latitude to aggressively go after new server market business with Zen/Radeon Pro WX as a package deal, and throw in some server motherboard/chip-set pricing to sweeten things up a bit.

    • Do you know why NVIDIA teamed up with IBM? I’m looking forward to seeing NVIDIA’s Voltas + IBM POWER9s with NVLink 2.0 next year! Innovation in CPUs, in GPUs, and in overall accelerated computing. Very interesting times for computer architecture…

      • Yes, that’s all very good, but Nvidia cannot control IBM’s, or any other third-party OpenPOWER licensee’s, CPU pricing on its own, and NVLink is not the only connection IP on the market! There is CAPI/CAPI2, and some new industry-standard connection fabric protocols/IP being championed by IBM and others, including AMD as a participant.

        NVLink, as well as other similar IP, is going to have to compete with what AMD will be putting on its interposer-based APU Workstation/HPC/Server SKUs, which will have a Zen CPU core complex die and a Vega GPU die wired up via a fabric of thousands of wide parallel traces, CPU cores die directly to GPU die via the interposer, at terabytes per second of raw effective bandwidth. So for some workstation computing systems (likely portable ones) that may only need a single GPU, the APU-on-an-interposer variants will not be beaten, especially since the GPU die will be much larger and separately fabbed from the Zen cores die, and married up via the interposer’s silicon substrate to a big Vega die and HBM2.

        AMD will have the standard PCIe-based CPU-with-GPU-accelerator solutions, its SeaMicro IP and newer fabric IP to utilize, and its APU-on-an-interposer IP as well. And that interposer-based IP will see a Zen CPU cores die wired to a larger Vega GPU die and to HBM2, with a total raw effective bandwidth, CPU processor to GPU processor and all the interposer-hosted processors to HBM2, that will not be matched by NVLink or current interconnect IP.

        The amount of Server/HPC/Workstation IP under AMD’s direct pricing control can allow AMD almost complete control (to a degree) over CPU, GPU, and motherboard pricing in its sales dealings. AMD will have great pricing latitude to make total-package CPU/GPU/motherboard (chip-set) pricing deals in an HPC/Server/Workstation market that is still very much dominated by x86 ISA-based CPUs! So AMD is very likely to get some of that x86 market’s sales from current Intel customers that may be willing to look at AMD’s x86 solution without worrying about having to alter their x86 software stack as much as moving to any IBM Power or other OpenPOWER solution would entail.

        • Yeah, AMD will have a lot of opportunities thanks to their hardware. I’ve no doubt, and I really hope so, but will they have the software stack to match? I’m truly worried that they will not; they keep slipping and stumbling, and as much as I wish for a fresh new competitor with an OSS software stack, they have yet to deliver anything functioning (e.g. ROCm + OpenCL on a cluster), let alone performant and stable.

    • “How will Nvidia counteract AMD’s ability to offer its New Zen x86 Server processors as a package deal with AMD’s Hawaii Radeon Pro based and Radeon Pro WX current Polaris GPU SKUs and then in 2017 the Vega Radeon Pro WX GPU SKUs.”?

      It doesn’t matter. AMD followers seem to believe you must have x86 to be successful in the datacenter (you are talking about the datacenter, aren’t you?).

      Did you read the article? NVIDIA is building a standalone compute platform.
      Plug Xeon in, or plug AMD in, or plug Power9 in. It doesn’t matter.

      AMD ought to focus on compute, there is a rather big opportunity they are missing.

      • AMD has both its PCIe-based GPU accelerators and its interposer-based Zen/Vega-with-HBM APU SKUs (coming 2017-2018) to compete in the PCIe GPU accelerator and interposer-based CPU/GPU/HBM APU workstation markets. The APU-on-an-interposer approach for portable workstation SKUs, with a Zen CPU die and a big Vega GPU die along with HBM on an interposer, will become a very popular solution for HP/Dell/other portable workstation OEMs! Any Zen/Vega/HBM interposer-based APU SKUs will offer unrivaled raw effective communication bandwidth from the Zen CPU cores die directly to the Vega GPU die, maybe even direct-connection speeds of 1+ terabytes per second of low-clocked, high-effective-bandwidth transfers and redirects for data/code and CPU-to-GPU cache coherency. So for some portable and PC-based workstation systems, AMD’s HPC/Server/Workstation APUs on an interposer will be a very good choice among OEMs for some workloads and form factors.

        AMD will likewise have the traditional PCIe-based GPU accelerator solutions that, used in concert with AMD’s ROCm (Radeon Open Compute(1)) open software/middleware/toolchain SDK, allow many OpenPOWER POWER8 server systems and ARM-based server system clients to utilize AMD’s PCIe-based Radeon Pro WX GPU systems on their server platforms. AMD’s HIP CUDA porting tools will assist any cross-CPU x86/ARM/OpenPOWER platform and AMD Radeon Pro WX customers in making an easy transition.

        AMD’s Zen CPU and Polaris/Vega GPU package solutions will also have an extra hardware/HSA feature set that, when utilized together on either a traditional Zen x86 CPU with a discrete PCIe-based Radeon Pro WX or AMD’s new APU-on-an-interposer solutions, will make use of AMD’s HSA-compliant hardware to offer even more performance than any non-HSA-compliant pairings with other CPU platforms (ARM/OpenPOWER POWER8).

        (1)

        “AMD @ SC16: Radeon Open Compute Platform (ROCm) 1.3 Released, Boltzmann Comes to Fruition”

        http://www.anandtech.com/show/10831/amd-sc16-rocm-13-released-boltzmann-realized
