One On One With Jensen Huang: Nvidia, The Platform Company

While a lot of ideas are ancient, some are relatively new and can come only from a modern context. Platform is one such concept, and given what we care about here at The Next Platform, it bears some analysis as we consider the company that Jensen Huang, co-founder and chief executive officer at Nvidia, is building out from its GPU gaming roots.

It is also worth considering what a platform is and is not, given the philosophical conversation about platforms that Huang had with us this week as the fall GTC 2020 conference is underway and everyone is pondering the possibilities behind the completed acquisitions of Mellanox in April (for $6.9 billion) and Cumulus Networks in May (for an undisclosed and much smaller amount), as well as the $40 billion mike drop deal to acquire Arm Holdings from SoftBank.

Spelling used to be so much more interesting than it is today. In the Middle French of the mid-1500s, as the Renaissance was in full swing, the word was either platte fourme or plateforme, a compound made of the two Old French words plat and forme. The former, plat, is familiar from the words plate and plateau and means “flat,” and is perhaps related to the Greek platys, meaning “broad,” from which the famously broad shouldered Plato, the pupil of Socrates, gets his name. The latter half, forme, is a word of antiquity as well, appearing in Old French in the 1200s. Some believe it is a jumbled-up cognate of the Greek morphe.

So, broad shouldered shape shifting – that sounds about right in the modern IT context.

In the early days, a platform was a structure raised above the ground that was level, and eventually in the 1800s, concurrent with the rise of railroads and their station platforms, it took on the political connotation it also has today.

In essence, a platform is a level playing field, either nature-made or man-made, on which you can build. And there is no better word to describe the aspirations of Huang and the team at Nvidia, which will build its own style of platform whether or not Nvidia succeeds in acquiring Arm Holdings. Huang said that explicitly in the lengthy conversation he had with us.

“Our company can realize all of our hopes and dreams without Arm,” Huang explains. Still, he calls SoftBank’s willingness to sell Arm Holdings a once-in-a-lifetime opportunity, saying “it was like my mind exploded, it was so good,” and adds that it took three decades to build Arm into what it is and that “this is a team that won’t get built again” if the deal doesn’t go through.

Huang also made it very clear that Nvidia intends to preserve Arm Holdings as an intellectual property powerhouse, protected by British law and with a center of gravity in the United Kingdom, and he reiterated that in the future Nvidia would push all of its technologies – GPUs, network switch and adapter ASICs, as well as CPUs – through Arm’s intellectual property channels to be licensed.

“Nvidia is not a chip company,” Huang says as he outlines his thesis of the ever-expanding platform that the company is building. “And from Day One has been an intellectual property company. We are not a chip company any more than Microsoft is a CD-ROM company. And you can tell because you know what your people are doing: they’re all typing. In a lot of ways, we are exactly like Arm. We’re just two IP companies. One of us has IP that’s hardened so that it is simpler, if you will, to deploy. Another one has soft IP so that it has the benefit of greater reach.”

That, in essence, is why Nvidia is spending all of that money to buy the cow when it could otherwise just license the milk, as it already does. The Nvidia platform will include Arm CPUs, and Arm CPUs will include Nvidia GPU content, and there will also be DPUs for offloading hypervisors, security, and other functions, which Nvidia rolled out at the fall GTC 2020 conference this week and which we report on here. The DPU is more than a SmartNIC, although it is definitely based on the ideas behind the SmartNIC; in Nvidia’s world, the DPU is accelerated with GPUs and runs a machine learning stack of its own so that this work does not burn up the processing capacity of the host CPUs. And everything will be stitched together with various interconnects, including Arm’s CCIX and Nvidia’s NVLink as well as PCI-Express inside the node and InfiniBand or Ethernet across the nodes in a distributed system.
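To make the offload idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Nvidia’s actual DPU software stack; it simply models the division of labor the DPU promises, with infrastructure chores such as packet filtering and encryption handled by a separate “DPU” worker while the host CPU keeps its cycles for the application. All names here (infrastructure_plane, application_plane, and so on) are hypothetical.

```python
# Toy model of DPU-style offload: infrastructure work (packet filtering,
# a stand-in for crypto) runs in a separate worker process, playing the
# role of the DPU's own cores, so the host CPU spends its cycles on the
# application itself. Conceptual illustration only, not Nvidia's software.

import hashlib
from concurrent.futures import ProcessPoolExecutor
from typing import Optional

BLOCKLIST = {"10.0.0.66"}  # hypothetical firewall rule enforced "on the NIC"


def infrastructure_plane(packet: dict) -> Optional[dict]:
    """Work a DPU would absorb: filter and transform traffic off the host CPU."""
    if packet["src"] in BLOCKLIST:
        return None  # dropped at the "NIC", never reaches the host application
    # Hashing here stands in for encryption or deep packet inspection.
    packet["payload"] = hashlib.sha256(packet["payload"]).hexdigest()
    return packet


def application_plane(packet: dict) -> str:
    """Work that stays on the host CPU: the actual application logic."""
    return f"processed {packet['payload'][:8]} from {packet['src']}"


if __name__ == "__main__":
    packets = [
        {"src": "10.0.0.1", "payload": b"hello"},
        {"src": "10.0.0.66", "payload": b"attack"},
        {"src": "10.0.0.2", "payload": b"world"},
    ]
    # The "DPU" is modeled as a separate process so its work does not
    # consume the cycles of the process running the application.
    with ProcessPoolExecutor(max_workers=1) as dpu:
        cleaned = [p for p in dpu.map(infrastructure_plane, packets) if p is not None]
    for packet in cleaned:
        print(application_plane(packet))
```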

Nvidia needs all of these pieces not because it wants control, says Huang, but because AI, HPC simulation and modeling, data analytics, and other workloads that the modern datacenter runs require full stack optimization.

“Accelerated computing is not about the GPU,” says Huang. “It’s about the whole stack. That was the great observation that we made a long time ago, that it’s a domain specific stack problem. And one of the major stacks, of course, is AI. And we were early to recognize its implication and we mobilized the entire company across the board and we went full throttle into this new approach of computer science because we realized AI is not an algorithm. AI is the foundation of the next era of how we do software.”

The question, of course, is where does this full stack optimization end? Ultimately, because of the end of Moore’s Law, companies need to co-design hardware and software like a hyperscaler, up and down the stack. And given this, you might think that Nvidia will eventually have aspirations in enterprise software. Why stop at tuning up frameworks and algorithms? If Nvidia controlled all of the systems software – an operating system, a middleware stack, an in-memory or AI framework, a database – then it could co-design like crazy. And where does this stop? Will Nvidia actually make whole datacenters at some point?

We don’t think that will happen anytime soon, and we think, as Huang confirmed, that Nvidia is like a hyperscaler in that it will partner to get what it needs and only build what it must. Huang elaborates: “There are so many different datacenters, and the number of different CPUs is going to be quite diverse. This is exactly the reason why Arm is going to be successful in datacenters. But no one company can build it all. And when the market fragments, that is exactly when Arm does perfectly well, they do incredibly well, because of the soft IP approach.”

It is not a coincidence that Intel has acquired so many different kinds of computing and networking businesses in recent years and has established a position in flash and persistent memory because it wants to try to build it all itself, or that Marvell has set itself up as a semi-custom chip designer for datacenter gear of all kinds. Nvidia wants to cover all the bases with its platform, but without forcing customers to choose between Intel’s approach and Marvell’s approach, instead giving them lots of different ways to consume that IP. Nvidia can build the whole system, it can build the components so that ODMs and OEMs can build systems from them, or it can license the underlying technologies so that they can create their own chippery for compute, storage, and networking. In the end, the idea is to create a single, compatible, open, and flexible platform, which has a fairly large Nvidia software component to be sure, but which also runs software from others.
