The Next Platform

Now Nvidia Is Armed To The Teeth

Companies with high stock valuations are a bit like the central banks of major countries. The cash they throw off as leaders in their markets gives them great sway over the industries they play in, and the ever-increasing stock value they have is another kind of money they can spend.

And that, in short, is precisely what GPU juggernaut and HPC and AI innovator Nvidia has just done, shelling out $40 billion in a mix of cash and stock to acquire Arm Holdings, the company behind the Arm architecture.

Back in July, when the rumors were swirling that Nvidia was looking to buy Arm, we did an analysis and thought experiment on the deal in The Dollars And Sense Of Nvidia Paying A Fortune For Arm. We are not going to repeat that entire story here, but we will follow up on some of the initial thoughts we had, which we can now flesh out a bit.

As we said at the time, Masayoshi Son, who runs IT investment conglomerate SoftBank Group, who has a bunch of bad investments to deal with, and who paid $32 billion to take control of Arm four years ago, is eager to get his balance sheet in order. Arm is one of the jewels that could, in theory, command a premium, and it is clear that this has happened. (The conglomerate’s Alibaba and Yahoo Japan stakes are pretty juicy, too, but we suspect that Son wants to hold onto those.)

With Nvidia stock flying sky high – at the market close on Friday, shares were trading at $486.58 and the company had a market capitalization of just over $300 billion – the company can issue more shares to buy Arm from SoftBank and dilute the value of current shareholders, because relative to the rest of the tech market, who else are you going to bet on that isn’t also getting a sky high valuation? Success is a printing press for a certain kind of free money that, as we can see, is still useful for buying real things. Like other successful companies, for instance, and thereby completely up-ending the semiconductor market as we know it.

Son needs some real money from Nvidia, of course. Under the deal that Nvidia and Arm announced on Sunday night in the United States, well ahead of the sun as it lights up the stock markets of the world, starting with Tokyo in the East and moving West until it reaches Wall Street, Nvidia will issue $21.5 billion in stock and spend $12 billion in cash to acquire Arm from SoftBank. If Arm hits certain financial targets between now and when (and if) the deal passes muster with the world’s antitrust regulators, SoftBank has a shot at another $5 billion in cash or stock (it is not clear how this is determined), and Nvidia has also committed to issue another $1.5 billion in stock to Arm employees to keep key people on the job – first and foremost Simon Segars, the long-time chief executive of Arm. This is, by the way, far lower than the $55 billion deal that was rumored back in July.
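For those keeping score – and assuming the potential $5 billion earnout is counted toward the headline figure, which is our read of the announcement – the pieces tally up to the $40 billion price tag like this:

$21.5 billion in stock to SoftBank + $12 billion in cash to SoftBank + up to $5 billion in earnout + $1.5 billion in employee retention stock = $40 billion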

The key thing here – and we were stressing this back in July – is that on the face of it, based on Arm’s financials, Arm was either no more valuable than Mellanox, for which Nvidia paid $6.9 billion, or only slightly more valuable. It really comes down to an interpretation of what Arm’s business in 2019 meant. The Arm business generated just under $2 billion a year in licensing fees and other revenues in SoftBank’s fiscal 2019 and 2020 years, which end in March, and had an income of $1.27 billion in fiscal 2019 and a loss of $400 million in fiscal 2020. The Arm division had a one-time gain of $1.67 billion in that fiscal 2019 year after setting up a joint venture in China, for which it received a lot of cash. But the underlying business was not profitable in fiscal 2020 and, absent that China deal, the fiscal 2020 loss looks to have roughly counterbalanced the underlying gain the business had in fiscal 2019. Arm is slightly bigger than Mellanox in terms of revenues, but in the most recent year at least, Mellanox was more profitable. As it turns out, Nvidia is only burning $12 billion in cash to get Arm, which is reasonable compared to spending $40 billion in cash.

This is not the only way to look at it, of course, and as it turns out, this is not how Jensen Huang, co-founder and chief executive officer at Nvidia, is looking at it. Nvidia wants to take on the entirety of chippery – just as we suggested it might six weeks ago: “With increasing competition coming in GPUs and a failure to capture any of the initial exascale class machines, Nvidia might be angling to change the nature of the game in the datacenter. This is the only thing that makes sense to us, and it is a hell of a high price to pay but it could – just maybe – pay off. And Nvidia would have a business that spans from datacenter to edge to cloud to client to embedded.”

That is indeed the plan, and Huang said as much in a briefing on Sunday night.

“The two companies are complementary – we are additive,” Huang explained. “Where Nvidia is very strong in datacenters and high performance computing, Arm has in the last several years made really wise decisions and investments, and we are now seeing the early results of that, with the Fujitsu A64FX and the Amazon Graviton2, but there is going to be a lot more and we want to put a lot more energy into it. In order to create a broadly successful datacenter platform for all of the different ways we use datacenters today, whether it is virtualized or distributed clusters, or high performance computing, or bare metal or containerized and disaggregated compute on microservers – all of it starts with amazing chips but also requires all of the system software and engines and libraries and application frameworks on top. We are unique in that we can take a CPU that has been refined over time and really turn it into a computing platform, from end to end. The system, the software – all of the algorithms and the frameworks. I am super-excited about turning Arm into a world-class datacenter CPU.”

Huang reminded everyone that we are at the end of Moore’s Law, and that we are in the era of accelerated computing – what we would prefer to call hybrid and highly tuned collectives of computing – and that Nvidia was really after creating one overarching (and hopefully not overreaching) architecture that would come from one company and span the entire $250 billion semiconductor total addressable market for datacenter, edge, embedded, and client markets.

So, when you look at a TAM like that and you realize that Arm still has a chance to take a chunk of the $67 billion or so in datacenter compute chips that are sold, then $12 billion in cash and $28 billion in stock issuance doesn’t seem that expensive.

There are other factors that can be added into the equation. Nvidia has committed to keeping Arm headquartered in Cambridge, outside of London, and its intellectual property licensing – and any political and economic restrictions on it – will remain under the auspices of the British government. In the wake of Brexit, it seems highly unlikely that any British government – Tory or Labour – will be putting any kind of export controls on Arm intellectual property. And for all of the talk about making Cambridge a center for AI, with its own world-class supercomputer based on Arm CPUs and Nvidia GPUs, this export control issue is key. With the capriciousness of the US government, you don’t want to be under its control, which is why the RISC-V Foundation now operates out of Switzerland.

The other important factor is that the Arm collective consists of thousands of companies and ships a huge volume of chips. Huang said on the call that Nvidia shipped about 100 million chips last year across clients and servers, but the Arm collective shipped 22 billion chips. And importantly, when we asked about it, Huang said that all options were on the table, including pushing Nvidia GPUs and Mellanox switch ASICs through the same Arm licensing model and releasing them into the wild to be tuned and tweaked to the hearts’ content of the hardware and software engineers of the world. If the end of Moore’s Law means anything, it probably means using collections of highly tuned chips in aggregations built specifically for one or a couple of jobs. And having the ability to mix and match – and license for a reasonable fee – these technologies is one stream of revenue, as is actually implementing some of the designs for specific use cases where Nvidia thinks it can really add value. Such as HPC and AI systems, for example. Nvidia has been selling graphics cards, and now whole systems, against its partners for years, and that hasn’t stopped those partners from making money even as Nvidia has.

“We will have in one company three incredible franchises and three incredible platforms,” Huang told The Next Platform, referring to the Nvidia GPU business, the former Mellanox – now Nvidia Networking – business, and at some point the Arm CPU business. “I think these three ingredients are the essential components of computing, and we have the opportunity to help advance computing across the entire range, from cloud to high performance computing to PCs to workstations to consoles to self-driving cars to robotics to edge AI. The amount of computer science horsepower that is inside this company will be quite extraordinary, and it allows us to create solutions and platforms that can then be made available to this vast network of partners and developers to build the next trillion – tens of trillions or hundreds of trillions – of computers. We just have to realize that because of AI, these computers can be all over the world, performing amazingly intelligent things, all by themselves. Of course they will be connected to edge datacenters that are in turn connected to cloud datacenters, but the availability of this type of software is going to expand computing tremendously. It is impossible for one company to build all of those solutions. But it is possible for us to come up with some architectures that every company in the world could benefit from.”

We will be talking to Huang later in the day on Monday, after Wall Street closes, to drill down a little deeper into Nvidia’s plan for the edge and the datacenter and how Arm will fulfill its vision of the datacenter as the unit of compute. Stay tuned.
