Finally, The Right Pilot At The Intel Helm

Running a company of any size is the hardest thing you will ever do. And having run a few of them ourselves here at The Next Platform, when we see companies in the IT sector struggling and executives trying to steer their companies between Scylla and Charybdis, our first reaction is compassion. This is tough stuff. Real jobs and therefore real lives are affected by how well or poorly the ship is constructed and how the pilot steers in the roughest waters.

It is hard to say how much of the trouble that Intel has had in recent years is self-inflicted and how much is just the random distribution of Murphy’s Law across the universe, with it simply being Intel’s turn. Our observation after more than three decades of watching the IT market is that any time a company gains the kind of control that IBM, Microsoft, or Intel has accomplished in their respective eras, troubles always follow. Some of it is hubris. Some of it is dumb luck. But it has always happened, and the competition is always fierce when you are biggest and most profitable. It’s a law of economics, and it is certainly the law of the IT jungle.

IBM’s vaunted mainframe business of the 1960s and 1970s was knocked down a few pegs by the advent of proprietary minicomputers, then RISC/Unix systems, and then Wintel/Lintel systems, and it was blindsided by the rise of the PC to a certain extent. IBM recovered for more than a decade, but in the end it just could not keep up. And while Microsoft was able to extend its hegemony on the Windows desktop into the datacenter with Windows Server and a large stack of systems software, it has not been able to keep Apple from rising from the dead – for the second time, mind you – and creating a huge and profitable client machine business. Intel similarly made the leap from the desktop to the datacenter, and has become the dominant compute engine maker to an extent that we have not seen since the late 1960s with the IBM mainframe. In 2020, if the final quarter works out as we expect, X86-based machines will account for over 90 percent of the $82 billion in server revenues and for nearly 99 percent of the more than 12 million server shipments worldwide. And Intel Xeon SP processors will be in the overwhelming majority of those machines. Still. After years of Arm and AMD.

It is not an exaggeration at all to say that Pat Gelsinger, who will be taking the helm of Intel as its chief executive officer on February 15, is largely responsible for driving Intel’s initial success in the datacenter. And the Intel machine that Gelsinger helped design – he is the creator of the tick-tock method of rolling out chips, which we will get to in a minute – has been riding on the momentum that Gelsinger and his team put into the flywheel all those years ago. It is also true that, for many different reasons, that Intel flywheel is spinning down. (Intel is not a perpetual motion machine; no company is.) But don’t get confused here. Apple has risen from a much deader place than this, twice. IBM has risen once. Microsoft, too. You have to be a damned fool to think Intel cannot right itself and steer between the cliffs and the whirlpool.

To do that, Intel will need a different ship, for sure. And maybe that ship would have already been built had Gelsinger not left Intel back in September 2009 – only six months after Intel got its Xeon act together with the “Nehalem” Xeon 5500 processors that were the first step in vanquishing AMD’s Opterons from the datacenter and putting Intel on the path to compute engine hegemony. But that water is already under a bridge too far now . . . to mix some metaphors. Gelsinger did good work for EMC, and eventually became CEO at enterprise server virtualization juggernaut VMware, a much cushier job in some ways, at least until Kubernetes came along. But Intel might have been better off if Gelsinger had never left.

Gelsinger left Intel to become president and chief operating officer of EMC’s Information Infrastructure Products division, which managed its storage products, and did so because of an executive reshuffling and company reorganization instituted by Paul Otellini, who was Intel CEO from 2005 through 2013 (and notably was Intel’s first CEO who did not have a technical background). This is when Intel formally created the Data Center Group, breaking its datacenter and PC client efforts in two, but with an overlay called the Intel Architecture Group, which executive vice presidents Sean Maloney and Dadi Perlmutter were appointed to co-manage; Maloney was responsible for business and operations and Perlmutter headed up product development and architecture. Intel’s manufacturing operations were centralized in the Technology and Manufacturing Group and put under the control of Andy Bryant, who remained chairman of the Intel board until a year ago. Kirk Skaugen (who now runs enterprise computing at Lenovo) took over Data Center Group, essentially half of the job that Gelsinger had held as senior vice president and general manager of the Digital Enterprise Group, a position that Gelsinger rose to after being named the very first chief technology officer at Intel in January 2000. Intel didn’t need a CTO before then because Gordon Moore and Andy Grove, both past CEOs, were still walking the hallways of the chipmaker they built with co-founder Bob Noyce.

No one has ever explained why Gelsinger left, but it is not hard to see that it is what happens when a marketing CEO – the same one who was championing the Itanium processor in the late 1990s and early 2000s – meets an engineering CTO. Otellini rejiggered the Intel organization, and whatever the cause, Gelsinger did not like it and left. It happened fast, so fast that Gelsinger was on deck to do the keynote at the Intel Developer Forum the week following the reorganization and his departure. This was no doubt hard for him. Gelsinger got an associate degree in electronics from Lincoln Technical Institute in 1979 and promptly got a job at Intel, then earned a bachelor’s degree in electrical engineering from Santa Clara University in 1983 and a master’s degree in electrical engineering and computer science from Stanford University in 1986, and all told spent 30 years at the company, working intimately with Moore, Noyce, and Grove.

Among his many accomplishments at Intel, Gelsinger was the architect of the 80486 processor in 1989, which at the time was Intel’s fourth generation of X86 CPUs and the one that really started to take off in servers. Among the more than a dozen other processor design efforts that Gelsinger steered at Intel was the Pentium Pro launch in 1995, which had symmetric multiprocessing built in and was designed explicitly for workstations and servers, just as the RISC processors in Unix workstations and servers had been for the prior decade. As Gelsinger explained it at the Nehalem launch in 2009, the Pentium Pro set the stage for industry standard, high volume servers. It is helpful to remember that in 1995, maybe 700,000 Intel servers were being sold a year, and by 2009, the Intel server base represented about 85 percent of the 8 million servers sold per year and drove about half of server revenues.

The other key thing that Gelsinger did at Intel was to separate architectural innovation from manufacturing process progress. This is the tick-tock model, which was instituted in 2007. To give you a sense of what Intel really thinks it is, the tick is the crank on the manufacturing process and the tock is the processor design. Intel thought of itself as a manufacturer first and a designer second, but at the very least both have been equally important. To our mind, the Nehalem chip was Gelsinger’s crowning achievement, coming after four years of dismal Xeons and the whole Itanium mess, which saw the first rise of AMD in the datacenter. And the manufacturing issues that have plagued Intel in the past three to four years, which Gelsinger has watched from the sidelines, are a replay of sorts of the challenges that Gelsinger already lived through.

What we know is that the time Gelsinger has spent at EMC and VMware will serve him well as he returns to Intel. In those years, Gelsinger sold products that were based on or ran on Intel CPUs, but he also gained experience with other compute architectures and other chip suppliers. This will give him perspective that he would not otherwise have.

But, on the other hand, he is returning to an Intel that is not at the top of its game with either manufacturing or processor design, and it is a very different company with very different people. Moreover, Intel has a lot more product lines, with CPU, GPU, FPGA, neuromorphic, and quantum compute engines adding to the complexity.

When Gelsinger left, Itanium was already essentially dead, but Intel could not say so because Hewlett-Packard needed those chips for its enterprise servers and SGI needed them for its supercomputers. The combination of the Great Recession and a very good Nehalem design on a 45 nanometer process gave Intel a good footing, and this was augmented and extended with both ticking and tocking up through the current 14 nanometer “Skylake” Xeon SPs. The ticking has stopped since then, and the tocks with the follow-on Xeon SPs have been interesting but relatively minor.

Intel was an all-CPU company and the CPUs were pretty good, and Gelsinger, as we reported in 2013 when he was at EMC and about ready to take the helm at VMware, did not really see Arm servers as a threat. (AMD had not even started on the path back to being a threat at that point.) And to be precise, Gelsinger told us that the future was Intel in the datacenter and Intel and Arm at the endpoints. Period. It hasn’t quite worked out that way, now has it?

Still, it has been a very good run for Data Center Group, and Intel has done a brilliant job squeezing the most profits possible out of this business between 2009 and 2020 inclusive – a dozen years. Call it $187 billion in revenues and $85.5 billion in operating profits. Chew on that for a minute.

But Intel’s designs and Intel’s manufacturing are no longer the best in the field. Losing either one would be bad for Intel; losing both is more than twice as bad. And very little of this had anything to do with Bob Swan, Intel’s chief financial officer who was elevated to the CEO post two years ago and has done the best he could under the circumstances. AMD is bringing the heat with the “Milan” third generation Epyc X86 processors and is spending $35 billion to acquire FPGA maker Xilinx, there is new – and better – competition coming from the Arm server space, and heaven only knows what open source RISC-V chip designs really mean in the datacenter.

As we said at the end of last year, Intel has to stop financial engineering and start engineering its future. With an engineer back at the helm, and one with a very long view of Intel, perhaps Gelsinger can get the company back on track.

Gelsinger left Intel in the midst of a massive reorganization, and he comes back just as Intel is getting ready to give updates on its 7 nanometer process, which was delayed in July 2020 by about six months, and maybe its 10 nanometer process, which has been beleaguered for years. We were also expecting an announcement, perhaps next week, regarding Intel’s plans to offload at least some of its CPU and GPU chip manufacturing to rivals Taiwan Semiconductor Manufacturing Co and Samsung Electronics. It will be interesting to see how this will be handled with a new CEO on the way in one month’s time. Whatever decision is being made, it is now being made with Gelsinger’s consent, we presume.
