Cloud Not Growing Fast Enough For Intel

It is hard to believe, but one of the highest-growth markets that the IT industry has ever seen – the transition from bare metal machines to fully orchestrated virtual infrastructure that we have come to call clouds – is not rocketing up fast enough for the world’s largest chip maker. So Intel is going to do something to stoke the fires even more, and that something is spending money.

Specifically, under a new program called the Cloud for All initiative, Intel is seeking to make cloudy infrastructure, whether public or private, the default way that organizations consume compute, storage, and network capacity, and it thinks it can do so by working at all levels of the cloud software stack to make the pieces work better together and work best with Intel’s own technologies.

This is precisely why Intel has made heavy investments in the Lustre parallel file system commonly used in HPC systems and has also pumped $740 million into Cloudera, the largest of the Hadoop data analytics stack vendors. (This was also why Intel was tempted to do its own Hadoop stack, but that might have been as much about getting leverage with the big three Hadoop distributions, which also include MapR Technologies and Hortonworks.)

“Given the current state [of clouds] and the fact that the industry has been making significant investments in this area over the past five-plus years, we believe what is needed is a very structured approach if we are going to make a difference in the adoption curve of clouds,” Diane Bryant, general manager of the Data Center Group at Intel, said in a conference call explaining the new initiative. To help bend that adoption curve into more of a hockey stick shape, Intel will engage in a wide range of initiatives, including financial investments in the form of both acquisitions and equity, standards development, industry collaboration, market development funds, product launches, and contributions from Intel software engineers directly into the cloud software stack. “We will make investments in the software-defined infrastructure stack to make sure that it is fully functional and easy to deploy,” she added.

That latter bit is the sticking point. There are a lot of moving parts to building a real cloud, not just a rack of virtualized servers, and typical deployments take months to roll out and are all unique and fundamentally fragile, as Bryant put it. While this may be true, system makers like Hewlett-Packard, Dell, IBM, and others have been shipping clouds-in-a-box for years, using the Eucalyptus cloud controller first and then dropping it like a hot potato when OpenStack came along. They have also ginned up pre-made clouds using the VMware and Microsoft virtualization and orchestration stacks, and now the same usual suspects are lining up to deliver hyperconverged server-SAN hybrids with virtualized compute. It is fair to ask how much easier Intel hopes to make this, but clearly, the barriers to cloud adoption are high enough to warrant significant investment from Intel.


How much money Intel is going to pony up, neither Bryant nor Jason Waxman, general manager of the Cloud Platforms Group that sits within it, would say. “We are a pretty big company, and big for us is big,” Waxman said when asked for a precise figure. “It is bigger than a breadbox.”

It had better be several hundred million dollars. Maybe more. And with Intel spending $16.7 billion to acquire FPGA chip maker Altera and also trying to manage the transition from 22 nanometer chip making processes down to 14 nanometers and further to 10 nanometers, the chip maker has plenty to do with its cash.

Bryant said that Intel would make somewhere on the order of 15 to 20 announcements over the coming year under the Cloud for All initiative.

The first such announcement was made in conjunction with Rackspace Hosting, which is arguably the fourth largest supplier of cloud computing in the world, behind Amazon Web Services, Microsoft Azure, and Google Compute Engine. (IBM SoftLayer is probably still smaller than Rackspace, but comparing the two depends on how strict a definition of cloud you want to use; Rackspace and SoftLayer do a lot of traditional hosting as well as selling utility-style compute and storage cloud capacity.)

Intel will be adding hundreds of engineers to the OpenStack cloud controller project as part of its collaboration with Rackspace, said Waxman, and the goal is to make the Nova compute controller inside of OpenStack more robust and to improve its networking functions, which have been notoriously cranky. Intel is also kicking in to help with general bug fixes and to work to integrate Docker and other containers into the OpenStack controller.

In addition to that, Intel will fire up two 1,000-node OpenStack clusters in the San Antonio datacenter owned by Rackspace and open them up for companies to test OpenStack clusters at scale with their applications. Waxman said that OpenStack has been shown to scale to a few hundred nodes “at best,” but companies wanted it to scale to thousands of nodes and wanted proof that it could do so. (The original design goal of OpenStack set by NASA and Rackspace five years ago this week was 1 million nodes and 60 million virtual machines, in case you don’t remember.) Intel and Rackspace do not plan to charge for access to these test clusters, which should be up and running in six months or so. And the OpenStack community is rightfully more concerned with getting enterprise-grade features into OpenStack than with the humongous scale of the original plan from the two project founders.

More Than OpenStack

It is important not to get the wrong idea that this is somehow just about OpenStack. Mesosphere and CoreOS will also eventually see similar collaboration with Intel, we surmise. One could argue that Microsoft and VMware don’t necessarily need help with their private cloud stacks, and neither do Google and AWS (or Microsoft or VMware) with their public clouds. But they will take resources – people, money, code – if Intel is offering, and this could have the effect Intel seeks.

The one thing that Intel is not going to do is build its own public cloud. “We are not planning to be a service provider,” Waxman said. “We will benefit as the overall industry grows.”

This Cloud for All initiative is really about Intel’s enlightened self-interest, and in the case of Rackspace, which has become very enthusiastic about building its own Power-based server under the auspices of the OpenPower Foundation, such help with OpenStack raises the ante for the OpenPower folks to kick in similarly. The same will hold true at any cloud builder, service provider, or enterprise that is thinking of building cloudy infrastructure on a Power or ARM platform. Google is flirting with Power servers, and Facebook is flirting with ARM machines, and heaven only knows what AWS is doing. Our guess? Heavily customized Xeons and not much more, given that it has to run other people’s workloads.

It is hard to say precisely how many clouds have been built, but it is probably on the order of a few thousand worldwide, if you want to use a classical definition of a cloud that includes chargeback or metered pricing (depending on whether it is a private or public cloud) and automated orchestration for workloads across clusters of virtualized servers. But Intel is sure that it can help in myriad ways to make it easier for companies to build either public or private clouds and for organizations to consume this capacity.


Bryant said that the aim was to accelerate cloud deployments through targeted investments that will bring cloud computing to tens of thousands more organizations worldwide, and after all the work is done, to have a software stack that allows a cloud to be deployed in a day – or even under an hour, she added. This is a bold goal, as anyone who has ever stood up an OpenStack or VMware vCloud environment will no doubt attest. (We have never done it, but we have heard the complaints from those who have. And the fact that Intel and Rackspace can’t just fire up 2,000 nodes for a test bed over the weekend shows, we think, that this is still not easy.)

Intel estimates that about half of all applications running in the world are deployed from clouds, be they public or private, and that by 2020 this number will rise to 85 percent. The company also reckons that the aggregate cloud market – again including public and private clouds of all shapes and sizes – is growing at around 14 percent per year. Intel’s own cloud business is growing in excess of 20 percent per year, according to Bryant, and it needs more than that.

What Intel seems to be after – and Bryant brought this up again, as did Google a few days ago when it was talking about freeing its Kubernetes container management system – is making cloud computing obey Jevons Paradox. This paradox, first observed by William Stanley Jevons back in 1865, related to coal consumption in England. As machinery was created that burned coal more efficiently for various industrial uses, demand did not drop, but rather increased, and importantly, it increased non-linearly. This is what has been commonly called elastic demand, and computing has, generally speaking, obeyed this rule. The worldwide server base has increased by approximately a third in the past decade even as the amount of compute within a server has gone up by several orders of magnitude. Some might not expect that, and some would say the server chassis is an arbitrary boundary anyway.
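To put a little arithmetic behind that elasticity argument, here is a minimal back-of-the-envelope sketch in Python, our own illustration with made-up numbers rather than anything from Intel or Google. It assumes a constant-elasticity demand curve and shows that when demand for compute is elastic (elasticity greater than 1), halving the effective cost per unit of compute grows total spending, which is Jevons Paradox in miniature.

```python
# Back-of-the-envelope illustration of Jevons Paradox for compute demand.
# Assumes a constant-elasticity demand curve: demand = k * cost**(-elasticity).
# All numbers here are hypothetical and chosen only to show the shape of the effect.

def compute_demand(cost_per_unit: float, elasticity: float, k: float = 1.0) -> float:
    """Units of compute consumed at a given effective cost per unit."""
    return k * cost_per_unit ** (-elasticity)

baseline_cost = 1.00   # normalized cost per unit of compute today
improved_cost = 0.50   # cost after a hypothetical 2x efficiency gain

for elasticity in (0.5, 1.0, 1.5):   # inelastic, unit elastic, elastic demand
    d0 = compute_demand(baseline_cost, elasticity)
    d1 = compute_demand(improved_cost, elasticity)
    spend0 = d0 * baseline_cost
    spend1 = d1 * improved_cost
    print(f"elasticity={elasticity}: demand x{d1 / d0:.2f}, total spend x{spend1 / spend0:.2f}")

# Approximate output:
#   elasticity=0.5: demand x1.41, total spend x0.71  -> efficiency shrinks the market
#   elasticity=1.0: demand x2.00, total spend x1.00  -> spending holds flat
#   elasticity=1.5: demand x2.83, total spend x1.41  -> Jevons: spending grows
```

If compute demand really does sit in that last regime, then making clouds cheaper and easier to stand up grows the overall market for capacity rather than cannibalizing it, which appears to be the bet Intel is making.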

Here’s the funny bit: If Intel knows that Moore’s Law is slowing down, as it has said it is, and that enterprise spending is slowing, as it again said it is, then the chip maker has to do something to goose the market to accelerate the consumption of CPU capacity. So helping companies build better and more efficient clouds seems like a good thing to do.

But such an initiative could accelerate a plateau in computing capacity demand that we would not otherwise hit for many years. Maybe demand for X86 computing is perfectly elastic right up to the point where it stops being so. The question we can’t answer – and which is the really interesting one – is how much more computing the world will need to do if we all start doing it more efficiently. The virtualization wave hit mainframes hard in the 1990s, and it hit Unix and proprietary servers even harder in the 2000s, and they did not bounce back from it. Yes, datacenters got more flexible machines, but they could also drive up utilization and ride Moore’s Law to end up with less costly machines – and fewer of them – as time went by. And vendor revenues dropped off a cliff. The mainframe and Unix markets did not simply collapse because customers moved to Windows and Linux platforms; part of the decline was self-inflicted by that very efficiency.

And some day, although this is hard to believe or even imagine, we could get to a point where the need for incremental capacity is less than the generation-to-generation capacity increases in compute and storage. This certainly happened for back office systems in the 2000s, although hooking these monolithic applications into mobile apps for smartphones and tablets has driven up transaction volumes a bit in recent years. This, too, will settle out, and eventually demand will grow only with the population and our obsession with the data about our own lives. We admit that could be a while. In the meantime, Intel needs to make money in 2015, 2016, and 2017. The company can’t worry much further out than that.
