Infrastructure Pioneer Predicts Datacenter Days Are Numbered

Scroll through the list of startups under the wing of Battery Ventures to get a sense of where capital is being pushed, and it wouldn’t take long to realize that almost all of the more than three hundred companies have hefty infrastructure requirements.

From household-name web services to technology companies on the hardware and software sides, almost all of them get their datacenter fix outside the firewall, and many of them, according to Adrian Cockcroft, Technology Fellow at Battery Ventures, run all of their mission-critical business on Amazon Web Services hardware.

And for what it’s worth, Cockcroft knows a thing or two about big iron—no matter where it resides or how it is delivered. Fifteen years ago, he was working on tough optimization problems for high performance computing systems at Sun Microsystems, which fed into his work guiding key scalability research at eBay Research Labs. More recently, he took all of his knowledge about large systems and moved it to Netflix, which made a massive migration to the Amazon cloud with a natively developed open source platform. Now at Battery, he spends his time talking to some of the world’s largest users of IT infrastructure and says that although there are still plenty of companies in various states of transition off of their on-premises gear, as more refresh cycles approach and yet more building leases come to an end, the age of ubiquitous cloud is definitely arriving.

As discussed yesterday, Fortune 100 companies, including GE, which is planning to shutter over 30 datacenters and move most of its mission-critical applications to AWS, are increasingly wary of the capital expense involved with technology refreshes. Companies are looking at pushing $10 million to $100 million into their on-site datacenters, and that is not only expensive, it is cumbersome and wasteful. Depending on where your estimates come from, average on-premises datacenter utilization might top out at 20 percent. For the web-scale companies that put a lot of work into optimizing at scale, it might be higher. But given that a cloud operator’s business is built on being efficient at scale, the number there is more like 40 percent to 60 percent. And when you’re done, you turn it off. What’s not to love?
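To make that utilization arithmetic concrete, here is a minimal back-of-the-envelope sketch. All of the figures in it (the $10 million spend, the 1,000 units of capacity, the 20 percent and 50 percent utilization levels) are illustrative assumptions drawn loosely from the ranges above, not numbers from Cockcroft, AWS, or Battery Ventures.

```python
# Back-of-the-envelope sketch: how average utilization changes the effective
# cost of the compute you actually use. All figures are illustrative
# assumptions, not numbers from Cockcroft, AWS, or Battery Ventures.

def cost_per_used_unit(total_spend: float, capacity_units: float, utilization: float) -> float:
    """Effective cost of one unit of capacity that is actually doing work."""
    return total_spend / (capacity_units * utilization)

# Hypothetical $10M refresh buying 1,000 units of capacity.
on_prem = cost_per_used_unit(10_000_000, 1_000, 0.20)   # ~20% on-premises utilization
cloud   = cost_per_used_unit(10_000_000, 1_000, 0.50)   # ~50% cloud-style utilization

print(f"on-premises: ${on_prem:,.0f} per used unit")
print(f"cloud-style: ${cloud:,.0f} per used unit")
print(f"ratio: {on_prem / cloud:.1f}x")   # 2.5x more expensive per useful unit at 20%
```

The absolute dollar figures are made up; the point is the ratio, which is what the utilization argument hinges on regardless of the exact spend.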

Very little, says Cockcroft, although the specter of security still lingers (and in fact gets a fresh breath of life with each new, highly public hack or failure). But the investments going into bolstering security on AWS and other clouds are set to pay off to the point where, within five years, “it will be impossible to get security certification if you’re not running in the cloud because the tools designed for datacenters are sloppily put together and can’t offer the auditing for PCI and other regulators.”

“Big enterprise has their gold-plated, enterprise-grade hardware, and all of their enterprise software support, and there are huge bills there. With cloud, this gets cleaned out, all of it, and replaced with open source software and commodity hardware, and those inherent savings, even with the re-engineering, are big. The difference in utilization alone, going from, for example, 10 percent to over 50 percent, is how a lot of companies are thinking about this.”

Aside from that interesting five-year projection, Cockcroft believes, as one might imagine, that a far greater number of Fortune 1000 companies will be running a great deal of their mission-critical workloads in the cloud over the same period. The only real exceptions will be where long-term investments have been made on-site in specialized hardware, including mainframes (yes, indeed, there are still plenty of them out there) or, at least until recently, deep-memory systems for running SAP HANA and other data-intensive analytics jobs.

The thing is, AWS is taking away that excuse to cling to in-house hardware for memory reasons alone. Today’s announcement of a new EC2 instance, built on a custom Xeon E7 processor and laden with 2 terabytes of memory per instance (an order of magnitude improvement over AWS’s previous deep-memory instance type and well above the previous winner in cloud memory on Azure), takes the zing out of that argument and ushers in a whole new set of potential workloads that can cling to as much memory as can be offered. This includes everything from the newest wave of machine learning and deep learning approaches to more capable in-memory analytics. It is a turning point, even for an enterprise user base that seems mostly blasé on the subject of performance.

For most big enterprises, the value of cloud is more about agility and time to market than it is about getting better performance or, in some cases, even cost alone, Cockcroft says. When asked how the large enterprise users he works with think about, for example, new instances with more and beefier cores, he says some care, but for most workloads it doesn’t matter much. What these users want is to reduce the time it takes to provision and roll forward. And besides, for most enterprise users with standard applications, the performance they’re getting from their virtual machines in house is no different from what they’re getting on AWS. “It’s secondary, outside of a very small, select set of workloads, especially when compared to the value of agility. Think of all the meetings, all the high paid people it takes to get a big datacenter project off the ground—all the time lost. The cost. Not to mention the ability to get development going right away.”

Cost still certainly matters, and how users do the math tends to depend on several factors. But at the point in the conversation with The Next Platform about datacenter utilization and how little enterprise users get out of their on-site IT, it came up that since the AWS mandate is to make its datacenters as efficient as possible, the ability to get dense, well-performing machines for an even lower price is on the way.

Just as we have inklings about what companies like Google are considering as they look at machines beyond the X86 borders, it stands to reason that AWS is always going to be on the lookout for more efficient architectures, provided the software ecosystem is there. Cockcroft says that among the trends he’s watching for large-scale datacenters serving cloud customers, and especially AWS, is the introduction of ARM processors and the fast-maturing software ecosystem developing around them. As we know from the likes of Broadcom, Qualcomm, Applied Micro, Cavium, AMD, and others, the time is ripe, and AWS is quite likely going to be among the first (and most motivated) to put a potentially cheaper way to compute (for themselves and presumably end users down the line) into play.

The cloud will be the norm in five years, Cockcroft is certain. And judging from the cadre of AWS-backed companies under the careful eye of Battery Ventures, it is certainly going to be the case for the startups that might go on to become the next Fortune 100. In the meantime, Cockcroft says he is keeping watch both on the processor possibilities beyond our current X86 reality and on the critical middleware and security features that will get the cloud out from under its own weight in terms of certification and verified stability.


6 Comments

  1. And nearly all of them will regret this at some point in the future, as this will open them up to so many vulnerabilities that the long-term loss will outweigh the short-term financial gain. Security, data ownership, and availability are all concepts the bean counters do not really understand. Just wait for the first major incident to happen. Amazon Web Services, Microsoft Azure, and Google Cloud are all such huge attack vectors now that it is just a matter of time until the first serious security breach happens (or it probably already has and nobody knows yet). Also, would you really trust Amazon, Microsoft, and Google themselves? I wouldn’t. Who says they are not harvesting your data for their own purposes? You will never be able to prove that they don’t. What if they sell your corporate secrets to your competitors? You probably will not be able to prove that either, unless you have insiders in those companies.

    • OranjeeGeneral, you really should look into this little-known technology called ‘encryption’ – it’s magical…

  2. Each cloud vendor has the advantage that they can create a new set of security services and cover hundreds of customers. The disadvantage is that they become an “attractive nuisance,” a one-stop shopping opportunity for attackers.

    What I actually expect to see in that time period is some set of common mechanisms, probably based on pre-existing work from the Orange Book days, and a community maintaining them. That will address individual as well as shared (“cloud”) datacenters.

  3. Capex vs Opex. Security vs. security. I plan on spending a lot of contracting time later in my life moving customers out of the cloud.

  4. In this big, wide world there will be places for both cloud and private data centers. If you are a business that exists by shaving nanoseconds off of transactions versus your competition, you won’t be playing in the cloud. If you deal with sensitive data and any breach, however small, can destroy you, you’ll opt for staying in your own data center, as the cost isn’t worth it. Most of life isn’t an either/or proposition.
