We are pleased to announce that the first book from Next Platform Press, titled “The State of HPC Cloud: 2016 Edition,” is complete. The printed book will be available on Amazon.com and other online bookstores in December 2016. In the meantime, supercomputing cloud company Nimbix is backing an effort to offer a free digital download edition for this entire week, from today, October 31, until November 6.
As you will note from the Next Platform Press page, we have other books coming in a similar manner this year, but the fact that this is the first is significant. The editors and creators of The Next Platform have closely followed key trends in both high performance computing and cloud infrastructure over the last several years, and we believed it was time to condense our thoughts and stories to bring better clarity to the rate of adoption, infrastructure availability, software licensing, and other trends.
The behemoths of the IT industry have been talking about running HPC simulations and models in a utility fashion for a lot longer than we have been calling it cloud computing. And the irony is that this is still, despite all of the compelling arguments in favor of HPC in the cloud, a nascent market and one that defies easy qualification and quantification.
We could no doubt find a data processing bureau example from the 1960s or 1970s where a scientific application was run on a paid-for basis without the customer actually owning the iron, but this is not what we mean when we say that HPC in the cloud has been around for a long time.
Back in the fall of 2001, way before it tacked Enterprise onto its name and months before it acquired Compaq, Hewlett Packard unveiled its Utility Data Center concept, bringing together a set of virtualization technologies that it acquired from Terraspring and developed in-house at HP Labs to create pools of virtual compute, storage, and networking capacity, plus a master switch that could dial them up and down across distributed systems; Philips Electronics and DreamWorks Animation were early customers. IBM started talking up its On Demand utility computing efforts around the same time, mostly aimed at webscale and enterprise workloads, and fired up its Supercomputing On Demand utilities in early 2003, with Power and X86 clusters for running HPC applications utility style and with Petroleum Geo-Services as the flagship customer. Sun Microsystems launched the Sun Grid in February 2005, combining its Solaris Unix systems, Java runtime, and Grid Engine workload management for a flat fee of $1 per core per hour, and got it up and running in March 2006.
That timing for the Sun Grid is significant because that is also when Amazon Web Services launched, and for all we know, these and other utility-style computing efforts may collectively have been the inspiration that drove the online bookseller and expanding retailing giant to start peddling raw compute and storage capacity as a service. What we can say is that HP, IBM, and Sun did not get utility computing right, but AWS certainly has, and hyperscalers like Microsoft Azure and Google Cloud Platform, hosters turned clouds like IBM SoftLayer and Rackspace Hosting, and upstarts like Nimbix, UberCloud, Sabalcore, and Penguin Computing have all set their sights on attracting traditional HPC (simulation and modeling) and new style HPC (machine learning and accelerated databases) workloads to their cloudy infrastructure.
To try to reckon how much HPC there is in the cloud, by which we mean in both private cloud infrastructure in the corporate datacenter as well as in the public clouds, we have to back into it several different ways, and even then, the error bars on market sizing are probably pretty large. Incidentally, we have exactly the same issues in trying to figure out how much of the infrastructure being sold across the IT industry is cloudy and how much of it isn’t (meaning it is traditional bare metal or virtualized infrastructure without sophisticated orchestration and without utility pricing).
What can honestly be said is that the definition of cloud keeps evolving and that all infrastructure will, in the fullness of time, be part of an orchestrated, metered, geographically distributed complex of compute, storage, and networking. And by then, we won’t call it cloud at all. We will call it computing, or better still, data processing and data storage, as we did in the old days.
Please do take advantage of this free download opportunity for the full book, in which we delve deeply into these and other trends. Again, the free download is sponsored by our friends at Nimbix, so take a second to drop them a line of thanks if you enjoyed the book for free (the list price will be around $30).