The best ideas are usually the oldest ones, thought up by early geniuses and perfected incrementally by others, and this is no less true in computing. Show us a new idea in computing and we can usually find that it is a twist on an old idea, modernized for a particular set of hardware and software and perhaps aimed at a slightly different use case.
This publication is called The Next Platform because everyone is trying to build one, either to sell to someone else or to run in their own datacenters. But creating a single, integrated platform that has all of the pieces necessary to run a diverse set of applications is a very old idea, and one that many system makers based their developments upon. The necessity to support different customer sets and different operating systems – first the Unix open systems revolution, then the Linux open source revolution alongside the movement of Windows from the desktop to the datacenter – eroded these platforms, and the advent of so-called industry-standard X86 servers did, too. The mainframe is still kicking, and proprietary midrange systems are still available, and both are as technically sophisticated as they ever were. But these vintage platforms have been largely replaced by a new kind of distributed platform, one that at large enterprises, governments, supercomputing centers, and hyperscale datacenter operators is increasingly being built on open source software and that is being used to manage all kinds of transaction processing, analytics, and simulation workloads.
Decades ago, proprietary and monolithic systems were the very first data processing platforms, with everything from compilers to databases and data stores to middleware (even if it wasn’t called that at the time) all woven into an operating system. The mainframe was arguably the first such platform in that it offered a complete software stack and, importantly, spanned a wide range of price points and performance capabilities across a variety of very different underlying systems.
No one would be so foolish as to suggest that we go all the way back to the proprietary days of the mainframe and minicomputer eras, with their relatively simple kinds of computing tasks. But modern platforms are clearly taking some lessons from that earlier era, and increasingly, in our conversations with upstarts in hardware and software platforms, more than a few of them pay homage to earlier computing generations. After a few decades of managing the integration of best-of-breed components to build systems, customers are looking for software stacks that are already integrated, but which also leave them room for choice, allowing them to pull out components and snap in other ones as they see fit. Organizations want the benefits of integration in The Next Platforms they are building or buying, but they do not want to sacrifice the ability to innovate with those platforms. And while there is plenty of closed source software out there in the enterprise and some in the HPC and hyperscale worlds, open source software, provided it has strong community backing and technical support, is as trusted as any closed source product on the market and in many cases is the preferred way to build the software components of a platform.
There is a dizzying array of hardware and software technologies that organizations are considering as they build out their platforms, and The Next Platform is dedicated to tracking and analyzing these with a keen eye on what users are doing and why they are doing it.
It may be difficult some days to see the lines between servers, switches, and storage, but there are still distinctions that matter. So The Next Platform will track these technologies as elements of modern platforms under the Compute, Store, Connect, Control, Code, and Analyze sections. Compute is where you will find processors and coprocessors and the software that virtualizes computing infrastructure. Store is where all things relating to storage, from storage devices on up to file systems, will live. Connect is all about networking in the datacenter, and Control is focused on the management software to control systems, switches, clouds, clusters, and applications. Code is where we cover the tools for creating applications, from compilers all the way up to platform cloud frameworks. And Analyze is a section devoted explicitly to the technologies created to do data analytics in its myriad forms. For those who are looking for more of an industry breakdown, we will also sort content into HPC, Enterprise, Hyperscale, and Cloud sections based on their primary use case.
This publication is also dedicated to the very high end of the market, where the largest enterprises, government organizations, and hyperscale and cloud service providers are pushing the boundaries of performance and scale and, to put it bluntly, are out there on the cutting edge and taking the most risks.
Many of the technologies dreamed about in academia, in high performance computing, and among hyperscalers get tested there first and eventually find broader applicability in the enterprise – Hadoop analytics and InfiniBand networking are but two examples that spring immediately to mind. Or, in some cases, ideas cross-pollinate between HPC and hyperscale and have yet to trickle down to enterprises. Facebook's Open Compute server and storage designs and Rackspace's OpenStack cloud controller are making headway into the HPC centers of the world and among financial services firms. GPU, FPGA, DSP, and other kinds of accelerators are being deployed by enterprises after many years of development and deployment in academia and segments of the HPC and financial services sectors. Whatever the technology and however it is moving around between these groups of customers, suffice it to say that if a technology might find its way into large enterprises, The Next Platform will tell you about it. Any technology that can be used to increase throughput or lower latency and that has potential appeal to the HPC, hyperscale, cloud, or enterprise sectors will be covered as well.
A lot of boundaries that we are all used to are blurring. As an example, with converged systems in the enterprise, serving and networking are brought under the same metal skins. And with hyperconverged systems, storage is merged with the serving and networking to create a seamless and generally virtualized set of infrastructure that is, for all intents and purposes, a complete platform unto itself for running a class of enterprise applications.
Other examples of this fuzziness abound. For instance, most modern infrastructure is based on clusters of X86 systems, although there is room in the market for the addition of ARM and Power systems in clusters and for scale-out NUMA machines that offer in-memory processing on a much larger scale than nodes in a cluster can do. Similarly, there are differences in storage and in the interconnects that link systems to each other and to their storage. But a system that was created primarily for modeling and simulation, for instance, can be tweaked here and there and then be effective as a platform for data analytics or in-memory transaction processing. (This is precisely what SGI has just done to expand its UV line of shared memory systems.) The word convergence gets a lot of use these days, but no matter how you describe it, there is certainly an increasing amount of overlap as vendors of traditional HPC systems chase new markets.
Tracking this interplay and interchange of technology between these different parts of the high end of the IT market is one of the core missions of The Next Platform. How risk-averse enterprises adopt these technologies for competitive advantage – and why and when they do it – is a theme we will return to again and again. This is what is most interesting about the IT sector, after all, and it is what drives the budgets that keep this whole industry moving forward, and here at The Next Platform, us too.