The Next Platform launched February 23, 2015, providing in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

If you’re looking for a more detailed overview of the vision, read the Editors’ Introduction. If that’s too wordy, you might be in the wrong place — but you can always reach out to the editorial staff for more specific information if you need it.

Some companies are building their own platforms for internal use, while others are building platforms for others to use to run their applications as a service.

Inspiration for platform designs is coming from the hyperscale data center operators that have pushed the boundaries of scalability for new kinds of analytics, as well as from supercomputing centers that have been scaling up simulation and modeling workloads for decades.

Regardless of where the inspiration originates, all platforms have some common characteristics. They are based on an integrated stack of hardware and software, tuned to run specific workloads, and outfitted with orchestration tools to automatically react to changes in those workloads.

The idea is to break down as many silos in the data center as possible, virtualizing system components and mashing them up in interesting ways to increase the efficiency of the underlying servers, storage, and networks.

The Next Platform will step behind the headlines and provide analysis to help readers understand what technologies are used to solve particular problems, how they are integrated with other systems and applications, why organizations choose particular technologies to solve their problems, and why they do not pick something else.

The new publication will cover the key elements of the modern system, from processors, main memory, storage, and networking up through operating systems, middleware, and other key systems software such as databases and data stores, systems management tools, and cluster and cloud controllers.

It will look at the myriad clustering technologies available today to bring compute to bear, from hyperconverged systems for virtualized enterprise workloads, to shared memory NUMA machines, all the way up to massively parallel supercomputing systems.

The technologies created to solve one set of problems can often be used to solve another set, and The Next Platform will examine how this is actually happening in the data centers of the world. If a technology can be used to rejig a system to have more throughput, deliver lower latency, or to be easier to manage or program, then The Next Platform will drill down into it.

The Next Platform will be inclusive about underlying hardware and systems software, and it will also be broad in its coverage of the applications that drive the design of systems from the beginning.

Transaction processing systems are still important and evolving with in-memory databases, but they are augmented by layers of analytics software that wrap around transactions before they start, as they are running, and after they are done.

Modeling and simulation are also key aspects of the manufacturing and financial sectors, and these applications are increasingly being integrated with other kinds of analytic workloads to make better products.

The important thing to consider about the distinct but related markets – large enterprise, high performance computing, hyperscale data centers, and large-scale clouds – is that technologies developed in one arena are being adopted by the others.

For example, Facebook’s Open Compute server and storage designs and Rackspace’s OpenStack cloud controller are making headway into the HPC centers of the world and among financial services firms, just to name two early adopters. GPU, FPGA, DSP, and other kinds of accelerators are being deployed by enterprises after many years of development and deployment in academia and in segments of the HPC and financial services sectors.

Various analytics tools that got their start at hyperscale and media companies (Hadoop being the obvious one) and file systems and middleware that have an HPC heritage are being mashed up and used by enterprises, too. (Everybody is looking to sell a replacement for the Hadoop Distributed File System and other layers of the Hadoop stack because of their inherent limitations.)

Tracking this interplay and interchange of technology between these different parts of the high end of the IT market is one of the core missions of The Next Platform. How risk-averse enterprises adopt these technologies for competitive advantage, and why, is what is interesting.

“We believe it is time to create a single publication that brings several different parts of the high-end of the IT market together to reflect the increasing convergence of systems and interdependence of workloads that are being brought to bear to solve tough IT problems,” explains Timothy Prickett Morgan, co-editor of The Next Platform. “We also want to get back to the idea that depth and insight matter. It takes more than sound bites to make sound decisions.”

According to co-editor, Nicole Hemsoth, “All told, there are a few tens of millions of programmers, administrators, architects, and managers in the IT world, and probably somewhere on the order of a third of them work at hyperscale, HPC, and high-end enterprise shops as defined above. That is the broadest definition of our intended audience — and it is one that we know well.”

The Next Platform will, Hemsoth notes, focus on the IT challenges of companies with more than several thousand employees and revenues in excess of $250 million, and their equivalents in the public sector.

“Over the course of a year we will document these trends by tapping into user-specific stories that reflect the global economy at large, with stories from the manufacturing, distribution, retail, energy, financial services, public, media, and other sectors.”

