Without the right kind and the right amount of I/O between the components of a system, all of the impressive feeds and speeds of the individual components don’t amount to more than a pile of silicon and sheet metal. Distributed compute and storage may be the norm for a lot of workloads, but the bottlenecks keep shifting around as workloads and components all advance on their own independent trajectories.
This is one of the reasons why there is an increasing diversity of interconnects that link servers to each other – often to make distributed, software defined storage systems – and that also link servers to their storage and to their accelerators. A system is a complex tapestry of fabrics, and getting it to perform well means picking the right transports and protocols to make everything hum along.
Up and down the system and storage stack, everything is changing and changing fast.
For the first several decades of computing, storage was just an extension of the server and really an integrated part of the system, often connected by means that were proprietary to each system maker.
Network attached storage for file systems in the 1980s and then storage area networks for block storage in the 1990s started breaking storage free from servers, and it was not long before storage became a kind of system in its own right, standing free from – even if it was sometimes based on – servers.
While there are plenty of NAS and SAN appliances being sold into the enterprise to support legacy applications, modern storage tends to be either disaggregated – with compute and storage broken free of each other at the hardware level but glued together on the fly with software to look local – or hyperconverged – with the compute and block storage virtualized and running on the same physical server clusters and atop the same server virtualization hypervisors. The prominence of unstructured data, which drives businesses in many ways, has also brought object storage to the forefront, making it in some respects more important than traditional block and file storage. And inside the server, the storage hierarchy has been expanded with the advent of persistent memory.
Never before has the interconnect and storage hierarchy – moving from compute cores out through caches and now multiple layers of main memory, and fanning out to layers of flash and disk storage and expanding to tape and cloud archives – been so wide, so deep, and so interdependent. And never before have there been so many different options for optimizing the performance of servers and the distributed computing platforms for which they are the building block.
After stalling for the better part of a decade, datacenter networking is back on the Moore’s Law growth curve, supplying more bandwidth and more ports at lower costs. Ethernet still dominates except in niche cases where low latency is absolutely demanded, and there is no reason to believe this will ever change – not with Ethernet absorbing all good ideas. The hyperscalers have shaped Ethernet to their needs, distinct from mainstream Ethernet switching, as much as HPC shops adopted InfiniBand two decades ago and made it their own, and now enterprises can benefit from these advances by adopting the networking approaches employed by HPC and hyperscale datacenters – and many are doing just that.
The interconnect landscape is more than Ethernet and niche technologies such as InfiniBand and Omni-Path. The PCI-Express bus within servers has become a fabric in its own right: it is being used as an interconnect between chiplets on a die and sockets in a system as well as a peripheral bus, as a switch infrastructure for compute accelerators and for in-chassis and in-rack storage, and as a transport layer for the NVM-Express flash protocol as well as for accelerator and storage protocols such as CCIX, Gen-Z, CXL, and OpenCAPI.
These, along with Nvidia’s NVLink interconnect for its Tesla GPU accelerators, are all going to be important architectural elements of future systems, which will by and large have hybrid compute, storage, and networking. Silicon photonics is also going to play a complementary role in providing a new transport for protocols to lash system components together within rack infrastructure.
We talked about all of this momentum in both storage and networks at our sold-out Next AI Platform event in May, where the focus was on building the entire hardware stack – from accelerators to storage systems.
Given all of the interest in data storage and movement that day, we are breaking this topic out into an I/O focused event, which we expect will be just as technical and dynamic.
And so, we proudly introduce The Next I/O Platform event: a packed day of technical insight and conversation, with no PowerPoint presentations permitted – just in-depth, live, on-stage interviews with those at the bleeding edge of storage and networking, on both the end user and technology creation sides.
The event will be held at The Glasshouse in San Jose, CA on September 24.
As regular readers know, The Next Platform goes far beyond the basics to understand how things work, what they replace and why, who will use what technologies and why, what the ROI is, and finally, how these new innovations impact the wider market and competitive ecosystems. Interviewers at The Next I/O Platform will ask the same deep, relevant questions with opportunities for audience questions and plenty of time for networking and one-on-one conversations.
Join us on September 24 for a day of live interviews and panel discussions, with plenty of time for Q&A and conversation. Our previous event quickly sold out and we expect this focused day to do so as well. Secure a space now before full registration opens.
The full agenda and other details are coming soon, but we have a great lineup in the pipeline. We can’t wait to share it and meet you there.
We are already sold out of Platinum sponsorships, but there are other options. Please contact our events producer, Jimme Peters, at jimme@24-7consulting.com or our event chair, Nicole Hemsoth, at nicole@nextplatform.com.