Software Already Defines Your Datacenter

Just stop and take a few minutes from your always-on day to reflect on how enterprise IT used to be. A lot has changed in the past two decades: x86 servers, Ethernet, storage networking (SAN and NAS), high-availability clusters enabled by shared (network) storage, storage and server virtualization, flash storage, software-defined networking, and now software-defined storage.

Contrast all of that with the emergence of hyperscale datacenters: Internet properties that leveraged commodity servers and open source software, and disrupted traditional enterprise IT architecture with Everything-as-a-Service.

As Marc Andreessen famously said, software is eating the world. What he foresaw a few years ago goes without saying today: whether you’re an enterprise IT department, Amazon Web Services, or a service provider, software is already running everything in your datacenter. Software runs on app servers, inside network switches, and within dedicated storage systems. This is no surprise, because network and storage devices are, in fact, also x86 servers.

Change Moves IT From Hindsight To Insight

The latest disruption, public cloud, built on server virtualization and web-scale automation, is an inversion of control that effectively bypasses IT by letting business service owners deploy apps without concern for traditional datacenter infrastructure. The cloud runs your app; compute and storage resources are provisioned off premises. The resulting operational simplicity is broadly appealing, and hence is also becoming available on-premises in the form of integrated private cloud software for the enterprise. IT is now considering hybrid cloud, a combination of private and public cloud hosting for apps and data, to right-size its investments.

Recognizing the magnitude of this change, the industry needed a way to express the new IT landscape. While software, in all of these cases, is an implementation detail, it effectively defines the purpose of your entire datacenter: it runs on the hardware there and hosts apps with a service-driven agility IT has never experienced before. A new reality has been declared. It’s called the Software Defined Data Center (SDDC), and VMware’s coinage is just one of a number of attempts at a catchy name for it.

The SDDC has become the next-generation platform for new enterprise apps. Data analytics that enable real-time business intelligence create insight where only hindsight was previously available from offline batch processing. As they say, change is the only constant; now it’s real-time, too.

A Very Physical Hangover

Until quite recently, conventional wisdom held that enterprise storage should be consolidated into a centralized, shared IT resource. This storage network attempted to satisfy the unique and varied demands of the entire organization’s business services, essentially trying to be all things to all users. The SAN promised storage performance, capacity, availability, and protection. In practice, only subsets of those promises were delivered, and in 2015 storage is more fragmented than ever. It is not uncommon to find siloed storage systems or networks for different workloads: one for virtual server infrastructure, another for virtual desktops, a separate SAN for clustered databases, and filers for unstructured data.

[Image: shipwreck] Data virtualization and automated data mobility finally gets your data off the island.

The root cause is ridiculous: the SAN administrator has had to manually provision storage for each app’s unique service-level demands and adjust it as usage changes. This created islands of storage, a hangover from the legacy physical world. Apps are exposed to the static storage limitations of the pre-virtualization era. Storage silos are inflexible, costly, and error-prone, and they represent the antithesis of the SDDC.

Static Storage Creates Desert Data Islands

As an example, server-attached PCI-Express flash delivers very high performance to mission-critical applications without incurring the latency of storage network access. These apps typically rely on shared-nothing architectures for high availability. Public cloud storage, by contrast, makes the opposite tradeoff: nearly limitless, inexpensive capacity at the price of I/O performance. Benefiting from the best of these and other storage “extremes” requires a new approach: data should be placed on the right storage to satisfy the immediate need, and moved when those needs change, as experience and research indicate they invariably will.

To be universally effective, the solution must orchestrate multiple dimensions: performance, protection, availability, durability, reliability, security, and cost. But the key is to deliver storage agility from the application’s perspective, as the consumer of heterogeneous data services, rather than through the static provisioning of storage islands. Thus, the ultimate solution is data (not storage) virtualization.
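To make that concrete, here is a minimal sketch in Python of what SLO-driven placement could look like. The class names, tiers, and numbers below are illustrative assumptions, not any particular product’s API; the point is simply that a policy engine can pick the cheapest tier that still satisfies every dimension an application declares.

from dataclasses import dataclass

# Hypothetical illustration only: tier names, numbers, and fields are assumptions,
# not a real data virtualization product's API.

@dataclass
class ServiceLevelObjective:
    max_latency_ms: float       # performance the app needs
    min_durability_nines: int   # durability the data needs
    needs_encryption: bool      # security requirement
    max_cost_per_gb: float      # budget ceiling

@dataclass
class StorageTier:
    name: str
    latency_ms: float
    durability_nines: int
    encrypts_at_rest: bool
    cost_per_gb: float

TIERS = [
    StorageTier("server-side-flash", 0.1, 3, True, 2.50),
    StorageTier("all-flash-array", 1.0, 5, True, 1.20),
    StorageTier("hdd-array", 8.0, 6, True, 0.10),
    StorageTier("cloud-object", 50.0, 11, True, 0.02),
]

def place(slo: ServiceLevelObjective) -> StorageTier:
    """Pick the cheapest tier that satisfies every dimension of the SLO."""
    candidates = [
        t for t in TIERS
        if t.latency_ms <= slo.max_latency_ms
        and t.durability_nines >= slo.min_durability_nines
        and (t.encrypts_at_rest or not slo.needs_encryption)
        and t.cost_per_gb <= slo.max_cost_per_gb
    ]
    if not candidates:
        raise ValueError("No tier satisfies this SLO")
    return min(candidates, key=lambda t: t.cost_per_gb)

# A latency-sensitive database volume lands on flash; a cold archive lands in the cloud.
print(place(ServiceLevelObjective(1.0, 5, True, 5.00)).name)     # all-flash-array
print(place(ServiceLevelObjective(100.0, 11, True, 0.05)).name)  # cloud-object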

The Data Virtualization Holy Grail

Instead of replacing what already works reasonably well, given its physical isolation, data virtualization adds new capabilities to existing storage infrastructure. Data virtualization provides non-disruptive, application-transparent data mobility. It complements existing storage by allowing data to be seamlessly distributed and repurposed, orchestrating data with respect to application-defined service level objectives (SLOs) and actual (versus expected) usage.

This means data that can benefit from all-flash performance will reside on either server-side flash storage or an all-flash array, while other data will be stored on cost-efficient hard disk storage, or even cloud storage, according to policy. Virtualization decouples apps from the underlying storage infrastructure and elevates data to the same level of sophistication, liberating apps and data from the physical constraints of the legacy datacenter.
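As a further illustrative sketch (the thresholds, tier names, and function below are assumptions, not a description of any vendor’s implementation), such a policy engine might re-evaluate placement from observed usage rather than from the original estimate:

import datetime

# Hypothetical sketch of usage-driven data mobility; thresholds and tier names
# are assumptions for illustration only.

COLD_AFTER_DAYS = 30

def choose_tier(observed_p99_latency_ms, last_access, slo_max_latency_ms, now=None):
    """Re-evaluate placement from actual usage instead of the original estimate."""
    now = now or datetime.datetime.utcnow()
    idle_days = (now - last_access).days
    if observed_p99_latency_ms > slo_max_latency_ms:
        return "server-side-flash"   # promote: the app is missing its latency SLO
    if idle_days > COLD_AFTER_DAYS:
        return "cloud-object"        # demote: the data has gone cold
    return "hdd-array"               # steady state: cost-efficient disk

# Cold data drifts toward cheap cloud capacity; a struggling hot volume moves to flash.
ninety_days_ago = datetime.datetime.utcnow() - datetime.timedelta(days=90)
an_hour_ago = datetime.datetime.utcnow() - datetime.timedelta(hours=1)
print(choose_tier(0.5, ninety_days_ago, 2.0))  # cloud-object
print(choose_tier(9.0, an_hour_ago, 2.0))      # server-side-flash

What makes a decision like this usable in practice is the property described above: the move itself is non-disruptive and transparent to the application.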

The brains behind knowing the right thing to do, and when, is no longer an overworked IT staff; it’s software. And the SDDC is now able to deliver on its promise: always-on data agility.

 

Robert Wipfel brings nearly three decades of engineering and architecture leadership to his role as Principal Systems Architect at Primary Data. Follow @Primary_Data on Twitter for news and views on data virtualization and dynamic data mobility.
