Today’s tech users are spoiled. The devices we use daily, such as our phones and our laptops, rapidly process information. Our Internet connections are fast and reliable (well, most of the time). As a result, we can access personalized content – from news to entertainment – on-demand. In this climate, it’s no surprise that we expect to be able to quickly access and transfer whatever data we need, whenever we need it.
Yet, in the enterprise IT space, storage pros haven’t been able to resolve the latency issues that occur when they move large amounts of data to the cloud, especially when unpredictable Internet performance is a factor. That factor is now prevalent in an enterprise world made up of geographically distributed islands of compute. If you’re not considering physical distance, and even the speed of light, when choosing a storage system, you’re going to face unexpected delays and issues, and your users, who are accustomed to on-demand access for everything else, won’t tolerate the slowdown. If you’re not careful, these problems can kill your cloud initiative or business transformation plan.
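A quick back-of-envelope calculation shows why physical distance matters. Light in optical fiber travels at roughly two-thirds of its vacuum speed (a common rule of thumb, assumed here), so distance alone puts a hard floor under round-trip latency before any routing, queuing or storage overhead is added. A minimal sketch:

```python
# Back-of-envelope estimate of how distance alone bounds latency.
# Assumes fiber propagation at ~0.66c, a common rule of thumb;
# real-world round trips are higher due to routing and queuing.

SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum
FIBER_FACTOR = 0.66            # typical propagation speed in optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A cloud region ~4,000 km away can never answer in less than ~40 ms,
# no matter how fast the storage behind it is.
print(f"{min_round_trip_ms(4000):.1f} ms")  # -> 40.4 ms
```

That floor is why regional proximity to cloud hubs, not just storage speed, shapes the latency your users actually feel.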
Latency is an old conversation when it comes to IT infrastructure. It’s not uncommon for storage pros to feel bombarded – by vendors and even the media – with questions about how latency affects their storage systems’ performance. However, performance and latency are not one and the same. And in many cases, it’s network latency, not storage infrastructure latency, that creates the hurdle most IT teams are unequipped to jump. This issue is never more relevant than when the public cloud comes into play.
The cloud is disrupting the way businesses consume IT. Forrester predicts the global public cloud market will grow to $160 billion by 2020, and companies everywhere are turning to cloud services for increasingly important business applications. Because of that shift, network latency – which can create delays for data as it moves to the cloud – is becoming a top concern for IT teams. Although some teams maintain that solving network latency isn’t a pressing issue because their organizations aren’t hosting critical workloads in the cloud, this is a myth. As new reference architectures and solutions enter the market, the cloud is becoming a secure, flexible and affordable option for every data set, and many companies are already feeling network latency pains sparked by public cloud use.
Enterprises will not be able to take advantage of regional cloud hubs and gain the flexibility and economic benefits the cloud has to offer until they solve the latency challenge. One way to solve the problem is to work with service providers that write cloud-ready applications from scratch. You’ll be able to maximize performance and scalability for platforms such as Splunk and others, while minimizing the size of your storage infrastructure, the effort you’re sinking into management and the resulting costs of each.
To properly diagnose and address latency in a storage context, it’s also important to recognize the difference between a tier – a permanent, protected home for a given data set – and a cache, which holds a temporary copy of hot data. Tiered storage systems can quickly frustrate users with overrun bandwidth, causing delays as data transfers between disks and across Internet connections. Meanwhile, caching systems can be optimized to reduce latency, maximize performance, and minimize churn throughout the data lifecycle.
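The distinction is easy to see in miniature: a cache serves hot data locally and can evict copies at any time without losing anything, because the tier remains the data’s authoritative home. A minimal read-through cache sketch (illustrative only, not any particular vendor’s implementation; the class and names are invented for this example):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Illustrative LRU read-through cache: hot reads are served locally,
    misses fall through to the slower backing tier. The cache holds copies,
    not the data's home, so evicting an entry never loses data."""

    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store  # the authoritative tier
        self.capacity = capacity
        self._lru = OrderedDict()

    def read(self, key):
        if key in self._lru:
            self._lru.move_to_end(key)      # fast path: mark recently used
            return self._lru[key]
        value = self.backing_store[key]     # slow path: fetch from the tier
        self._lru[key] = value
        if len(self._lru) > self.capacity:
            self._lru.popitem(last=False)   # evict the coldest copy
        return value

# A plain dict stands in for a remote or cloud tier.
tier = {"blockA": b"...", "blockB": b"..."}
cache = ReadThroughCache(tier, capacity=2)
cache.read("blockA")  # miss: fetched from the tier, then cached
cache.read("blockA")  # hit: served locally, no round trip
```

The point of the sketch is the asymmetry: a cache miss costs one round trip to the tier, a hit costs none, and eviction is harmless, which is why caching absorbs network latency in a way that shuttling whole data sets between tiers cannot.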
You already know that when you struggle with latency, your company takes a blow to its efficiency and business results. The public cloud’s growing prominence gives a new complexion to those familiar latency problems. However, this shift can have a positive outcome.
Rather than fearing the public cloud and the supposed latency and reliability issues it threatens to introduce, dive in and get to know the platform better. Only by becoming familiar with the cloud’s weaknesses and working with partners well-versed in overcoming them can you find the ideal way for the public cloud to support your data. Then, your company can take advantage of the cloud’s unprecedented economics, flexibility and scalability benefits.
Lazarus Vekiarides is the chief technology officer and co-founder of ClearSky Data, the global storage network that simplifies the entire data lifecycle and delivers enterprise storage as a fully managed service. Previously, Vekiarides was a member of the core leadership team at EqualLogic and an executive at Dell. He is an expert in data storage, virtualization and networking technologies.