The recent OpenStack Tokyo event offered a view of the growth and maturation of this vital open source cloud technology. Big enterprises are operationalizing OpenStack deployments, and many more are poised to follow as we cross the chasm into broad adoption. But as we get into the nuts-and-bolts of real implementations, the question of the day is becoming clear: What is the best way to scale an OpenStack cloud while assuring high performance and manageability? When you reach the limitations of stock OpenStack components, where should you turn for reliable, battle-tested technologies to take your platform to the next level?
The good news is that there is no shortage of options for third-party extensions to scale and enhance OpenStack in exactly the right way for your business. The community has done outstanding work to converge on standard interfaces and frameworks that allow any vendor to add their technology to a common control plane – while making it simple for organizations to switch out components at will. In this article, we’ll explore the whys and hows of tricking out your OpenStack cloud to deliver high performance at scale for your business.
OpenStack Goes Mainstream – With Help
For organizations of all kinds, OpenStack has never been more relevant—or popular. Offering public-cloud flexibility at low cost, the platform is projected to generate $3.3 billion in revenue by 2018, with related services growing 32.9 percent annually through 2019. Business drivers like operational efficiency, support for agile innovation and escape from the traditional purchased-software model give OpenStack a significant tailwind for mass adoption.
Still, questions have been raised about the scalability of OpenStack—and reasonably so. As some have noted, a stock OpenStack deployment can’t really scale past 30 nodes. OpenStack is being positioned as a single control plane for your entire cloud infrastructure, but in reality, each change you make to it ripples across and impacts several other components, calling for additional adjustments to networking, load balancing, security, and so on. This can feel all too similar to a traditional datacenter, where each new application added called for resource-intensive, time-consuming, and error-prone manual configuration.
What people want is a fully automated datacenter where you can use the OpenStack control plane to throw a new app or change at the cloud and have it automatically set up by policy on the back end. That is something you can’t do with a vanilla deployment.
Of course, this raises the question: Who said anything about vanilla deployments and stock components? Much of the beauty of OpenStack lies in its highly interoperable and integration-friendly design. Each of the compute, storage, and networking projects that make up OpenStack offers a clean API set that allows for easy replacement with different components of your choice. You can think of main trunk OpenStack software the way a car customizer views a stock Chevy or BMW – as a fully functional if basic vehicle that, with the addition of third-party components, can become the high-performance ride of your dreams.
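To make the "swap out a component" idea concrete: in Neutron, OpenStack's networking project, the core plug-in and service providers are selected in configuration rather than code. The fragment below is an illustrative sketch only – exact option names and driver paths vary by release and vendor, and the vendor driver path shown is a placeholder:

```ini
# neutron.conf -- illustrative fragment, not a copy-paste recipe.
[DEFAULT]
# Stock deployment: the Modular Layer 2 (ML2) core plug-in.
core_plugin = ml2

[service_providers]
# Replacing the default load-balancing provider with a vendor product
# is a one-line change here; "acme" is a placeholder vendor name.
# service_provider = LOADBALANCERV2:Acme:acme_lbaas.driver.AcmeDriver:default
```

Tenants and the OpenStack API above this line are unaffected by which provider is configured – that is the point of the common control plane.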
As this approach becomes common, we are seeing deployments that put to rest any concerns that OpenStack can’t scale. In a recent survey by the OpenStack User Committee, respondents reported rising numbers of compute nodes, cores, instances, and IP addresses as well as growing storage. This is important validation of OpenStack as a true cloud-scale technology for the enterprise. The question to answer now is how you can customize – or “trick out,” in automotive terms – OpenStack to fit your organization’s technology strategy and business needs.
Scaling OpenStack Your Way
In enterprise terms, tricking out OpenStack means using its clean interfaces to swap out default components for purpose-built plug-ins for anything from messaging and load balancing, to management and orchestration, to foundational compute or storage. These days, two areas drawing particular attention are software-defined networking (SDN) and application delivery control (ADC).
SDN plays a critical role in ensuring efficient agility for OpenStack clouds at scale through the ability to make dynamic changes to your network without human involvement. As you move, add or remove workloads in your cloud environment to meet new requirements, an SDN plug-in will align network resources automatically to relieve stress, an especially valuable capability for multi-tenant environments. Key vendors in this area include Cisco Systems, Big Switch Networks, Nuage Networks, and Brocade Communications; large telecom providers like Ericsson and Nokia also provide their own SDN controllers.
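The swap described above is typically expressed through Neutron's ML2 framework, where the mechanism driver line selects which backend programs the network. The fragment below is a hedged sketch – the vendor driver name is invented for illustration, and real values come from your SDN vendor's documentation:

```ini
# ml2_conf.ini -- illustrative fragment; "vendor_sdn" is a placeholder.
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vxlan
# Stock deployments typically use the Open vSwitch driver; pointing
# this at a vendor SDN controller swaps the backend without changing
# the Neutron API that workloads and tenants see.
mechanism_drivers = openvswitch
# mechanism_drivers = vendor_sdn
```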
The agility and network flexibility of SDN goes hand-in-hand with load balancing and ADC functionality. An ADC plug-in for OpenStack can make it simple to manage any number of ADC instances throughout your cloud environment automatically, through a single point of administration. In this way, you can use a prescriptive, app-driven approach to simplify network design, automate network configuration, consolidate network services and integrate application-awareness into your network as a whole.
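Under the hood, this single point of administration is just a driver interface: OpenStack defines the calls, and each vendor supplies an implementation. The sketch below is a simplified, hypothetical illustration of that pattern – every class and method name here is invented for the example, not taken from actual Neutron LBaaS code:

```python
from abc import ABC, abstractmethod


class ADCDriver(ABC):
    """Hypothetical driver interface: the control plane codes against
    this abstraction, never against a specific vendor product."""

    @abstractmethod
    def create_load_balancer(self, name: str, vip: str) -> dict:
        ...


class ReferenceDriver(ADCDriver):
    """Stands in for the stock software load balancer."""

    def create_load_balancer(self, name, vip):
        return {"name": name, "vip": vip, "backend": "reference"}


class VendorDriver(ADCDriver):
    """Stands in for a vendor ADC: same interface, different
    machinery underneath (a real driver would call the vendor's
    management API here)."""

    def create_load_balancer(self, name, vip):
        return {"name": name, "vip": vip, "backend": "vendor-adc"}


def provision(driver: ADCDriver, name: str, vip: str) -> dict:
    """The control plane sees only the interface, so swapping vendors
    is a configuration change, not a code change."""
    return driver.create_load_balancer(name, vip)


# Swapping the driver changes the backend, not the API the cloud sees.
stock = provision(ReferenceDriver(), "web-lb", "10.0.0.5")
vendor = provision(VendorDriver(), "web-lb", "10.0.0.5")
```

This is why hundreds of distributed ADC instances can sit behind one point of administration: the fan-out to individual instances happens inside the driver, invisible to the control plane above it.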
The first step to trick out your own OpenStack environment for cloud-scale agility and performance is to identify the third-party vendors who work with your OpenStack distribution. Fortunately, most major vendors work with most major distributions, so you will have plenty of options and can choose plug-ins from as many sources as you like – one vendor for your SDN controller, another for your virtual ADC, another for your virtual router, another for your hypervisor, and so on. Every large infrastructure vendor now has an OpenStack strategy and component, using this control plane as the common point that everyone plugs into.
Avi Networks, Array Networks, Citrix Systems, Radware, and others offer ADCs that are compatible with OpenStack. As you choose components, it is important to look for a vendor who already has a mature product in the space you are looking at and is bringing the same proven enterprise infrastructure technology into OpenStack. Citrix NetScaler, for example, through its NetScaler Control Center, supports cloud-scale management across complex multi-tenant cloud environments through automated fleet management of hundreds of distributed ADCs. The fact that there are potentially hundreds of ADCs running across the infrastructure is transparent to the OpenStack system and therefore the users – it “just works” and all the necessary flow-through configuration and changes happen under the hood.
At the same time, the component should be fully standards-aligned and API-compatible to preserve the openness and flexibility of your OpenStack environment. Stay away from extensions that subvert the OpenStack frameworks and APIs that the community has worked so hard to introduce. Now that we’ve converged on this common control plane, it is important to work within its parameters. To be a good OpenStack citizen is to take a highly distributed, scaled product and provide a standardized interface into OpenStack. In this way, the standardized architecture itself maintains primacy, while vendor-specific processing and activities take place underneath it.
In the true spirit of open source, the clean, open API sets stewarded by the OpenStack Foundation ensure that every vendor is now replaceable. If you aren’t happy with a particular component or its developer goes out of business, you’ll always have more options to choose from and can easily swap it out without creating a mess of infrastructure dependencies. You will always have free rein to do what you want and to trick out your OpenStack deployment any way you like.
OpenStack offers the flexibility, operational efficiency, cost savings, and opportunities for innovation today’s businesses need – and it can do so at scale. By tricking out your stock OpenStack software with extensions for compute, storage, networking, management and other key resources, you can ensure performance and agility no matter how large and dynamic your environment may be.
Nand Mulchandani is the vice president of market development for Citrix Systems. Mulchandani joined Citrix through its acquisition of ScaleXtreme in May 2014, where he was the CEO and co-founder. Prior to Citrix, Mulchandani was an entrepreneur-in-residence at Accel Partners where he started work on the company that became ScaleXtreme, in addition to working on projects in cloud computing, consumer internet, as well as advising a number of companies on product and growth strategies. Mulchandani was previously CEO of OpenDNS and led the company through a growth phase in DNS traffic, consumer router integrations, and the launch of enterprise products. Previously, he served as senior director of product management and marketing at VMware, responsible for security strategy, product management, and security marketing. In addition to a number of other startups, Mulchandani was at Sun Microsystems, working on compilers and optimization, including work on the first Java compilers. He holds a bachelor’s degree in computer science and mathematics from Cornell University.