HPC Clusters Lead a Double Life with Hadoop

The number of production Hadoop clusters is growing, but far too often that growth means more clusters dedicated solely to running Hadoop. Each of those clusters brings extra management, maintenance, and cost overhead, hence the rush to make existing clusters do double duty, handling specialized workloads and Hadoop at the same time.

We have already seen efforts to sidestep the separate-cluster problem in projects like Myriad, which aims to let YARN and Mesos share the same cluster resources, and that same need is driving a call from other areas for “one cluster to rule them all.”

Hadoop was slow to enter high performance computing territory, and when it did find its way into supercomputing sites, it was tucked away in a corner: a small cluster, separated from the big systems, chewing on well-defined analytics workloads. That HPC-Hadoop segregation is set to change in the wake of increasing demand from supercomputing sites, which want to be spared the maintenance and management of a separate Hadoop cluster that serves only slices of their overall workflows. While Hadoop will never displace the traditional HPC software stack, it is becoming easier to run Hadoop on the same cluster where those workloads live, all while using optimizations designed for HPC hardware.

Having such capabilities would be useful for those whose Hadoop demands are likely to be spotty or who are still in the experimental phase with the framework, which describes a large number of HPC centers we have encountered over the last few years. Hadoop is not a core competency at these sites, and it is certainly not central to large-scale simulations, but it is increasingly of interest for key slices of the data analysis side of such applications. Further, high performance computing clusters are often tuned for compute-intensive workloads and run their own optimized stacks, from HPC-specific compilers to schedulers and other middleware packages.

Consider, for instance, a traditional HPC-centric cluster management framework like Scyld, itself rooted in the early days of supercomputing at NASA (it was built on Beowulf clustering). That tooling was layered on top of Beowulf, branded Scyld, and spun out as an offering that Penguin Computing acquired in 2003. Since then, the team at Penguin (which offers HPC and enterprise hardware, middleware, and cloud services) has built Scyld into a turn-key way to quickly provision and manage HPC clusters. It has all of the goodies one might expect to find in an HPC-centric offering, including common schedulers (Torque, Maui, Grid Engine), various MPI implementations, as well as HPC-centric compilers from Intel and PGI. For those with Red Hat or CentOS clusters, Scyld Clusterware adds critical tweaks for the supercomputing set, including network driver optimizations, support for InfiniBand, and operating system tuning to deliver the high throughput and high utilization required for HPC workloads.
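To make the scheduler and MPI side of that stack concrete, here is a minimal sketch of the kind of batch script such an environment typically consumes. This is generic Torque/PBS usage rather than anything Scyld-specific, and the queue name, node counts, and mpi_example binary are hypothetical placeholders.

    #!/bin/bash
    #PBS -N mpi_example            # job name
    #PBS -l nodes=4:ppn=16         # request 4 nodes with 16 cores each (hypothetical sizes)
    #PBS -l walltime=00:30:00      # 30-minute wall-clock limit
    #PBS -q batch                  # queue name is site-specific

    cd $PBS_O_WORKDIR              # Torque starts jobs in $HOME; return to the submit directory
    mpirun -np 64 ./mpi_example    # launch 64 MPI ranks across the allocated nodes

Submitted with qsub, a script like this is what the Torque layer of such a stack schedules across the cluster, with the chosen MPI implementation handling the inter-node communication.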

“What we’ve been seeing in the market lately is that there are a lot of HPC users that are using big data and just the same, there are plenty of big data users who are using HPC. The technologies and use cases are coupled, which means they should be supported in an environment that can address both.”

This is all well-tested for standard HPC workloads, but what happens when users want a certain set of nodes to speak Hadoop while carrying over all of the hardware and other optimizations of the cluster? While provisioning is not automatic, Penguin has stepped up to the plate with Scyld Clusterware for Hadoop, which lets an HPC cluster handle Hadoop workloads alongside HPC jobs on the same system. According to Victor Gregorio, VP of Cloud Services at Penguin Computing, this frees HPC sites from managing two clusters and allows for more seamless management of the two workflows running on different nodes across the system.

“The idea is that users can provision a bare metal Hadoop environment and an HPC environment all from the same pool of resources and single point of management but the key thing is that there is a single image install,” said Gregorio. “Because of this, changes that are applied to that image are immediately propagated to nodes in real-time without rebooting so the functionality that allowed for change management for HPC environments now allows for the same for Hadoop environments.” The OS optimizations, especially on the networking side, also apply to Hadoop. And because the nodes are lightweight bare metal systems that can run out of a RAM disk, each one can stay focused on the application it is running rather than on services and other tools that might interfere with memory or CPU utilization.

To be fair, this is not the only way HPC and Hadoop are operating in conjunction. Instead of provisioning Hadoop nodes from the admin side to run a Hadoop job alongside other HPC applications in a separate environment, a great deal of work has gone into bringing high performance to the framework itself by swapping out the native HDFS file system for Lustre, running MapReduce under Slurm, and other approaches, as sketched below. Still, for the HPC center that is simply experimenting or has only small, occasional Hadoop workloads, the question is why one would ever bother with a distinct cluster when a growing array of tools lets it tap into its own hardware (to say nothing of using a cloud-based version of Hadoop on Amazon Web Services or another provider).
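For a sense of what that file system swap looks like in its simplest form, the sketch below points a stock MapReduce example job at a POSIX-mounted parallel file system rather than HDFS. The /mnt/lustre path is a hypothetical mount point, and purpose-built Lustre adapters go further than this, but the idea of bypassing HDFS is the same.

    # Run the bundled wordcount example against a Lustre (or any POSIX) mount
    # instead of HDFS by using file:// URIs, which route I/O through Hadoop's
    # local file system implementation rather than the HDFS client.
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        wordcount \
        file:///mnt/lustre/demo/input \
        file:///mnt/lustre/demo/output

Because every compute node sees the same parallel file system, the map and reduce tasks read and write shared storage directly, which is precisely the property the Lustre-instead-of-HDFS work exploits.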

As a closing side note, this approach of letting HPC clusters lead double lives is not limited to Hadoop running alongside HPC. Penguin and other companies with roots in high performance computing have found ways to take the best of the non-HPC world, including virtualization and OpenStack, and offer it as an option across a defined set of machines. For instance, with something like Scyld Cloud Manager, users can free up hardware for HPC requirements by virtualizing head nodes, login nodes, schedulers, and other drags on infrastructure, and they can use that same capability to virtualize applications that do not require absolute bare metal performance.

In many ways, the race is on to make the HPC cluster “The Next Platform,” one that is primed for the most demanding of all workloads but can also be shuttled off to handle additional jobs in a way that is seamless and one hell of a lot less costly than running a dedicated cluster.


