Singularity Containers for HPC, Reproducibility, and Mobility

Containers provide a mobile, secure, and reproducible computing infrastructure that is now ready for production HPC. In particular, the freely available Singularity container framework was designed specifically for HPC. The barrier to entry is low and the software is free.

At the recent Intel HPC Developer Conference, Gregory Kurtzer (Singularity project lead and LBNL staff member) and Krishna Muriki (Computer Systems Engineer at LBNL) presented beginner and advanced tutorials on Singularity. One of Kurtzer’s key takeaways: “setting up workflows in under a day is commonplace with Singularity”.
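To sketch how low that barrier is, the commands below pull a stock image from Docker Hub and run a program inside it. This is a hedged example: it assumes a Singularity 2.3+ installation, and the image and output file names are illustrative and may vary by version.

```shell
# Pull an image from Docker Hub into a local Singularity image file
# (Singularity 2.3+; the generated file name may differ by version).
singularity pull docker://ubuntu:16.04

# Run a command inside the container -- by default the user's home
# directory, /tmp, and current working directory remain visible.
singularity exec ubuntu-16.04.img cat /etc/os-release

# Or drop into an interactive shell inside the container.
singularity shell ubuntu-16.04.img
```

Note that no daemon and no root privileges are needed to run the container; the user inside the container is the same user as outside.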

Many people have heard about code modernization and are familiar with how it addresses scaling and performance challenges. Code modernization also implies modifying applications so they are mobile and can deliver reproducible science across systems and software versions, both now and in the future.

“The proof of the pudding is in the eating”, as highlighted by the global acceptance of Singularity at HPC centers around the world on large systems. Following are a few large-scale examples:

Organization | Cores | Machine
Texas Advanced Computing Center | 462,462 | Stampede
GSI Helmholtz Centre for Heavy Ion Research | 300,000 | Green Cube
National Institutes of Health | 54,000 | Biowulf
UFIT Research Computing at the University of Florida | 51,000 | HiPerGator
San Diego Supercomputer Center | 50,000 | Comet and Gordon
Lawrence Berkeley National Laboratory | 30,000 | Lawrencium
Holland Computing Center at UNL/LHC | 14,000 | Crane and Tusker

Figure 1: Partial list of organizations. (Full list available here.)

Singularity was created for HPC in the absence of other solutions

Singularity was designed so that applications which run in a container have the same “distance” to the host kernel and hardware as natively running applications as shown below.

Figure 2: Singularity preserves the “nearness” of native applications to the OS

This translates to performance, jitter reduction, and the ability to directly utilize GPUs and communications fabrics such as InfiniBand and Intel Omni-Path Architecture (Intel OPA).

Intel supports Singularity containers on HPC products and provides Application Notes showing how to import large, complex HPC applications such as NWChem into Singularity containers so they can run on Intel processors and Intel OPA. Intel recommends running “like on like”: container images whose kernel and OS distribution match those of the host’s Intel OPA basic release. They state, “Other combinations may work, but there is no support implied”. (Source: Intel Application Note J57474-1.0, page 9.)

Figure 3: Identifying compute nodes for containers (Source Intel)
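One way to follow that “like on like” guidance is to bootstrap the container from the same distribution the compute nodes run. The definition file below is a minimal sketch using Singularity 2.x syntax; the package names (`libpsm2`, `my-hpc-app`) are placeholders, not Intel’s recommendations, and would need to match the host’s actual OPA software stack:

```
# app.def -- minimal Singularity definition file (2.x syntax).
# Bootstrap from the same distribution as the host compute nodes
# so kernel-dependent fabric libraries behave as expected.
BootStrap: docker
From: centos:7

%post
    # Placeholder packages: install the application and the fabric
    # user-space libraries matching the host's Intel OPA release.
    yum -y install epel-release
    yum -y install libpsm2 my-hpc-app   # 'my-hpc-app' is hypothetical

%runscript
    exec my-hpc-app "$@"
```

On Singularity 2.x this is built with `sudo singularity create app.img && sudo singularity bootstrap app.img app.def`; later releases consolidate this into `singularity build`.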

Unlike Docker (currently the most well-known enterprise container system) and other container systems, Singularity preserves the security model of the host HPC system. Plus, Singularity supports MPI – an essential part of HPC computing.

Succinctly, if a user wants to be root inside a Singularity container, they must first be root on the system outside the container. A user with root access can view and change any file on the system – either inadvertently or maliciously. Thus, HPC security models tightly control root access and forbid non-authorized people (e.g. general users) from gaining it. Because Docker and other enterprise container systems rely on root-level, user-writable daemons and other security-permeable design features, HPC systems managers must isolate both the HPC networks and user access to data before these containers can be allowed on the system.

Succinctly, if a user wants to be root inside a Singularity container, they must first be root on the system outside the container – Singularity Permissions, Access, and Privilege

The ramifications are far-reaching: such isolation precludes access to InfiniBand and Intel OPA high-performance fabrics, optimized storage platforms, and locally mounted file-systems. Thus a typical Docker solution uses a virtual cluster within the physical machine. Unfortunately, virtual machines introduce jitter, which can degrade HPC application performance by a factor of 4x or more. (See the paper, “The Case of the Missing Supercomputer Performance”, for more about the impact of even tiny amounts of jitter on HPC applications.) Network isolation, jitter, and other issues explain why Kurtzer tells people that Docker and other enterprise container systems “remove High Performance from HPC”.

Further, Singularity supports MPI, which enterprise container systems omit. In particular, Kurtzer notes that Docker has “No reasonable support or timeline for MPI”; current estimates put MPI support in Docker at least two years out. Succinctly, Kurtzer observes that “HPC is not a use case for Docker or other enterprise container systems like runC and RKT”. Kurtzer created Singularity in part because, “Patches to help make Docker/runC/RKT a better solution for HPC have been submitted, but most have not been accepted!”

Patches to help make Docker/runC/RKT a better solution for HPC have been submitted, but most have not been accepted! – Gregory Kurtzer (Singularity project lead and LBNL staff member).
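In practice, Singularity’s MPI support uses a hybrid model: the host’s MPI launcher starts the ranks, and each rank execs into the container. The sketch below is illustrative rather than definitive – the image name, application path, and rank count are assumptions, and the MPI versions inside and outside the container generally need to be compatible:

```shell
# Compile the MPI application inside the container beforehand, e.g.:
#   singularity exec mpi_app.img mpicc -o /opt/app/hello hello.c
#
# Then launch with the *host* mpirun; each rank runs inside the
# container, so the fabric (InfiniBand, Intel OPA) is used directly.
mpirun -np 64 singularity exec mpi_app.img /opt/app/hello
```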

This explains why Kurtzer created Singularity to address enterprise design omissions (security, performance, and MPI) plus other issues. The lack of these features in currently popular container systems also provides the reason for HPC users to evaluate and adopt Singularity on their HPC systems.

Please see the security documents for more information about the Singularity security model.

Mobility in the Cloud

Singularity is also used to perform HPC in the cloud on AWS, Google Cloud, Azure, and other cloud providers. This makes it possible to develop a research workflow on a laptop or a laboratory server, then bundle it to run on a departmental cluster, on a leadership-class supercomputer, or in the cloud.


Singularity containers can be built to include all of the programs, libraries, data and scripts such that an entire workflow can be contained and either archived or distributed for others to replicate no matter what version of Linux they are running. Singularity also runs on Mac and Windows systems.

Singularity also blurs the line between container and host such that local directories can exist within the container. Applications within the container have full and direct access to these files, which enables arbitrary and persistent workflow configurations. Meanwhile, users can get results reported to their local file-system.
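This blurring of container and host is visible with bind mounts. The sketch below assumes a Singularity 2.x installation and uses made-up paths and script names; `-B` (long form `--bind`) maps a host directory into the container, so writes land directly on the host file-system:

```shell
# Map the host's /scratch/project into the container as /data.
singularity exec -B /scratch/project:/data analysis.img \
    python /opt/pipeline/analyze.py /data/input.csv

# Results written under /data inside the container appear in
# /scratch/project on the host -- no copy step required.
```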

Containers can also be bundled so they contain commercial code. Essentially, the container can be installed using a certified version of the operating system. The Singularity documentation then states, “The application environment, libraries, and certified stack would all continue to run exactly as it is intended” inside the container.

The advantage of containers is that legacy workflows will continue running far into the future. This is a double-edged sword: because workflows keep working “as-is”, the onus is on the maintainers of the containerized workflow to ensure the code stays current rather than becoming fossilized. Still, even ancient containers can be exhumed to provide result validation.

Use cases

Users are finding that they can deploy an application on an HPC cluster with an installed workload manager such as Slurm, HTCondor, or Torque with little effort and with performance similar to workflows in other container systems. Kurtzer tells people, “Setting up workflows in under a day is commonplace with Singularity”.

Setting up workflows in under a day is commonplace with Singularity – Greg Kurtzer
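As an illustration of how a containerized step drops into an existing scheduler, here is a hedged sketch of a Slurm batch script. The module name, image path, node counts, and application are all site-specific assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=sing-demo
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00

# Load the site's Singularity module, if one is provided.
module load singularity

# Launch MPI ranks across the allocated nodes, each rank running
# inside the container image.
mpirun singularity exec /shared/images/app.img /opt/app/solver input.dat
```

Because Singularity containers run as ordinary user processes, the scheduler accounts for them exactly like native jobs.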

The National Institutes of Health wrote, “We’ve had many users ask for programs like TensorFlow and OpenCV3 that are difficult or impossible to install with our current OS. Many users have also been asking for Docker to create portable reproducible data analysis pipelines. Singularity allows us to provide this functionality to our users in a secure environment. Our admins have found it easy and intuitive to use Singularity. Some of our staff have even begun to install tricky applications into Singularity containers and write wrapper scripts and module files that make the Singularity environment transparent to the end user.” (Source: Singularity Registry download file)

Nextflow wrote a detailed blog about their work to containerize a bioinformatics pipeline at the Center for Genomic Regulation (CRG). Their benchmarks show that there isn’t any significant difference in the execution times between Docker and Singularity. (Source: The Nextflow blog, “More fun with containers in HPC”.)

Pipeline | Tasks | Mean task time (Singularity / Docker) | Mean execution time (Singularity / Docker) | Execution time std dev (Singularity / Docker) | Ratio
RNA-Seq | 9 | 73.7 / 73.6 | 663.6 / 662.3 | 2.0 / 3.1 | 0.998
Variant call | 48 | 22.1 / 22.4 | 1061.2 / 1074.4 | 43.1 / 38.5 | 1.012
Piper-NF | 98 | 1.2 / 1.3 | 120.0 / 124.5 | 6.9 / 2.8 | 1.038

Figure 4: Docker vs. Singularity runtimes (times in minutes; reprinted courtesy Nextflow)

The February 2017 Intel Application Note, “Building Containers for Intel Omni-Path Fabrics using Docker and Singularity” shows how to configure and run Singularity on Intel OPA fabrics. They provide a specific example of building and running NWChem in a Singularity container and note in the conclusion:

“When comparing the container technologies, we found Singularity to be a viable alternative to Docker for running MPI applications in our test HPC cluster environment. Singularity interfaces with the MPI mechanisms installed on the host machines and can be used with external resource managers. It is also possible to run Singularity directly as a normal user without needing root permissions to run certain tasks.”

This same Application Note also shows how to convert a Docker container into a Singularity container.
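The mechanics of such a conversion are straightforward. Two common routes, both hedged sketches with placeholder image names: pull the image directly from a Docker registry with Singularity itself, or convert a locally built image with the `docker2singularity` helper:

```shell
# Route 1: pull straight from Docker Hub (no Docker install needed;
# 'myorg/myapp' is a placeholder image name).
singularity pull docker://myorg/myapp:latest

# Route 2: convert a local Docker image with the docker2singularity
# helper container (requires Docker on the build machine).
docker run --privileged \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD":/output \
    singularityware/docker2singularity myorg/myapp:latest
```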

Gorgolewski et al. wrote in PLOS Computational Biology, “Previous containerized data processing solutions were limited to single user environments and not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready to use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms.”

Carlos Eduardo Arango Gutierrez at the Universidad del Valle says that Singularity helps them, “in reducing development, deployment and optimization effort in our objective of building a large-scale, organized and self-managed cluster, offering a distro and vendor neutral environment for the development of heterogeneous HPC applications.”

Lai Wei-Hwa notes, “Finally, a solution for Docker’s security holes.” In particular, they find the following advantages:

  • Root in your Docker container does not mean root on your host thanks to Singularity.
  • Kernel panic in your bootstrapped Docker container doesn’t have to mean that your host goes down.
  • Docker container breakouts can also be mitigated by Singularity.

Containers are a new concept for the scientific and HPC communities. For security, mobility, and reproducibility reasons, developers are strongly encouraged to look into a container solution like Singularity.
