NSA Makes Another Cloud Jump with $2 Billion HPE Deal

Perhaps the U.S. National Security Agency (NSA) is over procuring and maintaining some of its on-prem supercomputers.

Over the next decade, the NSA will spend $2 billion on HPE’s GreenLake platform for its high performance computing needs. This investment pales in comparison to the recently revealed $10 billion AWS contract for data storage and ingestion. It could be that the NSA is spreading its risk (and investment) across cloud ecosystems — using one service, like AWS, for ingestion and retention while the GreenLake platform handles HPC simulations or AI, for instance.

According to HPE’s HPC and Mission Critical Solutions VP, Justin Hotard, “implementing artificial intelligence, machine learning, and analytics capabilities on massive sets of data increasingly requires high performance computing systems.” He adds that the NSA can now “tackle a range of complex data needs but with a flexible, as a service experience.”

The new service includes a combination of HPE Apollo systems and HPE ProLiant servers, which will ingest and process high volumes of data, and support deep learning and artificial intelligence capabilities. As part of the HPE GreenLake service, HPE will build and manage the complete solution that will be hosted at a QTS data center — a hosting facility that delivers secure, compliant data center infrastructure and robust connectivity to support scaling of operations.

The key word here is flexibility. It is not surprising that large agencies with widely varying workloads might want to shake off the inflexibility of systems procured years ahead of emerging trends. For instance, some systems for big agencies are planned as many as three years before ever coming online. As we have seen with the swift rise of AI/ML, much can happen in that short span, leaving agencies with systems that might not provide the right type of compute for where they want to go next.

On the financial side, even though $2 billion might sound like an incredible investment, over the course of the next ten years it is not that far removed from what an agency like the NSA might spend on traditional supercomputers. While there has been little public clarity on the systems installed at the NSA’s massive Utah data center, let’s assume they are on par with some at the U.S. national labs and say each machine costs $300 million.

Assume a $300 million system every three years (on average): that eats up around one billion dollars in investment over a decade. The savings are not just in the infrastructure spend. Power and cooling, facilities, and of course the human capital investment can reach the second billion within ten years. There are other, more difficult-to-price angles here as well. HPE is handling all of the backend storage, applications and frameworks, and security assurances, and it is likely a distributed service for redundancy, something that would be expensive for the NSA to build out physically as well.
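As a minimal sketch of that back-of-the-envelope math (the $300 million system price, the three-year refresh cadence, and the assumption that operating costs roughly match hardware spend over a decade are all the estimates above, not published NSA figures):

```python
# Rough comparison of on-prem supercomputer spend vs. the $2 billion,
# ten-year GreenLake contract. All inputs are the article's assumptions,
# not published NSA numbers.

CONTRACT_YEARS = 10
GREENLAKE_TOTAL = 2.0e9        # $2 billion over the decade

SYSTEM_COST = 300e6            # assumed price per supercomputer
REFRESH_YEARS = 3              # assumed refresh cadence
OPEX_FACTOR = 1.0              # assume power, cooling, facilities, and staff
                               # roughly match the hardware spend over ten years

systems_bought = CONTRACT_YEARS / REFRESH_YEARS      # ~3.3 machines
hardware_spend = systems_bought * SYSTEM_COST        # ~$1.0 billion
total_on_prem = hardware_spend * (1 + OPEX_FACTOR)   # ~$2.0 billion

print(f"Hardware over {CONTRACT_YEARS} years: ${hardware_spend / 1e9:.1f}B")
print(f"Hardware plus operations:     ${total_on_prem / 1e9:.1f}B")
print(f"GreenLake contract:           ${GREENLAKE_TOTAL / 1e9:.1f}B")
```

On those assumptions the two paths land in roughly the same place before flexibility, redundancy, or staffing risk are even priced in.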

Resource flexibility is priceless. Being able to decide on a dime to run a massive graph neural network training job, then spin that around for serving on machines that don’t need powerful CPUs? Tackling double-precision, dense HPC applications in tandem with low- or mixed-precision workloads? That would likely have been difficult on machines procured years ago, which might have an older-generation GPU on some nodes and majority-CPU architectures.

Unlike broad cloud contracts, an HPE GreenLake arrangement will allow the NSA to tailor the backend infrastructure to its mission. HPE provides options for different compute requirements and can “add on” as needs change. GreenLake has targeted platforms for everything from Hadoop-centric workloads to AI/ML, SAP HANA, databases, and VDI.

It is for these reasons that we expect large agencies like the NSA to continue forgoing large supercomputers (except when security is paramount) in favor of the flexibility and managed care of clouds.

The NSA will be giving the green light for the GreenLake shift in 2022.


1 Comment

  1. How is this different from the so-called “strategic outsourcing” of the past, a tailored, full IT operations service model with min/max capacity commitments from both sides?

    The initial proposals are traditionally very attractive and allowed many large corporations to show a clearly positive business case. What gets expensive is the “change management,” where every minor deviation from the SOW carries a steep price. The perceived flexibility quickly proved to be purely coin-operated.

    In recent years, many corporations have therefore not extended their outsourcing contracts and have instead started to bring IT back in house, combined with new processes and a new culture to enable more flexibility than they had with either the previous in-house operation or the outsourced one.

    In tailored environments where you don’t share resources, i.e. where staff and servers are not actually shared, any flexibility a service provider can give you, you can just as well create yourself, without having to pay the premium for it being a “change.”

    It will be extremely interesting to see whether the public sector’s experience, coming one or two decades after the private sector’s, will mirror that pattern or deviate from it.
