Today the White House issued a presidential executive order to create a national strategic computing initiative. In essence, the order notes the significance of supercomputing for economic competitiveness and scientific progress and calls upon the funding and governmental agencies with budgetary authority to weigh this directive from on high as they set about making future allocations.
Although it is a powerful statement for several reasons that we will outline below, how such a grand vision gets funded is still some time off. Further, it will fall to the decision-making hands of various agencies with their own missions and objectives. Still, having the weight of presidential authority behind it is important, in part because it keeps supercomputing in the limelight, something that has been easier with increased buzz about the closer competitive quarters with other nations that also see HPC as a strategic economic asset.
Presidential orders are not uncommon when it comes to technology initiatives, and like others, while this one does not come with a dollar amount (the president does not make budget allocation decisions), it does set the stage for future investment. Specifically, the emphasis is on exascale computing in the 2025 timeframe. While the CORAL systems will form the triumvirate of top Department of Energy machines in the United States (and likely among the top in the world when they emerge in 2017-2018), this initiative could push other agencies with supercomputing investments tied to their own mission requirements toward eventual exascale-class systems of their own. Among these might be the National Science Foundation, which operates some large machines in the United States with partners but might see a more significant investment over time.
As with similar decrees from the president in other areas, including manufacturing, genomics, big data, and materials science, the statement is made and signed by the president, and this feeds the decision-making process for future efforts as budgets are aligned. To better understand the process and what this all means for the future of exascale investments in the United States, The Next Platform talked with Dr. Horst Simon, Deputy Lab Director at Lawrence Berkeley National Laboratory and a long-time participant in discussions in Washington around the significance of supercomputing and the future of its funding.
Simon notes that this marks a success for the efforts of his colleagues, who feel that supercomputing has been left off the map in the context of larger technology policies and debates. However, with increased attention to the fact that the United States no longer holds the top machine in the world by a long shot (although the U.S. carries the dominant share of the Top 500 systems) and the notable uptick in progress on the supercomputing front from other nations in Asia and Europe, the time is right to strike for more attention.
“This is a great thing to hear coming from the White House. But if you go back five years ago or so, we had PCAST [President’s Council of Advisors on Science and Technology] putting out a report on information technology, and high performance computing was barely mentioned in there and was seen as a technology that was no longer relevant.” In his view, this created the impression that what was happening in the national labs and research supercomputing sites was not important, especially compared to the larger focus of the time, which was on social networks and the explosive growth and potential building there. “This report fundamentally hit exascale at the time; it had a long-term impact,” Simon says.
According to Simon, the budgets designated for specific agencies are yet to be seen, but the order should begin having an impact in 2016 when allocations are being made. The Department of Energy has a clear, defined mission, especially when it comes to computing, he says, which leaves the additional impact there somewhat up in the air, but the HPC bug might catch in other agencies that need not only powerful FLOPS-capable machines, but also systems that can handle data at exabyte scale.
These investments (eventually monetary as well as philosophical) encompass five main themes. The first and most obvious is to build systems that can apply exaflop-level computing capability to exabytes of data. What is interesting here is that numerical simulation, once the focus of HPC in general, is being set alongside a more data-centric view. “In the last ten years, a new class of HPC system has emerged to collect, manage, and analyze vast quantities of data arising from diverse sources.” The exabytes of data are being treated with equal weight to the exaflop capability, a fresh emphasis from a policy point of view, since there have often been distinct HPC and big data initiatives, but none that have taken a combined approach in so balanced a way. This is also in line with the architectural decisions made for the next-generation pre-exascale machines as part of the CORAL procurement: there is a named “data centric” architecture for the OpenPower systems at Oak Ridge and Lawrence Livermore and an equally data-focused emphasis for the Argonne Theta and, eventually, Aurora machines.
“By combining the computing power and the data capacity of these two classes of HPC systems, deeper insights can be gained through new approaches that combine simulation with actual data…Achieving this combination will require finding a convergence between the hardware and software technology for these two classes of systems.”
One of the other important themes connected to this is a mandate to begin bringing new HPC hardware options to the fore. What’s interesting here is that the document recognizes the Moore’s Law party is coming to an end, at least over the long tail being projected for exascale. Accordingly, the focus is not just on semiconductors as we know them, but on any promising technology that might lead the way.
“There are many possible successors to current semiconductor technology, but none that are close to being ready for deployment.” Given this, the government must “sustain fundamental, pre-competitive research on future hardware technology to ensure ongoing improvements in high performance computing.”
This is where things get interesting for companies like D-Wave and for vendors doing novel things with ARM, memristors, and other technologies that we follow for fun here at The Next Platform, albeit with somewhat of a distant gaze, given that so much has yet to be proven in silicon, let alone on practical real-world applications.
While the exascale and exabyte combination is important technologically speaking, the goal, as defined by the second theme, is to keep the U.S. at the forefront of HPC capabilities. According to a fact sheet that was circulated, “Other countries have undertaken major initiatives to create their own high-performance computer technology. Sustaining this capability requires supporting a complete ecosystem of users, vendor companies, software developers, and researchers.” To be fair, as noted above, the U.S. does still hold the largest share of any nation in terms of the number of HPC systems (publicly listed on the Top 500, anyway). Still, as the document states, it is not just a matter of hitting the exascale target for its own sake; there must be real applications that can take advantage of that kind of scale and scope.
Accordingly, the fourth theme is to improve HPC application developer productivity. In working with vendors, agencies will “emphasize the importance of programmer productivity as a design objective. Agencies will foster the transition of improved programming tools into actual practice, making the development of applications for HPC systems no more difficult than it is for other classes of large-scale systems.” Further, the goal is to make HPC readily available (the fifth pillar), which connects with the application goals just stated. “Agencies will work with both computer manufacturers and cloud providers to make HPC resources more readily available so that scientific researchers in both the public and private sectors have ready access. Agencies will sponsor the development of educational materials for next generation HPC systems, covering fundamental concepts in modeling, simulation, and data analytics, as well as the ability to formulate and solve problems using advanced computing.” This means new programs and future investments in tools, libraries, languages, and more, all of which are gaps for future exascale machines (and already challenges for the pre-exascale systems, as detailed here).
As the White House concluded in the full document today, “By strategically investing now, we can prepare for increasing computing demands and emerging technological challenges, building the foundation for sustained U.S. leadership for decades to come, while also expanding the role of high-performance computing to address the pressing challenges faced across many sectors.”