It is one thing to know that large-scale, expensive supercomputers back the world’s capability to provide accurate global and regional forecasts, but it is another entirely to understand that the investments made in improving those systems can translate to billions, if not tens of billions, of dollars.
Wrapped together, the agricultural, transportation, tourism, general business, and other economic impacts of accurate (or woefully inaccurate) forecasts are easy to overlook. An even more difficult tale to tell is just how much an order of magnitude boost in computing power can turn all of that around. For the public and funding agencies alike, the argument is that for the same cost as, say, a fighter jet, the country can get far higher-resolution weather models of both global and regional scope that deliver far more accurate results. This counts both for severe weather events and for more spot-on extended forecasts that drive our economy in more ways than might be quickly visible (transportation alone is rife with examples).
For a country like the United States, which was the leader for decades in numerical weather prediction modeling and weather systems operation, investments like these dwindled due to lack of leadership, funding, and misalignment with the research community—all of which compounded in some spectacular failures. The most obvious sign of reduced investments in critical forecasting infrastructure is in the computing investments, argues Dr. Cliff Mass, Professor of Atmospheric Sciences at the University of Washington, and prolific writer on all things weather-related, including some detailed pieces on the need for sustained supercomputing investments for global and regional weather prediction and how the U.S. edge has dulled.
Mass says that until three years ago, the United States had fallen so far behind in its computational investments in weather forecasting that it had one-tenth the computing power of the European center. This was an embarrassing place to be overall, but it came to full light during Hurricane Sandy, when the Europeans saw the storm coming but the U.S. models, which were much lower-resolution, could not come to the same conclusions. Since that time, NOAA has made fresh investments and brought the U.S. back toward the top of the pack for global and regional forecasts, but there is still a long way to go. "Theoretically, the more computing power you throw at weather models like these, the better the predictions get. There is no such thing as having too much compute power here. And the economic incentive is clear."
As Mass tells The Next Platform, highly publicized failures include the costly inability to deliver accurate forecasts that could have provided faster warning about Hurricane Sandy, and the economic impacts of over-predicting the great snowstorm (that never really happened) in New York City last winter. But even with these shortcomings, the case for funding large-scale supercomputers for weather prediction is a tough one, at least compared to how readily Federal funds seem to flow toward big supercomputers that target climate modeling research at NASA centers and various national labs.
To put all of this in perspective, consider that large-scale global forecasting in the United States is limited to two centers: NOAA and the U.S. Navy's Fleet Numerical center are the only organizations tasked with the long-range, complex ensemble-based forecasts that require massive supercomputers. To be fair, this limited number is not out of balance with other countries, or even continents for that matter. Although there are many regional centers that run heavy-duty HPC forecasts, global modeling requires far more brute computational force, and there are only a few companies on the planet catering to this small but ultra-high margin market. So far, the one company with the dominant share of weather forecasting systems is supercomputer maker Cray.
As we described in detail earlier this year, Cray has been on a winning streak when it comes to top global weather forecasting centers. One of the most notable wins as far as the United States goes (and, in fact, the one that tipped the scales more in the U.S. favor in terms of competitive national weather systems) was a win at NOAA.
The NOAA supercomputer deal will deliver a 10X increase in application performance at the center, and interestingly will do it by moving from a 2.5 petaflops machine to a 5 petaflops system. (The performance scales non-linearly, and to their benefit, with the addition of more cores.) But what is often missing when news hits the mainstream about big investments like this ($44.5 million) is what it means for the future of weather forecasting in the U.S. That 10X improvement will, according to Dr. Louis Uccellini, director of NOAA's National Weather Service, translate into the ability to process "quadrillions of calculations per second that all feed into our forecasts and predictions. This boost in processing power is essential as we work to improve our numerical prediction models for more accurate and consistent forecasts required to build a Weather Ready Nation."
As NOAA explained earlier this year, in advance of the upgrade, each of the two operational supercomputers will first more than triple its current capacity later this month (to at least 0.776 petaflops, for a total capacity of 1.552 petaflops). With this larger capacity, NOAA's National Weather Service in January began running an upgraded version of the Global Forecast System (GFS) with greater resolution that extends further out in time. "The new GFS will increase resolution from 27 km to 13 km out to 10 days and from 55 km to 33 km for days 11 to 16. In addition, the Global Ensemble Forecast System (GEFS) increased the number of vertical levels from 42 to 64 and increased the horizontal resolution from 55 km to 27 km out to eight days and from 70 km to 33 km for days nine to 16."
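Those resolution figures explain why a doubling of raw petaflops is needed for such upgrades. As a rough rule of thumb (not from the article, and ignoring model-specific details), halving the horizontal grid spacing quadruples the number of grid columns, and the CFL stability condition then forces a proportionally smaller timestep, so compute cost grows roughly with the cube of the resolution ratio. A minimal sketch of that back-of-envelope arithmetic, with the hypothetical helper `relative_cost` standing in for any real model's cost accounting:

```python
# Back-of-envelope cost scaling for a grid-based forecast model.
# Assumption (not from the article): cost scales as (old_dx/new_dx)^2
# for the two horizontal dimensions, times another (old_dx/new_dx) for
# the shorter timestep the CFL condition forces, times the ratio of
# vertical levels.

def relative_cost(old_dx_km, new_dx_km, old_levels=1, new_levels=1):
    """Rough relative compute cost of a resolution upgrade."""
    ratio = old_dx_km / new_dx_km
    horizontal = ratio ** 2            # more grid columns in x and y
    timestep = ratio                   # CFL: finer grid needs smaller dt
    vertical = new_levels / old_levels # more vertical levels cost linearly
    return horizontal * timestep * vertical

# GFS upgrade cited above: 27 km -> 13 km horizontal spacing
print(relative_cost(27, 13))           # roughly 9x per forecast

# GEFS: 55 km -> 27 km horizontally, 42 -> 64 vertical levels
print(relative_cost(55, 27, 42, 64))
```

Under these assumptions, even the "modest-sounding" 27 km to 13 km step is close to an order-of-magnitude increase in work per forecast, which is why a 10X application-performance boost buys roughly one halving of grid spacing rather than several.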
Cray Supercomputers A Port In Many Storms
And if even that is difficult to process in terms of value, consider the impetus. These investments were spurred by the Hurricane Sandy "wake up call" that Mass says was sad, "but one of the best things that could have happened for weather prediction" because it connected the value of high-resolution weather models to the real-life consequences when those fall short. The Disaster Relief Appropriations Act of 2013 went into effect following the public shaming of the U.S. weather system for its failure to accurately predict the storm, and was originally to give IBM first crack at the system (although the Lenovo acquisition caused concern, making way for Cray to step into the deal).
Cray had another big win with the Met Office in the UK for $156 million, and another worth $53 million to provide a new Cray XC40 supercomputer to the Bureau of Meteorology in Australia, which, for context, will allow the center to run nearly eight times the number of forecasts it can with its current machines. The point is, capacity and capability matter when it comes to global forecasts, and the U.S. is finally getting current. The company is pushing its streak of weather wins further today with the announcement that the Danish Meteorological Institute is investing over $6 million in a new system that will provide a 10X boost to its capabilities, and we expect this run to continue globally.
The point is, at least from the U.S. perspective, delivering highly accurate weather models is critical to national and regional economies. Although weather forecasting at global scale is expensive, the economic impacts of not deploying the highest-resolution models can be catastrophic. And although the White House has recently backed investments in future exascale-class supercomputers (systems that Mass says can and should eventually be used for highly accurate, long-range weather ensembles), he says it will be an uphill battle to keep investments flowing to extend the capabilities of these new supercomputers once they're installed, and to keep pushing the capabilities of ensemble forecasts in the years ahead.