Exascale Capabilities Underpin Future of Energy Sector

Oil and natural resource discovery and production are incredibly risky endeavors; the cost of simply finding a new barrel of oil has tripled over the last ten years. Discovery teams want to ensure they drill only in the most lucrative locations, which these days means looking to hydrocarbon sources that are increasingly inaccessible for a bevy of reasons.

Even with renewable resources like wind, there are still major financial risks. Accurately predicting shifting output and siting expensive turbines are two early-stage challenges, and maintaining, monitoring, and optimizing those turbines is an ongoing pressure.

The common thread that ties risk to reward is the capability to model and simulate resource discovery and renewable operations. At the heart of that capability is supercomputing, which, according to the European and Brazilian teams behind the EU-funded High Performance Computing for Energy (HPC4E) project, is the key to keeping energy on track for the decades ahead.

While current supercomputing capabilities in the petaflops range are already deployed at every major oil and gas company and at a growing number of renewable energy companies, reaching exascale-class computing is a priority because more compute power maps directly to more potential to avoid risk and maximize investments. It is this risk-and-reward cycle that drives difficult decision making at some of the world’s top oil and gas companies, as we described in some detail earlier this year when we drilled into the HPC experiences of French oil and gas giant Total.

The current business challenges of oil and gas and renewable energy companies are complicated enough, and as in a few other research areas, the more compute that can be thrown at a problem, the better the result. Accordingly, programs like HPC4E and the broader exascale emphasis carry great weight, especially at a time when investments in resource discovery are more cautious than ever.

HPC4E is focused on mapping exascale technologies to renewables such as wind, as well as to more traditional oil and biomass discovery and use. Key objectives include improving the odds of finding “pre-salt” reservoirs for oil extraction (which are hard to locate and tap), optimizing the operation and prediction capabilities of wind farms, and designing better biomass blends along with the turbines and furnaces required to burn them. All of these efforts take advanced software stacks and an increasing amount of compute horsepower, according to the team.

With that in mind, however, current architectural conditions might not provide the most efficient grounds for bolstering energy discovery. In a recent analysis of architectures to support discovery and production targets for future energy, the group says there is still a need to “optimize the performance while keeping a high degree of portability.”

“One of the main HPC consumers is the oil & gas (O&G) industry. The computational requirements arising from full wave-form modelling and inversion of seismic and electromagnetic data is ensuring that the O&G industry will be an early adopter of exascale computing technologies. By taking into account the complete physics of waves in the subsurface, imaging tools are able to reveal information about the Earth’s interior with unprecedented quality.”
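To make that workload concrete, below is a minimal sketch of the kernel at the heart of such imaging: time-stepping the acoustic wave equation through a model of subsurface velocities. Everything here (grid size, velocities, source) is illustrative rather than HPC4E code; production full-waveform inversion repeats propagations like this thousands of times, in 3D and at far higher resolution, which is what pulls the O&G industry toward exascale.

```python
# Minimal 2D acoustic wave propagation sketch (illustrative only).
# Second-order finite differences in time and space over a toy velocity model.
import numpy as np

nx, nz, dt, dx = 300, 300, 5e-4, 10.0   # grid size, time step [s], spacing [m]
c = np.full((nz, nx), 2000.0)           # background velocity [m/s]
c[150:, :] = 3500.0                     # a faster, salt-like layer below

p_prev = np.zeros((nz, nx))             # pressure field at t - dt
p = np.zeros((nz, nx))                  # pressure field at t

for step in range(1000):
    # Five-point Laplacian of the pressure field.
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
    # Inject a Gaussian source pulse near the surface.
    p_next[10, nx // 2] += np.exp(-(((step * dt) - 0.05) / 0.01) ** 2)
    p_prev, p = p, p_next
```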

To that end, they are looking at accelerated systems with GPUs and Xeon Phi, with codes based on programming models and tools like OpenCL. Teams in the HPC4E program are also looking at architectures from AMD, at system architectures like the NUMA-based SGI machines (SGI is now part of Hewlett Packard Enterprise), and at ARM as another processor choice. Tying all of this together is the key point of “the load balancing and data placement, [which will] take into account new scheduling algorithms capable of improving locality.”
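As a heavily simplified illustration of the portability argument behind OpenCL: the same kernel source runs unchanged on a GPU, Xeon Phi, or multicore CPU, with only device selection differing on the host. The sketch below uses the pyopencl bindings and a toy saxpy kernel; it is an assumption-laden example, not HPC4E code.

```python
# Portability sketch: one OpenCL kernel, any available device (illustrative).
# Requires pyopencl and an installed OpenCL runtime.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y) {
    int i = get_global_id(0);
    y[i] += a * x[i];
}
"""

ctx = cl.create_some_context()       # picks whatever device is present
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prog.saxpy(queue, (n,), None, np.float32(2.0), x_buf, y_buf)
cl.enqueue_copy(queue, y, y_buf)     # copy the result back to the host
```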

The HPC4E team has identified a few key areas where HPC systems and management frameworks can be optimized for energy workloads based on their developing view of an ideal exascale stack for this sector on both renewable and non-renewable fronts.

Algorithms for Exascale Energy – Target algorithms for future HPC energy applications include partial differential equation solvers, finite element methods, sparse linear solvers, and large-scale data management codes (a minimal sketch of one such sparse solve appears after the quote below). Modernizing and optimizing these areas is important, as is adopting newer “big data” management tools; the team notes that this area will be explored using tools like SimDB, UpsilonDB, and Chiron. For wind energy exascale computing projects, computational fluid dynamics (CFD) and large-eddy simulations (LES) are the targets. “The objective here is to have CFD models ready for exascale systems in order to overcome the present limitations and increase the accuracy of the evaluation of technical and economic feasibility of wind farms.”

“For wind energy industry HPC is a must. The competitiveness of wind farms can be guaranteed only with accurate wind resource assessment, farm design and short-term micro-scale wind simulations to forecast the daily power production. The use of CFD LES models to analyse atmospheric flow in a wind farm capturing turbine wakes and array effects requires exascale HPC systems.”
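For a sense of what the first of those algorithmic targets looks like in miniature, the sketch below assembles the sparse matrix of a five-point finite-difference discretisation of a Poisson-type PDE and solves it with a conjugate-gradient Krylov method. The grid size and right-hand side are illustrative; at exascale the same pattern is distributed across thousands of nodes.

```python
# Sparse linear solve arising from a PDE discretisation (illustrative only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 200                                  # grid points per dimension
main = 4.0 * np.ones(n * n)              # diagonal of the 2D Laplacian stencil
off = -1.0 * np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0.0  # no coupling across grid-row edges
far = -1.0 * np.ones(n * n - n)
A = sp.diags([main, off, off, far, far], [0, -1, 1, -n, n], format="csr")

b = np.ones(n * n)                       # uniform source term
u, info = cg(A, b)                       # conjugate-gradient Krylov solver
assert info == 0                         # 0 means the solver converged
```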

Biomass Research at Exascale – A newer challenge in HPC is to develop “a validated, predictive, multi-scale combustion modeling capability to optimize the design and operation of evolving fuels,” the team says. “The next exascale HPC systems will be able to run combustion simulations in parameter regimes relevant to industrial applications using alternative fuels.” Here, too, CFD poses the challenge for these codes and systems.

“Biogas, i.e. biomass-derived fuels by anaerobic digestion of organic wastes, is attractive because of its wide availability, renewability and reduction of CO2 emissions, contribution to diversification of energy supply, rural development, and it does not compete with feed and food feedstock. However, its use in practical systems is still limited since the complex fuel composition might lead to unpredictable combustion performance and instabilities in industrial combustors. The next generation of exascale HPC systems will be able to run combustion simulations in parameter regimes relevant to industrial applications using alternative fuels, which is required to design efficient furnaces, engines, clean burning vehicles and power plants.”
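One small building block inside such combustion codes is the stiff integration of reaction kinetics at every grid cell. Below is a zero-dimensional sketch with a single one-step Arrhenius reaction and made-up constants; real simulations couple hundreds of species and reactions to the CFD described above, which is where the exascale appetite comes from.

```python
# Zero-dimensional ignition sketch: one-step Arrhenius kinetics with
# illustrative, made-up constants (not a validated combustion model).
import numpy as np
from scipy.integrate import solve_ivp

A_pre, Ea, R = 1e9, 1.2e5, 8.314    # pre-exponential, activation energy, gas const.
q_over_cp = 2.5e6 / 1200.0          # heat release over heat capacity (made up)

def rhs(t, state):
    Y, T = state                    # fuel mass fraction, temperature [K]
    rate = A_pre * Y * np.exp(-Ea / (R * T))
    return [-rate, q_over_cp * rate]

# Stiff problem, hence the implicit BDF integrator.
sol = solve_ivp(rhs, (0.0, 0.05), [1.0, 1000.0], method="BDF", rtol=1e-8)
print(f"ignition raises T from 1000 K to about {sol.y[1, -1]:.0f} K")
```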

In addition to these larger areas of architectural research toward an ideal stack for these algorithms and applications at exascale, there are more focused efforts, including integrating checkpointing techniques into Slurm, the most frequently used HPC workload manager, at least according to Top 500 data. Slurm works well with third-party plugins and, as the team says, “already counts with plugins to support checkpoint libraries and perform some basic operations like checkpoint and restart.” In the team’s proposed optimized software stack for energy applications on HPC systems, a stack based on Slurm coupled with DMTCP, any job can be checkpointed and restarted transparently to the user and the job itself.
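A rough sketch of what that transparent flow looks like from the job’s side: DMTCP launches the unmodified binary with periodic checkpoints, and a requeued job resumes from the newest image. The dmtcp_launch and dmtcp_restart commands are standard DMTCP CLI tools; the wrapper logic, file names, and the seismic_solver binary are hypothetical, and the real integration lives in Slurm plugins rather than in user scripts like this.

```python
# Transparent checkpoint/restart wrapper around DMTCP (illustrative only).
import glob
import os
import subprocess

CKPT_INTERVAL = 3600                     # seconds between automatic checkpoints

def run_with_checkpointing(cmd):
    images = glob.glob("ckpt_*.dmtcp")   # DMTCP's default image naming
    if images:
        # Requeued job: resume the (single-process) job from the newest image.
        newest = max(images, key=os.path.getmtime)
        subprocess.run(["dmtcp_restart", newest], check=True)
    else:
        # First launch: run the unmodified binary under DMTCP.
        subprocess.run(["dmtcp_launch", "-i", str(CKPT_INTERVAL)] + cmd,
                       check=True)

run_with_checkpointing(["./seismic_solver", "--input", "model.bin"])  # hypothetical binary
```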

Work on developing an ideal exascale stack for future energy codes is a joint public and private effort, as tends to be the case with European HPC projects. The HPC4E program’s industrial members include nearly all of Europe’s major oil and gas and energy companies, Total among them. The effort is coordinated by the Barcelona Supercomputing Center and runs through November 2017 on the €2 million in funding it received earlier this year under the EU’s exascale-focused Horizon 2020 program.
