Cray Details New Five Petaflop Oil and Gas Supercomputer

Another large-scale supercomputer devoted to oil and gas exploration will be coming to market in Norway within the year, marking a sizable commercial HPC deal for Cray. Petroleum Geo-Services (PGS) purchased the five-petaflop machine to address the needs of ultra-high-resolution seismic processing and the demand for 3D subsurface models it sees from its global energy customers.

There are between five and eight systems devoted to the oil and gas industry in the top 100 tier of the Top500 list of the world’s fastest supercomputers (the count is tricky because some are shared public/private R&D machines). And if one had to make an educated guess, especially factoring in systems at BP and Exxon that are known outside of the LINPACK benchmark, there are likely five to ten more of the same magnitude at other companies.

The new PGS system is a standard liquid-cooled Cray XC40, which features the native “Aries” XC interconnect and the newest high-end “Haswell” Xeon E5 processors from Intel. While we don’t have details on which Xeon variant the company chose, we can point out that it did not look to Tesla GPU or Xeon Phi accelerators to boost performance, a choice that Cray’s VP of business development, Barry Bolding, told us is common among Cray’s energy sector customers. He says that even though some companies have ported their codes to take advantage of acceleration, standard architectures remain the prime choice, especially for customers like PGS that do general-purpose seismic processing. The main consideration for them, he says, is memory bandwidth and a balanced architecture that addresses bandwidth, I/O, and compute together, especially since their codes are designed to take advantage of distributed memory models.

“A lot of the algorithms we see in this area are distributed memory applications and while there are still some that can benefit from large memory, that is not dominant,” Bolding says.
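To make the distributed memory point concrete, here is a minimal sketch, using mpi4py and NumPy, of how a seismic code might split a large 3D subsurface model into slabs across MPI ranks so that no single node has to hold the whole volume. This illustrates the general pattern, not PGS’s own code; the grid dimensions, background velocity, and rank layout are all assumptions made for the example.

```python
# Minimal sketch: slab decomposition of a 3D subsurface model across MPI ranks.
# Illustrative only; dimensions and values are assumptions, not PGS data.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical global grid: nz depth points over an ny-by-nx lateral extent.
nz, ny, nx = 4000, 2000, 2000

# Each rank owns a contiguous slab of depth planes, so the full model
# (nz * ny * nx * 4 bytes, roughly 64 GB here) never lives on one node.
z_per_rank = nz // size
z0 = rank * z_per_rank
z1 = nz if rank == size - 1 else z0 + z_per_rank

# Local slab of the velocity model (float32), filled with a constant
# background velocity in place of real survey data.
local_velocity = np.full((z1 - z0, ny, nx), 1500.0, dtype=np.float32)

print(f"rank {rank}: depth planes {z0}..{z1 - 1}, "
      f"local size {local_velocity.nbytes / 1e9:.1f} GB")
```

Launched with something like `mpirun -n 8 python slab_decomposition.py`, each rank in this example would report roughly 8 GB of local data for a 64 GB model, which is the basic reason distributed memory codes can scale past what any single shared memory node can hold.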

PGS is a full-service company for oil exploration, handling data acquisition (bouncing signals off the ocean floor for large-scale surveys like one happening now in the Gulf of Mexico), gathering and processing that data, and distributing it to energy companies.

“This system is sitting squarely in the middle of their workflow for the products they provide—it’s a production system,” Bolding stressed, noting that even though the energy industry is under major pressure, the data-driven services it depends on must keep pace with far more data and ever more complex models.

Oil and gas companies, like other commercial HPC users, are pressured from two directions to deliver their end results. The computational side has always been something of a challenge, and it is more complex now that far higher model resolution is required. But the more pressing, and somewhat newer, problem for companies that do complex seismic modeling is data. Bolding says that many of the datasets that come from developing 3D subsurface models are in the petabyte range. While PGS did purchase a Lustre-based, InfiniBand-connected Sonexion storage system (minus the DataWarp I/O accelerator, which was part of the XC40 release last year) with its compute, the workflow is more complex than just taking advantage of all the cores and the fast Aries interconnect.

We talked with Dr. Jan Odegard, a well-known HPC oil and gas researcher and executive director of the Ken Kennedy Institute at Rice University, about how the Cray system reflects broader trends for large-scale energy exploration systems. While this approach differs from others, including the shared memory machines at Total and elsewhere that have been touted as ideal for handling seismic models, he says that NUMA architectures, for example, are bumping up against their limits because of ever larger datasets.

“The R&D from energy companies like PGS is driving a lot of the architectural thinking. PGS started moving their codes toward distributed memory models about a decade ago, in part because of the data volumes they were working with. If you look at shared memory architectures, there is only so much of the model that can fit.”

It’s not just a matter of data volume, however. Odegard says that there has been a shift to standard architectures that can take advantage of a fast interconnect because seismic processing algorithms need to share data during the computation, a departure from earlier models where shots were processed in place before being shipped off to storage. This means constant communication between nodes, which will only increase as more sophisticated algorithms for workloads like full waveform inversion take precedence. These algorithms give the end energy consumers of these datasets the highest possible resolution and most accurate models, but they also mean that companies like PGS that provide those models will be pressed at each step to expand their systems not just computationally, but at the memory, I/O, storage, and interconnect levels.
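To illustrate that per-time-step traffic, the sketch below shows the generic halo exchange a slab-decomposed finite-difference wave propagation kernel would run on every step, again using mpi4py. It is a simplified stand-in rather than PGS’s proprietary algorithms; the grid sizes, stencil width, and step count are assumptions for the example.

```python
# Sketch of the per-time-step halo exchange in a slab-decomposed
# finite-difference wavefield update. Generic pattern, not PGS's code;
# all sizes are illustrative.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

ny, nx = 512, 512        # illustrative lateral grid
local_nz = 256           # depth planes owned by this rank
halo = 1                 # one ghost plane per side for a simple stencil

# Local wavefield with ghost planes at the top (index 0) and bottom (index -1).
u = np.zeros((local_nz + 2 * halo, ny, nx), dtype=np.float32)

# Neighbors in the depth direction; PROC_NULL turns boundary exchanges into no-ops.
up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):   # a few illustrative time steps
    # Send the top interior plane up; receive the top ghost plane from above.
    comm.Sendrecv(sendbuf=u[halo], dest=up, sendtag=0,
                  recvbuf=u[0], source=up, recvtag=1)
    # Send the bottom interior plane down; receive the bottom ghost plane from below.
    comm.Sendrecv(sendbuf=u[-halo - 1], dest=down, sendtag=1,
                  recvbuf=u[-halo], source=down, recvtag=0)
    # A stencil update over u[halo:-halo] would follow here.
```

Because every rank trades boundary planes with its neighbors on every time step, the interconnect sits in the critical path alongside raw compute, which is the point Odegard makes about increasingly communication-heavy algorithms.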

“With the Cray supercomputer, our imaging capabilities will leapfrog to a whole new level,” says Guillaume Cambois, executive vice president of imaging and engineering for PGS. “We are using this technology investment to secure our market lead in broadband imaging and position ourselves for the future. With access to the greater compute efficiency and reliability of the Cray system, we can extract the full potential of our complex GeoStreamer imaging technologies, such as SWIM and CWI.”

This notion of creating more balanced architectures is nothing new, but it is increasingly important as the needs of large commercial HPC sites become just as data-driven as they are propelled by computational capacity.
