Oil and Gas Giant on Tech Infrastructure Investments in Uncertain Times

Multi-million-dollar decisions about investing in high performance computing systems are never taken lightly, but in the oil and gas industry, which is going through one of its worst crises in history, uncertainty about the future outlook reigns.

This week at the Rice University Oil and Gas Workshop in Houston, technical leaders from the world’s major oil and gas companies, along with several others on the upstream and downstream fronts, gathered to discuss future infrastructure requirements in the face of consistent demand and dramatically falling oil prices. After some enlightening conversations with those making technical purchase decisions at a time when the squeeze is on, a more nuanced picture emerges, one that we will touch on here and in a follow-up piece this afternoon.

For a more focused look, we spoke with a technical lead at one of the industry’s “supermajors,” French energy giant Total, about its decision to keep investing in large-scale supercomputers (with a keen eye on the future of exascale computing) in the face of mounting uncertainty. That sense of concern, however, may not be filtering as far up the infrastructure ladder as one might think, at least according to Francois Alabert, VP of Geotechnology Solutions at Total, who tells The Next Platform that the panic is somewhat overhyped, at least when it comes to the future of supercomputing investments in the industry.

“We have been through different crises in this industry and that has never prevented us from rigorously investing in high performance computing. Scientific computing is the competitive advantage at the heart of oil and gas, and that is not something that will change.”

Alabert points to the recent upgrade of the top-tier SGI-built Pangea supercomputer, which is now capable of 6.7 petaflops of peak performance, enough to place it safely among the top ten supercomputers on the planet. The upgrade cost roughly as much as the original system and adds 4.4 petaflops with the addition of new Xeon E5-2600 processors. The whole machine will run on 4.5 megawatts, which is expected for a system of this size, but it will by no means be the most power-hungry system Total buys over the next five to seven years.
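As a rough back-of-the-envelope check (assuming the 4.4 petaflops is purely additive rather than replacing any existing capacity), the pre-upgrade Pangea would have peaked at around

$$ 6.7\,\text{petaflops} - 4.4\,\text{petaflops} \approx 2.3\,\text{petaflops} $$

which suggests, if the cost comparison holds, roughly twice the peak compute per dollar compared with the original build.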

In November of 2014, Alabert approached Total leadership to propose the planned upgrade to the Pangea supercomputer, a $30 million to $40 million venture. “I made the case for the step change. On the production side, we were expecting new breakthroughs in finding new sources, including unconventionals, sub-salt resources, and other opportunities. And while it’s true things were not like this then, the case for investing is clear.”


Total has been in the oil exploration game since the 1950s and started down the supercomputing path in the 1980s. The company has oscillated between Cray and SGI machines over the decades since, beginning with a Cray-1S, followed by a Cray X-MP and later generations throughout the 1990s, including a Cray Y-MP and a C90. In the late 1990s it switched to SGI, with a brief interlude on an IBM SMP cluster in the mid-2000s. Alabert says that while Total is watching the momentum toward exascale systems, the main concern is power consumption, just as it is at the large national labs. Code scalability for future machines is also an issue, but since the majority of Total’s HPC codes have been developed internally, he says ongoing R&D investment will keep those codes matched to ever-greater floating point capability.

Alabert leads a group of over 400 technicians and engineers who support the geoscience and reservoir engineering efforts behind Total’s exploration initiatives. For that team, developing and making use of more complex multi-physics simulations are the top priorities ahead, given the high stakes of drilling in selected locations. “The average well costs around $400 million,” Alabert says, so the more detailed the simulations, the lower the risk, especially as Total looks to exploit other assets, including shale (and other unconventionals) as well as sub-salt and offshore locales.

Making use of existing data is another looming challenge. Although Alabert tells The Next Platform that there are opportunities worth exploring in machine learning and deep learning, Total lacks the in-house talent to develop some of those initiatives. Such additions will never replace the large-scale supercomputers, he says, but they can provide additional information aimed at taking risk out of potential drilling operations. “Although it is surprising, we only use between 20 and 30 percent of the data we collect,” he says. The volume of that data is set to grow as well, given new 4D modeling methods built on sensors that are always on and always feeding data to guide simulations.


Despite these challenges on the code side, he says energy efficiency should be a primary target for vendors building next-generation HPC systems. “Switching architectures is not an easy thing. We are always watching what is happening, but any improvements can’t just be incremental; we cannot justify the effort of changing everything for just a 2X or 3X improvement. It has to be a big change.”


