Why Big Oil Keeps Spending on Massive Supercomputers

With per-barrel oil prices down by about 50 percent from the levels of the previous four years, the question naturally arises: “Why are oil and gas giants spending millions on new supercomputers?” The common-sense answer is the correct one: while prices are down for now, a recovery – at least in the short term – is forecast for the end of the year. Discovering new sources and bringing them online can take years, which means no one can afford to halt efforts to improve exploration and identify future opportunities.

That being said, oil-price volatility has driven cost cutting in personnel, equipment and major infrastructure, with the supply ecosystems around the highest-cost discovery and extraction sites hit the hardest. Herein lies the answer to why, in a time of economic uncertainty, so many of the major players are undertaking massive upgrades to their HPC infrastructure: while it can take years to drive any significant cost out of extraction, the ROI of improving exploration has been well demonstrated.

The single factor with the highest impact on successful exploration is how effectively seismic data is interpreted. For oil and gas companies, a better understanding of subsurface structures translates directly into reduced exploration risk. Basically, the more accurate the company’s view of the geological area, the better its chance of striking oil when it drills.

To get the best subsurface view, oil and gas companies rely on high fidelity simulations and modeling. Assuming everyone looking in the same region can afford to get a similar quality of seismic data in a similar timeframe, the most competitive elements that remain in the race to discovery are the mathematical algorithms each company applies to that data, the amount of data they can model and the speed with which they can model it.

Significant improvements in algorithms typically take years, while adding infrastructure to increase data processing capacity and speed takes only months. And when you look at ROI, improving the speed and accuracy of drilling recommendations – even by a few percentage points – more than pays for a multi-million-dollar HPC infrastructure investment.

Therefore, the fastest route to gaining competitive advantage in exploration sits squarely on top of improvements in HPC infrastructure – specifically, on how much data can be analyzed and reanalyzed in the shortest period of time. While most industry publications focus almost exclusively on rising compute power, analyzing the environments of the sites that leverage the largest HPC infrastructures brings some interesting things to light about the importance of the storage side of the equation.

There are about a dozen petaflop systems in the world[1] that are not in government labs or academic research, and all of the publicly announced systems in this group are in oil and gas. The storage in each of these petascale systems is managed using parallel file systems. Another noteworthy aspect of these petascale sites is that each of them uses high performance storage (i.e., not commodity storage).

Parallel file systems combined with high performance storage enable the petascale sites to feed tens of thousands of cores at an extremely high rate of performance. These sites – driven by the need to out-compete and maximize returns – grow their high performance storage[2] capacity at 120 to 500 percent per year, up to 50X the storage growth rates seen in the general oil and gas industry and in general HPC sites[3].
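To give a sense of how quickly growth rates in that range compound, here is a minimal sketch. The 120 and 500 percent figures are the rates cited above; the 1 PB starting capacity is a hypothetical example, not a figure from the article.

```python
def project_capacity(start_pb: float, annual_growth_pct: float, years: int) -> float:
    """Compound a storage capacity forward by a fixed annual growth rate."""
    return start_pb * (1 + annual_growth_pct / 100) ** years

start = 1.0  # petabytes -- hypothetical starting point
for rate in (120, 500):  # per-year growth range cited for petascale sites
    print(f"{rate}% per year for 3 years: {project_capacity(start, rate, 3):.1f} PB")
# 120% per year turns 1 PB into ~10.6 PB in three years; 500% turns it into 216 PB.
```

Even at the low end of the cited range, capacity grows by an order of magnitude in three years, which is why storage planning dominates these build-outs.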

And you don’t have to be a major oil and gas company with a petaflop-plus compute system to build a competitive advantage on HPC infrastructure. Midsized producers and independents are starting to exhibit similar trends, with an upswing in high performance storage and parallel file system installations.

With oil prices facing modest short-term gains and a very uncertain longer-term future, we may see continued slowdowns and even collapse among exploration areas developed for $80 to $100 per barrel oil. The competition for the market below that price point, however, is likely to remain strong for many years to come, which means we will likely see HPC infrastructure investments – including high performance storage, networks and parallel file systems – continue at a lively pace.

About the Authors

Laura Shepard is Senior Director of Vertical Markets at DataDirect Networks

Chirag Dekate, Ph.D., is Senior Manager Vertical Markets at DataDirect Networks

Shreyak Shah is Manager, Vertical Markets at DataDirect Networks

[1] Based on public sources: press releases, the TOP500 list, and scholarly and press articles.

[2] Storage dedicated to high performance applications such as scratch and home

[3] Market data referenced from WW HPC 2014-2018 forecast (IDC Doc # 248835).
