The last several years have been challenging for the oil and gas industry on nearly all fronts. Still, to keep both the upstream and downstream businesses on the cutting edge, the segment has had to keep spending mightily on supercomputing resources.
There are a number of oil and gas supers on the Top 500 list of the world’s most powerful systems. In the top 25 alone, recent examples of big spends by big oil include Eni’s #9 HPC5 in Italy, Saudi Aramco’s #11 Dammam-7 and #19 Ghawar, and Total’s #21 Pangea III in France. And these are just the supercomputers that run the benchmark and go on record with their performance metrics.
According to Keith Gray of TGS, even with resources like these, today’s exploration, development of new sites, seismic imaging, and production need even more computing power. “We have ideas that are not practical today because of computing power limits. These ideas can drive finer resolution and new seismic acquisition technologies. We have algorithms to increase the accuracy of calculations and all of these capabilities mean HPC is critical for us.”
Gray is well-known in oil and gas supercomputing circles following more than three decades at BP leading compute infrastructure investments before his more recent advisory role at TGS. He points to a few emerging trends in the industry that are likely to have an impact on HPC investments for both upstream and downstream.
Given industry constraints, the goal for oil and gas is to be more efficient throughout the discovery-to-production chain. In recent years this has meant making better use of existing investments. One of the most important trends, he explains, is infrastructure-led exploration.
“This is the likely path for discovering new resources and extending the life of significant infrastructure investments.” In essence, this approach reuses much of the discovery work done at key sites and extends it to neighboring areas. “There will be less wildcat exploration. Instead these infrastructure-led efforts will be high-resolution and repeated over time.”
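The technical payoff of repeating high-resolution surveys over the same acreage is the time-lapse, or “4D,” difference: subtracting a baseline survey from a later monitor survey leaves signal only where the subsurface has changed. The toy C sketch below is only an illustration of that idea; the values and sizes are invented and this is not survey-processing code.

```c
/* Toy illustration of the time-lapse ("4D") idea behind repeated surveys:
 * subtracting a baseline survey from a later monitor survey leaves signal
 * only where the subsurface changed. Values and sizes are invented; real
 * surveys hold vastly more samples. */
#include <stdio.h>

#define NSAMP 8 /* samples in one toy trace */

int main(void)
{
    double baseline[NSAMP] = {0.0, 0.1, 0.4, 0.9, 0.4, 0.1, 0.0, 0.0};
    double monitor[NSAMP]  = {0.0, 0.1, 0.3, 0.7, 0.5, 0.1, 0.0, 0.0};

    /* The 4D difference highlights production-driven reservoir changes. */
    for (int s = 0; s < NSAMP; s++)
        printf("sample %d: diff = %+.2f\n", s, monitor[s] - baseline[s]);
    return 0;
}
```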
Looking ahead, he adds, machine learning, which is already proving useful in augmenting traditional oil and gas applications, will stack on top of the seismic modeling and reservoir simulation compute requirements, making memory bandwidth one of the most critical challenges.
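To see why memory bandwidth dominates, consider the stencil updates at the heart of seismic wave propagation. The sketch below is a minimal second-order acoustic kernel in C, not any production imaging code; the function and array names (wave_step, v2dt2_dx2) are invented for illustration. Each grid point performs roughly a dozen floating-point operations against tens of bytes of memory traffic, so throughput is set by how fast data can be streamed from memory, not by peak flops.

```c
/* Minimal sketch of a second-order acoustic wave stencil. Illustrative
 * only: names and the discretization are invented for the example. */
#include <stddef.h>

/* Row-major index into an nx * ny * nz grid (k is the fastest dimension). */
#define IDX(i, j, k, ny, nz) ((((size_t)(i) * (ny)) + (j)) * (nz) + (k))

/* One time step of p_next = 2p - p_prev + (v*dt/dx)^2 * laplacian(p).
 * Each interior point costs about 11 flops but touches roughly 32 bytes
 * of unavoidable memory traffic (one new value each of p, p_prev, and
 * the velocity term, plus the p_next store): well under 1 flop/byte,
 * so runtime is set by memory bandwidth rather than peak compute. */
void wave_step(const double *p, const double *p_prev, double *p_next,
               const double *v2dt2_dx2, /* precomputed (v*dt/dx)^2 per cell */
               int nx, int ny, int nz)
{
    for (int i = 1; i < nx - 1; i++)
        for (int j = 1; j < ny - 1; j++)
            for (int k = 1; k < nz - 1; k++) {
                size_t c = IDX(i, j, k, ny, nz);
                double lap = p[IDX(i + 1, j, k, ny, nz)]
                           + p[IDX(i - 1, j, k, ny, nz)]
                           + p[IDX(i, j + 1, k, ny, nz)]
                           + p[IDX(i, j - 1, k, ny, nz)]
                           + p[c + 1] + p[c - 1]
                           - 6.0 * p[c];
                p_next[c] = 2.0 * p[c] - p_prev[c] + v2dt2_dx2[c] * lap;
            }
}
```

Production imaging codes use much higher-order stencils, but the flops-to-bytes ratio stays low, which is what makes memory bandwidth the gating resource Gray describes.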
Speaking at the recent HPC User Forum, where Gray also appeared, Earl Joseph, CEO of Hyperion Research (formerly IDC’s HPC division), projected $8.665 billion in geosciences server spending this year. Growth was steady from 2016 to 2019, and while COVID had some impact in 2020, it was not dramatic. “Looking ahead we see steady growth in oil and gas but we are expecting 2025 to be a flat year,” he says, pointing to regular, expected dips driven by machine refresh cycles.
Joseph says that system sizes in oil and gas have continued to grow, with a more diverse lineup of processors and accelerators. Each company has a different system design strategy: some go for top-end GPU accelerators while others remain CPU-only.
Hyperion’s market numbers for oil and gas, as well as for the broader industry, from the HPC User Forum appear below.