With the oil and gas industry continuing to spend on massive supercomputers, even in the wake of declining revenues, and with no sign of the HPC business slowing for the segment, one has to wonder what hardware and software alternatives these companies might look to as costs (for both systems and the power required to support them) drive skyward.
The cloud is a viable alternative to on-site processing of some HPC workloads in other industries, but oil and gas is usually not at the top of the list when it comes to segments that are offloading some or all of their critical work to public (and even hybrid) cloud providers.
In some ways this is not a surprise. The problems with executing high performance computing applications in a cloud environment are well known. Latency, the transmission of large data volumes, inadequate software license models for complex simulation applications—these are but a few of the frequently cited roadblocks.
Still, over the last few years, especially as the infrastructure at the various public cloud providers has been firmed up to support HPC in the cloud (the addition of 10GbE networking, more powerful processors, and GPU compute nodes), a crop of new use cases has emerged. Many of these uses are in areas like life sciences and manufacturing, and it is rare to hear much about the oil and gas industry's use of cloud.
This is for a few key reasons, including data locality and security (oil companies are notoriously worried about data protection, lest others swoop in and drill in their locations, a concern complicated by distributed datacenters for reservoir modeling). But there are technical issues that go beyond the common security woes and past the typical performance worries about latency.
According to Morgan Eldred, a former strategic IT projects manager at Shell and Maersk Oil and now a Gartner analyst who follows oil and gas IT infrastructure, at the top of the technical list of cloud barriers for oil and gas are the applications. Companies like Schlumberger are offering high-end, newly architected cloud-based products that rework both the way the application runs in the new environment and the licensing model, but this is still not an ISV trend. Most of the existing software vendors in this area still use physical dongles to manage license use, Eldred says. The same is true in other sectors, including manufacturing, where only recently have high-end engineering simulation capabilities from companies like ANSYS become available via a cloud model.
As he tells The Next Platform, one of the biggest challenges for running large-scale oil and gas simulations in the cloud, one that trumps the performance barriers of running a node-to-node communication-sensitive workload in a remote environment, is that the software itself is not primed for new architectures. “There are multiple applications in oil and gas running on HPC systems; from deep processing of data, to 3D visualizations of models, to simulations of deep earth events that are massive in scale. For those big simulations, even if you throw huge processing power, it always comes down to dependency on the way that application has been programmed.”
“There is no way around the applications problem without re-architecting code if it was developed internally. The scientists who run simulations are working with mostly legacy architected simulations. Learning to run these things is not a simple undertaking—there is a lot of scripting, batch processing, and complexity.”
For many of the companies that might look to cloud for their large-scale simulations, one of the barriers is simply that to reduce data movement delays and costs to and from the cloud, the only proposition is to go “full cloud” and keep data there, using the cloud as a terminal. That is a scary prospect for oil and gas companies, especially since, as Eldred notes, “for certain firms, especially in Russia, the Middle East and elsewhere, that data is the lifeblood of the organization. What happens in a reservoir under the ground does not follow national borders, so if another company gets the data, it becomes a big issue.”
Eldred says that there have been major shifts toward the cloud for large oil and gas simulations at a few of the major companies. While he was unable to name the oil giants involved, he said that while this was not an enterprise-wide shift into the public or even hybrid cloud, it was proving valuable for smaller pockets of engineers and researchers with spotty project demands. The large supercomputers these companies have are often occupied with mission-critical simulations, some of which might consume the entire machine. Having on-demand access to compute resources is useful in such cases, both in terms of cost and because users can spin up and automatically scale down their compute without waiting in the queue for a massive, power-hungry oil and gas supercomputer.
While the “spotty demand” use case for moving some workloads into a cloud environment is nothing revolutionary since it is really the primary way HPC users are interacting with public cloud resources, it is a small but important step, according to Eldred. He says that another major leap comes from the companies that are finally rearchitecting their applications for better performance under the weight of virtualization and latency as well as updating their license models to fit these users.
At the end of the day, the cloud is proving to be a rich well of compute for computationally intensive simulation tasks for individual research groups within a big oil and gas company, but the outlook for clouds as a high-value paradigm for big companies to run full-scale simulations is not favorable, even with all the firming up of cloud infrastructure, applications, and costs.
“The human element, as in so many other industries, is the real barrier,” Eldred concludes. “The IP is simply too valuable, and the ‘all or nothing’ approach to avoid having to move data around” looms too large for big oil to process.