Energy Giant Eni Starts Investing In Supercomputers Again

Energy is not free, not even to energy companies, and so they are just as concerned with being efficient with their supercomputers as the most penny-pinching hyperscaler or cloud builder, where the computing is the product.

Like the other major oil and gas producers on Earth, Ente Nazionale Idrocarburi, the Italian energy major that employs 33,000 people and operates in 76 countries worldwide, has not had an easy time of it in the last few years. But Eni now has the distinction of fielding the most powerful supercomputer in the energy sector – and indeed, among all kinds of commercial entities in the world. The company has a firm belief that simulation and modeling will improve its oil, gas, electricity, and chemical refining operations – the main task being finding the hydrocarbons and drawing them out of the ground for less and less money as each year goes by – and invests accordingly.

As the price of a barrel of oil plummeted in the past few years, revenues fell precipitously, from €98.2 billion in 2014 to €72.3 billion in 2015 and down to €55.8 billion in 2016; full figures for 2017 are not out yet, but through the nine months ending in September, the company brought in €49.4 billion in revenues and invested €7 billion in capital expenses related to oil and gas exploration and the development of hydrocarbon reserves. Through that downturn, Eni kept investing, spending €11.2 billion in 2014, €10.7 billion in 2015, and €9.2 billion in 2016, and over that time it booked an aggregate loss. This is a short-term problem that all energy producers face, and Eni knows that it has to invest in the future if it hopes to have one. If and when energy prices rebound, Eni will, like its peers, profit handsomely; if prices do not rebound, Eni, the fire-breathing six-legged dog, will make a living like the rest of us.

In recent years, Eni has taken a two-step approach to upgrading its supercomputers, and while it took a few years off during the worst parts of the oil and gas downturn, it is investing once again in gear. Back in 2013, Eni was an IBM shop, and it installed its HPC1 system, which comprised 1,500 iDataPlex DX360M4 nodes, each with a pair of Intel “Sandy Bridge” Xeon E5-2670 processors with eight cores running at 2.6 GHz; the system was interconnected with a 56 Gb/sec FDR InfiniBand network. The HPC1 system had a peak theoretical performance of 499.2 teraflops and was rated at 454 teraflops on the Linpack Fortran matrix math benchmark, which is not really that much computing by modern standards.
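That peak number is easy to verify: nodes times sockets times cores times clock times the number of double precision operations each core retires per cycle. Here is a minimal Python sketch of that arithmetic; the eight flops per cycle figure is our assumption about the Sandy Bridge AVX pipeline, not something Eni or IBM published for this machine:

```python
# Back-of-the-envelope peak double precision flops for the HPC1 system.
nodes = 1_500
sockets_per_node = 2
cores_per_socket = 8
clock_ghz = 2.6
dp_flops_per_cycle = 8  # assumed for Sandy Bridge AVX

peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * dp_flops_per_cycle
print(f"HPC1 peak: {peak_gflops / 1_000:.1f} teraflops")  # 499.2 teraflops
```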

A year later, Eni did a very substantial upgrade by shifting to a hybrid architecture, adding another 1,500 iDataPlex DX360M4 nodes. But this time around, the nodes deployed faster “Ivy Bridge” Xeon E5-2680 v2 processors, which had ten cores running at 2.8 GHz, plus a pair of Nvidia “Kepler” Tesla K20X GPU accelerators, which use the GK110 GPU with 2,688 cores running at 732 MHz and 6 GB of GDDR5 memory to hold applications and data. The iDataPlex nodes are pretty skinny, which is why Eni could not cram them with dual-GPU Tesla K80 accelerators, which pack a lot more floating point oomph. That said, the K20X accelerators deliver 1.31 teraflops at double precision and 3.95 teraflops at single precision (useful in seismic analysis), and they accounted for the vast majority of the 4.61 petaflops of aggregate compute in the HPC2 system. (The machine was rated at 3.19 petaflops sustained on Linpack.) And with a total of 3,000 nodes across the HPC1 and HPC2 systems, Eni still had over 1 petaflops of raw CPU compute for workloads that were not accelerated by GPUs.
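The same accounting extends to the hybrid HPC2 nodes, with each Tesla K20X contributing a fixed 1.31 double precision teraflops on top of the CPU flops. A quick sketch, again assuming eight DP flops per cycle for the Ivy Bridge cores:

```python
# Peak DP flops per HPC2 node: two Ivy Bridge CPUs plus two Tesla K20X GPUs.
nodes = 1_500
cpu_tflops = 2 * 10 * 2.8 * 8 / 1_000  # 2 sockets x 10 cores x 2.8 GHz x 8 flops/cycle (assumed)
gpu_tflops = 2 * 1.31                  # 2 x K20X at 1.31 DP teraflops each

total_pflops = nodes * (cpu_tflops + gpu_tflops) / 1_000
print(f"HPC2 peak: {total_pflops:.2f} petaflops")                # ~4.60 petaflops
print(f"GPU share: {nodes * gpu_tflops / 1_000:.2f} petaflops")  # ~3.93 petaflops

# Raw CPU compute across HPC1 and HPC2 together, using HPC1's 499.2 teraflops:
hpc1_cpu_pflops = 0.4992
print(f"CPU-only: {hpc1_cpu_pflops + nodes * cpu_tflops / 1_000:.2f} petaflops")  # ~1.17
```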

Last year, after a nearly three year hiatus in supercomputing acquisitions, Eni got out the checkbook and started spending on supercomputers again. IBM had sold off its System x server business, and therefore the iDataPlex line, to Lenovo several months after the HPC2 system was installed. Lenovo was obviously in the running for the HPC3 system that was booted up in April 2017 in Eni’s Green Data Center in Ferrera Erbognone, outside of its Milan headquarters. And in fact, it won the deal.

The HPC3 machine was based on the follow-on NextScale nx360M5 server nodes, which were designed by IBM’s System x team some years back and which are now sold and supported by Lenovo. The server nodes in the HPC3 machine have a pair of “Broadwell” Xeon E5-2697 v4 processors, with 18 cores each running at 2.3 GHz, plus a pair of Tesla K80 accelerators, which each have a pair of GK110B GPUs on them. (So that is actually two CPUs and four GPUs per node; the nodes are linked by 100 Gb/sec InfiniBand gear from Mellanox Technologies.) The Tesla K80 cards are rated at 2.91 teraflops at double precision and 8.74 teraflops at single precision, and represented a huge leap in performance for about half the number of nodes of the HPC1 and HPC2 systems. The HPC3 machine was rated at 3.8 petaflops peak and delivered 2.59 petaflops sustained on the Linpack test.
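The per-node jump from HPC2 to HPC3 is easy to see in the numbers: each K80 card carries 2.91 double precision teraflops versus 1.31 teraflops for a K20X, and the Broadwell cores retire twice the flops per cycle of the older Xeons. A hedged per-node comparison, with both flops-per-cycle figures being our assumptions about the respective AVX pipelines:

```python
# Peak DP teraflops per node, HPC2 versus HPC3 (flops/cycle figures assumed).
hpc2_node = 2 * 10 * 2.8 * 8 / 1_000 + 2 * 1.31   # Ivy Bridge pair plus 2 x K20X
hpc3_node = 2 * 18 * 2.3 * 16 / 1_000 + 2 * 2.91  # Broadwell pair plus 2 x K80

print(f"HPC2 node: {hpc2_node:.2f} teraflops")  # ~3.07
print(f"HPC3 node: {hpc3_node:.2f} teraflops")  # ~7.14, roughly 2.3X per node
```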

With the HPC4 system that was just announced, Eni is breaking into the top ten most powerful sites in HPC, with a machine that will weigh in at 18.6 petaflops peak and that has not, as yet, been tested running the Linpack benchmark. The goal for HPC4 was to break 10 petaflops of sustained performance, and it seems like Eni will be able to do that.

The HPC4 system is interesting for a number of reasons. First, Hewlett Packard Enterprise won the account away from Lenovo. The full feeds and speeds of the new system have not been detailed as yet, but the HPC4 supercomputer consists of 1,600 plain vanilla ProLiant DL380 nodes, each with a pair of 24-core “Skylake” Xeon SP processors plus two Nvidia Tesla P100 GPU accelerators based on the “Pascal” GPU architecture.

The Pascal accelerators are a generation back from the current “Volta” V100s, but they are probably available at much lower cost and deliver the kind of double precision and single precision floating point performance that Eni requires for its software. The software stack includes seismic analysis for oil exploration (Eni uses Schlumberger’s EXTRACT tools for this), reservoir modeling, and oil and gas plant optimization applications. (Volta’s Tensor Core units are really aimed at machine learning – at least until HPC coders figure out how to hack their code to take advantage of them.) Each P100 accelerator is rated at 4.7 teraflops at double precision and 9.3 teraflops at single precision, and this is where the big jump in performance with this latest upgrade really comes from.
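Multiplying the published P100 ratings across the node count shows where most of that 18.6 petaflops peak lives. A minimal sketch of that arithmetic; the implied CPU share is our subtraction, since the Skylake clock speeds have not been disclosed:

```python
# GPU contribution to HPC4's 18.6 petaflops peak.
nodes = 1_600
gpus_per_node = 2
gpu_dp_tflops = 4.7  # Tesla P100, double precision

gpu_pflops = nodes * gpus_per_node * gpu_dp_tflops / 1_000
print(f"P100 contribution: {gpu_pflops:.2f} petaflops")         # ~15.04
print(f"Implied CPU share: {18.6 - gpu_pflops:.2f} petaflops")  # ~3.56 (our inference)
```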

The HPC3 and HPC4 systems in the Green Data Center facility will bring 22.4 petaflops of aggregate computing to bear on Eni’s workloads, which works out to a 45X increase in performance at the company over the past five years.
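That 45X figure checks out against the peak ratings, taking HPC1’s 499.2 teraflops as the 2013 baseline (our choice of baseline, though it is the natural one):

```python
# Aggregate peak in the Green Data Center and the five-year speedup factor.
hpc3_pflops = 3.8
hpc4_pflops = 18.6
hpc1_pflops = 0.4992  # the 2013 baseline (assumed as the point of comparison)

aggregate = hpc3_pflops + hpc4_pflops
print(f"Aggregate: {aggregate:.1f} petaflops")               # 22.4
print(f"Speedup over HPC1: {aggregate / hpc1_pflops:.0f}X")  # ~45X
```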

The Green Data Center is an interesting facility. It consists of six buildings, arranged as a pair of mirrored cloverleafs – one building for each leg of the six-legged Eni dog. Construction on the facility, which cost €28 million, started in December 2011 and was completed in early 2013.

You can get a good sense of how important HPC is to Eni by counting the datacenters. In each clover, two of the datacenters are allocated to general processing, such as cranking through 5 million electric bills for the 37 terawatt-hours of juice that Eni sells from its gas turbines, and one larger datacenter is dedicated to HPC. The six-datacenter facility – what a hyperscaler would call a region – is rated at 30 megawatts, but has the capability to scale up to 36 megawatts during peak loads. The facility uses a mix of outside air cooling (which represents about 75 percent of the cooling capacity) and traditional air conditioning (about 25 percent). The datacenters deliver a power usage effectiveness – the ratio of the total power sent into the datacenter to the power consumed by the computing, storage, and networking gear – of 1.2, which is on par with the middle-of-the-road datacenters from the hyperscalers and cloud builders, who push the efficiency envelope and sometimes get PUE down to 1.1 or even lower. A good enterprise datacenter might have a PUE in the range of 2.0, and some of them are up to 3.0 or higher, which is not very energy efficient at all.
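That PUE figure translates directly into how much of the electrical feed actually reaches the gear. A small illustrative calculation against the facility’s 30 megawatt rating; the split is our arithmetic, not an Eni disclosure:

```python
# PUE = total facility power / power consumed by the IT gear.
facility_mw = 30.0  # Green Data Center rating
pue = 1.2           # quoted figure

it_mw = facility_mw / pue
overhead_mw = facility_mw - it_mw
print(f"IT load: {it_mw:.1f} MW, cooling and other overhead: {overhead_mw:.1f} MW")
# At a PUE of 2.0, the same 30 MW facility would power only 15 MW of gear.
```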
