To put the needs of Airbus in some basic computational context, consider that for a single large passenger jet, there are well over two million individual parts that need to be simulated individually or as part of a larger system.
Further, those millions of parts must stand up to varied pressure and strain over the course of a typical jet’s lifetime, which runs between thirty and fifty years. Couple that with the need for operational reliability (distinct from safety, which is its own set of issues) of over 99 percent. As one can imagine, this takes top supercomputer-level capability, and according to Airbus, as long as they can continue scaling their complex multiphysics codes, they will keep finding ways to take advantage of exascale-class compute capability and beyond.
Gerard Buttner, who handles HPC decision making for the Airbus fleet, tells The Next Platform that the company is currently whittling down the vendor list for its next generation of supercomputer systems. The main three machines are spread across two sites in Europe and will be tasked with crunching a large share of the team’s modeling and simulation workloads for existing and future aircraft. He says that as the company looks to the next range of systems, finding an architecture that meshes with its custom engineering and physics codes is the big challenge–the rest is on its internal teams to continue pushing homegrown and commercial codes to new scalability heights.
Airbus currently has a number of Top 500 supercomputers, all of which are HP POD clusters featuring Intel Ivy Bridge processors and InfiniBand interconnects. The top machine is in France (34,560 cores, at #127 on the list), another is in Germany (#166, with 21,120 cores), and yet another farther down the list (#295) is also in France and sports 24,192 cores. That brings the Airbus total to roughly 80,000 cores, with a combined peak performance of around 1.5 petaflops.
As seen in the slide below, Airbus is sitting on Top 500-level systems that tend to follow the trends globally at the top of the supercomputing list.
In addition to the two main large-scale clusters in France and Germany, there are other smaller end-of-line clusters throughout Europe, a strategy that creates a difficult situation for Airbus and its supercomputing fleet. The company has been on a path to consolidate some of these, Buttner says, and has had some luck since the process of pulling together its disparate clusters began in earnest in 2006. At that time, he says, Airbus had 46 separate clusters from a range of vendors (and 13 different operating systems) across five countries. “It was awful,” he says. “It took us an entire year to roll out a single piece of software.” Although consolidation was a priority, he says Airbus never seriously looked to the cloud, not even a private cloud in Iceland (as he says was once proposed), because of the data movement costs and delays, as well as for the expected reasons around data protection.
“We are always experimenting with things like GPUs and Intel MIC, but for us, we have to be able to see a clear value. For us, anything with 60 or more cores is great, but if the applications don’t scale, there is no value. Also, if the software does not fit, or takes a lot of work, that is not a fit for us either,” Buttner says. Despite the fact that Airbus is home to so many compute cores, most of its mission-critical applications only scale to 256 or 512 cores, something that is a big problem internally and a key consideration as the company looks to its next generation of systems.
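Why would a code stall at a few hundred cores on a fleet of 80,000? A quick back-of-the-envelope Amdahl’s law calculation illustrates the general dynamic (the serial fraction used here is a hypothetical figure for illustration, not an Airbus number):

```python
# Illustrative Amdahl's law sketch (assumed 0.5% serial fraction;
# not Airbus's actual code or measured figures).

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Theoretical speedup for a fixed problem size under Amdahl's law:
    1 / (s + (1 - s) / p), where s is the serial fraction and p the core count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (256, 512, 4096, 80000):
    s = amdahl_speedup(0.005, cores)
    print(f"{cores:>6} cores -> {s:6.1f}x speedup")
```

With even half a percent of the work serialized, the speedup already caps near 200x, so going from 512 cores to 80,000 buys comparatively little; this is the kind of ceiling that makes hardware spending unattractive until the codes themselves are reworked.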
On the same afternoon as our chat with Buttner, Dr. Klaus Becker, Senior Manager of Aerodynamic Strategies from Airbus Engineering, described the computational complexity of the many simulations across the various components and entire passenger and cargo jets, highlighting how an infinite amount of computing power could continue to yield new insights—assuming, of course, that the software could scale to meet the opportunity of exascale capabilities.
Becker walked us through several of the computational fluid dynamics and other codes Airbus uses, describing the multiphysics problems that are tackled at scale, but as he talked, he kept coming back to some key challenges at the software level: software scalability, the lack of tools to let Airbus take advantage of new architectures, and the fact that new algorithms still need to be integrated more closely with the hardware architecture.
Becker also notes that when it comes to extending their internal HPC operations, the costs continue to mount. He points to the compute infrastructure as just one leg of the overall pricing equation—the cost of operations, from people to power consumption, continues to grow. Further, he says, as they look to their next generation of infrastructure and investments, the need to use their HPC systems to tackle what are essentially “big data” workloads is also paramount (something that was also addressed by several parties, including Intel, at ISC 15).
Speaking of the next wave of infrastructure investment at Airbus, Buttner says the company is now down to choosing among three vendors, from a list that once included nine; many dropped out of the RFP process because Airbus’s requirements are “too complex” on the application performance side. “We described our use cases and applications, gave them to the market, and watched as many disappeared.”
The interesting takeaway here is that the compute part is “easy” for Airbus, especially as the company continues consolidating its infrastructure. For an organization with mission-critical needs and complex applications, the hard part is tuning, scaling, and evolving the codes to exploit the capabilities available in high core count chips–as well as accelerators. While those are all on the table for research and development, in day-to-day operations Airbus simply wants HPC that works. And without a clear ROI for its existing codes, it is going to be a tough road for novel systems here, even if the Top 500 list the company is measured against seems to be weighted toward accelerators and, eventually, more novel architectures.
HPC code scalability might not be as sexy a topic in supercomputing as the big iron itself, but the concerns expressed by Airbus are echoed throughout the industry. Manufacturing, electronic design automation, and other areas face limited choices for their simulations. Without scalability, portability, and accessibility of those codes, all the hardware in the world will, at a certain point, fail to make a difference.