EMC Tracks Trickle Down Effect from the Exascale Bubble

For companies that invest in high performance computing technology, the financial math can be a tricky game. For the select few whose businesses are rooted primarily in supercomputing systems, the quarters are lumpy and the market potential is relatively stagnant, yet the requirements for performance per watt, per dollar, and per application continue to climb, which demands constant, heavy investment in research and development.

Saying that supercomputing is a tough business is not likely to raise many arguments. But for large companies that view HPC as a financially less important market, yet still invest millions in technologies for the broader commercial HPC and large-scale data analytics markets, there has to be a clear, defined story for how those products will trickle down even further to the enterprise masses, and a strong one at that. Consider, for example, where EMC fits in this equation.

Although they are not discussed as widely in supercomputing circles as some of their more HPC-oriented brethren, the company has put a fair bit of investment into core technologies that will push the exascale envelope in coming years, largely because the federal government has helped fund that work. Like several other major tech companies, they have been on the receiving end of funding via the federal FastForward I/O initiatives and have worked extensively with Intel, Los Alamos National Lab, and other companies and labs to push out new technologies that, while aimed at the narrow extreme-scale market, have the potential to trickle down to enterprise use cases.

Four years ago (and in lockstep with FastForward allocation decisions), under direction from Pat Gelsinger and Joe Tucci, EMC spun out a new division within the Office of the CTO (not a business unit, to be clear) called the Fast Data group. Under the guidance of HPC veteran and EMC Fellow Percy Tzelnic, the team has worked on bringing critical HPC technologies to the fore, including pioneering efforts on burst buffers (in conjunction with Gary Grider, who developed the concept in the mid-2000s at Los Alamos National Lab). The order from on high at EMC was to look into technologies that could be deployed at extreme scale but that had the potential to trickle down to enterprise use cases because, as Tzelnic tells The Next Platform, "for just HPC, especially with something at the component level, there is just not enough of a margin or a market to productize something like the burst buffer work."

“In technology, you learn by doing. Which is why we started working on exascale even though we knew we would never be an exascale vendor. There’s just not enough margin there but a lot of what we do work on does trickle down to where our real focus is.”

This statement applies to the burst buffer technologies as they stand today, although Tzelnic says the work done with Los Alamos and others fed into the first generation of the idea, called ABBA, which Tzelnic helped develop with Grider. ABBA added active processing into the burst buffer, which fed into the second generation of the concept, another collaborative effort called IOD, and then led to developments on the burst buffer for the Cray Trinity supercomputer, a pre-exascale machine set to come online in the 2018 timeframe (along with other large-scale DoE supers, including Summit and Aurora).

While all of these developments have been useful for extreme-scale computing, Tzelnic says they are not likely to see the commercial light of day, at least until they can be respun around existing technologies that have a place in enterprise settings, which he says they no doubt will, albeit in a modified form. For instance, the 2 Tiers project (a deeper piece on that is forthcoming) that Tzelnic and his group are working to push to a wider market is the direct result of innovations spurred by FastForward and exascale-focused research, revised for a commercial market.

EMC has always had an interesting position when it comes to supercomputing in that they are clearly involved, but not interested in aggressively pursuing it at the top tier. Indeed, they can be found roaming the floors at all of the major supercomputing events, and they do actively participate via research endeavors like the ones Tzelnic and his group lead, but ultimately, he says, "we are a commercially oriented company." Although at the time of the Isilon acquisition in 2010 the company did say that the purchase would help bolster its reach into HPC, the focus then was more on technical computing markets that fall into the commercial HPC sphere (rather than general enterprise or, conversely, extreme scale/Top 500). And although EMC clearly saw something interesting in the ever-stealthy DSSD, Tzelnic says that it too is a unit focused on the needs of high-end commercial computing where performance is a requirement, not on Top 500-class supercomputing technology.

We asked Tzelnic just how important HPC is strategically and to the bottom line, and while he said the market is important as a technology development bed for eventual commercial technologies (as in the case of ABBA and its successors), the innovation there has to have a mainstream slant to remain viable. For example, he says they are looking to steer HPC developments toward areas of need in large-scale data analytics, commercial technical computing, and general enterprise settings where users "are willing to pay more than fifteen cents per gigabyte because they see the value in having more features."

So here's the interesting backstory. This piece is appearing because EMC reached out to The Next Platform to talk in detail about the executive order from President Obama pushing more emphasis toward supercomputing and to describe their investments, past and future. Tzelnic agrees that the order lacks some "show me the money" appeal and that it remains to be seen how it will have an impact on actual exascale funding (which still sits unheeded on many a White House desk). But for a company like EMC (and not just EMC, but any company that was historically on the receiving end of federal FastForward funds that then fed into commercial product development), a fresh round of exascale investment is critical for the future of their research and development.

According to EMC, they have been a leader on the I/O stack for exascale since 2011, when they joined Los Alamos in the CRADA program to develop storage and I/O technologies for next-generation systems. While much of this work produced a wealth of open source technology, as Tzelnic says, some of it finds its way, albeit in altered form, into commercial products, and on occasion not just EMC's. For instance, the $50 million FastForward investments from 2012-2014 that EMC was involved with (with Intel as the primary) led to the burst buffer technology that will support the Cray Trinity system, which is partially based on EMC's collaborative innovations in that space.

Ultimately, as Tzelnic says, "it is more difficult than ever for vendors to justify the huge investment required to participate without appropriate public incentives, based on current ROI models (e.g. in 2011, IBM had to terminate their contract for Blue Waters for that reason)." He notes that similarly, "even though EMC has developed key exascale technology for the I/O stack, it allows other vendors to productize and monetize the technology, while EMC, for its part, seeks commercial avenues for it in the enterprise."
