Steps Toward Programmability for Non-Volatile Memory

Among the trends we have been tracking over the course of the year, few outside of key processor developments have attracted more attention than what is happening in the non-volatile memory space. Technologies such as Intel's 3D XPoint memory, IBM's phase change memory, and HP's memristor-based devices are still out of general reach, but they promise to bring new levels of performance and efficiency to large-scale computing platforms in the years to come.

While we tend to focus on the hardware and implementation details of these forthcoming technologies, developers view such innovations with some degree of wariness, since, as one might imagine, taking full advantage of something like non-volatile memory means adapting code. Still, the benefits should outweigh those costs, at least in theory. Consider, for example, HPE's "Machine," which will be laden with a large pool (320 terabytes) of shared non-volatile memory, not to mention 2,460 cores and a photonic interconnect.

In such a case, using the non-volatile memory pool for standard random access is still an option, and it is still possible to run existing codes just as they are. Without any application legwork, the benefit of much faster access than flash is still there, but the potential boost from using that shared non-volatile memory pool is lost. In fact, according to Susan Spence of Hewlett Packard Labs, whose work focuses on exploiting the performance advantages offered by non-volatile memory for large-scale applications, a number of other advantages are left in the dust without code changes. The most obvious of these are persistence and fault tolerance, two of the most often-cited benefits of non-volatile memory, in addition to the other performance-related promises.

The big question for developers looking at the potential world of non-volatile memory, however, is how much footwork will be required. Spence's team has been hard at work on that problem and has devised a new approach, for now limited to Java and C++ codes, called "managed data structures." Via this API, the programming platform provides a single software layer and data format that allows an existing application (assuming it is written in one of those two languages; more are set to follow, Spence says) to read and write directly into and from that pool of non-volatile memory inside The Machine.

"As far as application developers are concerned, they work with their application-level data structures using managed data structures. This supports direct reads and writes in non-volatile memory, and because it's non-volatile, there is ease of data sharing and no data copying," Spence explains. The big benefit, however, is the reduction in the number of software layers involved, as well as in the associated data formats. Having many of both, as is the case with traditional database environments, creates more complexity and leaves more room for error, something she notes is minimized with the single-layer framework of managed data structures.
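To make the idea more concrete, here is a minimal sketch of what programming against such an API might look like. The names (ManagedMap, ManagedSpace, attachMap) are hypothetical stand-ins, not the actual Managed Data Structures API, and a plain HashMap fills in for the shared non-volatile pool so the example runs anywhere; the point is simply that the application works with an ordinary in-language data structure whose contents are persistent as soon as they are written, with no serialization or copy step.

```java
// Hypothetical illustration only: ManagedMap and ManagedSpace are stand-ins,
// not the real Managed Data Structures API.
import java.util.HashMap;
import java.util.Map;

interface ManagedMap<K, V> {
    void put(K key, V value);
    V getOrDefault(K key, V fallback);
}

final class ManagedSpace {
    // In the real system this would attach to a named structure in shared
    // non-volatile memory; here a HashMap stands in so the sketch compiles
    // and runs anywhere.
    private static final Map<String, ManagedMap<Object, Object>> pool = new HashMap<>();

    @SuppressWarnings("unchecked")
    static synchronized <K, V> ManagedMap<K, V> attachMap(String name) {
        return (ManagedMap<K, V>) pool.computeIfAbsent(name, n -> {
            Map<Object, Object> backing = new HashMap<>();
            return new ManagedMap<Object, Object>() {
                public void put(Object k, Object v) { backing.put(k, v); }
                public Object getOrDefault(Object k, Object f) { return backing.getOrDefault(k, f); }
            };
        });
    }
}

public class InventoryExample {
    public static void main(String[] args) {
        // Attach to a named, persistent structure; on The Machine another
        // process could attach to the same name and see the same data.
        ManagedMap<String, Long> stock = ManagedSpace.attachMap("warehouse/stock");

        // Reads and writes go straight to the backing (non-volatile) memory;
        // once put() returns, the update is already durable, with no flush,
        // file, or database round trip.
        stock.put("widget-42", 1000L);
        System.out.println("widget-42 count: " + stock.getOrDefault("widget-42", 0L));
    }
}
```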

The way the non-volatile memory and managed data structures story might best play out is in an enterprise data warehouse scenario, or, for the purposes of seeing it in action here, in a Hadoop/MapReduce environment. Consider the many map processes that write their data out to a shared pool of non-volatile memory. A shuffle engine can then identify what data needs to be made available to the reduce part of the application. Compared with the conventional approach, where the shuffle engine sends updates across the network (creating bottlenecks there), the opportunity to share that data in place is potentially a game-changer. The benefit in such a case is that all the data is shared concurrently in non-volatile memory, so all the shuffle process needs to do is hand that data over via a shared memory pointer.
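A rough sketch of that contrast is below. Everything here is a stand-in (a ConcurrentHashMap plays the role of the shared non-volatile pool, and there is no real network or Hadoop involved); what it illustrates is the shuffle handing reducers a reference into the shared pool rather than copying partitions across the wire.

```java
// Hypothetical sketch: map outputs land in a shared pool keyed by partition,
// and the "shuffle" hands each reducer a reference into that pool instead of
// moving bytes across the network.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ShuffleByReference {
    // Stand-in for the shared non-volatile pool: partition id -> map output records.
    static final Map<Integer, List<String>> sharedPool = new ConcurrentHashMap<>();

    // Map side: write output records directly into the shared pool.
    static void mapTask(int partition, String record) {
        sharedPool.computeIfAbsent(partition, p -> new CopyOnWriteArrayList<>())
                  .add(record);
    }

    // Shuffle: no data movement, just return a reference to the partition's data.
    static List<String> shuffle(int partition) {
        return sharedPool.getOrDefault(partition, List.of());
    }

    // Reduce side: consume the records through that shared reference.
    static long reduceTask(int partition) {
        return shuffle(partition).stream().count();
    }

    public static void main(String[] args) {
        mapTask(0, "widget-42,1");
        mapTask(0, "widget-42,3");
        mapTask(1, "gadget-7,2");
        System.out.println("partition 0 records: " + reduceTask(0)); // 2
        System.out.println("partition 1 records: " + reduceTask(1)); // 1
    }
}
```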

For developers, the question is how this might be done. To prove the point, Hewlett Packard Labs has already developed Spark for The Machine, whose shuffle engine has been rewritten with the managed data structures approach to use shared non-volatile memory. Spence said this gives it major (but unspecified) performance advantages over the traditional approach and its associated network congestion.

The Machine will run Linux, of course, and support all standard file system APIs, but if you are going to pay a premium (as we expect, though pricing is still a bit of a mystery), using managed data structures where possible will provide an advantage, at least for the bulk of applications written in Java or C++. The file system APIs will use the non-volatile memory instead of flash or disk as usual, while managed data structures read and write directly to non-volatile memory.
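The two paths can be sketched side by side. This is illustrative only: the file half works unchanged today (the file system would simply sit on non-volatile memory rather than flash or disk), while the second half reuses the hypothetical ManagedMap/ManagedSpace stand-ins from the earlier sketch to show the serialization step dropping out.

```java
// Hypothetical contrast between the two persistence paths described above.
// ManagedMap/ManagedSpace are the same illustrative stand-ins used earlier,
// not the real Managed Data Structures API.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TwoPersistencePaths {
    public static void main(String[] args) throws Exception {
        // Path 1: standard file system API. Application data must be turned
        // into bytes and pushed through the file layer, even if that layer
        // is ultimately backed by non-volatile memory.
        Path file = Files.createTempFile("stock", ".csv");
        Files.writeString(file, "widget-42,1000\n", StandardCharsets.UTF_8);

        // Path 2: managed data structure. The application-level structure is
        // itself persistent, so updating it is the act of persisting it.
        ManagedMap<String, Long> stock = ManagedSpace.attachMap("warehouse/stock");
        stock.put("widget-42", 1000L);
    }
}
```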

Ultimately, Spence says, for developers this all leads to a shorter path to persistence, less code overall and thus fewer errors, and a faster development cycle, given the single software layer and data format.

Despite the fact that non-volatile memory developments have been progressing rapidly, debate about the future programming challenges has been confined to smaller conferences and research circles. There are some interesting ideas presented in this University of Wisconsin research, which highlights the various programming interfaces for future non-volatile memory technologies, and in several academic papers, including this one of note in particular, but tooling on the part of the makers of such devices has been relatively scant.
