Memory chip partners Intel and Micron Technology shook up the flash and main memory markets back in July with the announcement of 3D XPoint memory, which can be used both as a byte-addressable device like DRAM and as a block device like flash.
The two companies have talked only in generalities about how this memory might perform, saying that 3D XPoint will have about 1,000 times the performance of NAND flash, 1,000 times the endurance of NAND flash, and about 10 times the density of DRAM. The companies have also suggested that 3D XPoint memory, which will carry the Optane brand at Intel and will be available in both main memory DIMM and SSD form factors, will be priced somewhere between flash and DRAM. That combination of technical and economic attributes positions Optane to become a popular system component, and we fully expect that it will be just that.
Intel is being careful not to overpromise and not to give away too many details before Optane SSDs start shipping sometime in 2016. The company has not said when it will deliver Optane DIMMs, but at the OpenWorld conference hosted by Oracle, Brian Krzanich, chief executive officer at Intel, talked a bit about the evolving memory hierarchy – something we were coincidentally discussing at length here at The Next Platform as 3D XPoint was announced – and trotted out some benchmark tests that pitted Optane SSDs against flash SSDs.
Keeping the ever-increasing number of cores on a processor fed is a big challenge, and Krzanich said that adding Optane 3D XPoint memory to systems would go a long way towards bringing I/O, compute, and storage back into balance, allowing companies to unleash the performance of their processors. To show off the performance of the future Optane SSDs, Krzanich took a standard 1U server – it looked like one of Oracle’s X5-2 machines, a two-socket “Haswell” Xeon E5 v3 machine – and ran two Oracle benchmark tests using Oracle’s clone of Red Hat’s Enterprise Linux.
The Oracle server was running two different Oracle application stacks – Krzanich did not identify them – and was partitioned to run the tests with an Intel P3700 NAND flash SSD on one side and the prototype Optane SSD on the other. The Oracle machine, by the way, links its SSDs to the processors over NVM-Express, which uses a thinner driver stack that cuts unnecessary SAS and SATA controller code out of the I/O path, substantially boosting throughput and lowering latencies for non-volatile storage of all kinds – in this case, both NAND flash and 3D XPoint.
On the first test, here is how the prototype Optane SSD stacked up against the P3700 NAND flash SSD:
As you can see, the prototype Optane SSD delivered 4.4X the I/O operations per second and about 6.4X lower latency. You might presume that part of the latency improvement came from NVM-Express rather than from the Optane media itself, but NVM-Express links were used for both kinds of SSDs in the test, so the gap is down to the media. A comparison with SAS or SATA SSDs would show an even wider performance and latency gap, we are sure.
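To get a feel for what multipliers like these mean in absolute terms, here is a quick back-of-the-envelope sketch. The flash baseline figures below are hypothetical placeholders for illustration, not numbers Intel or Oracle disclosed; only the 4.4X and 6.4X ratios come from the demo.

```python
# Translating the reported Optane-vs-flash multipliers into absolute figures.
# The baseline numbers are hypothetical, chosen only for illustration.
flash_iops = 450_000      # hypothetical NAND flash SSD baseline IOPS
flash_latency_us = 90.0   # hypothetical flash average latency, microseconds

iops_speedup = 4.4        # Optane advantage in IOPS reported in the demo
latency_ratio = 6.4       # Optane latency reduction reported in the demo

optane_iops = flash_iops * iops_speedup
optane_latency_us = flash_latency_us / latency_ratio

print(f"Optane IOPS:    {optane_iops:,.0f}")
print(f"Optane latency: {optane_latency_us:.1f} us")
```

The point of the exercise is simply that a media-level speedup compounds across both axes: more operations in flight and each one completing sooner.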
Here is how the two storage media stacked up on the second test using Oracle software:
The performance gap was even wider on this test. (It would have been good to know precisely what these tests were, of course.)
“This is a huge performance improvement,” Krzanich said. “I think all of us, for any kind of I/O operations, that could see a 5X to 8X improvement in speed – that’s what we have been looking for. It is an improvement in both performance and latency. But there is really more to this technology. Intel Optane SSDs provide about 200X less variability, and that is an additional benefit that you can count on for your datacenter.”
By variability, we presume that Krzanich means that the average latencies will coalesce around a certain figure and not wander too much with jitter from outliers that have much longer latencies. As we have pointed out time and again here at The Next Platform, raw performance often matters for lots of scale-out, scale-up, and scale-in workloads, but consistency of performance often matters more. Raw performance is not much good if you can’t count on it. The idea, said Krzanich, is to do more transactions with higher consistency and improved response times, which ultimately translates into better user experiences for applications.
“We have been real careful to not show this with slides, but with working demos, because I want people to get excited and realize that this is coming,” he said. “This isn’t five years from now, this isn’t two years from now. This is next year, and this is going to transform how we think about data and memory and storage.”
Now, of course, SSDs are important, but in the long run Intel also wants Optane 3D XPoint memory to slot into the same sockets as DDR4 main memory, and Krzanich brought a mechanical model of an Optane DIMM to show off. This truly marks Intel’s return to the memory market; the company walked away from the main memory business in 1985, but given the tight coupling of processing and memory technologies, we have always said that it was inevitable that Intel would get back into the memory business. (If you count various cache memories, Intel never left the business, of course.)
Krzanich said that Intel will have working Optane DIMMs ready later this year for early testers, and that they will combine the performance of DRAM with the capacity and cost of flash. What this means is that a mix of DDR4 and Optane DIMMs could give a two-socket server a total of 6 TB of addressable memory, “virtually eliminating paging between memory and storage, taking performance truly to a whole new level.” Krzanich added that data encryption is built into the DIMM, so that data at rest on the DIMM – it will take us all a while to get used to that idea – is secured.
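Intel did not break down how that 6 TB figure is composed, but one plausible arithmetic for it looks like the sketch below. Every number here – the slot count, the split between DDR4 and Optane modules, and the module capacities – is our assumption for illustration, not an Intel specification.

```python
# One hypothetical composition of a 6 TB two-socket memory configuration.
# All slot counts and module sizes are assumptions, not Intel specs.
sockets = 2
ddr4_dimms_per_socket = 4       # assumed DDR4 modules per socket
ddr4_gb = 256                   # assumed DDR4 module capacity, GB
optane_dimms_per_socket = 8     # assumed Optane modules per socket
optane_gb = 256                 # assumed Optane module capacity, GB

total_gb = sockets * (ddr4_dimms_per_socket * ddr4_gb
                      + optane_dimms_per_socket * optane_gb)
print(f"Addressable memory: {total_gb} GB ({total_gb / 1024:.1f} TB)")
```

The interesting part is not the exact split but that the Optane modules, being denser and cheaper per bit than DRAM, would carry most of that capacity while the DDR4 modules handle the hottest data.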
We look forward to seeing those Optane DIMMs in action and to seeing how they stack up against DDR4 DIMMs in terms of price and performance. We are gathering our thoughts about what Optane DIMMs and SSDs might mean for system and cluster designs, too. Like Intel, we expect some big changes.