IBM knows how to adapt to an ever-changing enterprise tech landscape. Almost 20 years ago the venerable company shed its PC business – selling it to Lenovo – understanding that those systems were quickly becoming commodity devices and the market was going to stumble. A decade later IBM sold its x86-based server business to Lenovo for $2.3 billion, and in the intervening years it has put a keen focus on hybrid clouds and artificial intelligence, buying Red Hat for $34 billion and continuing to invest in its Watson portfolio.
However, that hasn’t meant throwing out product lines simply because they’ve been around for a while. IBM has continued to upgrade its mainframe systems to keep up with modern workloads, and the bet has paid off. In the last quarter of 2019, the company saw mainframe revenue – driven by the System z15 mainframe released in September 2019 – jump 63 percent, a number followed the next quarter by a 59 percent increase.
Tape storage is a similar story. The company rolled out its first tape storage device, the 726 Tape Unit, in 1952 with a capacity of 2MB. Almost seven decades later, the company is still innovating in tape storage technology and this week said that, as part of a 15-year partnership with Fujifilm, it has set a record with a prototype system of 317 gigabits per square inch (Gb/in²) in areal density, 27 times the areal density of current top-performance tape drives. The record, reached with the help of a new tape material called strontium ferrite (SrFe), is an indication that magnetic tape fits nicely into a data storage world of flash, SSDs and NVMe and a rising demand for cloud-based storage.
It also shows that tape storage technology can continue to scale for at least the next 10 years, an important consideration in an industry that is seeing exponential growth in the amount of data being generated – 175 zettabytes a year by 2025, according to IDC – and is rapidly shifting to a hybrid cloud model, where all the data in the cloud needs to be backed up and, currently, most of that backup is done on tape. In addition, as the amount of data being stored grows, the cost of doing so needs to be controlled.
“The total amount of data in the world is doubling every year or two, while at the same time the areal density scaling of hard disk drives has really dramatically slowed down in the last few years,” Mark Lantz, manager of cloud FPGA and tape technologies at IBM Research in Europe, said this week during a streamed demonstration. “Currently it’s scaling at less than eight percent compound annual growth rate. That areal density scaling in HDD is critical because historically that’s what’s driven the dollars-per-gigabyte scaling. The slowdown in areal density scaling has translated into a stagnation of the dollar-per-gigabyte scaling of HDD. The net result is the datacenter is getting out of balance. We’re currently creating data at a much faster rate than we can afford to store it, at least if we want to store all of that data on spinning disk.
“Fortunately, a large fraction of the data that’s out there is what we call cold. It hasn’t been accessed in a long time or is very infrequently accessed and much of that cold or infrequently accessed data can actually tolerate much longer latency or higher latency. For data like that that tolerates higher latencies, tape technology is actually very well suited for storing and preserving that data because it is by far the lowest total cost of ownership for storing the data. If the data isn’t actually getting accessed, it doesn’t consume any power, so it’s an extremely green technology.”
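The imbalance Lantz describes can be put in rough numbers. The sketch below is a simple compounding exercise using only the growth rates from his quote – data volume doubling every year or two (18 months is assumed here as a midpoint) versus HDD areal density, the historical driver of dollars-per-gigabyte, improving at under 8 percent a year; the 10-year horizon and unit starting values are arbitrary illustration choices.

```python
# Rough illustration of the gap between data growth and HDD scaling.
# Growth rates come from Lantz's quote; the 18-month doubling period,
# 10-year window and starting values of 1.0 are assumptions for the sketch.

years = 10
data = 1.0          # relative amount of data to be stored
hdd_density = 1.0   # relative HDD areal density (proxy for $/GB improvement)

for _ in range(years):
    data *= 2 ** (1 / 1.5)   # doubling roughly every 18 months
    hdd_density *= 1.08      # 8 percent compound annual growth

print(f"data grew {data:.0f}x; HDD areal density grew {hdd_density:.1f}x")
print(f"gap after {years} years: {data / hdd_density:.0f}x")
```

Even with these loose assumptions, demand outruns disk scaling by well over an order of magnitude within a decade, which is the opening Lantz sees for low-cost cold storage on tape.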
Even as all-flash storage systems and SSDs continue to garner much of the headlines, the tape storage market continues to churn forward. According to a report from the Tape Storage Council released earlier this month, 2019 was a record year for shipments by the three technology providers for the LTO program – IBM, Hewlett Packard Enterprise and Quantum – with cumulative shipments passing 225 million cartridges since LTO was introduced in 2000.
That makes sense, according to Lantz. Hyperscalers and other cloud providers are embracing magnetic tape for a number of reasons. The total cost of ownership for storing large amounts of data is lower, about a quarter that of disk, he said. Security is another factor, with tape offering what Lantz called “a natural air gap, an extra barrier against unintentional or malicious attacks.” There’s also built-in, on-the-fly encryption on the drive.
The third reason is the one IBM was addressing with its demonstration: the future scaling potential. Like other tape storage vendors, IBM has been leveraging barium ferrite (BaFe) particles to coat the magnetic tape storage media. It’s a technology that the company has been using since at least 2006. The microscopic particles are used to encode data onto the strips of tape. However, scaling with BaFe has become an issue.
This is where SrFe comes in, with particles that are smaller than those in BaFe. This leads to increased density, more data capacity and greater scaling headroom. In this case, the use of prototype SrFe media resulted in 317 Gb/in² in areal density, opening up the possibility of a single tape cartridge storing about 580TB of data. IBM said such cartridges could come to market within 10 years, with the ability to store data for as long as 30 years without drawing additional power.
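The jump from areal density to cartridge capacity is straightforward arithmetic. The sketch below checks the quoted figures against each other, assuming the industry-standard half-inch tape width; it ignores servo tracks, guard bands and format overhead, so the implied tape length comes out somewhat shorter than the roughly 1,000 m of media a real cartridge would carry.

```python
# Back-of-the-envelope check of IBM's quoted figures. The 317 Gb/in^2
# density and ~580TB capacity come from the announcement; the half-inch
# tape width is an assumption, and format overhead is ignored.

areal_density_bits_per_in2 = 317e9   # record demo: 317 Gb/in^2
cartridge_bytes = 580e12             # projected ~580TB per cartridge

cartridge_bits = cartridge_bytes * 8
tape_area_in2 = cartridge_bits / areal_density_bits_per_in2

tape_width_in = 0.5                  # assumed half-inch tape
tape_length_m = (tape_area_in2 / tape_width_in) * 0.0254

print(f"tape area needed: {tape_area_in2:,.0f} in^2")
print(f"implied tape length: {tape_length_m:,.0f} m")
```

The raw numbers imply on the order of 750 m of half-inch tape, consistent with a single cartridge once real-world overhead is accounted for.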
By comparison, IBM’s latest tape drive – the TS1160, which came out in 2018 – holds 20TB on a cartridge that can fit in the palm of a person’s hand.
Along with the SrFe magnetic tape, IBM also developed a range of other technologies to build the prototype that was used in setting the record, Lantz said. These include a new low-friction tape head that can take advantage of very smooth tape media, and the ability to detect data written on the SrFe media at a linear density of 702 kilobits per inch (kbpi) when the data is read back with a narrow, 29 nm-wide TMR read sensor.
There was also a new servo pattern pre-recorded in the servo tracks, a prototype head actuator and a set of servo controllers. The servo tracks help the servo controllers, driving the head actuator, keep the read/write heads precisely positioned over the tape, ensuring an accurate read of the data.
The combination of the SrFe media and the new technologies introduced by IBM will enable tape to continue scaling for at least the next decade. IBM estimates that more than 345,000 exabytes of data are currently stored on tape, and the company said it can now give hyperscalers and cloud providers a roadmap through 2030.
However, when all of these technologies will reach the commercial market is unclear, Lantz said.
“We’ve actually developed a huge family of new technologies to enable this new record areal density demonstration,” he said. “Some of those technologies can be integrated into our products quite quickly. For example, the technologies that are essentially implemented in software or the drive firmware can be transferred into our products relatively quickly. Other technologies, like things that need to be built into the new generation of ASICs, take longer to integrate into products depending on the refresh cycle of the ASICs that we use in our drives. All of these technologies will be phased into our products over a period of maybe a year or two to several years and will enable the continued scaling of the areal density of our products out into this 10-year horizon. There will be really a kind of a staged introduction of the technologies over time.”
As we’ve talked about over the past few years, there is an expanding roster of use cases for tape storage. Hyperscalers that are storing massive amounts of data may be embracing flash-based SSDs and disk drives, but there is a place for tape as well – they have a hell of a lot of it and will for the foreseeable future. IBM is seeing to that.