
Looking Ahead To The Next Platforms That Will Define 2017

The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.

Considering how much change the tech market – and therefore the companies, governments, and educational institutions of the world – has had to endure in the past three decades, and the accelerating pace of change over that period, you might be thinking it would be good to take a breather, to coast for a bit and get caught up. But the world does not work like that, even if the hope of a pause is sometimes the only thing that gives us the strength to keep forging ahead. This is a good kind of hopeful self-deception, we think. And when we look back, it is the progress we have made that we remember.

So it is as we enter 2017, which will likely go down as one of the big transformational years in the IT industry. Whether the economies of the world go into recession or not – we contend that recessions accelerate transformations rather than slow them – there is going to be a lot of change.

It is certainly going to be one of the biggest years for compute that we have seen in a long, long time, and we think that deep learning and machine learning are going to become even more entrenched in the HPC, cloud, and commercial sectors as well, changing the very way that we think about applications. There will be continued progress on the networking front, and a slew of non-volatile memories are going to be making their way to market, too, blurring the boundary between short-term main memory in systems and the long-term magnetic storage attached to them. The ever-changing software stacks will continue to evolve, moving from point products to platforms as they always do, and who knows, some new idea may come along and upset the establishment entirely.

We certainly hope so because we, like you, enjoy the excitement that comes from new ways of doing old things and from doing completely new things, too. We will have our eyes wide open for both here at The Next Platform, as is our mission.

So many different kinds of processors and coprocessors are going to be vying for the right to move and crunch data in 2017 that we are thinking of it as a kind of Cambrian Explosion: whole new niches are opening up, and new kinds of processing are following fast on their heels to fill those niches. This is precisely how it should be. The question we have is whether the desire to sell these myriad forms of compute will outstrip the appetite for compute.

Getting more efficient by having custom processors or by offloading work from processors to coprocessors means selling fewer processors, by definition, but thus far these alternatives have not had much of an impact on sales of X86 processors, largely Xeons from Intel. But Intel itself is peddling Xeon Phi manycore chips and Altera FPGAs, and others are pushing their own GPU or FPGA alternatives. Moreover, AMD is waking up in the datacenter this year, with its “Zen” server chips and associated “Polaris” and “Vega” GPUs, and Cavium and Qualcomm are keen to drive their respective ThunderX and Centriq processors into the racks where Xeons currently roam free. IBM will also be getting its Power9 processor into the field at about the same time that Intel rolls out its “Skylake” Xeon chips (which probably will not have the E5 and E7 designations that the past several generations of Xeons did), and Nvidia will be getting its “Volta” GPUs out as well.

The ARM collective is gunning for 25 percent market share, IBM wants 10 percent to 20 percent for Power chips, and myriad specialized chips as well as FPGAs are fighting for share in a market where Intel Xeons drive nearly 90 percent of server revenues and over 99 percent of server shipments. AMD will take some share by default with Zen CPUs (maybe 5 percent this year and 10 percent next year), and will start making headway with GPUs, too. Nvidia will see its Tesla and GRID compute businesses continue to grow as well.

Clearly, there is some share to be taken away from Intel, but nothing like half of the market. It took Intel two and a half decades and a massive expansion in compute to get the amazing share it holds, and those conditions cannot be repeated. This time will be different, and the outcome is a lot less certain. It was easy to bet that the main chip on the desktop would jump to the datacenter, morph to suit its needs, and take share as Windows and Linux ascended and other platforms declined. It is less clear what will happen this time, which is why Intel has been hedging its bets with Altera and other acquisitions.

In addition to the big name processor vendors seeing some shakeups in 2017, we will continue to watch the fringes, as this will be the year several of the novel architectures we covered in 2016 (and before) get snapped up. We expect this momentum to pick up for those with custom ASICs and machine learning chips that can handle both deep learning and regular compute workloads alike. It is hard to say what the processor of choice will be for deep learning shops, but as we begin 2017, it is still GPUs for training and lower power alternatives for inference. With a single chip able to chew through both sides of that workload will come a big new business for someone, and it could very well be a small startup (akin to Nervana Systems, which was acquired by Intel in mid-2016 and is the hinge upon which its machine learning push for the future swings).

On the machine learning note, there is another point to be made. From around 2013 until mid-2015, the focus was on large-scale analytics using frameworks like Hadoop and MapReduce and open source analytical tools. We expect that the whole “big data” push that led to the further development of those data analysis platforms will give way to a larger emphasis on machine learning. In fact, the term “machine learning” will become a replacement for the term “analytics,” since few statistical packages will lack a machine learning component (even if it is something that has always been there and is now recast as machine learning). Enterprise analytics is due for a shakeup, and the emphasis on new hardware and software tooling to extract ever more useful information from pools of data will continue.

With that said, we are pushing toward an IT sector that is truly platform driven. This is not a new concept by any means; folks have been talking a “platform” game for years now. The difference is that now there are actually solid foundations to build upon. Whereas the Hadoop and big data craze operated in its own niche and the deep learning and machine learning users marched to their own drummers, efforts by hardware makers to create hardware and software platforms that leverage all of this work will define 2017.

As regular readers know, we cover large-scale infrastructure in research and enterprise. At the very top end of that coverage is the high performance computing segment. While we expect steady growth here in the gap years before the exascale machines come online, the platform-centric trends affecting other areas will hit HPC as well. We are already seeing a shift toward the integration of deep learning into existing HPC workflows, and this could change hardware environments over time. While a standard high-end CPU matched with an accelerator (a GPU) or a self-hosted Knights Landing (and eventually, Knights Hill) part might seem like the way to go, there could be big shakeups here, especially if training and inference of models happen alongside the simulation job.

These are all speculative statements, but for the first time since supercomputing could be measured (via the Top 500), it is HPC watching the outside world for new ideas, hardware, and software instead of the highest end feeding bleeding-edge research into the enterprise. We are still excited about the future of HPC, but its position in terms of development and cutting-edge technology lags somewhat behind the hyperscalers. The notion of brute force, power hungry exascale supercomputers seems far less attractive than ultra-tailored custom machines for specific jobs in HPC. There is a lot of political infrastructure keeping that trend at bay, but it lurks.

In short, 2017 is the year of the next platform. We don’t know what it will be, which vendors will define it, or even at what level of the stack it will emerge, but it’s coming, and soon.
