Looking Ahead To The Next Platforms That Will Define 2017

The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.

Considering how much change the tech market, and therefore the companies, governments, and educational institutions of the world, have had to endure in the past three decades – and the accelerating pace of change over that period – you might think it would be good to take a breather, to coast for a bit and get caught up. But the world does not work like that, even if the hope of a pause is sometimes the only thing that gives us the strength to keep forging ahead. This is a good kind of hopeful self-deception, we think. And when we look back, it is the progress we have made that we remember.

So it is as we enter 2017, which will likely go down as one of the big transformational years in the IT industry. Whether the economies of the world go into recession or not – we contend that recessions accelerate transformations rather than slow them – there is going to be a lot of change.

It is certainly going to be one of the biggest years for compute that we have seen in a long, long time, and we think that deep learning and machine learning are going to become even more entrenched in the HPC, cloud and commercial sectors as well, changing the very way that we think about applications. There will be continued progress on the networking fronts, and a slew of non-volatile memories are going to be making their way to market, too, upsetting the boundaries between short-term main memory in systems and long-term magnetic media storage attached to the systems. The ever-changing software stacks will continue to evolve, moving from point products to platforms as they always do, and who knows, some new idea may come along and upset all of the establishment.
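
To make that blurring of memory and storage concrete, here is a minimal sketch of the programming model byte-addressable non-volatile memory encourages: persistent data touched with loads and stores through a memory mapping rather than through the block I/O path. Everything here is a stand-in (a plain file plays the part of the persistent device; real deployments would typically use a DAX-mounted filesystem or a library such as libpmem), so treat it as an illustration of the idea, not a recipe.

    # Minimal sketch: a memory-mapped file standing in for byte-addressable
    # persistent memory. The point is that the "storage" and the "memory"
    # are the same bytes, updated in place with ordinary loads and stores.
    import mmap
    import os
    import struct

    PATH = "counter.bin"   # stand-in; real pmem would be a DAX-mounted file
    SIZE = 4096

    fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
    os.ftruncate(fd, SIZE)

    with mmap.mmap(fd, SIZE) as region:
        # Read a counter, bump it, and write it back in place. There is no
        # serialization layer and no read()/write() call in the hot path.
        (counter,) = struct.unpack_from("<Q", region, 0)
        struct.pack_into("<Q", region, 0, counter + 1)
        region.flush()   # analogous to flushing CPU caches out to the media

    os.close(fd)
    print("counter updated in place; on real pmem it would survive a reboot")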

We certainly hope so because we, like you, enjoy the excitement that comes from new ways of doing old things and from doing completely new things, too. We will have our eyes wide open for both here at The Next Platform, as is our mission.

So many different kinds of processors and coprocessors are going to be vying for the right to move and crunch data in 2017 that we are thinking of it as a kind of Cambrian Explosion, with whole new niches opening up just as new kinds of processing follow fast on their heels to fill those niches. This is precisely how it should be. The question we have is whether the desire to sell these myriad forms of compute will outstrip the appetite for compute.

Getting more efficient by having custom processors or offloading from processors to coprocessors means selling fewer processors, by definition, but thus far, these alternatives have not had much of an impact on sales of X86 processors, largely Xeons from Intel. But Intel itself is peddling Xeon Phi manycore chips and Altera FPGAs, and others are pushing their own GPU or FPGA alternatives. Moreover, AMD is waking up in the datacenter this year, with its “Zen” server chips and associated “Polaris” and “Vega” GPUs, and Cavium and Qualcomm are keen to drive their respective ThunderX and Centriq processors into the racks where Xeons currently roam free. IBM will also be getting its Power9 processor into the field about the same time that Intel rolls out its “Skylake” Xeon chips (they probably will not have E5 and E7 designations as the past several generations of Xeons did), and Nvidia will be getting its “Volta” GPUs out as well.

The ARM collective is gunning for 25 percent market share, IBM wants 10 percent to 20 percent for Power chips, and myriad specialized chips as well as FPGAs are fighting for share in a market where Intel Xeons drive nearly 90 percent of server revenues and over 99 percent of server shipments. AMD will take some share by default with Zen CPUs (maybe 5 percent this year and 10 percent next year), and will start making headway with GPUs, too. Nvidia will see its Tesla and GRID compute businesses continue to grow, too.

Clearly, there is some share to take away from Intel, but not anything like half of the market. It took Intel two and a half decades and a massive expansion in compute to get the amazing share it holds, and those conditions cannot be repeated. This time will be different, and the outcome is a lot less certain. It was easy to bet that the main chip on the desktop would jump to the datacenter, morph to suit its needs, and take share as Windows and Linux ascended and other platforms declined. It is less clear what will happen this time, which is why Intel has been hedging its bets with Altera and other acquisitions.

In addition to the big-name processor vendors seeing some shakeups in 2017, we will continue to watch the fringes, as this will be the year several of the novel architectures we covered in 2016 (and before) get snapped up. We expect this momentum to pick up for those with custom ASICs and machine learning chips that can handle deep learning and regular compute workloads alike. It is hard to say at the beginning of this new year what the processor of choice will be for deep learning shops, but as we begin 2017, it is still GPUs for training and lower power alternatives for inference. A single chip able to chew through both sides of that workload will mean a big new business for someone, and it could very well be a small startup (akin to Nervana Systems, which was acquired by Intel in mid-2016 and is the hinge upon which Intel’s machine learning push for the future swings).
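
As a rough illustration of why training and inference can land on different silicon, the sketch below uses plain NumPy (not any vendor’s actual toolchain) to show the common post-training trick: weights learned in float32 on a fat accelerator are quantized to int8 so that a cheaper, lower power part can serve predictions with integer math, trading a little accuracy for a lot of efficiency.

    # Illustrative post-training quantization: the float32 matrix stands in
    # for weights learned on a big training accelerator.
    import numpy as np

    rng = np.random.default_rng(0)
    weights_fp32 = rng.standard_normal((256, 128)).astype(np.float32)

    # Map the float32 weights onto int8 with a single scale factor.
    scale = np.abs(weights_fp32).max() / 127.0
    weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

    # Inference stand-in: real int8 engines multiply and accumulate in integer
    # registers; here we emulate the math in float to show the accuracy cost.
    x = rng.standard_normal(256).astype(np.float32)
    y_fp32 = x @ weights_fp32
    y_int8 = (x @ weights_int8.astype(np.float32)) * scale

    print("max quantization error:", float(np.abs(y_fp32 - y_int8).max()))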

On the machine learning note, there is another point to be made. From around 2013 until mid-2015, the focus was on large-scale analytics using frameworks like Hadoop and MapReduce and open source analytical tools. We expect that the whole “big data” push that led to the further development of those data analysis platforms will give way to a larger emphasis on machine learning. In fact, the term “machine learning” will become a replacement for analytics, since few statistical packages will lack a machine learning component (even if it is something that has always been there and is now recast as machine learning). Enterprise analytics is due for a shakeup, and the emphasis on new hardware and software tooling to extract ever more useful information from pools of data will continue.
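
To put a concrete face on that relabeling, consider a toy contrast (made-up numbers, plain NumPy): the same data answered the old analytics way, with descriptive aggregates, and the new machine learning way, with a fitted model that predicts forward. The model here is just a least-squares trend line, exactly the sort of thing that has lived in statistical packages all along and is now recast as machine learning.

    import numpy as np

    months = np.arange(24, dtype=np.float64)    # two years of made-up history
    sales = 100 + 5.0 * months + np.random.default_rng(1).normal(0, 8, 24)

    # Classic analytics: report what happened.
    print("mean monthly sales:", round(float(sales.mean()), 1))
    print("best month:", int(months[sales.argmax()]))

    # The machine learning recast: fit a model and predict what happens next.
    A = np.column_stack([months, np.ones_like(months)])
    slope, intercept = np.linalg.lstsq(A, sales, rcond=None)[0]
    print("forecast for month 24:", round(float(slope * 24 + intercept), 1))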

With that said, we are pushing toward an IT sector that is truly platform driven. This is not a new concept by any means; folks have been talking a “platform” game for years now. The difference now is that there are actually solid foundations to build upon. Whereas the Hadoop and big data craze operated in its own niche and the deep learning and machine learning users followed their own drummers, advancements by hardware makers to create hardware and software platforms that leverage all of this work will define 2017.

As regular readers know, we cover large-scale infrastructure in research and enterprise. At the very top end of that coverage is the high performance computing segment. While we expect regular growth here in the gap years before the exascale machines come online, the platform-centric trends affecting other areas will hit HPC as well. We are already seeing a shift toward integration of deep learning into existing HPC workflows, and this could change the hardware environments over time. While a standard high-end CPU matched with a GPU accelerator, or a self-hosted Knights Landing (and eventually, Knights Hill) part, might seem like the way to go, there could be big shakeups here, especially if training and inference of models happen alongside the simulation job. These are all speculative statements, but for the first time since supercomputing could be measured (via the Top 500), it is HPC watching the outside world for new ideas, hardware, and software, instead of this highest end feeding bleeding-edge research into the enterprise. We are still excited about the future of HPC, but its position in terms of development and cutting-edge technology lags somewhat behind the hyperscalers. The notion of brute force, power hungry exascale supercomputers seems far less attractive than ultra-tailored custom machines for specific jobs in HPC. There’s a lot of political infrastructure keeping that trend at bay, but it lurks.
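
As a purely speculative sketch of that pattern, the snippet below shows the shape of a workflow in which a cheap learned surrogate rides alongside an expensive simulation: the real code is sampled, a surrogate is fit to those samples, and surrogate inference screens candidate inputs so only the promising ones get the full run. A polynomial fit stands in for a neural network, and a one-line function stands in for the physics, purely to keep the illustration self-contained.

    import numpy as np

    rng = np.random.default_rng(2)

    def expensive_simulation(x):
        # Stand-in for a physics kernel that would normally burn node-hours.
        return np.sin(3 * x) + 0.1 * x**2

    # Step 1: run the real simulation on a small sample of inputs.
    train_x = rng.uniform(-3, 3, 32)
    train_y = expensive_simulation(train_x)

    # Step 2: fit a surrogate alongside the job (a polynomial stands in for
    # a neural network to keep the sketch dependency-free).
    surrogate = np.polynomial.Polynomial.fit(train_x, train_y, deg=8)

    # Step 3: cheap surrogate inference screens thousands of candidates, and
    # the expensive cycles are spent only on the best-looking few.
    candidates = rng.uniform(-3, 3, 5000)
    promising = candidates[np.argsort(surrogate(candidates))[:5]]
    verified = expensive_simulation(promising)
    print("surrogate-screened candidates, verified by the full run:", verified.round(3))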

In short, 2017 is the year of the next platform. We don’t know what that will be and which vendors (or even at what level of the stack) will define it, but it’s coming–and soon.

7 Comments

  1. I don’t see AMD limited to just the x86 market for its GPU accelerators with its Radeon Pro WX and Radeon Instinct GPU/AI accelerators. AMD is a founding member of the OpenCAPI Consortium (with IBM and others), so there is potential there in any third-party OpenPower licensee Power9 systems that will support the OpenCAPI interconnect IP and use AMD’s Pro GPUs. Also, I think that AMD will surprise with Naples/Zen paired with its Radeon professional GPU products, and AMD can price its Naples CPU SKUs together with its Radeon Pro WX/Radeon Instinct GPU SKUs for some interesting pricing as a package deal to get AMD more GPU accelerator/AI business as the Naples/Zen x86 product gets AMD back into the server CPU market in the latter half of 2017.

    AMD and its Radeon Technologies Group appear to be making some moves to compete beyond just the x86-based markets, so once the Zen/Naples product is ready maybe we will get some definite information about AMD’s K12 custom ARM project, which was developed in parallel with Zen, also under Jim Keller’s management.

    AMD’s Vega GPUs appear to have some new features designed to let them make use of a large 512 TB virtual address space and also use HBM2 as a last-level cache, allowing the HBM2/HBC (high bandwidth cache) to leverage larger pools of second-tier DIMM-based DRAM as well as on-GPU/PCIe-card NVM. So Vega’s HBC controller should be able to manage a virtual memory pool that goes from NVM to DIMM-based DRAM on up into HBM2/HBC and keep any Vega-based GPU accelerator product working mostly from HBM2 at very high effective bandwidth.

    Beyond the work of getting its first Naples CPU-only server/workstation/HPC SKUs to market, AMD still has its server/workstation/HPC APU-on-an-interposer products under development for a new line of professional-level APUs that will target the non-consumer marketplace. Those would be professional-level APUs with large GPU accelerators on an interposer together with Zen core complexes and HBM2, all wired up via some very wide interposer-etched connection fabrics, in a new class of server/workstation/HPC APU SKU. So there are still some new product categories for AMD to bring to market: professional APUs, and custom ARM/K12 server CPUs (if AMD’s custom ARM/K12 introduction is not further delayed past 2017).

    • The problem with AMD, however, is still that they haven’t quite grasped that it is the software infrastructure they need to address. Both nVidia and Intel understood that a long time ago and are actively churning out SDKs, APIs, compilers, and so on to help bring people on board and get the best out of the hardware already in place. AMD has not really managed to respond on that front. It is all well and great to drum for open APIs like OpenCL, but they need to do more; for example, they need to bring out a cuDNN and MKL equivalent if they are serious about machine learning. It doesn’t matter how great your hardware is: if your software infrastructure is lousy or non-existent (and that has nearly always been the case for AMD, even if it is just a perception thing), no one is going to buy it and use it.

        • Thanks, I have been there before. Where are the ML tools and APIs? clBLAS and clFFT are fine, but that’s not enough. Where are they banging the drum for their tools? Where are the partners? Where are the success stories?

          The answer to all the questions above is either none or nothing.

          And that’s the problem. nVidia knows how to milk the crap out of their stuff, even if it is exaggerated, but that is what the public perceives: nVidia is active, while AMD is quiet, not doing anything, and missing the boat, so don’t even bother. Quite simply, no one hears of anything.

  2. I think it is quite clear that the winners from last year will not be the same this year. Both nVidia and Intel will come under severe pressure this year. The nVidia bonanza will likely come to a hard end as more and more threats to their DL enterprise emerge from all sides. And the same will go for Intel if AMD and the ARM field get their act together this year.

  3. It’s also going to be the year when people really start taking notice of some currently obscure technologies like Nantero’s CNT-based RAM and STT-MRAM like Everspin’s. Silicon photonics and optical interconnects will probably continue taking over from traditional interconnect technologies as well. More working silicon photonics products are way overdue.

    ARM is also going to be a big deal, but the important ARM machines aren’t going to be up and running before the end of 2017, as far as I know.

    NEC is supposedly releasing a new vector architecture system this year as well. They’re the only ones still doing old-fashioned vector systems, right?
