Applying Machine Learning At The Front End Of HPC

IBM and the other vendors bidding on the CORAL2 systems for the US Department of Energy can’t talk about those bids, which are still in flight. So Big Blue and its partners in building the “Summit” supercomputer at Oak Ridge National Laboratory and “Sierra” at Lawrence Livermore National Laboratory – Nvidia for GPUs and Mellanox Technologies for InfiniBand interconnect – are publicly focusing on the present, since these two machines sit at the top of the flops charts right now.


Turning The CPU-GPU Hybrid System On Its Head

Sales of various kinds of high performance computing – not just technical simulation and modeling applications, but also cryptocurrency mining, massively multiplayer gaming, video rendering, visualization, machine learning, and data analytics – run in small boom-bust cycles that make it difficult for suppliers to this market to project ahead.


Deep Learning Is Coming Of Age

In the early days of artificial intelligence, Hans Moravec asserted what became known as Moravec’s paradox: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

That paradox is now unraveling, primarily due to the ascent of deep learning.