HPC

Future Supercomputers Grow Out of File Systems, Into DAOS

In our coverage of the string of next-generation HPC systems, we have talked about the big changes on the programming, memory, and network horizons, but there is one potentially disruptive change on the way for storage: one that could tear down existing paradigms, including the time-tested parallel file system.

Uncategorized

Memory Technology Flashes Good, Bad, and Ugly

System reliability for large machines, including the coming cadre of pre-exascale supercomputers due to start coming online in 2016, is becoming an increasingly important talking point: as systems pack in ever more densely packed components, the frequency and compounded severity of errors and faults will grow in lockstep with system size.

AI

Hortonworks Keeps Time With Hadoop’s Cloud March

Over the last eighteen months, Hadoop distribution vendor Hortonworks has watched a stampede of users rush to the cloud, prompting the company to look for better ways to extend usability for first-time entrants to Hadoop territory and to accommodate the rush of test and development workloads that favor quick cloud deployments.

Compute

Details Emerge on Knights Hill-Based Aurora Supercomputer

In the story we broke this morning about the forthcoming "Aurora" supercomputer set to be installed at Argonne National Laboratory, one of three pre-exascale machines in the CORAL procurement spanning three national labs, we speculated that, unlike the other two machines, which will take an OpenPower approach (Power9 CPUs, Volta GPUs, and a new interconnect), this system would be built on the third generation of Intel's Knights family of chips, the Knights Hill processors.