Co-founder and co-editor Nicole Hemsoth brings insight from the world of high performance computing hardware and software as well as data-intensive systems and frameworks. Hemsoth is former Editor in Chief of the long-standing supercomputing magazine HPCwire. She was founding editor and conceptual creator of the data-intensive computing magazine Datanami, as well as the conceptual creator and founding Senior Editor for the large-scale-infrastructure-focused EnterpriseTech.
February 12, 2018
The Lustre file system has been the canonical choice for the world’s largest supercomputers, but for the rest of the high performance computing user base, it is moving beyond reach without the support and guidance it once had from its many backers, including, most recently, Intel, which dropped Lustre from its development ranks in mid-2017.
While Lustre users have seen the support story fall to pieces before, for many HPC shops the need is greater than ever to look toward a fully supported scalable parallel file system that snaps well into easy-to-manage appliances. Some of these commercial HPC sites …Read more
February 9, 2018
In this episode of The Interview from The Next Platform, we talk with Andrew Jones from independent high performance computing consulting firm N.A.G. about processor and system acquisition trends in HPC for users from the smaller commercial end of the spectrum up through the large research centers.
In the course of the conversation, we cover how acquisition trends are being affected by machine learning entering the HPC workflow in the coming years, the differences over time between commercial HPC and academic supercomputing, and some of the issues around processor choices for both markets.
Given his experiences talking to end users …Read more
February 7, 2018 Nicole Hemsoth
While it is possible to reap at least some benefits from persistent memory as-is, for those who are performance-focused, the work to establish an edge is getting underway now, with many of the OS and larger ecosystem players working together on new standards for existing codes.
Before we talk about some of the efforts to bring easier programming for persistent memory closer, it is useful to level-set about what it is, isn’t, how it works, and who will benefit in the near term. The most common point of confusion is that persistent memory is not necessarily about hardware, a fact …Read more
February 6, 2018 Nicole Hemsoth
From DRAM to NUMA to memory that is non-volatile, stacked, remote, or even phase change, the coming years will bring big changes for code developers on the world’s largest parallel supercomputers.
While these memory advancements can translate to major performance leaps, the code complexity these devices will create poses big challenges in terms of performance portability for legacy and newer codes alike.
While the programming side of the emerging memory story may not be as widely appealing as the hardware, the work that people like Ron Brightwell, R&D manager at Sandia National Lab and head of numerous exascale programming efforts, do to expose …Read more
February 5, 2018 Nicole Hemsoth
There has been much recent talk about the near future of code writing itself with the help of trained neural networks, but outside of some limited use cases, that reality is still quite some time away—at least for ordinary development efforts.
Although auto-code generation is not a new concept, it has been getting fresh attention due to better capabilities and ease of use in neural network frameworks. But just as in other areas where AI is touted as being the near-term automation savior, the hype does not match the technological complexity needed to make it a reality. Well, at least not …Read more
February 2, 2018 Nicole Hemsoth
If you thought the up-front costs and risks were high for a silicon startup, consider the economics of building a full-stack quantum computing company from the ground up—and at a time when the applications are described in terms of their potential and the algorithms are still in primitive stages.
Quantum computing company D-Wave managed to bootstrap its annealing-based approach and secure early big-name customers with a total of $200 million raised over the years, but as we have seen with a range of use cases, it has been able to put at least some funds back in investors’ pockets with system sales …Read more
February 2, 2018 Nicole Hemsoth
Just before the large-scale GPU accelerated Titan supercomputer came online in 2012, the first use cases of the OpenACC parallel programming model showed efficient, high performance interfacing with GPUs on big HPC systems.
At the time, OpenACC and CUDA were the only higher-level tools for the job. However, OpenMP, which has had twenty-plus years to develop roots in HPC, was starting to see the opportunities for GPUs in HPC at about the same time OpenACC was forming. As legend has it, OpenACC itself was developed based on early GPU work done in an OpenMP accelerator subcommittee, generating some bad …Read more
January 31, 2018 Nicole Hemsoth
Researcher Olexandr Isayev wasn’t just impressed to see an AI framework best the top player of a game so complex it was considered impossible for an algorithm to track. He was inspired.
“The analogy of the complexity of chemistry, the number of possible molecules we don’t know about, is roughly the same order of complexity as Go,” the University of North Carolina computational biology and chemistry expert explained.
“Instead of playing with checkers on a board, we envisioned a neural network that could play the game of generating molecules—one that did not rely on human intuition for this initial but …Read more
January 31, 2018 Nicole Hemsoth
Even though graph analytics has not disappeared, especially in the select areas where it is the only efficient way to handle large-scale pattern matching and analysis, the attention has been largely drowned out by the new wave of machine learning and deep learning applications.
Before this newest hype cycle displaced its “big data” predecessor, there was a small explosion of new hardware and software approaches to tackling graphs at scale—from system-level offerings from companies like Cray with their Eureka appliance (which is now available as software on its standard server platforms) to unique hardware startups (ThinCI, for example) and graph …Read more
January 29, 2018 Nicole Hemsoth
There has been a great deal of interest in deep learning chip startup Graphcore since we first got the limited technical details of the company’s first-generation chip last year, which was followed by revelations about how its custom software stack can run a range of convolutional, recurrent, and generative adversarial neural network jobs.
In our conversations with those currently using GPUs for large-scale training (often with separate CPU-only inference clusters), we have generally found that there is great interest in all new architectures for deep learning workloads. But what would really seal the deal is something that could both training …Read more