Co-founder and co-editor Nicole Hemsoth brings insight from the world of high performance computing hardware and software as well as data-intensive systems and frameworks. Hemsoth is the former Editor in Chief of the long-standing supercomputing magazine HPCwire. She was the founding editor and conceptual creator of the data-intensive computing magazine Datanami, as well as the conceptual creator and founding Senior Editor of the large-scale infrastructure-focused publication EnterpriseTech.
February 23, 2017 Matt Gillespie
One way to characterize the challenges of achieving exascale is to look at how advancing compute, memory/storage, software, and fabric will lead to a future-generation balanced system. Recently Al Gara of Intel, Jean-Philippe Nominé of the French Alternative Energies and Atomic Energy Commission (CEA), and Katherine Riley of Argonne National Lab were on a panel that weighed in on these and a host of other interrelated challenges.
Exascale will represent a watershed achievement in computer science. More than just a nice, round number (“exa-” denotes a billion billion, or 10^18), exascale computing is also supposed by the Human Brain Project and …Read more
February 22, 2017 Panos Labropoulos, PhD
During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.
The computer vision, speech recognition, natural language processing, and audio recognition applications being developed using DL techniques need large amounts of …Read more
February 22, 2017 Nicole Hemsoth
When it comes to solving deep learning cluster and software stack problems at scale, few companies are riding the bleeding edge like Chinese search giant Baidu. As we have detailed in the past, the company’s Silicon Valley AI Lab (SVAIL) has some unique hardware and framework implementations that put AI to the test at scale. Scalability of the models the lab specializes in (beginning with speech recognition) is proving to be one of the great challenges ahead on all fronts—hardware, compiler/runtime, and framework alike. …Read more
February 21, 2017 Nicole Hemsoth
Whether being built for capacity or capability, the conventional wisdom about memory provisioning on the world’s fastest systems is changing quickly. The rise of 3D memory has thrown a curveball into the field as HPC centers consider the specific tradeoffs between traditional, stacked, and hybrid combinations of both on next-generation supercomputers. In short, allocating memory on these machines is always tricky—with a new entrant like stacked memory in the design process, it is useful to gauge where 3D devices might fit.
While stacked memory is getting a great deal of airplay, for some HPC application areas, it might fall just …Read more
February 21, 2017 Nicole Hemsoth
Many oil and gas exploration shops have invested years of effort and many millions of dollars in homegrown codes, an investment that is critical internally (competitiveness, specialization, etc.) but leaves gaps in their ability to quickly exploit new architectures that could lead to better performance and efficiency.
That tradeoff between architectural agility and continuing to scale a complex, in-house code base is one that many companies with HPC weigh—and as one might imagine, oil and gas giant ExxonMobil is no different.
The company came to light last week with news that it scaled one of its mission-critical simulation codes on the …Read more
February 16, 2017 Nicole Hemsoth
Five years ago, many bleeding edge IT shops had either implemented a Hadoop cluster for production use or at least had a cluster set aside to explore the mysteries of MapReduce and the HDFS storage system.
While it is not clear all these years later how many ultra-scale production Hadoop deployments exist in earnest (something we are analyzing for a later in-depth piece), those same shops are likely at the forefront of efforts to exploit the next big thing in the datacenter—machine learning, or for the more intrepid, deep learning.
For those that were able to get large-scale Hadoop clusters into …Read more
February 15, 2017 Nicole Hemsoth
Despite the emphasis on x86 clusters, large public clouds, accelerators for commodity systems, and the rise of open source analytics tools, there is a very large base of transaction processing and analysis that happens far from this landscape. This is the mainframe, and these fully integrated, optimized systems account for a large majority of the enterprise world’s most critical data processing for the largest companies in banking, insurance, retail, transportation, healthcare, and beyond.
With great memory bandwidth, I/O, powerful cores, and robust security, mainframes are still the supreme choice for business-critical operations at many Global 1000 companies, even if the …Read more
February 11, 2017 Nicole Hemsoth
Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.
However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning story—an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily—and heavily—for IBM’s Watson-driven AI technologies). This is the first …Read more
February 8, 2017 Nicole Hemsoth
More than almost any other market or research segment, genomics is vastly outpacing Moore’s Law.
The continued march of new sequencing and other instruments has created a flood of data, and the development of the DNA analysis software stack has created a tsunami. For some, high performance genomic research can only move at the pace of innovation with custom hardware and software, co-designed and tuned for the task.
We have described efforts to build custom ASICs for sequence alignment, as well as using reprogrammable hardware for genomics research, but for centers that have defined workloads and are limited by performance constraints …Read more
February 7, 2017 Nicole Hemsoth
We have written much about large-scale deep learning implementations over the last couple of years, but one question being posed with increasing frequency is how these workloads (training in particular) will scale to many nodes. While several companies, Baidu among them, have managed to get their deep learning training clusters to scale across many GPU-laden nodes, for non-hyperscale companies with their own development teams this scalability is a sticking point.
The answer to deep learning framework scalability can be found in the world of supercomputing. For the many nodes required for large-scale jobs, the de facto …Read more
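To make the supercomputing connection concrete: one plausible reading of that de facto approach is MPI, the message passing standard that has long coordinated many-node jobs on supercomputers. Below is a minimal sketch of distributed gradient averaging with an MPI allreduce, assuming mpi4py and NumPy are available; the gradient array is a placeholder for what a real training framework would compute on each node, not a detail from the article.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Placeholder: each rank computes its own local gradients during training.
local_grad = np.random.rand(4).astype(np.float64)

# Allreduce sums the gradients across all ranks; dividing by the rank
# count leaves every node holding the identical averaged gradient.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print("averaged gradient:", global_grad)

Launched with, say, mpirun -np 4 python allreduce_sketch.py, every rank contributes its local gradient and receives the same average back, which is the communication pattern multi-node training schemes in the HPC mold rely on.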
Copyright © 2017 The Next Platform