December 11, 2017 Jeffrey Burt
Burst buffers are carving out a significant space for themselves in the HPC arena as a way to improve data checkpointing and application performance at a time when traditional storage technologies are struggling to keep up with increasingly large and complex workloads, from traditional simulation and modeling to newer tasks such as data analytics.
The fear has been that storage technologies such as parallel file systems could become the bottleneck that limits performance, and burst buffers have been designed to manage peak I/O situations so that organizations aren’t forced to scale their storage environments to be able to support …Read more
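To make the pattern concrete, here is a minimal, hypothetical sketch of what a burst buffer does: the application writes checkpoints to a fast intermediate tier while a background thread drains them to the slower parallel file system, so the compute loop waits only on the fast tier. The tier latencies and function names below are invented for illustration and do not correspond to any vendor's burst buffer API.

```python
import queue
import threading
import time

FAST_TIER_WRITE_S = 0.01   # assumed latency of a flash/NVM burst buffer write
SLOW_TIER_WRITE_S = 0.10   # assumed latency of a parallel file system write

buffered_checkpoints = queue.Queue()

def drain_to_parallel_fs():
    """Background thread: trickle buffered checkpoints to the slow tier."""
    while True:
        chunk = buffered_checkpoints.get()
        if chunk is None:              # sentinel: application is done
            break
        time.sleep(SLOW_TIER_WRITE_S)  # stand-in for the slow write

def checkpoint(data):
    """The application blocks only for the fast-tier write."""
    time.sleep(FAST_TIER_WRITE_S)      # stand-in for the fast write
    buffered_checkpoints.put(data)

drainer = threading.Thread(target=drain_to_parallel_fs)
drainer.start()

start = time.time()
for step in range(10):
    checkpoint(b"application state")   # peak I/O is absorbed by the buffer
print(f"app-visible checkpoint time: {time.time() - start:.2f}s "
      f"(vs ~{10 * SLOW_TIER_WRITE_S:.2f}s writing straight to the slow tier)")

buffered_checkpoints.put(None)         # signal the drainer to finish
drainer.join()
```

In this toy run the application sees roughly a tenth of the checkpoint stall it would suffer writing straight to the slow tier, which is the whole point: the expensive parallel file system no longer has to be sized for peak I/O.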
December 11, 2017 Jeffrey Burt
In his keynote at the recent AWS re:Invent conference, Amazon vice president and chief technology officer Werner Vogels said that the cloud had created an “egalitarian” computing environment where everyone has access to the same compute, storage, and analytics, and that the real differentiator for enterprises will be the data they generate and, more importantly, the value they derive from that data.
For Rob Thomas, general manager of IBM Analytics, data is the focus. The company is putting considerable muscle behind data analytics, machine learning, and what it calls more generally cognitive computing, much of it based …Read more
December 7, 2017 Jeffrey Burt
Object storage may not have been born in the cloud, but it was the major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform that have been its biggest drivers.
The idea of object storage wasn’t new; it had been around for about two decades. But as the cloud service providers began building out their datacenters and platforms more than a decade ago, they were faced with the need to find a storage architecture that could scale to meet the demands brought on by the massive amounts of data being created, as well as the …Read more
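The core idea is simple enough to sketch: instead of a hierarchical file tree, an object store keeps a flat namespace of keys mapping to immutable blobs plus user metadata, which is what lets it scale by sharding keys across machines. The toy class below is invented for illustration – real object stores such as Amazon S3 expose a similar put/get/list interface over HTTP rather than as an in-process library.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    data: bytes
    metadata: dict = field(default_factory=dict)
    etag: str = ""

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # flat key namespace: no directories, no inodes

    def put(self, key, data, **metadata):
        etag = hashlib.md5(data).hexdigest()  # content hash, like an S3 ETag
        self._objects[key] = StoredObject(data, dict(metadata), etag)
        return etag

    def get(self, key):
        return self._objects[key]

    def list(self, prefix=""):
        # "Directories" are just key prefixes; there is no real hierarchy.
        return [k for k in self._objects if k.startswith(prefix)]

store = ToyObjectStore()
store.put("logs/2017/12/app.log", b"...", content_type="text/plain")
store.put("logs/2017/11/app.log", b"...")
print(store.list(prefix="logs/2017/"))         # prefix listing, not a tree walk
print(store.get("logs/2017/12/app.log").etag)  # integrity via content hash
```

Because every operation is addressed by a full key rather than a path walked component by component, the namespace can be partitioned across arbitrarily many servers, which is why the model suited the cloud providers' scale.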
December 6, 2017 Jeffrey Burt
VMware jumped into the burgeoning software-defined networking (SDN) field in a big way four years ago when it bought startup Nicira for $1.26 billion, a deal that led to the launch of VMware’s NSX offering a year later. NSX put the company on a collision course with other networking vendors, particularly Cisco Systems, all of whom were trying to plot their strategies to deal with the rapid changes in what had been a relatively staid part of the industry.
Many of these vendors had made their billions over the years selling expensive appliance-style boxes filled with proprietary technologies, and now faced …Read more
December 4, 2017 Jeffrey Burt
In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not have the money to bring into their on-premises environments. Given new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.
Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub – …Read more
December 1, 2017 Jeffrey Burt
Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.
Like converged infrastructure, hyperconverged offerings are modular in nature, converging compute, storage, networking, virtualization, and management software into a tightly integrated single solution that drives greater datacenter density, smaller footprints, rapid deployment, and lower costs. They are pre-built and pre-validated before shipping from the factory, sparing the user the time-consuming integration work. Hyperconverged merges the compute and storage into a single unit, and …Read more
November 28, 2017 Jeffrey Burt
Building the first exascale systems continues to be a high-profile endeavor, with efforts underway in the United States, the European Union, and Asia – notably China and Japan – that focus on competition between regional powers, the technologies going into the architectures, and the promise these supercomputers hold for everything from research and government to business and commerce.
The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems, Japan is moving forward with Fujitsu’s Post-K system that will use processors based on the Arm architecture rather than the …Read more
November 27, 2017 Jeffrey Burt
NVM-Express isn’t new. Development on the interface, which provides lean and mean access to non-volatile memory, first came to light a decade ago, with technical work starting two years later through a work group that comprised more than 90 tech vendors. The first NVM-Express specification came out in 2011, and now the technology is going mainstream.
How quickly and pervasively remains to be seen. NVM-Express promises significant boosts in performance to SSDs while driving down the latency, which would be a boon to HPC organizations and the wider world of enterprises as prices for SSDs continue to fall and adoption …Read more
November 24, 2017 Jeffrey Burt
Each year at the ISC and SC supercomputing shows, a central focus tends to be the release of the Top500 list of the world’s most powerful supercomputers. As we’ve noted in The Next Platform, the 25-year-old list may have its issues, but it still captures the imagination, with lineups of ever-more-powerful systems that reflect the trend toward heterogeneity and accelerators and illustrate the growing competition between the United States and China for dominance in the HPC field, the continued strength of Japan’s supercomputing industry, and the desire of European Union countries to …Read more