February 22, 2018 Jeffrey Burt
The demands for more compute resources, power and density in HPC environments are fueling the need for innovative ways to cool datacenters that are churning through petabyte levels of data to run modern simulation workloads that touch on everything from healthcare and climate change to space exploration and oil and gas initiatives.
The top cooling technologies for most datacenters are air and chilled water. However, Lenovo is promoting its latest warm-water cooling system for HPC clusters with its ThinkSystem SD650 systems, which the company says will lower datacenter power consumption by 30 to 40 percent compared with more traditional cooling …Read more
February 21, 2018 Jeffrey Burt
Much of the focus of the recent high-profile budget battle in Washington – and for that matter, many of the financial debates over the past few decades – has been around how much money should go to the military and how much to domestic programs like Social Security and Medicare.
In the bipartisan deal struck earlier this month, both sides saw funding increase over the next two years, with the military seeing its budget jump $160 billion. Congressional Republicans boasted of a critical win for the Department of Defense (DoD) that will result in more soldiers, better weapons, and improved …Read more
February 20, 2018 Jeffrey Burt
IBM’s systems hardware business finished 2017 in a stronger position than it has seen in years, due in large part to the continued growth of the company’s stalwart System z mainframes and Power platform. As we at The Next Platform noted, the last three months of last year were also the first full quarter of shipments of IBM’s new System z14 mainframes, while the first nodes of the “Summit” supercomputer at Oak Ridge National Laboratory and the “Sierra” system at Lawrence Livermore National Laboratory began to ship.
Not to be overlooked was the strong performance of IBM’s storage …Read more
February 15, 2018 Jeffrey Burt
For several years, work has been underway to develop a standard interconnect that can address the increasing speeds in servers driven by the growing use of accelerators such as GPUs and field-programmable gate arrays (FPGAs), the pressures put on memory by the massive amounts of data being generated, and the bottleneck between the CPUs and the memory.
Any time the IT industry wants a standard, you can always expect at least two, and this time around is no different. Today there is a cornucopia of emerging interconnects, some of them overlapping in purpose, some working side by side, to break …Read more
February 14, 2018 Jeffrey Burt
The field of competitors looking to bring exascale-capable computers to the market is a somewhat crowded one, but the United States and China continue to be the ones that most eyes are on.
It’s a clash of an established global superpower and another one on the rise, one that envelops a struggle for economic, commercial and military advantages and a healthy dose of national pride. And because of these two countries, the future of exascale computing – which to this point has largely been about discussion, theory and promise – will come into sharper …Read more
February 13, 2018 Jeffrey Burt
Neural networks live on data and rely on computational firepower to help them take in that data, train on it and learn from it. The challenge increasingly is ensuring there is enough computational power to keep up with the massive amounts of data being generated today and the rising demands from modern neural networks for speed and accuracy in consuming the data and training on datasets that continue to grow in size.
These challenges can be seen playing out in the fast-growing autonomous vehicle market, where pure-play companies like Waymo – born from Google’s self-driving car initiative – …Read more
February 12, 2018 Jeffrey Burt
Google laid down its path forward in the machine learning and cloud computing arenas when it first unveiled plans for its tensor processing unit (TPU), an accelerator designed by the hyperscaler to speed up machine learning workloads that are programmed using its TensorFlow framework.
Almost a year ago, at its Google I/O event, the company rolled out the architectural details of its second-generation TPUs – also called the Cloud TPU – for both neural network training and inference, with the custom ASICs providing up to 180 teraflops of floating point performance and 64 GB of High Bandwidth Memory. …Read more
February 8, 2018 Jeffrey Burt
Cloud datacenters in many ways are like melting pots of technologies. The massive facilities hold a broad array of servers, storage systems, and networking hardware that come in a variety of sizes. Their components come with different speeds, capacities, bandwidths, power consumption, and pricing, and they are powered by different processor architectures, optimized for disparate applications, and carry the logos of a broad array of hardware vendors, from the largest OEMs to the smaller ODMs. Some hardware systems are homegrown or built atop open designs.
As such, they are good places to compare and contrast how the components of these …Read more
February 5, 2018 Jeffrey Burt
DARPA has always been about driving the development of emerging technologies for the benefit of both the military and the commercial world at large.
The Defense Advanced Research Projects Agency has been a driving force behind U.S. efforts around exascale computing and in recent years has targeted everything from robotics and cybersecurity to big data and implantable devices. The agency has doled out millions of dollars to vendors like Nvidia and Rex Computing as well as national laboratories and universities to explore new CPU and GPU technologies for upcoming exascale-capable systems that hold the promise of 1,000 …Read more
February 1, 2018 Jeffrey Burt
There is increasing pressure in such fields as manufacturing, energy and transportation to adopt AI and machine learning to help improve efficiencies in operations, optimize workflows, enhance business decisions through analytics and reduce costs in logistics.
We have talked about how industries like telecommunications and transportation are looking at recurrent neural networks for helping to better forecast resource demand in supply chains. However, adopting AI and machine learning comes with its share of challenges. Companies whose datacenters are crowded with traditional systems powered by CPUs now have to consider buying and bringing in GPU-based hardware that is better suited to …Read more