
Dirt Simple HPC: Making the Case for Julia
Choosing a programming language for HPC used to be an easy task. …
If the last year of stories here from research labs at the forefront of deep learning hasn’t made it clear, the accelerator of choice for training the models that will feed the next generation of speech and image recognition (not to mention a wealth of other application areas) is certainly the GPU. …
The high end of the computing industry has always captivated us, and we remain fascinated by the forces at work in the upper echelons of the world’s datacenters, and by the hardware and software created to run the largest and most complex workloads found there. …
While many consumers in the U.S. might not have heard much about Baidu, among engineers and computer scientists the Chinese company is on par with Google, Facebook, and their ilk when it comes to massively scaled distributed computing. …
There is a simple test to figure out just how seriously social network Facebook is taking machine learning, and it has nothing to do with research papers or counting cat pictures automagically with neural networks. …
For Google, Baidu, and a handful of other hyperscale companies that have been working with deep neural networks and advanced applications for machine learning well ahead of the rest of the world, building clusters for both the training and inference portions of such workloads is kept, for the most part, a well-guarded secret. …
Having the best compute engine – meaning the highest performance at a sustainable price/performance – is not enough to guarantee that it will be adopted in HPC, hyperscale, or enterprise settings. …
When it comes to traditional supercomputing, the tools, frameworks, and software stacks tend to be codified, especially within the various domains that use high performance computing. …
The future of hyperscale datacenter workloads is becoming clearer, and as that picture emerges, one thing stands out: these workloads are heavily driven by a wealth of non-text content—much of it streamed in for processing and analysis from an ever-growing number of users of gaming, social network, and other web-based services. …
The progression in performance per watt for Nvidia’s Tesla line of GPU coprocessors is continuing apace now that the graphics chip maker is delivering two shiny new devices based on its “Maxwell” generation of chips. …
All Content Copyright The Next Platform