Emerging “Universal” FPGA, GPU Platform for Deep Learning
In the last couple of years, we have written and heard about the usefulness of GPUs for deep learning training as well as, to a lesser extent, custom ASICs and FPGAs. …
As we have written about extensively here at The Next Platform, there is no shortage of use cases in deep learning and machine learning where HPC hardware and software approaches have bled over to power next generation applications in image, speech, video, and other classification and learning tasks. …
Intel has finally opened the first public discussions of its investment in the future of machine learning and deep learning. While some might argue it is a bit late in the game, with its rivals dominating the training market for such workloads, the company had to wait for the official rollout of Knights Landing and extensions to the scalable system framework to make it official—and meaty enough to capture real share from the few players doing deep learning at scale. …
This week at the International Supercomputing Conference (ISC ’16) we are expecting a wave of vendors and high performance computing pros to blur the borders between traditional supercomputing and what is around the corner on the application front—artificial intelligence and machine learning. …
The datacenter is going through tremendous change, and many long-held assumptions are now being called into question. …
For those in enterprise circles who still conjure black and white images of hulking supercomputers when they hear the name “Cray,” it is worth noting that the long-standing company has done a rather successful job of shifting a critical side of its business to graph analytics and large-scale data processing. …
Training a machine learning algorithm to accurately solve complex problems requires large amounts of data. …
Over the last year, stories pointing to a bright future for deep neural networks and deep learning in general have proliferated. …
Over the last year, we have focused on the role burst buffer technology might play in bolstering the I/O capabilities on some of the world’s largest machines and have focused on use cases ranging from the initial target to more application-centric goals. …
Over the past few years, IBM has been devoting a great deal of corporate energy to developing Watson, the company’s Jeopardy-beating supercomputing platform. …
All Content Copyright The Next Platform