A wide-ranging program for you today, with everything from neuromorphic hardware and software research to some impressive FPGA acceleration for Caffe from Samsung AI Research; why the datacenter industry is booming (the answers might surprise you); the state of Lustre and OpenSFS; and where some unique opportunities lie for HPC on the pandemic modeling front.
Rich Miller lives and breathes datacenters, and business is booming, even within (and possibly because of) the current pandemic. Fresh infrastructure investments are coming in from those who normally put funds into airports, toll roads, and real estate, and demand from gaming, streaming, and other services is also pushing this ahead. How does the investment play out over the years, and how is it different now? This and more in the in-depth with Rich.
On today’s program we also talk to Dr. Stylianos Venieris, a researcher at Samsung AI who is focused on using FPGAs for AI training, specifically around the Caffe framework. We talk about the challenges and opportunities for large-scale training with mixed precision, and how various optimizations, including a framework his team developed called Barista, can make FPGAs more suitable than general-purpose accelerators, and in some cases even custom ASICs, for the AI training workload.
Dr. Kathy Yelick has been at the forefront of supercomputing over the course of her long career, and now, from her vantage point as Professor of EECS at Berkeley, is watching how HPC is retooling to meet the needs of researchers targeting COVID-19. What’s interesting here is that “traditional” large-scale modeling and simulation isn’t quite as critical (at least at the moment) as some of the data-intensive computing techniques that have been honed over the last few years, especially for advanced epidemiological work. We talk about this and more during our segment.
Dr. Katie Schuman is well known for her work on novel architectures at Oak Ridge National Lab. From quantum computing to other emerging technologies, she’s an expert (particularly on the programming side) at finding ways to use these new approaches to computing alongside traditional supercomputers. In this discussion we talk about neuromorphic computing, mostly around the Intel Loihi chip and the work her team at Oak Ridge has done to make it more functional and ready for real-world applications.
We end the program with a discussion about Lustre, and specifically the OpenSFS effort, with Dr. Steven Simms. We talk about the past 21 years of the Lustre file system and, with OpenSFS now ten years old, where the organization will go in its second decade.
2:30 – Datacenter boom with Rich Miller (Datacenter Frontier)
14:38 – Accelerating Caffe with FPGAs (Samsung Research)
22:28 – Neuromorphic Architecture/Software Trends (Oak Ridge National Lab)
31:30 – A Decade of OpenSFS and Two of Lustre (OpenSFS)
39:25 – HPC Role in Covid-19 Analysis, Modeling (Berkeley)