SC16 for HPC Programmers: What to Watch

An event as large and diverse as the annual Supercomputing Conference (SC16) presents a daunting array of content, even for those who specialize in a particular area within the wider HPC spectrum. For HPC programmers, there are many sub-tracks to follow, depending on where in the stack one sits.

The conference program includes a “Programming Systems” label for easily finding additional relevant sessions, but we wanted to highlight a few of them here based on their larger significance to the overall HPC programming ecosystem.

HPC programmers often face considerations in their work that programmers in other fields do not. For example, nothing ruins a good cluster like a thermal event, so power-awareness is important at the many-thousands-of-cores scale. These three sessions offer insight into challenges unique to HPC programmers.
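To make the power-awareness point concrete, here is a minimal sketch of how a Linux program can sample its own CPU package energy draw through the powercap (intel_rapl) sysfs interface. The path shown is the common default but varies by kernel and platform, so treat it as an assumption rather than a portable recipe.

    /* Minimal sketch: sampling CPU package energy via the Linux powercap
     * (intel_rapl) sysfs interface. The path below is a common default
     * but can vary by kernel and platform -- treat it as an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        unsigned long long energy_uj = 0;  /* cumulative energy, microjoules */
        if (fscanf(f, "%llu", &energy_uj) == 1)
            printf("package energy: %llu uJ\n", energy_uj);
        fclose(f);

        /* Sampling this counter before and after a region of interest,
         * then dividing the delta by elapsed time, yields average
         * package power. */
        return 0;
    }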

Tooling and languages

The HPC world is more than just application software; the tools and languages programmers use are as varied and complex as the applications themselves. These sessions cover a variety of topics related to getting the job done.

GPUs and accelerators

GPUs aren’t just the domain of specialized HPC centers and bitcoin miners anymore. Recent investments by Amazon Web Services and Microsoft Azure in GPU offerings presage mass-market adoption. Microsoft is also making a big play for FPGAs in machine learning. HPC programmers who haven’t yet had to write code for accelerators will probably have to soon. It’s no surprise that SC16 has a wealth of GPU and accelerator content.
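For programmers taking their first steps onto an accelerator, directive-based models offer a gentle on-ramp. The sketch below is illustrative only, assuming a compiler with OpenACC support (built with, for example, pgcc -acc): the directive asks the compiler to offload the marked loop to a GPU.

    /* Minimal sketch of accelerator offload in C with OpenACC, one of
     * the directive-based models represented at SC16. Illustrative, not
     * a tuned kernel. */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* The compiler generates a device kernel for this loop and
         * handles the host<->device transfers implied by copyin/copy. */
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
        return 0;
    }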

Real-world applications

The point of HPC programs isn’t just to crunch numbers faster than anyone else. HPC is all about getting better answers to questions big and small. Seeing how others use HPC for their work is inspiring. Here are a few sessions on HPC in the “real world”:

The future

Conferences aren’t just a snapshot of the state of the profession; they’re an opportunity to examine trends and take a peek into the future. In addition to the Emerging Technologies track, SC16 offers some sessions with an explicit focus on gazing into the crystal ball.

Programming environments, tools, and applications are the cornerstone upon which all future exascale efforts rest. After all, even immense advances in hardware mean little without tuned and optimized programs, compilers, and tools. Over the course of SC16 week, keep an eye open for our writers, and if you have a moment, stop to say hello and let them know what matters most to you from a programming perspective so we can tune our coverage of these issues over the course of 2017.


1 Comment

  1. An interesting sidelight of the Sunday tutorial by the TACC team is that they measured core-to-core MPI performance between cores on the KNL over Omni-Path. Not surprisingly, performance appears uniform (and very good) EXCEPT for two pairs of cores whose performance was well below the norm.
    This initially puzzled the team, and on investigating with help from Intel the reason became clear: by default, these pairs of cores are the ones on the chip handling the Omni-Path interrupts.
    Effectively this reduces the useful core count, at least for open-source MPI on the chip, to 68 cores.
    The finding highlights one of the most prevalent criticisms of Intel Omni-Path vis-à-vis Mellanox IB: that Omni-Path loads the processor unnecessarily.
    TACC will also measure MPI performance over Mellanox IB, and it is greatly desired that a commercial code vendor can step up with licenses that the TACC team can run against both fabrics (on behalf of TACC, my firm Integral Engineering has solicited ANSYS, with at least preliminary support).
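
For readers curious how such a per-core-pair measurement is typically made, the standard approach is an MPI ping-pong microbenchmark with the two ranks pinned to the cores under test. The following is a minimal sketch, not TACC's actual harness; the binding flags shown are Open MPI style and vary by implementation.

    /* Minimal MPI ping-pong latency sketch. Run with two ranks pinned
     * to the cores under test, e.g. (Open MPI):
     *   mpirun -np 2 --bind-to core ./pingpong
     * Launcher and binding flags vary by MPI implementation. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 10000
    #define MSG_BYTES 8

    int main(int argc, char **argv)
    {
        char buf[MSG_BYTES] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            /* Half the average round-trip time estimates one-way latency. */
            double usec = (MPI_Wtime() - t0) * 1e6 / (2.0 * ITERS);
            printf("avg one-way latency: %.2f usec\n", usec);
        }

        MPI_Finalize();
        return 0;
    }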
