NSF Comes to SC24 With Money Map, AI Blueprint

You’d be forgiven for thinking the annual Supercomputing Conference (SC24) was a global AI-specific event this year. From the show floor to a sizable portion of the sessions, much of the discussion centered on how, where, and to what extent AI will reshape good old-fashioned, 64-bit HPC.

In addition to the technical program, some of the leadership-focused talks also emphasized AI with only the merest nod to traditional HPC.

This included a walkthrough of the National Science Foundation’s (NSF) approach to funding new efforts, as detailed by Tess DeBlanc-Knowles, Special Assistant to the Director for Artificial Intelligence at NSF and a former advisor to the White House Office of Science and Technology Policy (OSTP), where she co-chaired the National AI Research Resource Task Force.

Her address at SC24 highlighted the scope and structure of NSF’s AI investments, emphasizing their central role in fostering what she described as a secure, inclusive, and innovative AI ecosystem in the US. Traditional supercomputing was not on the menu.

Of course, we do have to consider the source: DeBlanc-Knowles is currently charged with implementing the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, working across NSF’s eight directorates to align AI research with national priorities. On that note, she outlined the agency’s current allocation of around $750 million annually for AI R&D in the U.S., covering foundational research, infrastructure, workforce development, and “translational technologies”. Her portfolio doesn’t reflect NSF’s many other well-funded academic supercomputing programs, but the AI investments do speak volumes about where the focus is headed.

NSF’s AI R&D funding is stretched far and wide in terms of technical priorities. DeBlanc-Knowles told the crowd that over 500 new projects are set to be funded yearly, tackling everything from combining statistical and logical AI systems to improving the cost efficiency and reasoning capabilities of LLMs, along with privacy and security.

Specific programs, such as the Responsible Design, Development, and Deployment initiative, embody NSF’s emphasis on embedding ethical and societal considerations into AI research and applications. As an example of how funding is allocated, this program, launched in 2022, has already provided $18 million to 44 projects targeting sectors like emergency response and transportation, she told the crowd at SC24.

The National AI Research Institutes, one of NSF’s flagship AI programs, has received over $500 million in funding since its inception in 2019. The 27 institutes in this network apply AI in areas including agriculture, weather forecasting, public health, and education, with an eye on keeping them hooked into the goals of partner agencies. The Department of Agriculture, for example, collaborates on efforts focused on sustainability and agricultural resilience, while the Department of Education is prioritizing AI-driven educational tools.

DeBlanc-Knowles also detailed the Regional Innovation Engines program, an effort to seed technology hubs outside traditional centers like Silicon Valley with regionally appropriate aims. In 2023, NSF funded 10 such engines across 18 states, each eligible for up to $150 million over a decade. Seven of them focus on AI-driven techniques. She pointed to the Great Lakes Water Engine’s intelligent water resource management systems and North Dakota’s agricultural engine, which leverages AI for crop genetics and climate modeling, as examples of region-specific efforts.

To the delight of the hardware-focused HPC set, she also emphasized that infrastructure development is another critical focus, noting that with the rising cost of computation and the growing divide between private-sector and academic researchers, this is an area where NSF can kick in.

On the computational resources front, DeBlanc-Knowles cited initiatives like the National AI Research Resource (NAIRR) pilot. Established as a consortium of 12 federal agencies and 26 non-governmental partners, NAIRR provides researchers with access to computational resources, datasets, and pre-trained models, addressing disparities in access.

“We’re seeing a growing divide between private-sector researchers with access to large-scale computation and data and academic researchers who often don’t, particularly when it comes to large models,” she told the audience.

That divide will be an increasingly big deal for labs and universities left without spare funds to buy bigger, badder GPU clusters or to justify the power consumption for anything beyond what they’re already doing (a complaint we overheard at the show).

Those high-performance computing resources now serve 20,000 researchers annually, she said. Additional initiatives include the creation of AI-ready testbeds for experimenting with AI applications in real-world scenarios like urban transportation and power grid management.

International collaboration is a growing dimension of NSF’s strategy, she added, reflecting the global nature of AI challenges and opportunities. DeBlanc-Knowles highlighted the NSF’s partnerships with Australia, Israel, and India for joint research on topics ranging from drought resilience to digital twins for urban infrastructure.

Supplementary funding for AI Institutes has expanded collaborations with countries like South Korea, Belgium, and New Zealand, addressing shared interests in areas such as severe weather modeling and eldercare technologies. The recently released Global AI Research Agenda, developed in coordination with the State Department, outlines U.S. priorities for fostering international partnerships in fundamental AI research and risk management.

NSF’s efforts are aligned with broader federal strategies, including the Biden administration’s Executive Order on AI. This order, the longest in U.S. history, outlines over 100 actions to build AI prowess while keeping an eye on risks. Complementary guidance from the Office of Management and Budget and the National Security Memorandum on AI further emphasizes responsible AI adoption across civilian and defense sectors, she added.

“From foundational research to applied and translational work, we’re committed to building the infrastructure, talent, and trustworthiness needed to ensure U.S. leadership in AI,” she said.

While there was little in the way of HPC as we know it, buckle up: this is the new priority.


1 Comment

  1. Interesting report! I think that it is important for the US government (and others), here through NSF, to indeed direct some resources towards AI R&D, seeing how it is this emerging tech with potential for both serious good (it seems) and serious evil, much like quantum computing. The potentials are likely strategic in that, if they indeed develop into some sort of breakthrough tech, and our competitors or adversaries get there first, then we could end up being “sitting ducks” in, or just plain sitting out, the corresponding “revolution” (if any). It’s important to be proactive in tackling this Rumsfeldian known unknown in my view.

    As it is emerging, it is probably a lot easier to make advances in AI at this stage than it is in fields that have been studied thoroughly for a longer time, like classical HPC. However, the unique relationship that HPC has to actual reality remains second to none, and it should continue to dominate positive contributions to humanity in the long run. Interestingly, AI has contributed indirectly to HPC through the broad availability of reduced-precision hardware, on which mixed-precision methods (MxP), aimed at being fast, energy efficient, and accurate (plus stable and convergent), could themselves be developed, tested, and tuned. I’m not sure, however, if HPCG-MxP (for iterative methods), or some form of multi-frontal MxP direct method, already exist (or are even feasible) … if so, they should be quite useful in helping us reach the Zettascale, which we need (at least) for accurate earth-scale weather modeling, especially in these times of changing climate.
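    For readers unfamiliar with the MxP idea the commenter refers to, the common pattern (as in the HPL-MxP benchmark) is mixed-precision iterative refinement: the expensive solve runs in low precision, and a cheap high-precision residual loop restores accuracy. The sketch below is a minimal, hypothetical illustration in NumPy, not code from any actual benchmark; the function name, iteration count, and toy matrix are made up for the example.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Illustrative mixed-precision iterative refinement (hypothetical sketch):
    solve in float32, then refine the answer using float64 residuals."""
    # Low-precision solve: cast the system down to float32.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

    # Refinement loop: compute the residual in float64, correct x with a
    # cheap float32 solve of the residual system.
    for _ in range(iters):
        r = b - A @ x
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

# Quick sanity check on a well-conditioned random system.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500 * np.eye(500)
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual
```

    A production version would factor the low-precision matrix once (LU) and reuse the factors in the refinement loop rather than re-solving each time; the point here is only to show where the low- and high-precision arithmetic sit.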
