It Takes a Lot of Supercomputing to Simulate Future Computing

The chip industry is quickly reaching the limits of traditional lithography in its effort to cram more transistors onto a piece of silicon at a pace consistent with Moore’s Law. Accordingly, new approaches, including extreme ultraviolet (EUV) light sources, are being developed. While this promises new capability for chipmakers, developing the technology that will enhance future computing is going to take a lot of supercomputing.

Dr. Fred Streitz and his teams at the HPC Innovation Center at Lawrence Livermore National Laboratory (LLNL) are working with the Dutch semiconductor equipment company ASML to push advances in lithography for next-generation chips. Even as a physicist, he says, what is required of extreme ultraviolet lithography is stunning, if not unbelievable. In essence, in order to keep miniaturizing and adding transistors at the 7nm level and below, the wavelength of the light has to shrink as well, which is no small feat.
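For a sense of scale, the standard Rayleigh resolution criterion from optics, CD = k1 × λ / NA, makes the arithmetic concrete. The minimal sketch below uses typical published k1 and NA values and the commonly cited 13.5nm EUV wavelength; these are illustrative assumptions, not figures from LLNL or ASML.

```python
# Minimal sketch of the Rayleigh resolution criterion: CD = k1 * wavelength / NA.
# The k1, NA, and wavelength values are typical published figures used here
# only for illustration; they are not numbers from the article.

def min_feature_nm(wavelength_nm: float, k1: float, na: float) -> float:
    """Smallest printable feature (critical dimension, in nm)."""
    return k1 * wavelength_nm / na

# 193nm deep-UV immersion lithography: NA ~ 1.35, aggressive k1 ~ 0.3
print(min_feature_nm(193.0, 0.3, 1.35))  # ~42.9nm: too coarse for 7nm-class features in one pass

# 13.5nm EUV on an early scanner: NA ~ 0.33, same k1
print(min_feature_nm(13.5, 0.3, 0.33))   # ~12.3nm: single-pass patterning becomes plausible
```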

“You can’t create something that is 12 or 14nm wide if you’re using light at a wavelength of 40nm, for instance. Right now, EUV sources are down in the 10-12nm range and under. The trick is creating light at that wavelength, but that is its own great challenge.” This type of problem is exactly what U.S.-based Cymer focuses on; the company makes the light sources used by ASML, which contracted for that technology before acquiring Cymer over a year ago. At that time, Streitz and his team were already working with Cymer, and now, with ASML, they are seeking to refine the process through complex multi-physics simulations running on unclassified machines at LLNL, including one of the last-standing IBM BlueGene machines, the “Vulcan” supercomputer, which, while nearing the end of its lifespan, still sees 90-95 percent utilization at the lab.

It might sound simple to change the wavelength of light for this purpose, at least until you understand what has to happen to create the light, and to do so at the ideal wavelength. The light is made by spitting out tiny droplets of molten tin and smacking those with a pre-pulse from a laser to flatten them into tiny pancakes. Along comes a much larger laser, which turns that flattened blob into a plasma pancake. That plasma then radiates light at the correct wavelength. Collector optics gather that light, focus it, and turn it into the beam required for lithography to pattern transistors onto a chip. Again, this is what Cymer did, now under the ASML banner. And it is not simple.
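A back-of-the-envelope sketch helps show why every stage of that chain matters. All numbers below are assumed, order-of-magnitude placeholders rather than Cymer or ASML specifications; the point is the shape of the calculation: repetition rate times pulse energy times a chain of conversion and collection losses.

```python
# Back-of-the-envelope model of the droplet-to-light pipeline described above.
# Every value is an assumed, order-of-magnitude placeholder, not a Cymer or
# ASML specification.

drop_rate_hz   = 50_000   # molten-tin droplets per second (assumed kilohertz-scale rate)
main_pulse_j   = 0.5      # main laser pulse energy per droplet, in joules (assumed)
conversion_eff = 0.05     # fraction of laser energy re-radiated as in-band EUV (assumed)
collection_eff = 0.25     # fraction of that light the collector optics capture (assumed)

euv_per_drop_j = main_pulse_j * conversion_eff * collection_eff
euv_power_w    = euv_per_drop_j * drop_rate_hz

print(f"In-band EUV per droplet: {euv_per_drop_j * 1e3:.2f} mJ")  # 6.25 mJ
print(f"Average usable EUV power: {euv_power_w:.0f} W")           # ~312 W

# Losses at any stage multiply through the whole chain, which is why the
# efficiency of each step dominates the engineering.
```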

“Smacking small bits of molten metal with lasers, understanding the physics of creating a plasma in the generation of light, and doing it efficiently is hard. They are spitting out micron-sized drops at kilohertz rates, getting smacked with the laser, blown into plasmas, and then trying to get the light back out, over and over again. There are a lot of ways this can go wrong or not be done efficiently,” Streitz tells The Next Platform. “In terms of doing this efficiently, that light has to be as bright as possible to make lithography efficient. Looking downstream, if you’re a company building a chip, you want bright light so you’re only making a single pass. Time is money, and the brighter the light, the fewer passes.”
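To see how brightness maps to money, consider a rough throughput estimate. The dose, power, and transmission figures below are illustrative assumptions for the sketch, not ASML numbers.

```python
# Rough throughput estimate: exposure time per wafer = dose * area / power.
# All figures are assumptions for the sketch, not ASML numbers.

import math

source_power_w   = 250.0                   # usable EUV source power, watts (assumed)
power_at_wafer_w = source_power_w * 0.01   # assumed transmission of the optical train
resist_dose_j_m2 = 20.0 * 10.0             # 20 mJ/cm^2 resist dose, converted to J/m^2
wafer_area_m2    = math.pi * 0.150 ** 2    # 300mm-diameter wafer

exposure_s = resist_dose_j_m2 * wafer_area_m2 / power_at_wafer_w
print(f"Raw exposure time per wafer: {exposure_s:.1f} s")  # ~5.7 s with these assumptions
```

Halve the usable power and the exposure time doubles; require a second patterning pass and the cost doubles again, which is exactly the economics Streitz describes.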

Modeling this process to meet the real-world needs of science and industry is what supercomputers are made to do, and Streitz’s group at the HPC Innovation Center jumped at the chance. After all, for a weapons laboratory that handles both classified and unclassified workloads, simulating the interaction of plasmas with materials is nothing new, and neither are complex multi-physics codes. “This was all quite a bit more complex than the folks at Cymer or ASML realized. Our mission here at the lab overall is certifying the reliability and safety of our stockpile, which means dealing with complex physics. This wasn’t crazy for us to do and has some similarities to things we do for the National Ignition Facility,” he explains. Ultimately, research and simulation work like this is of incredible value to competitiveness, and it takes large-scale computing resources and expertise to do it. For Cymer in particular, this sort of research is what has powered its business for decades, but the company hit the limits of what it could solve internally on the problems associated with EUV, which pushed it to look to outside experts.

What is interesting here is that this is a case of U.S.-funded supercomputers working for private industry. This is more common in Europe and in many Asian countries, particularly China, where the dividing line between public and private funds isn’t always clear-cut. There are programs in the United States that match supercomputing sites with private industry (for example, the INCITE program at Oak Ridge National Lab) to help advance certain areas, but there is an extensive vetting process. In short, industry leaders need to approach those programs with a problem that might benefit their work, but they have to wrap it in the cloak of solving a wider scientific problem. Streitz stresses that this is a beneficial model and program, but for some areas, specifically the plasma interactions his group studied to assist Cymer and ASML, they simply solve problems; they are not committed to publishing the results publicly as science. In short, Cymer came to the lab with a problem, the HPC Innovation Center had the expertise, and the company simply paid for the resources it needed. This is an alternate model, and one Streitz hopes will catch on.

As head of the HPC Innovation Center, Streitz says he has had to look at how other countries navigate the waters of public/private partnership. For the lab in particular, which is a weapons facility and therefore under tight lock and key, especially for outsiders from companies and foreign nations, having a center that is not under such strict guard, where companies can bring real-world problems, is of great value. Finding the expertise to understand and model mission-critical industry problems on world-class supercomputers is not easy. Streitz’s group is working to change that, and without requiring companies to recast their problem as a science problem with published results. “This is the benefit of being a lab but also a not-for-profit organization,” he details.

While even these advancements might not be enough to rescue Moore’s Law in the long term, seeing cooperation between private industry and one of the few places that can model such problems at scale is important. At a time when crunched budgets are the norm, it also emphasizes the role of supercomputing in the future of computing—a future we all depend on in nearly every aspect of our lives.
