One Step Closer to Easier Quantum Programming

For quantum computing to make the leap from theory and slim early use cases to broader adoption, a programmability jump is required. Some of the first hurdles have been cleared in the last few weeks with new compiler and API-based development efforts that abstract away some of the complex physics required for both annealing- and gate-based approaches to quantum devices.

The more public of the recent efforts was the open source release of OpenFermion, a library for compiling and analyzing quantum algorithms built on work at Google and quantum startup Rigetti Computing, focused on applications in quantum chemistry and materials science. OpenFermion is more theoretical and simulation-driven than practical at this point, since it is less focused on hardware than on framing the problems quantum chemistry and other areas will bring, but the Google and Rigetti teams argue it is a useful foundation for when quantum systems finally come online. As a side note, Rigetti has its own development environment, called Forest, for computing on its gate-based devices, which aims to bring high-level development tools and an open source base to a range of problems for hybrid (quantum and CPU) systems.

A less public but highly developed quantum programming project out of Oak Ridge National Lab’s Quantum Computing Institute takes similar steps, offering an open source framework for mapping quantum problems onto both D-Wave annealing machines and gate-based devices (from Rigetti and IBM, for example). The Extreme-Scale Accelerator Programming (XACC) model takes a route similar to CUDA for GPUs today by emphasizing an offload model for specific pieces of a problem. It is also much like the companion OpenCL model, which is valued as a hardware-agnostic approach to GPU acceleration.

Since there is no telling which device or quantum approach will win out over the next several years, the only smart approach is to build for future problems rather than for specific hardware, according to one of XACC’s leads, Alexander McCaskey. Considering the many aspects of both the quantum physics and the deep domain expertise in mapping problems to a device that must be abstracted, however, this is far easier said than done.

“The XACC framework provides a high-level API that enables applications to offload computational work represented as quantum kernels for execution on an attached quantum accelerator. This approach is agnostic to the quantum programming language and the quantum processor hardware, which enables quantum programs to be ported to multiple processors. The XACC model and its reference implementation may serve as a foundation for future HPC-ready applications, data structures, and libraries using conventional-quantum hybrid computing.”
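To make that offload model concrete, here is a minimal sketch of what a kernel offload can look like in C++. It is modeled on the publicly documented XACC API (xacc::Initialize, xacc::getAccelerator, xacc::getCompiler, xacc::qalloc), but the exact names and signatures have shifted across XACC releases and the "qpp" simulator backend is just one example target, so treat the specifics as illustrative rather than definitive.

```cpp
#include "xacc.hpp"

int main(int argc, char **argv) {
  // Bring up the XACC framework (plugin discovery, runtime services).
  xacc::Initialize(argc, argv);

  // Ask for an attached quantum accelerator by name. Swapping the string
  // (e.g. "ibm", "rigetti", or a local simulator such as "qpp") is all that
  // is needed to retarget the same program, which is the hardware-agnostic
  // point the paper makes.
  auto qpu = xacc::getAccelerator("qpp");

  // Express the computational work as a quantum kernel. Here a two-qubit
  // Bell-state circuit written in XACC's XASM dialect is compiled to the
  // framework's intermediate representation.
  auto compiler = xacc::getCompiler("xasm");
  auto ir = compiler->compile(R"(
    __qpu__ void bell(qbit q) {
      H(q[0]);
      CX(q[0], q[1]);
      Measure(q[0]);
      Measure(q[1]);
    })", qpu);
  auto bell = ir->getComposites()[0];

  // Allocate a buffer of qubits, offload the kernel for execution on the
  // accelerator, and read the measurement results back on the host.
  auto buffer = xacc::qalloc(2);
  qpu->execute(buffer, bell);
  buffer->print();

  xacc::Finalize();
  return 0;
}
```

The CUDA and OpenCL analogy holds at exactly this level: the host program remains an ordinary C++ application, and the quantum-specific work is isolated in a kernel that is compiled and shipped to whatever device happens to be attached.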

Before working on XACC and other quantum-related problems at Oak Ridge, McCaskey was part of an early team working on the programming model and development environment for Lockheed Martin’s D-Wave quantum computer in 2011. He has continued to work with programming tools for other devices as well, looking for common elements between the two different approaches to quantum computing: D-Wave’s annealing-based chips and gate-based systems.

More specifically, McCaskey and team are focused on what role these devices might play as accelerators for traditional CPU-based systems in the next few years. He says early progress in quantum chemistry applications is highlighting a realistic opportunity to accelerate classical HPC problems for the post-exascale era.

When it comes to creating programming abstractions for quantum systems, there is a two-way divide, even if the model is designed to be hardware agnostic: D-Wave and the quantum annealing approach on one side, and the IBM or Google way with gate-based quantum computing on the other.

“We have to think about the entire stack differently than anything we’ve seen before. With the D-Wave approach, you’re specifying a list of numbers for the machine hardware values, setting magnetic fields and qubit coupler values; what you want here are abstractions for that. For gates, you’re describing an algorithm that is made of gate operations on a set of qubits. You need abstraction there too, and data structures that can be pulled from a library and executed to run the algorithm for a problem without worrying about the underlying gates,” McCaskey explains. “You want to provide data structures at a high level and a programming model that is familiar, which XACC does.”

“We designed and demonstrated the XACC programming model within the C++ language by following a coprocessor machine model akin to the design of OpenCL or CUDA for GPUs. However, we take into account the subtleties and complexities inherent to the interplay between conventional and quantum processing hardware.”

“XACC is similar in concept to how we program GPUs to offload work onto. We wanted something similar for quantum. It is a new way of thinking about programming from the ground up, with XACC as the foundational layer. From there, it is possible to build libraries and applications on top of that.”
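A rough way to see the gap a single abstraction has to bridge is to look at the two payloads McCaskey describes: the annealing side ultimately wants a list of biases and coupler strengths, while the gate side wants an ordered sequence of gate operations. The sketch below is not XACC code; it is a hypothetical illustration of what a common offload interface has to hide, with all the type names invented for the example.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical payload for a D-Wave-style annealer: per-qubit biases (h)
// and pairwise coupler strengths (J) defining an Ising problem, i.e.
// "a list of numbers for the machine hardware values."
struct AnnealingProblem {
  std::map<std::size_t, double> h;                          // qubit biases
  std::map<std::pair<std::size_t, std::size_t>, double> J;  // coupler values
};

// Hypothetical payload for a gate-model device: an ordered list of gate
// operations acting on indexed qubits.
struct GateInstruction {
  std::string gate;                 // e.g. "H", "CX", "Measure"
  std::vector<std::size_t> qubits;  // target qubit indices
};
using GateKernel = std::vector<GateInstruction>;

// A common offload interface accepts either payload behind one abstraction,
// so application code never touches fields, couplers, or individual gates
// directly. (Illustrative only; XACC's actual intermediate representation
// is richer than a pair of structs.)
struct QuantumAccelerator {
  virtual ~QuantumAccelerator() = default;
  virtual void execute(const AnnealingProblem &problem) = 0;
  virtual void execute(const GateKernel &kernel) = 0;
};
```

The division of labor is the point: the framework owns the device-specific numbers and gate sequences, while the application only describes the problem it wants solved.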

It is a big claim: creating a higher level abstraction to hide the physics complexity of two entirely different types of quantum accelerators behind the same overarching API. Further, as with other high-level tools for programming accelerators, such as OpenCL, the generalization that allows broader use comes with a performance hit. In short, nothing comes close to the performance of programming to the metal. McCaskey says it is early enough in the quantum game that generalizing a programming framework is more feasible than it sounds.

“For quantum, we really don’t know which vendor or quantum model type will be the best in 20 years. At this point, we have to define our programming models in a way that is hardware agnostic. Are we losing performance because we don’t know the underlying architecture? Not with QPUs that have such a low number of noisy qubits these days. Keeping our model QPU agnostic provides abstractions for transforming and optimizing the compiled kernels—that is a good starting point.”

“I want people to be able to say they want to run a variational quantum eigensolver for quantum chemistry. They have an input file that describes the target molecule and XACC does the rest: the quantum compilation that produces the gate sequence, and the offload to the chip. That is the goal for complexity and abstraction,” McCaskey says. As one might imagine, however, this is not something any programmer can pick up and run with in production for complex applications.
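More recent XACC releases expose this kind of workflow through an algorithm registry, and the sketch below follows the pattern of its published variational quantum eigensolver examples. The Hamiltonian string, the tiny hand-written ansatz, and the option names are placeholders standing in for what would normally be generated from the molecule input file, so read this as an illustration of the intended user experience rather than a verbatim recipe.

```cpp
#include "xacc.hpp"
#include "xacc_observable.hpp"

int main(int argc, char **argv) {
  xacc::Initialize(argc, argv);

  // The user-facing input: a description of the target system. A toy
  // two-qubit Pauli Hamiltonian (placeholder coefficients, not real
  // chemistry data) stands in for the molecule the input file describes.
  auto H = xacc::quantum::getObservable(
      "pauli", std::string("0.2 Z0Z1 + 0.3 X0X1 - 0.1 Z0"));

  // Pick the backend and a classical optimizer for the variational loop.
  auto qpu = xacc::getAccelerator("qpp");       // local simulator backend
  auto optimizer = xacc::getOptimizer("nlopt");

  // A minimal one-parameter ansatz circuit; in a real chemistry run this
  // would be generated from the problem rather than written by hand.
  auto compiler = xacc::getCompiler("xasm");
  auto ir = compiler->compile(R"(
    __qpu__ void ansatz(qbit q, double theta) {
      X(q[0]);
      Ry(q[1], theta);
      CX(q[1], q[0]);
    })", qpu);
  auto ansatz = ir->getComposites()[0];

  // Hand everything to the VQE algorithm: the framework handles compiling
  // the gate sequences, offloading them to the chip, and driving the
  // classical optimization loop around the quantum executions.
  auto vqe = xacc::getAlgorithm("vqe", {std::make_pair("ansatz", ansatz),
                                        std::make_pair("observable", H),
                                        std::make_pair("accelerator", qpu),
                                        std::make_pair("optimizer", optimizer)});
  auto buffer = xacc::qalloc(2);
  vqe->execute(buffer);

  xacc::Finalize();
  return 0;
}
```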

At a high level, it is possible to play around with the software, but for real problem solving, deep domain expertise in the application area is critical, and it is also important to have some formal grounding in how quantum computers work, McCaskey says. “For gate-based chips, even with XACC, you need to understand some quantum physics to understand what the gates are doing, but if you just want to try out examples with the framework, that takes far less expertise.”

McCaskey sees a future for supercomputing that is heterogeneous, except instead of offloading to GPU accelerators, some of the application is handled on a QPU. Oak Ridge National Lab has already been exploring hyper-heterogeneous machines (including using neuromorphic, quantum, and supercomputing hybrid machines for deep learning). “We can leverage many different types of acceleration and think about a brand new hybrid classical-quantum set of algorithms that push past current bottlenecks for existing intractable problems.”

There is quite a bit more detail about the programming model in the paper on XACC.
