Quantum computing promises to be more powerful than classical computing because the state of n qubits is described by 2^n amplitudes, exponentially more than n bits. Quantum computers with 72 qubits exist today, and a few thousand qubits may be sufficient for quantum computers to outperform all current classical computers. Until recently, every quantum computer had its own programming language. Are we moving towards a quantum programming language that is universal, that is, one whose programs can run on all quantum computers? Will a winner emerge among the many languages that flourish at the moment? What are the challenges in making a universal language? I will argue that a key enabler will be better compilers that can translate quantum algorithms, overcome hardware differences, and handle combinatorial explosion.
Quantum algorithms
Let me begin with one of the drivers of language design, namely algorithms.
The most famous quantum algorithm is Shor’s algorithm for factoring integers. The algorithm promises to break some forms of cryptography, but it requires a quantum computer of a size that is many years into the future. Specifically, people have estimated that Shor’s algorithm requires more than 10,000 qubits before it can beat the best classical factoring algorithms.

[Figure: The Deutsch-Jozsa algorithm as a quantum circuit]
As an example of what quantum algorithms look like, the diagram shows the Deutsch-Jozsa algorithm in a version that uses two qubits. Time goes from left to right, each qubit follows a line, and the boxes represent gates that transform or measure the qubits. In general, we can think of the state of n qubits as a vector of 2^n complex numbers. Each step of computation is either the application of a matrix (called a gate) to some of the qubits or a measurement of the final state. Each step takes between 10 and 100 nanoseconds on today’s quantum computers. Quantum computing is probabilistic in nature, so typically we will want to repeat the computation a large number of times, such as 10 million. This gives us a probability distribution over the observed final states.
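To make the state-vector view concrete, here is a minimal sketch in Python with NumPy (my own illustration, not taken from the diagram) that simulates the two-qubit Deutsch-Jozsa circuit, with a CNOT standing in for a balanced oracle:

```python
import numpy as np

# Single-qubit gates as 2x2 matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Oracle for the balanced function f(x) = x: a CNOT with the input qubit
# as control and the ancilla as target, written as a 4x4 matrix.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Two qubits are a vector of 2^2 = 4 complex amplitudes; start in |00>.
state = np.array([1, 0, 0, 0], dtype=complex)

# Each step applies a matrix to the state; np.kron builds the 4x4 matrix
# that acts on both qubits at once.
state = np.kron(I, X) @ state   # flip the ancilla to |1>
state = np.kron(H, H) @ state   # put both qubits into superposition
state = CNOT @ state            # apply the oracle
state = np.kron(H, I) @ state   # interfere on the input qubit

# Probability of measuring 1 on the input qubit: sum of |amplitude|^2
# over the basis states |10> and |11>.
p_one = abs(state[2]) ** 2 + abs(state[3]) ** 2
print(f"P(input qubit = 1) = {p_one:.2f}")  # 1.00, so f is balanced
```

Replacing the CNOT with the identity (a constant oracle) would make the input qubit measure 0 with probability 1, which is exactly the distinction the algorithm computes.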
Which other algorithms can motivate language design? The strongest motivators would be quantum algorithms that can make near-term quantum computers outperform all current classical computers. This is also known as a quantum advantage (aka quantum supremacy, something that Google’s quantum team demonstrated after this article was originally written). The Quantum Algorithm Zoo lists 60 algorithms. Most of them have better worst-case complexity than their classical counterparts but are unlikely to provide a quantum advantage any time soon. The paucity of promising quantum algorithms is a conundrum for language designers. Specifically, how can we know what programming style to support before a wide variety of algorithms is available?
Researchers have high hopes for three areas of quantum algorithms that may give a quantum advantage with fewer than 10,000 qubits. The most promising area consists of algorithms for optimization. Many optimization problems are NP-complete, which means that exact algorithms on a classical computer are believed to scale poorly. Classical algorithms can also solve such problems approximately, and so can quantum algorithms, such as the Quantum Approximate Optimization Algorithm (QAOA). Indeed, QAOA has generated optimism that it may gain a quantum advantage starting at problem sizes that require 3,000 qubits to represent. In 2019, DARPA plans to invest $33 million in finding out whether QAOA or related algorithms can give a quantum advantage within five years. Two other promising areas are algorithms for simulation and for machine learning. In particular, simulation of quantum chemistry is at the heart of what motivated quantum computing in the first place. Thus, in the near term, algorithms for optimization, simulation, and machine learning will be the strongest drivers of language design.
So far, language designers have moved forward with well-known ideas that have worked in other domains. For example, a PyQuil programmer can use Python to build a list that represents a quantum circuit and then give that list to an execution engine. The execution engine can be a quantum computer or a quantum simulator. A Cirq programmer can use Python and an execution engine in a similar manner. In contrast, Q# is a domain-specific language that mixes classical code and quantum code and leaves it to the Q# compiler to produce a quantum circuit. Common across current quantum languages is that the level of abstraction is rather low. What are the right abstractions for quantum computing?
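As a rough illustration of the circuit-as-data style, the sketch below uses Cirq's Python API as I understand it (details may differ across versions) to build a small circuit and hand it to an execution engine:

```python
import cirq

# Two qubits laid out on a line.
q0, q1 = cirq.LineQubit.range(2)

# The circuit is ordinary Python data: a list of gate applications.
circuit = cirq.Circuit([
    cirq.H(q0),            # put q0 into superposition
    cirq.CNOT(q0, q1),     # entangle q0 and q1
    cirq.measure(q0, q1, key='m'),
])

# Hand the circuit to an execution engine -- here a local simulator,
# but a quantum computer can play the same role.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))  # roughly 50/50 between |00> and |11>
```

A PyQuil program has much the same shape, while a Q# program would express the same computation as mixed classical and quantum code and let the compiler produce the circuit.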
Hardware differences
Now let us look at the computers that quantum languages will run on.
Current quantum computers tend to be rather different at the hardware level. For example, they may use trapped-ion qubits or superconducting qubits. Additionally, the instruction-set architectures that they present to the software level are rather different: they all support gates, but different sets of gates. This means that a machine-level program for one quantum computer usually needs a significant amount of work before it can run on a different quantum computer. Another factor adds to this challenge: the physical layout of the qubits. Some gates operate on two qubits, but on some quantum computers those two qubits must be physically connected. When a quantum algorithm wants to apply a gate to two unconnected qubits, some swapping of qubits has to happen first. Different quantum computers tend to have different layouts, so the needed swapping of data is different for each one.
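To see what the swapping involves, here is a deliberately simplified sketch (not a real compiler pass) that routes a CNOT on a hypothetical device whose qubits are connected in a line 0-1-2-3, inserting SWAP gates until the two operands are adjacent:

```python
# Illustrative router for a line-connected device: qubit i is only
# connected to qubits i-1 and i+1, so a two-qubit gate on distant
# qubits needs SWAPs first.
def route_cnot(a, b):
    ops = []
    # Move qubit a one hop at a time until it sits next to b.
    while abs(a - b) > 1:
        step = 1 if a < b else -1
        ops.append(("SWAP", a, a + step))
        a += step
    ops.append(("CNOT", a, b))
    return ops

print(route_cnot(0, 3))
# [('SWAP', 0, 1), ('SWAP', 1, 2), ('CNOT', 2, 3)]
```

A real compiler must also track where every logical qubit ends up after the swaps and weigh many such routing choices against each other, separately for each target layout.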
As new quantum computers and new qubit technologies have emerged, the trend has been towards more diversity in the instruction sets. This increases the challenge for compilers that want both to optimize code and to target multiple quantum computers. Thus, if we are going to have a universal quantum programming language, the compiler will be the key to adapting a program to these differences.
Combinatorial explosion
Now let us take a closer look at the compilation problem.
On a quantum computer, adding a qubit doubles the size of the state space, and yet the scarcity of qubits makes them a precious resource. One of the objectives of a compiler is to manage qubits with the goal of getting as much work as possible out of every one. In contrast to classical computing, this management must be done entirely at compile time. One of the challenges is that quantum gates are reversible, which entails that destructive update of the state of a qubit is impossible. Additionally, the no-cloning theorem says that a program cannot copy a qubit in an unknown state, so making a temporary copy and later restoring it is also impossible. Thus, using a qubit for multiple purposes is nontrivial.
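One standard workaround is the compute-copy-uncompute pattern: compute a value into a scratch qubit with reversible gates, copy the result out in the computational basis, and then run the computing gates in reverse so that the scratch qubit returns to |0> and can be reused. A minimal, framework-agnostic sketch with gates written as plain tuples:

```python
# Qubits 0 and 1 are inputs, qubit 2 is the output, qubit 3 is scratch.
compute   = [("TOFFOLI", 0, 1, 3)]   # write AND(q0, q1) into the scratch qubit
use       = [("CNOT", 3, 2)]         # copy the result onto the output qubit
uncompute = list(reversed(compute))  # Toffoli is its own inverse: resets q3 to |0>

circuit = compute + use + uncompute  # q3 ends in |0> and is free for reuse
print(circuit)
```

The CNOT copy is allowed because the scratch qubit holds a value computed in the computational basis; it does not violate the rule against cloning arbitrary unknown states. The price is extra gates, which is one reason qubit reuse is a compiler problem rather than a free lunch.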
Near-term quantum computers can execute up to 100 computation steps, after which the quantumness goes away, a phenomenon known as decoherence. At the hardware level, a quantum computer is a delicate and brittle device that operates at a temperature below 0.1 Kelvin to maintain quantumness. Even so, at some point nature will do its thing, the quantum state will turn classical, and the ability to do quantum computing will be lost. A quantum compiler can fight decoherence by minimizing the number of computation steps.
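As a toy example of this kind of optimization, the sketch below (illustrative only, with gates written as (name, qubit) tuples) removes adjacent pairs of self-inverse single-qubit gates, which shortens the circuit and so reduces its exposure to decoherence:

```python
# Gates that are their own inverse, so two in a row cancel to the identity.
SELF_INVERSE = {"H", "X", "Z"}

def cancel_pairs(circuit):
    out = []
    for gate in circuit:                 # gate = (name, qubit)
        if out and out[-1] == gate and gate[0] in SELF_INVERSE:
            out.pop()                    # the pair cancels; drop both gates
        else:
            out.append(gate)
    return out

print(cancel_pairs([("H", 0), ("H", 0), ("X", 1)]))  # [('X', 1)]
```

Real compilers go much further, for example by reordering gates that act on disjoint qubits before looking for cancellations, which is where the papers discussed next come in.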
Many recent papers on quantum compilers demonstrate a considerable interest in the area. The papers tend to focus on minimizing the number of computation steps, the number of swaps of qubits, and the count of slow gates. The papers use a wide variety of abstractions to model those minimization problems. Researchers have shown that the problem of minimizing the number of computation steps for a given qubit layout is NP-complete. A survey from 2018 found three open-source compilers for quantum languages.
Conclusion
Today a language designer has few algorithms for driving the language design, many different targets on which a language can run, and a huge combinatorial problem that a compiler must solve. If we are going to get a universal quantum language, most likely it has yet to be invented. Now is a great time for language designers and implementers to try new ideas.
Bio: Jens Palsberg is a professor and a former department chair of Computer Science at the University of California, Los Angeles (UCLA). He is the Chair of SIGPLAN, former editor-in-chief of TOPLAS, and a former PC chair and a former general chair of POPL. In 2012 he received the SIGPLAN Distinguished Service Award.
Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.