When it comes to quantum computing, I usually get excited about experimental results rather than ideas for new hardware. New devices – or new ways of deploying old ones – may eventually be useful, but we can't be sure when the results will come in. If we judge existing ideas by their usefulness, adiabatic quantum computing has to be right up there, because you can already use it to do some calculations. And right now, adiabatic quantum computing also has the best prospects for scaling up the number of qubits.

But qubits are not everything: you also need speed. So how exactly do you compare speeds between quantum computers? If you start looking into this problem, you'll quickly discover that it's much more complicated than anyone really wanted. Even when you *can* compare speeds today, you also want to be able to estimate how much better an upgraded version of the same hardware would do. That often turns out to be even harder.

## It’s fast, honestly

Unlike for classical computers, speed itself is not so easy to define for a quantum computer. Take something like D-Wave's quantum annealer: it has no system clock and doesn't use gates that perform specific operations. Instead, the entire computer undergoes a continuous evolution from the state in which it was initialized to the state that, hopefully, contains the solution. The time required for this is called the annealing time.

At this point, you can all say, "Chris, you're being stupid; obviously the time from initialization to solution counts." Except I used the word "hopefully" in that sentence above for good reason. However a quantum computer is designed and operated, the readout process involves measuring the states of the qubits, which means there is a non-zero chance of getting the wrong answer.

This does not mean that a quantum computer is useless. First, for some calculations it is possible to check a solution very efficiently. Finding prime factors is a good example: I just multiply the factors together, and if the product doesn't match the number I initialized the computer with, I know the answer was wrong. If the answer is wrong, I simply repeat the calculation. If you can't check the solution efficiently, you can rely on statistics: the correct answer is the most likely outcome of any measurement of the final state. I can just run the same calculation multiple times and determine the correct answer from the statistical distribution of the results.
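To make those two strategies concrete, here's a small Python sketch. Everything in it is hypothetical for illustration (the `noisy_run` stand-in, the 40 percent error rate, the answer 42) and is not how any real annealer reports its results; it just shows an efficient check for factoring and a majority vote over repeated runs.

```python
import random
from collections import Counter

def verify_factoring(n, factors):
    """Efficient check: multiply the candidate factors back together."""
    product = 1
    for f in factors:
        product *= f
    return product == n

def most_likely_outcome(run_once, shots=1000):
    """When no efficient check exists, repeat the computation and
    take the most frequent readout as the answer."""
    counts = Counter(run_once() for _ in range(shots))
    return counts.most_common(1)[0][0]

def noisy_run(correct=42, error_rate=0.4):
    """Toy stand-in for a noisy annealer: right answer ~60% of the time,
    wrong answers scattered uniformly over 100 possibilities."""
    if random.random() < error_rate:
        return random.randrange(100)
    return correct

print(verify_factoring(15, [3, 5]))    # True
print(most_likely_outcome(noisy_run))  # almost certainly 42
```

The majority vote works because the wrong answers are spread thin: with 1,000 shots, the correct readout appears roughly 600 times while any single wrong value appears only a handful of times.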

So for an adiabatic quantum computer, this means that speed is the annealing time *multiplied by the number of runs* needed to determine the most likely outcome. While this isn’t the most satisfying answer, it’s still better than nothing.

Unfortunately, these two factors are not independent of each other. The computation requires that all qubits remain in the ground state throughout the anneal. However, rapid changes are more likely to kick qubits out of the ground state, so reducing the annealing time increases the likelihood of an incorrect result. Do the anneal faster and you may need to repeat the calculation more often to reliably determine the most likely outcome. And as you keep shortening the annealing time, wrong answers eventually become so likely that the correct answer can no longer be distinguished from them.

So setting the annealing time of an adiabatic quantum computer is something of a trial-and-error exercise. The underlying logic is that slower is probably better, but we'll go as fast as we dare. A new article published in *Physical Review Letters* shows that under the right circumstances it might be better to throw caution to the wind and speed up even more. However, that speed comes at the cost of high peak power.

## Adiabatic quantum computers ignore speed limits… or don’t they?

To summarize, in an adiabatic quantum computer, the qubits are all in the ground state of a simple global environment. That environment is then modified in such a way that the ground state is the solution to a problem you want to solve. Now, provided the qubits remain in the ground state while you change the environment, you get the right solution.

The key lies in how quickly you are allowed to change the environment. If you do it very slowly, someone with a slide rule might beat you to the answer. If you do it really fast, your calculation will probably go wrong because the qubits leave the ground state. Rapid adjustments also require high peak power, so there is a trade-off between speed, power, and accuracy.

Let’s use an example to understand the trade-off. Imagine the equivalent of a quantum ball and spring, otherwise known as the harmonic oscillator. In the lowest energy state, the oscillator bounces up and down at a natural frequency, which is determined by the stiffness of the spring and the mass of the oscillator. In this case, changing the environment would mean increasing or decreasing the stiffness of the spring. To complete the analogy, the jumps between different quantum states increase and decrease the oscillation amplitude, but those jumps do not change the frequency.
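The natural frequency mentioned above has a simple form for a mass on a spring, and the sketch below just evaluates it. The specific numbers are arbitrary illustrations; the point is that the frequency depends only on stiffness and mass, not on amplitude, so jumping to a larger-amplitude excited state leaves the frequency unchanged.

```python
import math

def natural_frequency(k, m):
    """Angular frequency of a mass-spring oscillator: omega = sqrt(k/m).
    Stiffer spring or lighter mass means faster oscillation."""
    return math.sqrt(k / m)

# Changing the environment = changing the stiffness k.
print(natural_frequency(4.0, 1.0))  # 2.0
print(natural_frequency(1.0, 1.0))  # softer spring, lower frequency: 1.0

# Amplitude does not appear in the formula at all: an excited
# (larger-amplitude) state oscillates at the same frequency.
```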

Next, imagine that we reduce the stiffness of the spring, making the system a bit slacker. The oscillation frequency drops, and the amplitude should drop as well, but that takes a while. If we reduce the stiffness too quickly, the amplitude stays where it was, which now corresponds to an excited state of the slacker spring. In other words, the oscillator has left the ground state.

To avoid this, we need to change the spring stiffness at a rate slow enough for the oscillator to dissipate the excess energy. Similarly, tightening the spring adds energy to the oscillator. If we deliver all that energy in one big chunk, it is enough for the oscillator to jump to an excited state, even if only briefly.

You can also think of this in terms of power. While we can change the stiffness of the spring between two values, and therefore expend a fixed amount of energy, the power required depends on how quickly we make that change. A short, sharp change requires high power, while a long, slow change requires low power. So there are three parameters that need to be optimized: the speed of the change, the power required to complete it, and the probability that the change will drive the qubits out of the ground state.
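The power relationship is just energy divided by time, which a two-line sketch makes plain. The energy value here is an arbitrary illustration in arbitrary units, not a measurement from any real device.

```python
def average_power(delta_energy, duration):
    """Same energy change, different durations: power = energy / time."""
    return delta_energy / duration

delta_e = 10.0  # energy needed to retune the spring (arbitrary units)
print(average_power(delta_e, 0.1))    # fast change: 100.0 (high power)
print(average_power(delta_e, 100.0))  # slow change: 0.1 (low power)
```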