Quantum annealing
Quantum annealing is a model of quantum computation which, roughly speaking, generalises the adiabatic model of computation. It has attracted popular (and commercial) attention as a result of D-WAVE's work on the subject.
Precisely what quantum annealing consists of is not as well-defined as other models of computation, essentially because it is of more interest to quantum technologists than to computer scientists. Broadly speaking, we can say that it is usually considered by people with the motivations of engineers rather than the motivations of mathematicians, so that the subject appears to have many intuitions and rules of thumb but few 'formal' results. In fact, in an answer to my question about quantum annealing, Andrew O goes so far as to say that "quantum annealing can't be defined without considerations of algorithms and hardware". Nevertheless, "quantum annealing" seems well-defined enough to be described as a way of approaching how to solve problems with quantum technologies using specific techniques — and so, despite Andrew O's assessment, I think that it embodies some implicitly defined model of computation. I will attempt to describe that model here.
Intuition behind the model
Quantum annealing gets its name from a loose analogy to (classical) simulated annealing.
They are both presented as means of minimising the energy of a system, expressed in the form of a Hamiltonian:
$$\begin{aligned}
H_{\text{classical}} &= \sum_{i,j} J_{ij}\, s_i s_j \\
H_{\text{quantum}} &= A(t) \sum_{i,j} J_{ij}\, \sigma^z_i \sigma^z_j \;-\; B(t) \sum_i \sigma^x_i
\end{aligned}$$
With simulated annealing, one essentially performs a random walk on the possible assignments to the 'local' variables $s_i \in \{0,1\}$, but where the probability of actually making a transition depends on:
- The difference in 'energy' $\Delta E = E_1 - E_0$ between the two 'configurations' (the initial and final global assignments to the variables $\{s_i\}_{i=1}^n$) before and after each step of the walk;
- A 'temperature' parameter which governs the probability with which the walk is allowed to perform a step with $\Delta E > 0$.
One starts with the system at 'infinite temperature', which is ultimately a fancy way of saying that you allow for all possible transitions, regardless of increases or decreases in energy. You then lower the temperature according to some schedule, so that as time goes on, changes in state which increase the energy become less and less likely (though still possible). The limit is zero temperature, in which any transition which decreases energy is allowed, but any transition which increases energy is simply forbidden.
For any temperature $T > 0$, there will be a stable distribution (a 'thermal state') of assignments, which is the uniform distribution at 'infinite' temperature, and which is more and more weighted on the global minimum energy states as the temperature decreases. If you take long enough to decrease the temperature from infinite to near zero, you should in principle be guaranteed to find a global optimum to the problem of minimising the energy. Thus simulated annealing is an approach to solving optimisation problems.
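To make this concrete, here is a minimal Python sketch of simulated annealing for a Hamiltonian of the form above. The couplings are given as a dictionary; the single-spin-flip moves and the linear cooling schedule are illustrative choices of my own, not a reference to any particular implementation.

```python
import math
import random

def simulated_anneal(J, n, n_steps, T0=10.0):
    """Attempt to minimise H = sum J[i,j] * s_i * s_j over s in {0,1}^n,
    using single-spin-flip Metropolis moves under a linear cooling schedule."""
    s = [random.randint(0, 1) for _ in range(n)]

    def energy(config):
        return sum(Jij * config[i] * config[j] for (i, j), Jij in J.items())

    E = energy(s)
    for step in range(n_steps):
        T = T0 * (1.0 - step / n_steps)  # temperature decreasing towards zero
        i = random.randrange(n)          # propose flipping a single spin
        s[i] ^= 1
        dE = energy(s) - E
        # Accept any decrease in energy; accept an increase with
        # probability exp(-dE/T), which shrinks as T is lowered.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            E += dE
        else:
            s[i] ^= 1                    # reject: undo the flip
    return s, E

# Example: two coupled spins with J_{01} = +1.
best, E_best = simulated_anneal({(0, 1): 1.0}, n=2, n_steps=1000)
```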
Quantum annealing is motivated by generalising the work by Farhi et al. on adiabatic quantum computation [arXiv:quant-ph/0001106], with the idea of considering what evolution occurs when one does not necessarily evolve the Hamiltonian in the adiabatic regime. Similarly to classical annealing, one starts in a configuration in which "classical assignments" to some problem are in a uniform distribution, though this time in coherent superposition instead of a probability distribution: this is achieved at time $t = 0$, for instance, by setting
$$A(t{=}0) = 0, \qquad B(t{=}0) = 1$$
in which case the uniform superposition $|\psi_0\rangle \propto |00\cdots00\rangle + |00\cdots01\rangle + \cdots + |11\cdots11\rangle$ is a minimum-energy state of the quantum Hamiltonian. One steers this 'distribution' (i.e. the state of the quantum system) to one which is heavily weighted on a low-energy configuration by slowly evolving the system — by slowly changing the field strengths $A(t)$ and $B(t)$ to some final value
$$A(t_f) = 1, \qquad B(t_f) = 0.$$
Again, if you do this slowly enough, you will succeed with high probability in obtaining such a global minimum.
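As a concrete (if tiny) illustration of this evolution, one can integrate the Schrödinger equation for the quantum Hamiltonian above directly. The Python sketch below does this for a toy two-qubit instance; the total time $t_f$, the linear schedule, and the step-wise matrix exponentials are illustrative assumptions of mine, not anyone's actual methodology.

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Problem term J_{01} sigma^z_0 sigma^z_1 with J_{01} = +1
# (ground states |01> and |10>), plus transverse field sum_i sigma^x_i.
H_problem = np.kron(Z, Z)
H_field = np.kron(X, I2) + np.kron(I2, X)

t_f, n_steps = 20.0, 2000
dt = t_f / n_steps

psi = np.ones(4) / 2.0  # uniform superposition: minimum-energy state at t = 0

for k in range(n_steps):
    u = (k + 0.5) / n_steps                    # anneal fraction, from 0 to 1
    H = u * H_problem - (1.0 - u) * H_field    # A(t) = u, B(t) = 1 - u
    psi = expm(-1j * H * dt) @ psi             # one small Schrodinger step

print(np.round(np.abs(psi) ** 2, 3))  # weight concentrates on |01> and |10>
```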
The adiabatic regime describes conditions which are sufficient for this to occur, by virtue of remaining in (a state which is very close to) the ground state of the Hamiltonian at all intermediate times. However, it is considered possible that one can evolve the system faster than this and still achieve a high probability of success.
Similarly to adiabatic quantum computing, the way that $A(t)$ and $B(t)$ are defined is often presented as a linear interpolation from 0 to 1 (increasing for $A(t)$, and decreasing for $B(t)$). However, also in common with adiabatic computation, $A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic: for instance, D-Wave has considered the advantages of pausing the annealing schedule partway through, and of 'backwards anneals', as sketched below.
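By way of illustration, here are two such schedules written as Python functions: the usual linear interpolation, and a monotonic schedule with a mid-anneal pause. The pause position and length are placeholder values of my own, not parameters drawn from D-Wave's hardware.

```python
def linear_schedule(t, t_f):
    """A(t) rises linearly from 0 to 1 while B(t) falls from 1 to 0."""
    u = t / t_f
    return u, 1.0 - u

def paused_schedule(t, t_f, pause_at=0.4, pause_len=0.2):
    """Hold the anneal fraction constant over a window in the middle
    of the schedule, then resume: monotonic, but not linear."""
    s = t / t_f
    if s < pause_at:
        u = s / (1.0 - pause_len)
    elif s < pause_at + pause_len:
        u = pause_at / (1.0 - pause_len)   # hold during the pause window
    else:
        u = (s - pause_len) / (1.0 - pause_len)
    return u, 1.0 - u
```

A 'backwards anneal' could likewise be expressed as a schedule whose anneal fraction starts at 1, decreases to some intermediate value, and then returns to 1.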
'Proper' quantum annealing (so to speak) presupposes that evolution is probably not being done in the adiabatic regime, and allows for the possibility of diabatic transitions, but only asks for a high chance of achieving an optimum — or even more pragmatically still, of achieving a result which would be difficult to find using classical techniques. There are no formal results about how quickly you can change your Hamiltonian to achieve this: the subject appears mostly to consist of experimenting with a heuristic to see what works in practice.
The comparison with classical simulated annealing
Despite the terminology, it is not immediately clear that there is much which quantum annealing has in common with classical annealing.
The main differences between quantum annealing and classical simulated annealing appear to be that:
- In quantum annealing, the state is in some sense ideally a pure state, rather than a mixed state (corresponding to the probability distribution in classical annealing);
- In quantum annealing, the evolution is driven by an explicit change in the Hamiltonian rather than by an external parameter.
It is possible that a change in presentation could make the analogy between quantum annealing and classical annealing tighter. For instance, one could incorporate the temperature parameter into the spin Hamiltonian for classical annealing, by writing
$$\tilde H_{\text{classical}} = A(t) \sum_{i,j} J_{ij}\, s_i s_j \;-\; B(t) \sum_{i,j} \text{const.}$$
where we might choose something like $A(t) = t/(t_F - t)$ and $B(t) = t_F - t$, for $t_F > 0$ the length of the annealing schedule. (This is chosen deliberately so that $A(0) = 0$ and $A(t) \to +\infty$ as $t \to t_F$.)
Then, just as a quantum annealing algorithm is governed in principle by the Schrödinger equation for all times, we may consider an annealing process which is governed by a diffusion process which is in principle uniform with time, proceeding by small changes in configurations, where the probability of executing a randomly selected change of configuration is given by
$$p(x \to y) = \min\bigl\{1,\; \exp(-\gamma\, \Delta E_{x \to y})\bigr\}$$
for some constant $\gamma$, where $\Delta E_{x \to y}$ is the energy difference between the initial and final configurations.
The stable distribution of this diffusion for the Hamiltonian at $t = 0$ is the uniform distribution, and the stable distribution for the Hamiltonian as $t \to t_F$ is any local minimum; and as $t$ increases, the probability with which a transition occurs which increases the energy becomes smaller, until as $t \to t_F$ the probability of any increase in energy vanishes (because any possible increase becomes an infinitely costly one).
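The following Python sketch puts these pieces together: the diverging schedule $A(t) = t/(t_F - t)$ is folded into the Hamiltonian, while the walk itself uses the fixed, time-independent acceptance rule $p = \min\{1, \exp(-\gamma\,\Delta E)\}$. (The $B(t)$ term contributes only a constant, and so drops out of all energy differences; the function names and parameters are again illustrative.)

```python
import math
import random

def annealed_diffusion(J, n, t_F, n_steps, gamma=1.0):
    """Random walk on {0,1}^n under H~ = A(t) * sum J[i,j] s_i s_j,
    where A(t) = t/(t_F - t) diverges as t -> t_F, so that any increase
    in energy becomes arbitrarily costly late in the walk."""
    s = [random.randint(0, 1) for _ in range(n)]

    def base_energy(config):
        return sum(Jij * config[i] * config[j] for (i, j), Jij in J.items())

    for step in range(n_steps):
        t = t_F * step / n_steps        # stays strictly below t_F
        A = t / (t_F - t)               # A(0) = 0; A(t) -> +infinity as t -> t_F
        i = random.randrange(n)
        E0 = base_energy(s)
        s[i] ^= 1
        dE = A * (base_energy(s) - E0)  # energy difference under H~
        # Time-independent rule: accept with probability min(1, exp(-gamma*dE)).
        if dE > 0 and random.random() >= math.exp(-gamma * dE):
            s[i] ^= 1                   # reject: undo the flip
    return s
```

At $t = 0$ every move is accepted (the walk diffuses uniformly), and as $t \to t_F$ moves which increase the energy are suppressed entirely, matching the description above.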
There are still disanalogies to quantum annealing in this — for instance, we achieve the strong suppression of increases in energy as $t \to t_F$ essentially by making the potential wells infinitely deep (which is not a very physical thing to do) — but this does illustrate something of a commonality between the two models, with the main distinction being not so much the evolution of the Hamiltonian as the difference between diffusion and Schrödinger dynamics. This suggests that there may be a sharper way to compare the two models theoretically: by describing the difference between classical and quantum annealing as being analogous to the difference between random walks and quantum walks. A common idiom in describing quantum annealing is to speak of 'tunnelling' through energy barriers — this is certainly pertinent to how people consider quantum walks: consider for instance the work by Farhi et al. on continuous-time quantum speed-ups for evaluating NAND circuits, and more directly foundational work by Wong on quantum walks on the line tunnelling through potential barriers. Some work has been done by Chancellor [arXiv:1606.06800] on considering quantum annealing in terms of quantum walks, though it appears that there is room for a more formal and complete account.
On a purely operational level, it appears that quantum annealing gives a performance advantage over classical annealing (see for example these slides on the difference in performance between quantum vs. classical annealing, from Troyer's group at ETH, ca. 2014).
Quantum annealing as a phenomenon, as opposed to a computational model
Because quantum annealing is studied more by technologists, the focus is on the concept of realising quantum annealing as an effect, rather than on defining the model in terms of general principles. (A rough analogy would be studying the unitary circuit model only inasmuch as it represents a means of achieving the 'effects' of eigenvalue estimation or amplitude amplification.)
Therefore, whether something counts as "quantum annealing" is described by at least some people as being hardware-dependent, and even input-dependent: depending, for instance, on the layout of the qubits and on the noise levels of the machine. It seems that even trying to approach the adiabatic regime will prevent you from achieving quantum annealing, because the very idea of what quantum annealing consists of includes the idea that noise (such as decoherence) will prevent annealing from being realised: as a computational effect, as opposed to a computational model, quantum annealing essentially requires that the annealing schedule is shorter than the decoherence time of the quantum system.
Some people occasionally describe noise as being somehow essential to the process of quantum annealing. For instance, Boixo et al. [arXiv:1304.4595] write
Unlike adiabatic quantum computing[, quantum annealing] is a positive temperature method involving an open quantum system coupled to a thermal bath.
It might perhaps be more accurate to describe noise as an inevitable feature of systems in which one will perform annealing (just because noise is an inevitable feature of any system in which you will do quantum information processing): as Andrew O writes, "in reality no baths really help quantum annealing". It is possible that a dissipative process can help quantum annealing by helping the system build population on lower-energy states (as suggested by work by Amin et al. [arXiv:cond-mat/0609332]), but this seems essentially to be a classical effect, and would inherently require a quiet low-temperature environment rather than 'the presence of noise'.
The bottom line
It might be said — in particular by those who study it — that quantum annealing is an effect, rather than a model of computation. A "quantum annealer" would then be best understood as "a machine which realises the effect of quantum annealing", rather than a machine which attempts to embody a model of computation which is known as 'quantum annealing'. However, the same might be said of adiabatic quantum computation, which is — in my opinion correctly — described as a model of computation in its own right.
Perhaps it would be fair to describe quantum annealing as an approach to realising a very general heuristic, with an implicit model of computation which could be characterised as the conditions under which we could expect this heuristic to be successful. If we consider quantum annealing this way, it would be a model which includes the adiabatic regime (with zero noise) as a special case, but which may in principle be more general.