
3 Quantum Algorithms and Applications

A bedrock of the field of algorithms lies in the principle that the total number of computational steps required to solve a problem is (roughly) independent of the underlying design of the computer: remarkably, to a first approximation, what is designated a single step of computation is a matter of convenience and does not change the total time to solution. This basic principle, called the extended Church-Turing thesis, implies that to solve a computational problem faster, one may (1) reduce the time to implement a single step; (2) perform many steps in parallel; or (3) reduce the total number of steps to completion via the design of a clever algorithm. The discovery that quantum computers violate the extended Church-Turing thesis [1,2], by solving certain computational tasks with exponentially fewer steps than the best classical algorithm for the same task, shook the foundations of computer science and opened the possibility of an entirely new way of solving computational problems quickly.1 The practical potential of quantum computers was illustrated soon thereafter when Peter Shor created quantum algorithms for factoring large numbers and computing discrete logarithms that were exponentially faster than any developed for a classical computer [3-5]. These quantum algorithms generated serious concern in the security community, since the classical hardness of these two problems lies at the core of the public-key cryptosystems that protect the vast majority of society's digital data. Indeed, algorithms for factoring large numbers have been studied over the centuries by mathematicians, and very intensely over the last few decades by computer scientists. The main issue in these, and most other computational problems, is combinatorial explosion: the exponential number of potential solutions that the algorithm must choose between.
In the case of factoring an n-bit number N, the possible prime divisors of N include all prime numbers less than N, and there are exp(n) many such primes. Indeed, the fastest classical algorithm for actually finding the prime divisors of N takes exp(O(n^(1/3))) steps, while Shor's quantum algorithm takes only O(n^3) steps, later improved to O(n^2 log n). A very general goal of the field of algorithms is to solve a computational task by an algorithm whose number of steps (colloquially called its "running time") scales polynomially in the size, n, of the input, thereby bypassing the combinatorial explosion. Computational tasks for which such polynomial-time (classical) algorithms exist are referred to as belonging to the complexity class P. The corresponding quantum complexity class, bounded-error quantum polynomial time (BQP), contains all those computational tasks that a quantum computer would be able to solve in polynomial time. By contrast, algorithms whose running time scales exponentially in the size of the input very quickly become prohibitively expensive as the input size grows.

1 Note that quantum computers do not violate the original Church-Turing thesis, which defines the limits of what it is possible to compute at all (independent of the time required to perform the computation). See D. Deutsch, 1985, Quantum theory, the Church-Turing principle and the universal quantum computer, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences (1934-1990), 400(1818):97-117. The extended Church-Turing thesis is sometimes referred to as "the feasibility thesis" or the "computational complexity-theoretic Church-Turing thesis."

PREPUBLICATION COPY - SUBJECT TO FURTHER EDITORIAL CORRECTION 3-1
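To make the exponential scaling concrete, consider classical trial division, the most naive factoring algorithm. The sketch below is an illustrative Python example (the function name and test numbers are our own, not from the literature); its loop runs roughly 2^(n/2) times for an n-bit input, in contrast to the polynomial step counts quoted above for Shor's algorithm.

```python
from math import isqrt

def trial_division(N):
    """Classical factoring by trial division: try divisors up to sqrt(N).

    For an n-bit number, the loop may run up to ~sqrt(N) = 2^(n/2) times,
    i.e., a number of steps exponential in the bit-length n."""
    for d in range(2, isqrt(N) + 1):
        if N % d == 0:
            return d, N // d
    return N, 1  # N is prime

# A 15-bit semiprime already needs ~130 trial divisions; a 2048-bit RSA
# modulus would need on the order of 2^1024, which is why factoring is
# considered classically intractable at cryptographic sizes.
p, q = trial_division(17947)   # 17947 = 131 * 137
```

Even the best known classical algorithm (the number field sieve, with its exp(O(n^(1/3))) scaling) only softens this explosion; it does not eliminate it.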

It is important to realize that quantum computers do not uniformly speed up all computational problems. One of the most important classes of computational problems, the NP-complete problems [6], has been described as looking for a needle in an exponentially large haystack. About the same time as Shor's announcement, Bennett et al. [7] proved that quantum algorithms require exponential time to solve NP-complete problems in the black box model (that is, if the algorithm ignores the detailed problem structure) and are therefore unlikely to provide exponential speedups for such problems. More precisely, if N denotes the size of the haystack, Bennett et al. showed that any quantum algorithm to find the needle must take at least on the order of N^(1/2) steps. A few years later, Grover showed that there is a quantum algorithm that can find the needle in O(N^(1/2)) steps [8]. The class NP is characterized by the requirement that a classical computer be able to check the correctness of a solution in polynomial time (no matter how hard it is to actually find the solution). NP-complete problems are the hardest problems in NP and include the famous Traveling Salesman Problem, as well as thousands of problems from every field of science. It is widely conjectured that P ≠ NP (this is one of the famous seven Clay Millennium Problems), and that any classical algorithm must require exp(n) steps to solve NP-complete problems [9]. The design of quantum algorithms follows completely different principles from those of classical algorithms. To begin with, even classical algorithms have to be cast in a special form, as reversible algorithms, before they can be run on a quantum computer. Algorithms that achieve quantum speedups use certain quantum algorithmic paradigms or building blocks that have no classical counterparts. There is an extensive literature on quantum algorithms, developed in the quarter century since the first algorithms discussed above.
All of these algorithms rely on a handful of quantum building blocks that are described in the next section and are designed to run on an idealized quantum computer. Real quantum devices are noisy, so an elaborate theory of quantum error-correcting codes and fault-tolerant quantum computing has been developed to convert noisy quantum computers into ideal quantum computers. However, this conversion incurs an overhead both in the number of qubits and in running time. The field is now entering the era of noisy intermediate-scale quantum (NISQ) devices [10]: the race to build quantum computers that are sufficiently large (tens to hundreds or a few thousand qubits) that they cannot be efficiently simulated by a classical computer, but are not fault tolerant and so cannot directly implement the algorithms developed for ideal quantum computers. While the enormous interest in, and funding for, building NISQ computers has undoubtedly moved up the calendar for scalable, fault-tolerant quantum computers, significant work remains before each milestone is met. The biggest upcoming challenges are algorithmic; in the near term, this includes the search for computational tasks that such computers can speed up. Developing algorithms that run on NISQ computers is as important as creating the physical devices, since without both, the machine is not useful. In the longer term, much work remains to be done in the field of quantum algorithms for ideal (scalable, fault-tolerant) quantum computers. The next section describes major building blocks for quantum algorithms, as well as known algorithms for idealized quantum computers that provide speedups over the best classical algorithms for the same computational tasks. The subsequent section describes quantum error-correction and fault-tolerance techniques for converting a noisy quantum computer into an idealized quantum computer.
The chapter concludes with a discussion of the major algorithmic challenge presented by NISQ computers, and the most promising leads in the search for such algorithms.

Finding: Progress in quantum algorithms is essential for quantum computing success. In the near term, developing algorithms that work on NISQ machines is critical.

3.1 QUANTUM ALGORITHMS FOR AN IDEAL GATE-BASED QUANTUM COMPUTER

The power of quantum algorithms ultimately derives from the exponential complexity of quantum systems: the state of a system of n entangled qubits is described by (and can thus encode) N = 2^n complex coefficients, as discussed in the previous chapter. Moreover, the application of each elementary

gate on, say, two qubits updates the 2^n complex numbers describing the state, thereby seeming to perform 2^n computations in a single step. On the other hand, at the end of the computation, when the n qubits are measured, the result is just n classical bits. The challenge of designing useful and advantageous quantum algorithms derives from the tension between these two phenomena: one must find tasks whose operational solutions both make use of this parallelism and yield a final quantum state that has a high probability of returning valuable information upon measurement. Successful approaches take advantage of the phenomenon of quantum interference to generate useful results. In the following, some of the major building blocks for quantum algorithms are described, as well as several foundational quantum algorithms and how they can be used to solve different kinds of abstract problems.

3.1.1 The Quantum Fourier Transform and Quantum Fourier Sampling

One of the most basic building blocks for quantum algorithms is the quantum Fourier transform (QFT) algorithm. The Fourier transform, a critical step in many classical calculations and computations, is an operation that transforms one representation of a signal of interest into a different representational form. The classical Fourier transform turns a signal represented as a function of time into the corresponding signal represented as a function of frequency. For example, this could mean transforming a mathematical description of a musical chord in terms of air pressure as a function of time into the amplitudes of the set of musical tones (or notes) that combine to form the chord. This transformation is reversible via the inverse Fourier transform, and so involves no information loss, a key requirement for any operation on a quantum computer.
Concretely, the input is an N-dimensional vector with complex entries (a_1, a_2, ..., a_N), and the output is an N-dimensional vector with complex entries (b_1, b_2, ..., b_N), which is obtained by multiplying the input vector by the N × N Fourier transform matrix. Given the utility of the Fourier transform, many clever algorithms have been developed to implement it on classical computers. The best, the fast Fourier transform (FFT), takes O(N log N) time, which is only slightly longer than it takes to read the input data [O(N)]. While the classical FFT is quite efficient, the quantum Fourier transform (QFT) is exponentially faster, requiring only O(log^2 N) = O(n^2) time (where N = 2^n) in its original formulation, later improved to O(n log n) [11].
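The N × N Fourier matrix just described can be written down explicitly. The following sketch (an illustrative NumPy example with an arbitrary 8-point input of our own choosing) checks that the O(N^2) direct matrix product and the O(N log N) FFT produce the same output vector:

```python
import numpy as np

# The N x N Fourier matrix F with entries F[j,k] = w^(j*k) / sqrt(N),
# where w = exp(-2*pi*i/N).  Multiplying by F is the direct O(N^2)
# method; np.fft.fft computes the same product in O(N log N).
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

a = np.random.default_rng(0).normal(size=N) + 0j   # input vector (a_1..a_N)
b_direct = F @ a
b_fft = np.fft.fft(a, norm="ortho")   # "ortho" gives the unitary scaling

assert np.allclose(b_direct, b_fft)
```

With the unitary normalization shown (division by sqrt(N)), the transform preserves the length of the vector, which is exactly the property that makes it a legal quantum operation.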

FIGURE 3.1 An illustrative example of the quantum Fourier transform (QFT) applied to a three-qubit system. The three qubits must be initially prepared such that the eight (2^3 = 8) complex coefficients encode the system state corresponding to the sequence of values to be transformed. Since the number of coefficients, N, is 2^n, where n is the number of qubits, only log(N) qubits are needed: 3 qubits can represent the 8 complex values shown. The QFT effectively finds patterns in the input sequence and identifies their frequency of repetition. In this example, all the input states have similar probability, with the real components of the coefficients alternating sign four times. The coefficients of the output state (shown on the right) capture this pattern: the coefficient of the ith state, b_i, is large if there are i cycles in the input sequence.2 Thus, in this example all outputs are small except for one state, 100, corresponding to the input pattern frequency. Thus, measuring this output is likely to provide the index of this strong pattern in the input sequence.

Before describing the QFT, it is important to understand how the input and output are represented as quantum states. The input (a_0, a_1, ..., a_{N-1}) is represented as the quantum state Σ_i a_i |i⟩, and the output

2 Actually, for a sequence of length N, the highest number of repetitions possible is N/2, or 4 in this example. The values of 5, 6, and 7 "alias" back to tones that repeat 3, 2, and 1 times respectively, so those tones can be represented by either location.

(b_0, b_1, ..., b_{N-1}) is represented as the quantum state Σ_i b_i |i⟩. Thus, the input and output are represented as states of just n qubits, where n = log N. This is shown in Figure 3.1. Exponential speedup is possible only if the input data has already been encoded into a compact quantum state, or can be encoded into this state in O(log N) steps. The quantum circuit that carries out this transformation has a total number of gates that scales as O(n log n). Another caveat is that one of course cannot access the amplitudes b_i through measurement. Indeed, if the output of the QFT is measured, it yields the index i with probability |b_i|^2. Thus, measuring this algorithm's output yields only the index of a probable output. This procedure, applying the QFT and then measuring the output state so as to sample an index i from the resulting probability distribution, is called quantum Fourier sampling (QFS), and it is an important primitive in quantum algorithms. First, since O(N) time is required to read the input data, the quantum algorithm can be completed in O(log^2 N) time, and thus yield a speedup over its classical analogue, only if the input data is preencoded into log N qubits rather than read in directly from a file of data. These log N qubits are in a superposition of N quantum states, and the coefficient on each state represents the data sequence to be transformed. This is shown in Figure 3.1. Applying the QFT algorithm to this input changes the state of the log N qubits such that the new coefficients are the Fourier transform of the input coefficients. Of course, since the output is a quantum state, there is no way to directly read these values. When the output is measured, only one of the N possible classical output states is observed. The probability that any of the N states will be observed is the square of the absolute value of the coefficient of that state, which is also the square of its Fourier transform value.
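The sampling statistics just described can be checked numerically. In the sketch below (NumPy; the classical FFT stands in for the QFT, which computes the same transform up to a sign convention in the exponent), the input is the alternating eight-element sequence of Figure 3.1, and all of the measurement probability lands on index 4:

```python
import numpy as np

# The eight coefficients of the 3-qubit state in Figure 3.1: equal
# magnitudes, with the sign alternating (four full cycles across the
# sequence).  Dividing by sqrt(8) normalizes the state.
a = np.array([1, -1, 1, -1, 1, -1, 1, -1]) / np.sqrt(8)

b = np.fft.fft(a, norm="ortho")   # unitary DFT, the transform the QFT applies
probs = np.abs(b) ** 2            # measurement probabilities |b_i|^2

# All of the probability concentrates on index 4 (binary 100), the
# repetition frequency of the input pattern.
assert np.argmax(probs) == 4
assert np.isclose(probs[4], 1.0)
```

For less perfectly periodic inputs, the probability spreads over several indices, and measurement returns the index of a strong pattern only with high probability rather than with certainty.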
Performing a QFT on a set of qubits and then measuring their final state accomplishes the same task as what is referred to classically as Fourier sampling. It turns out that sampling the output of the Fourier transform is useful in some cases for finding structure in a sequence of numbers, as illustrated in Figure 3.1. Notice that the coefficients of the input data are periodic, with four periods in this sequence. This periodicity causes the amplitude of state |100⟩ to be much larger than all the others, so with high probability, measuring the final system state will return 100 (binary for 4), revealing that the input sequence repeated 4 times, that is, had a repeat distance of 2. This example illustrates the power and pitfalls of quantum computing. If the initial input superposition already exists, the Fourier transform can be performed on the superposition coefficients exponentially faster than would be possible classically. However, at the end of this operation, one samples only one of the N states, rather than obtaining the entire set of output coefficients. Furthermore, it is not clear in general how to create the input superposition without taking O(N) time, although this becomes less of a problem if the QFT is performed on a preloaded input quantum state as one step in a longer algorithm. The QFT, which cleverly leverages the characteristics of quantum computation, is useful in constructing a host of quantum algorithms. Examples include quantum factoring, finding hidden structure, and quantum phase estimation.

3.1.2 Quantum Factoring and Finding Hidden Structures

Shor's discovery of polynomial-time algorithms for factoring and calculating discrete logarithms [12] was a major breakthrough for the field of quantum algorithms, both because of the apparent speedup compared to the classical algorithms and because of the implications of this speedup for known applications.
At their heart, both algorithms may be seen as an ingenious way of exploiting the exponential speedup of the QFT, even given the input and output limitations of Fourier sampling. To be able to use the power of the QFT, Shor first converted the problem of finding the factors of a number to a problem that involves finding a repeating pattern, exactly what the Fourier transform detects. Shor was able to show that the factoring problem is equivalent to the problem of finding the period in a sequence of numbers, albeit a sequence that is exponentially longer than the number of bits of the number to be factored. Thus, while this equivalence does not provide any help in solving the problem on a classical computer (since it would need to generate this sequence of 2^n numbers for an n-bit number to factor, which would take an exponential amount of time), it is a perfect problem for a

quantum computer. In a quantum computer, the exponentially long sequence can be encoded into merely n qubits and generated in a time that is polynomial in n. Once that sequence is generated, the QFT can be used to find the period. The fact that the returned result is only a sample of the output Fourier transform amplitudes is not limiting, since the desired information is highly likely to be what a measurement would sample. Shor's algorithm, if deployed on a perfect quantum computer, would make it possible to compute the secret key of the most widely used public-key cryptosystem, RSA. In addition, it would be able to compute the secret key of other widely used public-key cryptosystems, such as Diffie-Hellman and elliptic curve cryptography. The implications of quantum computing for cryptography are discussed in more detail in Chapter 4. Shor's quantum algorithms for factoring and discrete log can both be regarded as examples of finding hidden algebraic structure, related to a well-known mathematical problem called the "hidden subgroup problem" [13,14]. Currently, there are quantum approaches for solving some cases of this problem efficiently, specifically for so-called Abelian and closely related groups (characterized by their symmetry properties). On the other hand, the problem is expected to be hard for the so-called dihedral symmetry group. This hard problem is closely related to another, called the shortest vector problem, the basis of the learning with errors (LWE) cryptosystem, one of the proposed post-quantum (that is, quantum-resistant) ciphers described in Chapter 4.

3.1.3 Grover's Algorithm and Quantum Random Walks

While the QFT underlies many quantum algorithms, another class of algorithms takes advantage of a different method, called the "quantum random walk." This method is analogous to classical random walk methods, which probabilistically simulate progress in traversing some terrain.
Grover's algorithm addresses the specific problem of finding the unique input to a given function that will yield a certain output [15].3 Classically, this is a basic NP-hard search problem; that is, there are no known polynomial-time solutions. In the absence of information about the nature of the function, the fastest known classical algorithm for this problem is exhaustive search, the exploration of all possible inputs to find the answer, a process that takes O(N) = O(2^n) steps, where n is the number of bits required to represent the input. Grover's algorithm solves this problem in O(√N) steps. While this is only a polynomial speedup over the best classical approach, it could nonetheless be significant in practice. As will be discussed in the next chapter, this could be sufficient to compromise additional cryptographic operations. Moreover, this is the optimal quantum algorithm for this problem, in light of the result of Bennett et al. [16] showing that any quantum algorithm must take at least on the order of √N steps to solve this problem in the black box model. The problem with the classical exhaustive search approach is that the systematic testing of each possible answer is a blind guess-and-check: each query provides no information about the answer until it is actually found. To get around this problem, Grover's algorithm proceeds by iterating a pair of operations on the qubits. The first effectively flags the state corresponding to the correct answer by changing the sign of its coefficient. The second, called the Grover diffusion operator, is then able to slightly increase the magnitude of this coefficient. Together, the two steps comprise a so-called Grover iteration, each application of which increases the probability that the correct answer will be read out upon measurement.
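The two-step Grover iteration can be mimicked classically by tracking all N amplitudes directly, which is exponentially costly on a classical machine but useful for illustration. In the sketch below (the marked index and problem size are arbitrary choices of ours), about (π/4)√N iterations concentrate nearly all of the probability on the marked entry:

```python
import numpy as np

# Statevector simulation of Grover's algorithm for N = 2^n entries with
# one marked index.  The "oracle" flips the sign of the marked amplitude;
# the diffusion step reflects every amplitude about the mean.
n, marked = 10, 573
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))        # uniform superposition

def grover_iteration(state):
    state = state.copy()
    state[marked] *= -1                   # oracle: phase-flip the answer
    return 2 * state.mean() - state       # diffusion: invert about the mean

# About (pi/4) * sqrt(N) iterations suffice.
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    state = grover_iteration(state)

assert np.argmax(np.abs(state) ** 2) == marked
assert np.abs(state[marked]) ** 2 > 0.99
```

Note that iterating further does not keep improving the answer: the success probability oscillates, which is why the iteration count must be chosen close to (π/4)√N.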
This procedure for increasing the amplitude of the state(s) containing the correct answer is an example of a general algorithmic approach called amplitude amplification [17], which is useful in a number of quantum algorithms. The sequence of amplitude amplification operations can be viewed as a sort of quantum random walk; however, Grover's algorithm does the "walk" backward, from a distributed state (analogous to all possible endpoints of a random walk from a given starting point) back to a state focused around the single

3 The problem can be more formally phrased as: if f is an efficiently computable binary function that operates on strings of length n, find x such that f(x) = 1.

correct component (analogous to the starting point of the walk). A classical random walk can explore an area proportional to the square root of the number of steps; the quantum random walk can explore an area proportional to the number of steps. Hence, the quantum algorithm provides a quadratic speedup. This technique is very versatile and has led to a number of quantum algorithms providing polynomial speedups for specific computational tasks. For example, there is a quantum walk-based algorithm for solving the basic problem of determining whether the player making the first move has a winning strategy in a combinatorial game (such as chess). The naïve classical algorithm involves an exponential search of the possible moves and outcomes, called the "game tree," while the quantum algorithm provides the quadratic speedup described above. More generally, the quantum algorithm provides a quadratic speedup for evaluating any AND-OR formula [18,19]. While Grover's algorithm is often referred to as quantum "search," this is not really a valid application of the technique. To perform a true quantum search, the set of data to be searched must first be represented as a superposition of quantum states, and for a quantum algorithm to provide any speedup, this representation would need to be created in a time much less than the number of data points, N, somewhere between O(1) and O(log N). In the classical case, this data would simply be stored in random access memory (RAM) and retrieved when needed. However, while RAM is a key element of classical computing, there is currently no robust, practical RAM equivalent that generates the needed quantum superposition state for a quantum computer. It has been proposed that a quantum version of random access memory, or QRAM, could deliver this data in O(log N) time [20], although this has not been practically demonstrated.
To achieve this, a classical data storage unit would be supplemented with quantum logic around the memory cells. A classical analogue to this structure, called a content-addressable memory, or CAM, exists and solves this search problem in O(log N) time. However, with both CAM and QRAM, getting the data into the device in the first place still takes O(N) time, so either approach will be useful only when multiple queries are performed on the same data set; that is, the utility of CAM, and of QRAM if it can be built, grows in direct proportion to the number of times the input can be reused.

3.1.4 Hamiltonian Simulation Algorithms

Simulating the dynamics of quantum systems is the most natural and obvious application of quantum computers and was the motivation for Richard Feynman's pioneering exploration of quantum computing [21]. Quantum algorithms can exponentially outperform classical ones when simulating a system with many quantum degrees of freedom, with applications including problems in chemistry, materials science, condensed matter, nuclear physics, and high-energy physics. The general objective in simulating a quantum system is to determine its structure or behavior, given knowledge of its components and the environment in which it exists. For example, a simulation could be used to elucidate the structure of a substance, or the behavior over time of a collection of interacting particles. These problems have a variety of applications, from informing the development of new industrial materials to solving important physics problems. In general, these simulations require knowledge of the Hamiltonian (energy operator) describing all elements and interactions of the system. From there, one can either solve for the ground-state wave function of the system (in the time-independent picture), or, given some initial state of the system at a time t0, compute a close approximation to the quantum state at a future time t.
Scientists have been performing classical simulations of quantum systems for decades, either restricting attention to small systems or relying upon approximate methods that trade accuracy for computational efficiency. Accurate models are so compute-intensive (given the intrinsic high dimensionality of quantum systems) as to be inadequate for all but small systems. A quantum, rather than classical, simulation is naturally better equipped to explore the state space spanned by quantum systems. In principle, quantum simulation can proceed via at least three general approaches, each of which promises more efficient solutions in certain scenarios. The first approach

involves implementation of time-evolution algorithms on a gate-based quantum computer, commonly referred to as "Hamiltonian simulation." The second is a variational approach for obtaining approximations to quantum states using quantum computers, and will be discussed later in this chapter. Last, in the field of analog quantum simulation, dedicated quantum systems, although not full-blown quantum computers, are built to emulate specific Hamiltonians. While this hardware is likely to be much simpler than a gate-based machine solving the same problem, the downside of the analog simulation approach is that the hardware has limitations on the Hamiltonians it can create, so the resulting system is special-purpose, and the application and simulator need to be co-designed. In addition, unlike digital quantum computations, which can be protected using fault-tolerant protocols, the ability to perform analog quantum simulation in realistic, noisy environments is less well understood. In the time-evolution Hamiltonian simulation algorithms, the form of the Hamiltonian, and potentially its own dependence on time, as well as the initial state of the system, must be provided as inputs. The algorithm begins by initializing the qubits into the initial system state or an approximation to it. Then, the system is advanced through time, or "propagated," according to its Hamiltonian, in discrete intervals Δt, for the number of iterations required to arrive at the time of interest, tf. In practice, the overall Hamiltonian is usually represented as a sum of smaller, so-called local Hamiltonians, each of which acts on only a component of the larger system, which provides a useful decomposition (more generally, the Hamiltonian can be simulated efficiently provided it is sparse and the nonzero entries in any given row can be efficiently located and computed).
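The decomposition into local terms is what makes the propagation implementable as a short gate sequence: evolving under each small term for a short interval Δt, and repeating, approximates evolution under the full Hamiltonian. The sketch below illustrates this first-order (Trotter) decomposition classically, for a toy two-qubit Hamiltonian of our own choosing, with dense matrix exponentials standing in for gates:

```python
import numpy as np

def U(H, t):
    """exp(-i*H*t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H1 = np.kron(Z, I2) + np.kron(I2, Z)   # local (single-qubit) terms
H2 = np.kron(X, X)                     # two-qubit interaction term
t, steps = 1.0, 1000

# First-order Trotterization: alternate short evolutions under H1 and H2
# for Delta t = t/steps, repeated 'steps' times.
U_step = U(H1, t / steps) @ U(H2, t / steps)
U_trotter = np.linalg.matrix_power(U_step, steps)

# The approximation error shrinks as Delta t does.
assert np.linalg.norm(U_trotter - U(H1 + H2, t)) < 0.01
```

Because H1 and H2 do not commute, the product of the two short evolutions is only approximately the evolution under their sum; making Δt smaller (more steps) drives the error down at the cost of a longer gate sequence, which is one source of the running-time trade-offs in these algorithms.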
For the process to proceed efficiently, the method of encoding the initial state in qubits and of representing the time propagation as a gate sequence must be carefully chosen for the system in question. The first concrete quantum algorithms for gate-based Hamiltonian simulation were developed in the mid-1990s [22], and additional methods for different kinds of quantum systems have followed, along with algorithmic insights that have yielded significant reductions in running time [23-29].4 Efficient Hamiltonian simulation on a quantum computer would enable important speedups for problems in quantum chemistry and materials simulation [30,31]. In particular, the electron correlation problem has been one of the most challenging problems for classical methods to tackle [32]. Understanding and predicting the complex reaction mechanisms involved in, for example, a transition-metal-catalyzed chemical transformation requires extremely accurate electronic structure approaches. Classically, even molecules with fewer than one hundred strongly correlated electrons are beyond the reach of ab initio methods at chemical accuracy. Quantum computers promise exponential speedups for the simulation of the electronic structure problem, and it has been shown that they would enable efficient elucidation of chemical reaction mechanisms [33]. Here, a quantum computer could enable researchers to compute or confirm the energies of chemical intermediates and transition states, and in turn help to determine accurate activation energies for chemical processes, which are important for understanding the kinetics of chemical reactions [34]. Strongly correlated species involved in chemical reactions where classical approaches presently fail arise in problems such as photochemical processes, nitrogen fixation, C-H bond breaking, carbon dioxide fixation and transformation, hydrogen and oxygen production, and other transition-metal catalysis problems.
These applications extend to important industrial applications, including fertilizer production, polymerization catalysis, and clean energy processes [35]. Hamiltonian simulation is also used within quantum algorithms for solving complex correlated-material problems [36], which may have application in, for example, the search for a high-temperature superconductor. Quantum computers promise an exponential speedup over classical approaches for the time evolution of quantum systems. Thus, quantum computers may have the most impact in their application to problems in quantum chemistry, for example as applied in pharmaceutics, and materials science [37]. However, there are many types of Hamiltonians that will require new methods if they are to become efficiently solvable on a quantum computer. For example, to model the electronic structure for

4 Specifically, discrete-time random walks; see A.M. Childs, 2010, On the relationship between continuous- and discrete-time quantum walk, Communications in Mathematical Physics 294(2):581-603.

applications in quantum chemistry [38], the Hamiltonian of an n-orbital system involves O(n^4) terms, which means a low-error quantum computer will be required for its computation. Classical approaches to solving such problems have leveraged an understanding of the system's physical structure to create tailored techniques [39]. Researchers have recently combined these techniques with the existing framework for quantum Hamiltonian simulation, leading to rapid progress in quantum algorithms for such problems [40-48]. Hamiltonian simulation has also proven to be a powerful tool for developing quantum algorithms for problems with no immediate connection to quantum mechanics. A prominent example is the recent development of a new class of quantum algorithms that directly perform linear algebra on the quantum state; this is discussed next.

3.1.5 Quantum Algorithms for Linear Algebra

Linear algebra, a foundational area of mathematics, is useful in a range of contexts, from the science of quantum mechanics to computer graphics to methods in machine learning. The general task of linear algebra is to find the solution to a system of linear equations, that is, one or more equations in which the sum of a set of independent variables, each scaled by some coefficient, is equal to a constant value. Mathematically speaking, such a problem can be written in matrix form as Ax = b, where A is an N × N matrix whose elements are the coefficients on the variables in the equations, x is a column vector whose elements are the variables to be solved for, and b is a column vector of the constants. A quantum algorithm for such applications, termed HHL after its developers Harrow, Hassidim, and Lloyd, makes use of methods from Hamiltonian simulation [49]. It assumes that the input vector b is given as a quantum state of log N qubits, |b⟩ = Σ_i b_i |i⟩. It also assumes that the matrix A is sparse and that its entries are accessible via an easy-to-compute function.
Moreover, it computes the output vector x in the form of a quantum state of log N qubits, |x⟩ = Σ_i x_i |i⟩. At the heart of the HHL algorithm lies one of the basic quantum algorithmic building blocks: Kitaev's quantum phase estimation algorithm. This is a procedure for exponentially fast estimation of the eigenvalue (or phase) of an eigenvector of a unitary operator. It is relevant to linear algebra because inverting the matrix A is easy if its eigenvalues are known. The running time of the HHL algorithm scales as a polynomial in log N and the condition number of A. Of course, access to the solution x is restricted to information that can be readily extracted from the quantum state |x⟩. For a given A and b, the algorithm outputs a quantum state whose N amplitudes are proportional to the N elements of the solution x. Although the solution is present in the quantum computer, the rules of quantum mechanics prevent it from being directly read out. However, if one is interested only in certain expectation values of the solution, one can obtain this result with a number of gates that has poly(log N) cost [50]. Linear algebra problems can be solved with a classical computer using memory and running time that scale as poly(N), so a quantum computer would use exponentially fewer resources and less time for solving this more restricted problem. Recent related work has shown similar results for solving linear differential equations [51] and performing convex optimization [52], under the assumption that the input matrix A is very sparse (that is, most of its coefficients are zero), since the algorithm's running time is polynomial in the number of nonzero elements per row. As with the preceding algorithms, this exponential speedup comes with a number of important caveats. As previously mentioned, reading the output provides only an index i with probability proportional to |x_i|².
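To make this restriction concrete, the following classical sketch (using NumPy; the matrix, vector, and observable are arbitrary illustrative values, not taken from the text) mimics what HHL exposes: not the solution vector x itself, but sampling probabilities |x_i|² and expectation values of simple observables on the normalized solution state.

```python
import numpy as np

# A small Hermitian system A x = b (illustrative values, not from the text).
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)

# HHL would not return x itself; it prepares a state |x> whose amplitudes are
# proportional to x.  Normalizing mimics that encoding:
x_state = x / np.linalg.norm(x)

# Sampling the state yields index i with probability |x_i|^2 ...
probs = x_state ** 2

# ... and expectation values <x|M|x> of simple observables remain accessible.
M = np.diag([1.0, -1.0])          # e.g., a Pauli-Z-like observable
expectation = x_state @ M @ x_state
```

Only quantities like `probs` and `expectation` survive measurement; recovering all N entries of x would require many repetitions, erasing the speedup.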
Thus, one major issue in using this algorithm is finding settings where this limited information is useful. One example of such a setting is recommendation systems, where past ratings of several products by a group of users (specified by a matrix) are used to provide personalized recommendations to individual users. The recommendation is a product, which is specified by an index. A quantum algorithm for this problem was found with an exponential speedup over existing classical algorithms [53]. Recently, this quantum algorithm inspired a new classical algorithm which is only polynomially slower than the quantum algorithm [54].⁵ Another issue is that this exponential speedup holds only if both the input b and the matrix A are already encoded in log N qubits, or if they can be encoded into qubits in poly(log N) time. This precludes reading in the data at run time, since simply reading the data to create this state would take at least O(N) time. The exponential speedup is possible only if this data was already prepared as a quantum state before the start of the algorithm, or if some method is found to prepare it efficiently. As previously mentioned, the inability of current quantum processors to read in large amounts of data efficiently is a common challenge in quantum algorithm development; an efficient solution to this problem will likely be required for many algorithms to be useful in practice. Of course, even if this issue is not resolved, quantum algorithms can still provide polynomial speedup where classical algorithms require O(N²) or more steps to process the input, since a quantum computer can read the data in O(N) steps.

3.1.6 Required Machine Quality

The algorithms described in this section illustrate the types of tasks which, when executed on a quantum computer, would lead to an enormous computational advantage. For interesting problem sizes, they mostly require thousands of qubits, a few orders of magnitude more than current machines. Unfortunately, these algorithms also need to perform a very large number of qubit gate operations, on the order of 10¹² or even as high as 10¹⁸ operations [55]. For the final results to be correct, the gate error rate must be correspondingly small (on the order of 10⁻¹² to 10⁻¹⁸).
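A back-of-envelope calculation shows why the per-gate error rate must scale roughly as the inverse of the operation count (a sketch; the independence assumption and the specific numbers are illustrative):

```python
import math

# Illustrative numbers from the text: ~10^12 gate operations.
G = 10 ** 12      # total gate operations in a large algorithm
p = 10 ** -12     # per-gate error rate

# With independent errors, the chance that every gate succeeds is (1 - p)^G,
# approximately e^(-p*G).  A reliable result therefore needs p*G << 1:
p_success = (1.0 - p) ** G
```

Even at p = 1/G the whole run succeeds only about e⁻¹ ≈ 37 percent of the time, which is why error rates well below 10⁻¹² would be needed for the largest algorithms.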
As explained in Chapter 2, unlike today's classical computers, whose gates achieve very low error rates by actively rejecting noise and producing outputs with less noise than their inputs contain, quantum gates have much higher error rates. As shown in Chapter 5, current quantum computers have error rates in the 10⁻² to 10⁻³ range, and they are unlikely to natively reach the error rates needed to run these quantum algorithms. Quantum error correction is one way to overcome this limitation, and it is described next.

3.2 QUANTUM ERROR CORRECTION AND MITIGATION

Two general approaches have been developed to reduce errors in quantum systems: correction and mitigation. Of the two, quantum error correction (QEC) is the only way to dramatically lower effective error rates. This approach involves encoding the quantum state using many redundant qubits and then using a QEC code (QECC) that exploits this redundancy of information to emulate stable qubits with very low error rates, often called "fault-tolerant" or "logical" qubits. The states of some of these additional qubits are periodically measured, and a classical computing device "decodes" this information to determine which qubits have errors. Given this information, the errors can be corrected. Each logical qubit requires a large number of physical qubits and many quantum gate operations (and classical computation) to achieve and maintain its state. Gate operations on the more robust logical qubit, which exists only as an abstraction, must be translated into operations on the underlying physical qubits. Thus, QEC incurs costs, or "resource overheads," of both additional qubits for each logical qubit and additional quantum gates for each logical operation. Quantum error correction is an active area of quantum algorithm research, with the goal of dramatically lowering the overheads in qubits and time needed to achieve fully error-free operation.
Much of this research has focused on studying surface codes and the larger class of topological codes of which they are a part. Current codes for gates with error rates of 0.1 percent still have high overheads (15,000×) to create a logical qubit. Until a breakthrough occurs in either gate error rate or QEC code overhead, near-term machines will not be able to achieve logical qubits, leading to machines that must deal with noise and errors (NISQ computers). In the shorter term, researchers have turned to approaches for quantum error mitigation (QEM), and they may use QEC to lower, but not eliminate, errors as error rates fall.

5 In this manner, progress in quantum algorithms has commonly spurred new advances in classical algorithms. This is discussed further in Chapter 7.

3.2.1 Quantum Error Mitigation Strategies

Compared to QEC, QEM has the more modest goal of reducing the effective error rate of a quantum calculation to support simple computations, or, for non-gate-based quantum approaches, of extending the coherence of imperfect qubits [56,57] for durations long enough to complete short algorithms. Since a lower error rate lowers the overhead when using QEC, many of these mitigation strategies would also be used alongside error correction. Two error-mitigation approaches that are widely used today are the application of composite pulses and dynamical decoupling methods. Although such techniques do not suppress all types of errors, they can be designed to mitigate known systematic errors (composite pulses) or coherent dephasing errors (dynamical decoupling sequences). For both analog and digital quantum computers, error suppression techniques are being developed based on "energy penalties" to suppress specific types of errors. These approaches work by encoding the qubits strategically in ways that make these errors less energetically favorable and therefore less likely. In addition, both types of computers may take advantage of "decoherence-free subspaces," where multiqubit architectures are designed in ways that make the qubit system insensitive to certain channels of noise. Since these techniques suppress only certain types of errors, the error-rate improvement will depend on the system and may be modest. QEM is expected to be particularly important for analog quantum computers, as full QEC is not currently understood to be practically achievable on these systems.
While QEC is corrective (it measures errors and then fixes them), QEM methods are preventative and attempt to reduce the adverse impacts of noise and the probability of errors.

3.2.2 Quantum Error Correction Codes

The first quantum error correction codes were developed in the mid-1990s [58,59]. Further work has provided practical insights into the error threshold, that is, the maximum allowable error rate of every physical gate in an actual device for which QEC will correct more errors than it introduces [60,61]. However, achieving both the number and the fidelity of qubits required to successfully implement QEC and enable fault-tolerant computing has proven challenging. In classical computing, one of the simplest types of error correction codes, called a "repetition code," copies each bit of information into several bits to preserve the information through redundancy. All gate operations are also replicated to maintain this redundancy. These bits all have the same value unless an error occurs, which would result in one of the bits being set to the wrong value. Since the likelihood of any error arising is small, the correct value can be identified as that held by the majority of the copies. The "distance" of an error correcting code is the minimum number of errors needed to convert one valid representation of data into another valid representation. A repetition-by-3 code (each codeword is either 000 or 111) is a distance-3 code, since one must change all three bits to go from one valid representation, 111, to the other, 000. In general, a distance-D code can correct (D−1)/2 errors, so the repetition-by-3 code can correct one error. This makes sense, since if only a single error occurred, the majority of the bits still represent the right value. Approaches to QEC are similar to this classical approach.
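The repetition-by-3 code can be sketched in a few lines, including a comparison-based decoder that foreshadows the quantum syndrome measurements described next (a classical sketch; a quantum code never reads the data values directly):

```python
def encode(bit):
    """Repetition-by-3: replicate one bit across three."""
    return [bit, bit, bit]

def decode(bits):
    """Locate up to one flipped bit using only pairwise comparisons."""
    s1 = bits[0] ^ bits[1]        # do bits 0 and 1 agree?
    s2 = bits[1] ^ bits[2]        # do bits 1 and 2 agree?
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flipped is not None:
        bits[flipped] ^= 1        # correct the identified error
    return bits[0]

codeword = encode(1)
codeword[2] ^= 1                  # inject a single bit-flip error
recovered = decode(codeword)      # -> 1, the original bit
```

Note that the two comparison results (the "syndrome") pinpoint the error without ever revealing the encoded value, which is exactly the property the quantum version needs.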
However, the precise implementation of QEC requires vastly different techniques than the classical repetition code, both because quantum information cannot be directly copied, as described by the no-cloning theorem [62], and owing to the additional types of errors that can occur in quantum gates. Nonetheless, QEC protocols have been developed that enable the encoding of a logical qubit into a distributed fabric of physical qubits. Since these qubits hold the quantum state, none of them can be directly measured: any measurement would cause the quantum state to collapse and destroy the computation. Instead, two qubits, which should have the same value, are compared to each other, and all one reads out is whether these two qubits agree or disagree. This measurement does not reveal the value of the qubit, so it does not cause the quantum state to collapse. The qubits that are measured are sometimes called the "syndrome" or "ancilla" qubits (Box 3.1). From all these comparison measurements and knowledge of the QECC used, a classical computer can compute which qubits have errors and what type of error each has. Thus, it can provide the quantum operations that need to be applied to remove the errors from the quantum state. While these operations can be applied directly to the physical qubits, it is often more efficient for the software to apply these corrections "virtually," modifying future operations to account for the errors rather than adding a separate step just to correct them. The classical algorithm that takes the syndrome measurements as input and computes which qubits have errors, also called a "decoding algorithm" or "decoder," grows in complexity as the distance of the code increases to handle higher error rates. If the error rate is close to the error threshold, not only are the overheads very high, but the decoding algorithm is more complex as well. If the error rates are low, or very few logical qubits are required to run the algorithm, then a small lookup table can be used as the decoder.

BOX 3.1 The Use of Ancilla Qubits for Quantum Error Correction

For error correction, one needs to replicate the state of a qubit onto a number of qubits. While the no-cloning theorem prevents one from copying the state of a qubit directly onto another, one can create a redundant entangled state of many qubits. The key is that the qubits to be entangled must start out in a known state.
Qubits with a known state (for purposes of this discussion, the state |0⟩), called "ancilla qubits," may be added to a computation for this purpose. Since the state of the ancilla qubits is known, it is possible to create a simple circuit that makes the output state of all these ancilla qubits match the protected qubit: run each ancilla through a controlled-NOT gate, where the control is driven by the qubit that needs to be replicated. Assume that there is a qubit with state |ψ⟩ that we want to protect, where |ψ⟩ = α|0⟩ + β|1⟩ represents an arbitrary superposition. In the CNOT gate, the ancilla's |0⟩ state will remain |0⟩ for the |0⟩ component of |ψ⟩, but it will be converted to |1⟩ by the |1⟩ component of |ψ⟩. The result of this operation is the entangled two-qubit state α|00⟩ + β|11⟩, creating a system in which the ancilla qubit is now perfectly entangled with the first qubit. Adding more ancillas increases the distance of the repetition code.

The computational complexity of the error decoder might be an issue, since running QEC tightly couples the qubits of a quantum computer and the classical control processor that decodes the errors and selects the next quantum gate operations to be performed. At a high level, the following operations are needed. First, the control processor sends a quantum operation to the qubits, and some time is needed to perform the operation. Second, the syndrome qubits must be measured and the results sent back to the control processor. Third, the control processor must use these measurements to decode which errors are present, and, fourth, it must update its future operations to account for these errors. It is simplest for the quantum computer if the classical computer can decode the error state without slowing down the next quantum operation. For a superconducting quantum computer, this means the classical computer has only a few hundred nanoseconds (on the order of a thousand instructions on a modern processor) to decode the errors.
If this is not possible, either custom hardware to speed up the computation, or a change to the QEC algorithm that allows additional quantum operations to occur before the error information is decoded, could be used to address the issue. If neither technique is applied, the added time will slow down the effective speed of the quantum computer, with delays between gates leading to additional decoherence and higher error rates.
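The ancilla-entangling step described in Box 3.1 can be verified with a small statevector calculation (a NumPy sketch; the amplitudes 0.6 and 0.8 are arbitrary, chosen only because they normalize):

```python
import numpy as np

# Arbitrary single-qubit state |psi> = a|0> + b|1> (normalized).
a, b = 0.6, 0.8
psi = np.array([a, b])
ancilla = np.array([1.0, 0.0])          # ancilla prepared in |0>

# Two-qubit state |psi>|0>, basis order |00>, |01>, |10>, |11>.
state = np.kron(psi, ancilla)

# CNOT with the data qubit as control and the ancilla as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ state

# Result is a|00> + b|11>: the ancilla is now entangled with the data qubit,
# without the data qubit's state ever having been measured or copied.
```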

3.2.3 Quantum Error Correction Overhead

The number of physical qubits required to encode a fault-tolerant, logical qubit depends on the error rates of the physical quantum device and on the required distance, or protection capacity, of the quantum error-correcting code chosen. As a simple example, consider the so-called Steane quantum error correction code. This approach encodes a single logical qubit into seven physical qubits and has a distance of three,⁶ which means that it can correct a single error. To achieve a higher-distance protocol (one that can correct additional errors) using the Steane code, one can use a recursive approach called "concatenation." This essentially entails applying the Steane code to a set of physical qubits and then applying it again to the corrected qubits, using the output of the first level of correction as the better qubits to be used in a subsequent level. Multiple levels can be stacked until the desired degree of error protection is achieved. In general, concatenating a QECC that encodes k logical qubits into n physical qubits and has distance d, written as [[n, k, d]], scales to an [[n^L, k, d′]] code for L levels of concatenation, where d′ ≥ d^L. That is, it requires n^L physical qubits per logical qubit. For example, three levels of concatenation of the Steane code would require 343 physical qubits to encode a single logical qubit and would achieve a distance of at least 27. This qubit overhead is smaller than that of many other QEC approaches. However, the Steane code requires error rates lower than 10⁻⁵, which is much lower than current machines achieve. Other concatenation codes have higher qubit overheads but can accommodate higher error rates. Finding better codes is an active area of research.
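The concatenation arithmetic can be checked directly (a sketch; the [[7, 1, 3]] parameters are those of the Steane code discussed above):

```python
n, k, d = 7, 1, 3    # Steane code: [[n, k, d]] = [[7, 1, 3]]

def concatenate(levels):
    """Physical qubits per logical qubit and minimum distance after L levels."""
    return n ** levels, d ** levels

qubits, distance = concatenate(3)   # -> (343, 27), as in the text
```

The exponential growth of n^L is why concatenation, despite its conceptual simplicity, is used for only a few levels in practice.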
Another approach to a QECC, the so-called surface code, is less sensitive to physical qubit error rate, with the potential to protect against errors even for quantum device error rates as high as 10⁻² (1 percent), meaning that it corrects more errors than it adds as long as all gates and measurements fail at most 1 in 100 times on average. The surface code's error threshold of 1 percent applies for a device architecture where each physical qubit interacts only with its four nearest-neighboring qubits, which, as Chapter 5 will show, is common in some current quantum computer designs. However, a high error threshold comes at the price of high overhead. A distance-d surface code requires a lattice of (2d − 1) × (2d − 1) physical qubits to encode a single logical qubit. As is apparent from the formula, a surface code with a distance of three (the smallest possible code) requires 25 physical qubits to encode a logical qubit.⁷ While a distance-3 code will not fully correct all errors, since two errors generate an incorrect output, this code reduces the effective error rate. As quantum computers grow in size and decrease in error rates, these smaller codes can be used to improve the effective error rate of the machine, but with a significant reduction in the number of effective qubits. Of course, to effectively remove errors, most quantum algorithms are extensive enough to require a distance much greater than three. For example, to fault-tolerantly perform Shor's algorithm or Hamiltonian simulation for quantum chemistry, the required distance is closer to 35, meaning that approximately 15,000 physical qubits are required to encode a logical qubit, assuming a starting error rate of 10⁻³ [63,64]. Beyond the Steane and surface codes, other more resource-efficient QECCs have been developed; however, as of 2018, such codes either lack efficient decoding algorithms or require error rates that are too low for the NISQ era.
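The surface-code lattice size follows directly from the (2d − 1) × (2d − 1) formula above (a sketch; note that the roughly 15,000-qubit figure quoted for distance 35 also counts measurement qubits and other overheads beyond this bare lattice):

```python
def surface_code_lattice_qubits(d):
    """Qubits in a distance-d surface-code lattice, per the (2d - 1)^2 formula."""
    return (2 * d - 1) ** 2

smallest = surface_code_lattice_qubits(3)    # -> 25, the smallest surface code
large = surface_code_lattice_qubits(35)      # -> 4761 lattice qubits at distance 35
```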
Work in this area is essential to reach the goal of creating a fully error corrected quantum computer.

In addition to the physical qubit overhead of QEC, in order to operate on fault-tolerant, logical qubits, there must be software available at compile time to convert the desired gate on the logical qubits into gates on the actual physical qubits that encode them. This translation would occur directly in the compilation of a quantum algorithm, with each logical qubit and each logical operation replaced according to a QECC and a distance-specific fault-tolerant replacement rule. The replacement rule accounts for the implementation of both the logical gate and the error correction algorithm, including the syndrome measurements and the corresponding classical decoding algorithm. The number of gates and time-steps required to implement each logical gate depends on the logical gate and the QEC algorithm; details of such calculations may be found elsewhere [65].

6 One needs more than 3 qubits for a distance of 3 because the syndrome measurements cannot reveal any information about the actual quantum state (which would force it to collapse).

7 There are some improvements to the encoding cost; however, they are minimal, enabling a distance-3 surface code to use only 13 or 17 qubits. See, for example, Y. Tomita and K.M. Svore, 2014, "Low-distance surface codes under realistic quantum noise," Physical Review A 90:062320.

Finding: Quantum error correction (QEC) algorithms would enable the emulation of a perfect quantum computer using noisy physical qubits, making it possible to deploy practical algorithms. However, QEC incurs significant overheads in terms of both the number of physical qubits required to emulate a logical qubit and the number of primitive qubit operations required to emulate a logical quantum operation.

Arguably the most daunting and costly challenge in quantum error correction is that of achieving a fault-tolerant "universal" set of operations. Existing QEC schemes have developed very cost-efficient replacement rules and other methods for achieving fault-tolerant logical gate operations in the so-called Clifford group (consisting of the Pauli operations, controlled-NOT [CNOT], Hadamard [H], the phase gate S, and their products), as well as measurement in the computational basis. However, achieving universality also requires fault-tolerant implementation of non-Clifford gates (such as the Toffoli gate, or the π/8 gate, also known as T). To do so, one can invoke a variety of techniques. For example, so-called magic state distillation enables improvement of the error rate of a logical non-Clifford operation such as the logical T gate. Another, more recently developed technique, "code switching," switches back and forth between a code that is efficient for Clifford gates and a code that is optimized for non-Clifford gates to achieve universality. Both approaches incur overhead in the form of additional physical qubits, quantum gates, and classical decoding complexity.
The substantial overhead introduced in going from fault-tolerant Clifford gates to a universal set of fault-tolerant gates has been a major driver of research in quantum error correction codes and fault-tolerance schemes. In the case of magic state distillation, several methods have been developed to lower the overhead cost [66]. In its simplest form, albeit not the optimal form in terms of resource overhead, magic state distillation for the T gate can transform a physical T gate with error rate p into a logical T gate with an error rate of roughly 35p³. If this is still too high to implement an algorithm of interest, the procedure can be recursed, achieving 35(35p³)³, and so on for r rounds, resulting in an error rate of roughly 35^((3^r − 1)/2) p^(3^r). In turn, each round requires 15 qubits to perform one improved T gate; thus, r rounds require 15^r qubits (physical or logical qubits may be used, depending on the desired output error rate of the T gate). Thus, while the QEC protocol is costly for Clifford operations and logical qubit encoding, the most costly procedure by far is the fault-tolerant implementation of the non-Clifford gate required for universality [67]. To convey a sense of the resource requirements of Clifford and non-Clifford gates, Table 3.1 provides estimates of the requirements for carrying out an error-corrected quantum simulation of the molecular system FeMoco. This example should be seen as a snapshot of capabilities as of 2017. Progress in quantum chemistry and simulation algorithms is ongoing, and these numbers will likely improve.⁸

Finding: The performance of an error-corrected quantum algorithm will be limited by the number of operations required for its implementation that are the most expensive to error correct. For example, in the case of the surface code QECC, the non-Clifford-group operations require many primitive gate operations to error correct and dominate the overall time (operations) that an algorithm requires.
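The distillation recursion described above can be tabulated in a few lines (a sketch; the 35p³ error model and the 15-to-1 qubit count are the simple, non-optimized forms discussed in the text):

```python
def distilled_error(p, rounds):
    """T-gate error rate after repeated rounds of 15-to-1 magic state distillation."""
    for _ in range(rounds):
        p = 35 * p ** 3     # each round cubes the error, up to the factor of 35
    return p

def distillation_qubits(rounds):
    """Input qubits consumed per improved T state for the given number of rounds."""
    return 15 ** rounds

# One round takes a 10^-3 physical T-gate error to roughly 3.5 x 10^-8,
# at a cost of 15 input states per output state.
one_round = distilled_error(1e-3, 1)
```

The cubic convergence is why only a handful of rounds are ever needed, even though each round multiplies the qubit cost by 15.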
TABLE 3.1 Estimates of the Resource Requirements for Carrying Out Error-Corrected Simulations of a Chemical Structure (FeMoco in Nitrogenase) Using a Serial Algorithmic Approach for Hamiltonian Simulation and the Surface Code for Error Correction

Physical qubit error rate                            10⁻³       10⁻⁶       10⁻⁹
Physical qubits per logical qubit                    15,313     1,103      313
Total physical qubits in processor                   1.7 × 10⁶  1.1 × 10⁵  3.5 × 10⁴
Number of T state factories                          202        68         38
Number of physical qubits per factory                8.7 × 10⁵  1.7 × 10⁴  5.0 × 10³
Total physical qubits including T state factories    1.8 × 10⁸  1.3 × 10⁶  2.3 × 10⁵

NOTE: The table illustrates the trade-offs, for three specific physical qubit error rates, between the number and quality of the physical qubits required to achieve a fault-tolerant implementation of the algorithm. Estimates are based on a requirement of 111 logical qubits for the algorithm instance and physical gate frequencies of 100 MHz. Note that the requirements for distillation (T factories) are far greater than those for the rest of the error correction. The cost of achieving an error-free, non-Clifford gate is orders of magnitude higher than that of encoding the qubits and their other Clifford operations with this particular QECC (surface code and magic state distillation). SOURCE: Adapted from Reiher et al., 2017, "Elucidating reaction mechanisms on quantum computers," PNAS 114:7555-7560.

8 For a recent review of progress in the field, see S. McArdle, S. Endo, A. Aspuru-Guzik, S. Benjamin, and X. Yuan, 2018, "Quantum Computational Chemistry," arXiv preprint arXiv:1808.10402.

Research continues on developing new quantum error-correcting codes and new quantum fault-tolerance schemes with the aim of dramatically lowering the resource overheads required to achieve fault-tolerant quantum computation. Much of this work has coalesced around studying surface codes and variants thereof, a class of code called topological codes [68].⁹ Owing to the numerous unresolved questions about surface codes, researchers continue to find better ways of using these codes [69] and better ways of evaluating and decoding them [70]. When experimental systems reach the size at which interesting fault-tolerance experiments can be run, and these machines can interleave quantum operations and measurement, QEC schemes can be tested in order to verify the theory and analyses.
The real benefit of these experiments will be that researchers working on QEC will see the effects and sources of "real" system errors, rather than relying on theoretical noise models. Insights about the actual errors could enable the development of more efficient QEC codes tailored to the error statistics of the actual machine. Again, minimizing the overhead is critical for deploying fault-tolerance schemes, especially on early quantum devices, which will have a limited number of high-quality qubits. Early demonstrations of limited QEC operation on devices date to as early as 2005, and the basic features of such protocols have been implemented on both superconducting qubit and trapped ion qubit devices. Such experiments have not yet yielded fault-tolerant logical qubits, given the generally poor gate fidelity of physical qubit operations [71-73]. Recently, quantum error detection codes (smaller precursors to QECCs) have been implemented on available quantum processors, with some evidence of success [74,75]. As will be discussed in Chapter 7, successful demonstration of QEC to emulate practical, fault-tolerant logical qubits remains a significant milestone yet to be reached.

9 Topological codes are relatively good performers in terms of noise tolerance and qubit overhead, and they have the advantage of being naturally geometrically local in two dimensions, making them a promising class of codes for physical implementation, although some of the important variants live naturally in three or more dimensions.

3.3 QUANTUM APPROXIMATION ALGORITHMS

Given that the high cost of error correction will preclude its use in early quantum computers, researchers have looked for other approaches to taking advantage of early quantum computers. A promising approach is to forgo the desire to obtain an exact solution to the computational problem and instead use an approximate, or heuristic, approach to solve it. This approach has given rise to a number of quantum and hybrid quantum-classical algorithms for tasks that range from the simulation of many-body systems such as molecules and materials [76-84] to optimization [85] and machine learning applications [86-88]. The goal of these methods is to provide approximate but useful solutions to the problem at hand, with lower resource requirements than other approaches.

3.3.1 Variational Quantum Algorithms

Many problems of interest, in particular problems in quantum chemistry, can be framed as so-called eigenvalue problems. According to the variational principle of quantum mechanics, the computed energy of the ground (lowest-energy) state of a quantum chemical system decreases as the approximations to the solution improve, asymptotically approaching the true value from above. This principle has given rise to iterative classical algorithms for solving these problems, where a crude guess of the solution is the input and a somewhat improved approximation is the output. This output is then used as the guess for the next iteration, and with each cycle the output gets closer and closer to the true solution, without ever overshooting it. This approach can be split between a classical and a quantum algorithm, with the optimization step performed by the quantum processor and subsequently read out, and with a classical control unit deciding whether to perform another iteration. The ability to separate the quantum processing into many small, independent steps, with coherence required only over the course of a single step, makes these approaches a clever way to minimize the qubit fidelity requirements and still obtain a useful result. For this reason, quantum variational algorithms have been suggested as applications for digital NISQ computers. It is worth noting that, of course, these algorithms can readily be carried out on fully error-corrected quantum computers as well.
One specific example is the variational quantum eigensolver (VQE) [89-97], where the problem is broken into the sum of a set of smaller problems that can each be approximated independently, with the sum of all outputs corresponding to the approximate solution of interest. The process is repeated until a heuristic stopping criterion is reached, usually corresponding to the achievement of an energy threshold. The computational power of VQE depends on the form of the assumed quantum state, or ansatz, employed. Some ansätze are defined purely by convenient circuit forms that are readily accessible to hardware, whereas others are designed to capture specific types of quantum correlations. The VQE algorithm is believed to become competitive with a classical computer at the similar task of approximating the wave function and properties of a many-body system of interest when the number of qubits in the quantum register and the depth of the quantum circuit employed generate states that are intractable to prepare on a classical computer. The specific number of gates and qubits at which this occurs is heavily dependent on the type of algorithm, but a very rough estimate for quantum simulation applications would be hundreds of qubits and tens of thousands of quantum gates [98]. A related approach is the quantum approximate optimization algorithm (QAOA) [99], an algorithm for preparing a variational guess of a wave function that satisfies an optimization problem, such as the satisfiability problem. The algorithm follows a procedure similar to that of the VQE algorithm, namely, a series of preparation and measurement experiments followed by optimization on a classical computer. The resulting quantum state, when sampled, provides approximate or exact solutions to the computational problem.
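The hybrid loop described above can be caricatured for a single qubit (a classical simulation; in a real VQE the energy evaluation would run on quantum hardware, and the grid search stands in for a proper classical optimizer):

```python
import numpy as np

# Toy VQE for a single qubit with Hamiltonian H = Z (ground energy -1).
# Ansatz: |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, i.e., Ry(theta)|0>.
def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.diag([1.0, -1.0])
    return psi @ Z @ psi          # <psi|H|psi>; measured on hardware in a real VQE

# Classical outer loop: a crude grid search stands in for the optimizer.
thetas = np.linspace(0.0, 2.0 * np.pi, 201)
best = min(thetas, key=energy)

# By the variational principle, energy(best) approaches the true ground
# energy of -1 from above as the ansatz and the optimizer improve.
```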
3.3.2 Analog Quantum Algorithms
In addition to algorithms that require a gate-based quantum computer, there is a set of approaches that work by directly representing the task in terms of a Hamiltonian, which may or may not vary with time. The desired result is encoded in the system state at the end of the simulation run. "Direct quantum simulation," where the Hamiltonian created is analogous to that of the quantum system being explored, is one example of this type of approach, and a type of analog quantum computation.
PREPUBLICATION COPY - SUBJECT TO FURTHER EDITORIAL CORRECTION 3-16
Examples of direct quantum simulation include the realization of spin Hamiltonians [100] or the study of quantum phase transitions [101-103]. Quantum annealing and, more specifically, adiabatic quantum optimization also take this "analog" approach and provide a general-purpose schema for designing quantum algorithms without requiring the abstraction layer of logical operations, or gates. These two approaches are closely related: adiabatic quantum optimization is simply quantum annealing at zero temperature. Adiabatic quantum computation is interesting because one can in principle convert any gate-based quantum computation to an equivalent adiabatic quantum computation (although it might not be an efficient solution method) [104]. These methods require mapping an optimization problem of interest into a Hamiltonian, Hf, such that finding the lowest-energy state, or ground state, of a system defined by that Hamiltonian is equivalent to solving the problem. The algorithm for quantum adiabatic optimization is implemented as follows: a set of qubits begins with a Hamiltonian Hi for which the ground state is known, and Hi is then slowly transformed into Hf. Since a quantum system will remain in its ground state if the Hamiltonian is changed slowly enough (adiabatically), this procedure drags the system from the ground state of Hi to the ground state of Hf. Measurement of the final state provides the answer sought with high probability [105,106]. There was a great deal of excitement about the prospects of such algorithms following work by Farhi et al. [107] giving evidence suggesting that these algorithms could be fast on random instances of 3SAT, a logic satisfiability problem that is equivalent to many other hard problems.
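The adiabatic procedure described above can be illustrated numerically on a toy two-qubit instance; the Hamiltonians, cost values, and schedule below are illustrative assumptions. The system starts in the known ground state of Hi (a transverse field), is slowly interpolated toward a diagonal cost Hamiltonian Hf, and is then "measured": the minimum-cost bitstring should appear with high probability.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Initial Hamiltonian H_i = -(X1 + X2): its ground state |++> is easy to prepare.
Hi = -(np.kron(X, I2) + np.kron(I2, X))

# Final Hamiltonian H_f: a diagonal cost over 2-bit strings whose minimum
# (bitstring "10", cost -2) encodes the solution of a toy optimization problem.
costs = np.array([1.0, 0.5, -2.0, 0.8])
Hf = np.diag(costs).astype(complex)

def evolve(total_time, steps):
    """Integrate the Schrodinger equation under the slowly interpolated
    Hamiltonian H(s) = (1 - s) H_i + s H_f, with s = t / total_time."""
    psi = np.full(4, 0.5, dtype=complex)  # |++>, the ground state of H_i
    dt = total_time / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1 - s) * Hi + s * Hf
        # Exact short-time propagator from the eigendecomposition of H.
        vals, vecs = np.linalg.eigh(H)
        psi = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))
    return psi

psi_final = evolve(total_time=50.0, steps=5000)
p_solution = abs(psi_final[2]) ** 2  # probability of measuring "10"
```

Shortening `total_time` makes the sweep diabatic and the success probability drops, which is the runtime/gap trade-off discussed next.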
The theoretical analysis of this algorithm was quite challenging, since its running time is governed by the spectral gap (the difference in energy between the ground state and the states near it) of the time-evolving Hamiltonian. A sequence of papers analyzed this gap in a number of cases, establishing that there are classes of 3SAT formulae and other NP-complete problems for which the spectral gap of an adiabatic algorithm is exponentially small, which means that for these problems this approach will take time exponential in the size of the problem [108,109]. As a result, the formal power of this type of computing is still not known. Thus, the approach to establishing the speedup of quantum annealing algorithms is largely empirical; researchers directly compare the time required to complete a given task on a quantum annealer with the best times of classical computer systems for arriving at the same result. All real quantum computers operate at a finite temperature. When that temperature corresponds to an energy greater than the spectral gap, an analog quantum computer can only implement quantum annealing rather than quantum adiabatic computation. Quantum annealing is particularly attractive from the viewpoint of experimental realization, with the caveats that theoretical analysis of these algorithms is difficult and that there is no clear theory of fault tolerance for this model. Adiabatic optimization devices, in particular the D-Wave machines, have overcome significant engineering challenges and scaled rapidly to thousands of qubits, albeit with some trade-offs in qubit fidelity. While it initially looked like these devices demonstrated promising speedups for some applications, further work on new classical algorithms for these specific problems has erased these speedups [110].
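The role of the spectral gap can be made concrete with a small numerical sketch; the two-qubit Hamiltonians and cost values here are illustrative assumptions. The minimum gap between the two lowest eigenvalues along the interpolation path is what controls how slowly the evolution must proceed, with the runtime scaling roughly as the inverse square of this minimum gap.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Same style of toy interpolation as in adiabatic optimization: a transverse
# field swept into a diagonal cost Hamiltonian (coefficients illustrative).
Hi = -(np.kron(X, I2) + np.kron(I2, X))
Hf = np.diag([1.0, 0.5, -2.0, 0.8]).astype(complex)

def spectral_gap(s):
    """Energy difference between the two lowest states of H(s)."""
    vals = np.linalg.eigvalsh((1 - s) * Hi + s * Hf)  # sorted ascending
    return vals[1] - vals[0]

grid = np.linspace(0.0, 1.0, 501)
min_gap = min(spectral_gap(s) for s in grid)
# Adiabatic runtime scales roughly as 1 / min_gap**2, so a problem family
# whose minimum gap shrinks exponentially with size forces exponential time.
```

For hard 3SAT instances the analogue of `min_gap` can be exponentially small in the problem size, which is precisely the failure mode established in [108,109].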
Recent work suggests that this reflects the relatively high temperature at which the D-Wave processors operate [111] and the presence of certain analog errors in these devices [112], although this does not rule out the possibility that there could be other fundamental limitations of quantum annealers.
3.4 APPLICATIONS OF A QUANTUM COMPUTER
As is apparent from the preceding discussions, many quantum algorithms have been developed, both for gate-based quantum computers and for quantum annealers. A comprehensive online catalogue of quantum algorithms is maintained by the U.S. National Institute of Standards and Technology (NIST) [113]. While this collection includes a host of algorithms that theoretically offer quantum speedup, this speedup is often the result of a few basic techniques at their core, in particular, the quantum Fourier transform, quantum random walks, and Hamiltonian simulation. Furthermore, most algorithms require a

large number of high-quality qubits in order to be useful, most likely requiring quantum error correction, far beyond the quantum resources available in known prototypical devices. In addition, the current inability to load large quantities of input data efficiently suggests that many of these would be difficult to implement in practice. Furthermore, algorithms are generally not in and of themselves applications; rather, they are building blocks that must be combined in order to perform a useful task. As experimental efforts at realizing quantum computers gain momentum, the near-term challenge is to identify or create quantum applications and the algorithms they require that can be deployed on non-error-corrected devices, preferably useful ones that provide dramatic speedup over classical approaches.
3.4.1 Near-Term Applications of a Quantum Computer
The potential near-term utility of a quantum computer is currently an active area of research. It is expected that such applications are likely to be those that require few qubits, can be implemented with relatively shallow circuits (that is, they require relatively short sequences of gates), and can work on NISQ computers. The approximate algorithms discussed in Section 3.3 are considered to be leading prospects for implementation on near-term analog or digital NISQ machines. While there are many potential commercial10 applications for this class of machine, as of the time of publication of this report (2018), none are certain to provide an advantage over classical approaches when run on a NISQ computer. All of the researchers who spoke to the committee, including those from startups, agreed this was a critical area for research.
Finding: There is no publicly known application of commercial interest based upon quantum algorithms that could be run on a near-term analog or digital NISQ computer that would provide an advantage over classical approaches.
3.4.2 Quantum Supremacy
A necessary milestone on the path to useful quantum computers is quantum supremacy: a demonstration of any quantum computation that is prohibitively hard for classical computers, whether or not the computation is useful. In essence, quantum supremacy is an experimental demonstration that quantum computers violate the extended Church-Turing thesis. Quantum supremacy would also address skepticism about the viability of quantum computers, as well as provide a test of quantum theory in the realm of high complexity. To achieve this, one would need both to create a quantum computer large enough to demonstrate supremacy and to find a simple problem that it can perform but that is hard for a classical machine to compute. A common class of such problems is those where operations are performed on qubits to generate an entangled quantum state, and that state is then sampled to estimate its probability distribution [114]. The first proposal for a good test problem is due to Aaronson and Arkhipov in 2010, in their boson sampling proposal [115], building on earlier work on the classical complexity of sampling problems [116,117].11 They were able to prove that computing the output probabilities of a random system of noninteracting bosons was in a complexity class (#P-hard) corresponding to computations thought to be difficult for classical computers to run.
10 A commercial application is one where someone is willing to pay money for the answer it can provide. It is an application that will bring revenue into quantum computing.
11 The term "quantum supremacy" was coined by John Preskill in 2012, although work in this area began earlier.
Moreover, under the plausible conjecture that these probabilities remain #P-hard to approximate, it would follow that classical computers cannot even sample

a random output of a typical linear-optical network. For a quantum computer, providing such a sample (referred to as "qubit sampling") could amount to a demonstration of quantum supremacy. While boson sampling has been popular with experimentalists, and small-scale implementations have already been achieved in a number of labs, including a 6-photon experiment [118], it remains challenging to push these experiments to the roughly 50 photons necessary to establish quantum supremacy [119]. A different approach for demonstrating quantum supremacy in superconducting qubits was proposed by the Google theory group in 2016 [120]. It was experimentally inspired, with quantum supremacy playing the role of a milestone on the way to building superconducting NISQ computers. Concretely, the proposal, Random Circuit Sampling (RCS), called for implementing a random quantum circuit and measuring the output of the circuit. They conjectured that sampling from the output distribution of such random circuits is a classically hard problem. Recently, strong complexity-theoretic evidence for the classical hardness of RCS, on par with that for boson sampling, was given by Bouland et al. [121]. There are two main parts to a quantum supremacy proposal: the first is the definition of a computational task that could be experimentally realized in the near term but that is prohibitively difficult for any algorithm running on a classical computer. The second is an efficient method for verifying that the quantum device actually carried out the computational task. This is particularly complicated, since the proposed algorithms compute samples from a certain probability distribution (namely, the output distribution of the chosen quantum circuit). The first simplification to get around this validation problem is to choose n, the number of qubits, to be small enough (n ~ 50) so that a classical supercomputer can actually calculate the output distribution of the chosen quantum circuit.
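A toy version of random circuit sampling can be simulated directly for a handful of qubits; the circuit structure, depth, and seed below are illustrative assumptions. The brute-force state vector has 2^n entries, so the classical cost of exactly this computation grows exponentially in n, which is what makes the task prohibitive near n ~ 50.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5  # far below the ~50 qubits relevant for supremacy, so easy classically

def random_unitary():
    # Random single-qubit unitary via QR of a complex Gaussian matrix
    # (not exactly Haar-distributed, but adequate for illustration).
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

def apply_1q(psi, u, k):
    # Apply a single-qubit gate u to qubit k of the n-qubit state vector.
    t = psi.reshape([2] * n)
    t = np.tensordot(u, t, axes=([1], [k]))
    return np.moveaxis(t, 0, k).reshape(-1)

def apply_cz(psi, k, l):
    # Controlled-Z: flip the sign of amplitudes where qubits k and l are 1.
    t = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[k], idx[l] = 1, 1
    t[tuple(idx)] *= -1
    return t.reshape(-1)

psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0  # start in |00...0>
for layer in range(8):  # alternating layers of random 1q gates and CZs
    for k in range(n):
        psi = apply_1q(psi, random_unitary(), k)
    for k in range(layer % 2, n - 1, 2):
        psi = apply_cz(psi, k, k + 1)

probs = np.abs(psi) ** 2        # exact output distribution (2**n entries)
probs /= probs.sum()            # guard against floating-point drift
samples = rng.choice(2 ** n, size=1000, p=probs)  # simulated device output
```

The quantum device produces `samples` directly; the classical verifier needs `probs`, whose computation is the step that becomes infeasible as n grows.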
This still leaves the challenge of verifying that the outputs of the quantum device are actually drawn from this (or a nearby) distribution. This, too, can be difficult to prove. For this, the RCS supremacy proposal [122] proposes the computation of a score in the form of the cross-entropy between the distribution sampled from the device and the true output distribution of the chosen quantum circuit. It turns out that the cross-entropy score verifies that the two distributions are close, provided a simple condition is met, namely, that the entropy of the distribution sampled from the device is at least as large as the entropy of the true output distribution of the chosen quantum circuit [123]. Unfortunately, it is not possible to verify this entropy condition using any reasonable number of samples, although it holds for many noise models, such as local depolarizing noise. A different proposal for verification uses the concept of heavy output generation (HOG) [124] and can be provably shown to verify supremacy under a (nonstandard) complexity assumption. Last, a third verification proposal, binned output generation (BOG), simultaneously verifies HOG and cross-entropy and is information-theoretically optimal in some formal model [125]. A proof-of-concept test for this quantum supremacy algorithm was performed in 2017 on a 9-qubit device [126]. The error rate was shown to be proportional to the number of operations multiplied by the number of qubits, with an average error per 2-qubit gate of about 0.3 percent. Simple extrapolation to a device with around 50 qubits indicates that a quantum supremacy result should be possible with this architecture, and the Google hardware team (and others) are working hard to achieve this goal. These approaches leave two questions unanswered. The first is how to perform verification without the entropy assumption (or a nonstandard complexity assumption).
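The cross-entropy and heavy-output scores can be illustrated with a sketch that assumes an ideal output distribution with the exponential ("Porter-Thomas") shape typical of random circuits; the sizes, seed, and sampler names here are illustrative. A faithful sampler scores measurably better on both metrics than a maximally noisy (uniform) device.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2 ** 12  # number of basis states, i.e., a notional 12-qubit circuit

# Porter-Thomas-shaped ideal output distribution, typical of random circuits.
p = rng.exponential(size=N)
p /= p.sum()

median = np.median(p)  # "heavy" outputs are those above the median probability

def scores(samples):
    """Cross-entropy and heavy-output-generation scores for a sample set."""
    ce = -np.mean(np.log(p[samples]))   # cross-entropy against the ideal p
    hog = np.mean(p[samples] > median)  # fraction of heavy outputs
    return ce, hog

m = 20000
faithful = rng.choice(N, size=m, p=p)  # an ideal device samples from p itself
noisy = rng.choice(N, size=m)          # a fully depolarized device: uniform

ce_good, hog_good = scores(faithful)
ce_bad, hog_bad = scores(noisy)
# A faithful sampler achieves a lower cross-entropy and a heavy-output
# fraction well above 1/2; the maximally noisy device achieves neither.
```

The catch discussed above is visible here: the verifier needs the ideal probabilities `p`, and the cross-entropy score certifies closeness only under the (unverifiable from samples) entropy condition on the device's distribution.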
The second is the possibility of establishing quantum supremacy beyond the limit of the computing power of classical supercomputers, currently understood12 to correspond to on the order of 50 qubits.
12 While the exact number depends upon the specifications and approximations of the particular simulation, and this number will increase as classical methods improve, it is expected to remain at this order of magnitude for a significant amount of time. Recently, researchers have used a new classical approach to perform a single instance of the quantum supremacy task that would be achievable by a 70-qubit quantum device; however, this does not correspond to the full 100,000-instance quantum supremacy experiment proposed for a 50-qubit device, which has not yet proven achievable on a classical computer.
A recent proposal shows how to provably carry out quantum supremacy based on post-quantum cryptography. Specifically, based on the

hardness of the learning with errors (LWE) problem, the proposal gives a way of provably testing quantum supremacy for quantum computers with arbitrarily large numbers of qubits [127].
Finding: While several teams have been working to demonstrate quantum supremacy, this milestone has not yet been demonstrated (as of the time of publication of this report). Its achievement will be difficult to establish definitively, and this target may continue to move as improvements are made to classical approaches for solving the chosen benchmark problem.
In summary, the pursuit of quantum supremacy has already achieved an interesting goal: the development of theoretical tools useful for rigorously analyzing the computational hardness of certain quantum problems that may soon be experimentally implementable. However, owing both to the uncertain nature of the hardness results (i.e., the reliance on nonstandard hardness conjectures) and to the restrictive nature of the noise models addressed by these results, there is much work remaining to be done.
3.4.3 Applications for an Ideal Quantum Computer
In the event of the development of a robust, large-scale error-corrected quantum computer, the existing algorithms with known speedup are likely to be useful for solving any number of practical problems, or parts of problems. Perhaps the best-understood application of quantum algorithms is in the field of cryptography (specifically, defeating it), an application based directly on mathematics; these applications will be discussed in the next chapter. Quantum simulation, for both foundational and applied science, is also commonly raised as a potential "killer app," especially in the field of quantum chemistry [128]. The electronic structure problem has received much attention, owing to its centrality to the fields of chemistry and materials science.
This problem requires solving for the ground-state energies and wave functions of electrons interacting in the presence of some external field, usually arising from atomic nuclei. Electronic structure defines chemical properties and the rates and products of chemical reactions. While classical computing approaches to this problem (such as density functional theory) are quite effective in many contexts (such as predicting molecular geometries), they often fail to reach the level of accuracy required to predict chemical reaction rates or distinguish between competing phases of correlated materials. This is especially true when the system involves transition metal elements (which are present in most catalysts). Quantum computers could enable efficient solutions to this problem in the classically intractable regime. In fact, one early quantum algorithm offers exponential speedup over classical approaches to the calculation of chemical reaction rate constants [129]. This and other algorithms could open the door to significant insights about chemical reactions and phases of matter that have long eluded description by a systematic and predictive theory. Such results could also have commercial applications in areas such as energy storage, device displays, industrial catalysts, and pharmaceutical development.
3.5 THE POTENTIAL ROLE OF QUANTUM COMPUTERS IN THE COMPUTING ECOSYSTEM
While quantum chemistry, optimization (including machine learning), and defeating cryptography are the best-understood potential applications of an ideal quantum computer, the field is still in its early stages, in terms of both algorithms, as discussed in this chapter, and devices, as will be discussed in Chapter 5. Existing algorithms may be modified or implemented in ways not yet anticipated; new algorithms will likely emerge as research continues.
As a result, except for cryptography, it is not possible to predict the implications of quantum computers for various commercial sectors; the field is so young that these changes are not even on the horizon. For cryptography, the potential of a future quantum

computer running Shor's algorithm is sufficient to affect action today. These issues are described in Chapter 4. As is clear from this chapter's discussions, the ability to deploy known quantum algorithms could render some previously intractable problems efficiently solvable. However, even a large error-corrected quantum computer would not be generally superior to a classical computer. In fact, quantum computers do not speed up many classes of problems, and the maturity of the classical computing ecosystem (including hardware, software, and algorithms) means that for these classes of problems, classical computing will remain the dominant computing platform. Even for applications accelerated by a quantum computer, the accelerated parts are likely to comprise only a small component of the broader task in question. For the foreseeable future, a quantum processor is thus likely to be useful for performing only certain parts of certain tasks, with the remaining operations more efficiently carried out on a classical computer. Thus, a quantum computer is expected to serve as a co-processor to, rather than a replacement for, a classical computer. Furthermore, as will be discussed in Chapter 5, the physical implementation of any quantum computation will require a host of complex gating operations to be performed upon qubits maintained in a controlled environment, which will require the use of classical computers.
Finding: Quantum computers are unlikely to be useful as a direct replacement for conventional computers, or for all applications; rather, they are currently expected to be special-purpose devices operating in a complementary fashion with conventional processors, analogous to a co-processor or accelerator.

NOTES
[1] E. Bernstein and U. Vazirani, "Quantum complexity theory." In Symposium on the Theory of Computing, 1993 Proceedings, 27th Annual Symposium on, ACM, 1993. SIAM Journal on Computing 26, no. 5 (1997): 1411-1473.
[2] D. Simon, "On the power of quantum computation." In Foundations of Computer Science, 1994 Proceedings, 35th Annual Symposium on, pp. 124-134. IEEE, 1994. SIAM Journal on Computing 26, no. 5 (1997): 1474-1483.
[3] P. Shor, 1994, "Algorithms for quantum computation: Discrete logarithms and factoring." In Foundations of Computer Science, 1994 Proceedings, 35th Annual Symposium on, pp. 124-134.
[4] R.J. Anderson and H. Wolf, 1997, "Algorithms for the Certified Write-All Problems," IEEE, SIAM Journal on Computing, vol. 26, no. 5: 1484-14.
[5] Shor's algorithm for factorization scales asymptotically as O(n^3), compared to O(exp[n^{1/3}]) for the best classical approach, the general number field sieve algorithm.
[6] R.M. Karp, 1975, "On the computational complexity of combinatorial problems," Networks 5, no. 1: 45-68.
[7] C.H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, 1997, "Strengths and Weaknesses of Quantum Computing," Technical Report, ISI Foundation, 1994, SIAM Journal on Computing, 26, no. 5.
[8] L.K. Grover, "A fast quantum mechanical algorithm for database search." In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pp. 212-219. ACM, 1996.
[9] S. Cook, 2006, "The P versus NP problem," The Millennium Prize Problems: 87-104.
[10] J. Preskill, 2018, "Quantum Computing in the NISQ era and beyond," preprint arXiv:1801.00862.
[11] L. Hales and S. Hallgren, "An improved quantum Fourier transform algorithm and applications." In Foundations of Computer Science, 2000 Proceedings, 41st Annual Symposium on, pp. 515-525. IEEE, 2000.
[12] P.W. Shor, 1994, "Algorithms for quantum computation: Discrete logarithms and factoring," in Foundations of Computer Science, 1994 Proceedings, 35th Annual Symposium on: 124-134.
[13] R. Jozsa, 2001, "Quantum factoring, discrete logarithms, and the hidden subgroup problem," Computing in Science and Engineering 3, no. 2: 34-43.
[14] A.Y. Kitaev, 1995, "Quantum measurements and the Abelian stabilizer problem," arXiv preprint quant-ph/9511026.
[15] L.K. Grover, "A fast quantum mechanical algorithm for database search." In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pp. 212-219. ACM, 1996.
[16] C.H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, 1997, "Strengths and Weaknesses of Quantum Computing," Technical Report, ISI Foundation, 1994, SIAM Journal on Computing, 26, no. 5.
[17] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, 2002, "Quantum amplitude amplification and estimation," Contemporary Mathematics 305: 53-74.
[18] E. Farhi, J. Goldstone, and S. Gutmann, 2007, "A quantum algorithm for the Hamiltonian NAND tree," arXiv preprint quant-ph/0702144.
[19] A. Ambainis, A.M. Childs, B.W. Reichardt, R. Špalek, and S. Zhang, 2010, "Any AND-OR formula of size N can be evaluated in time N^{1/2+o(1)} on a quantum computer," SIAM Journal on Computing 39, no. 6: 2513-2530.
[20] V. Giovannetti, S. Lloyd, and L. Maccone, 2008, "Quantum random access memory," Physical Review Letters, 100, 16: 160501.
[21] R.P. Feynman, 1982, "Simulating physics with computers," International Journal of Theoretical Physics 21, no. 6-7: 467-488.

[22] D.S. Abrams and S. Lloyd, 1997, "Simulation of many-body Fermi systems on a universal quantum computer," Physical Review Letters 79, no. 13: 2586.
[23] D. Aharonov and A. Ta-Shma, 2003, "Adiabatic quantum state generation and statistical zero knowledge." In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, ACM: 20-29.
[24] D.W. Berry, A.M. Childs, R. Cleve, R. Kothari, and R.D. Somma, 2015, "Simulating Hamiltonian dynamics with a truncated Taylor series," Physical Review Letters, vol. 114, no. 9: 090502.
[25] R. Babbush, D.W. Berry, I.D. Kivlichan, A. Scherer, A.Y. Wei, P.J. Love, and A. Aspuru-Guzik, 2017, "Exponentially more precise quantum simulation of fermions in the configuration interaction representation," Quantum Science and Technology, 3: 015006.
[26] G.H. Low and I.L. Chuang, 2016, "Hamiltonian simulation by qubitization," arXiv preprint arXiv:1610.06546.
[27] See, for example: G.H. Low and I.L. Chuang, 2017, "Optimal Hamiltonian simulation by quantum signal processing," Physical Review Letters, vol. 118, no. 1: 010501.
[28] R. Babbush, D.W. Berry, I.D. Kivlichan, A.Y. Wei, P.J. Love, and A. Aspuru-Guzik, 2016, "Exponentially more precise quantum simulation of fermions I: Quantum chemistry in second quantization," New Journal of Physics, vol. 18: 033032.
[29] D.W. Berry, A.M. Childs, and R. Kothari, 2015, "Hamiltonian simulation with nearly optimal dependence on all parameters," Proceedings of the 56th IEEE Symposium on Foundations of Computer Science, pp. 792-809, arXiv:1501.01715.
[30] S. McArdle, S. Endo, A. Aspuru-Guzik, S. Benjamin, and X. Yuan, 2018, "Quantum computational chemistry," arXiv preprint arXiv:1808.10402.
[31] D. Wecker, M.B. Hastings, N. Wiebe, B.K. Clark, C. Nayak, and M. Troyer, 2015, "Solving strongly correlated electron models on a quantum computer," Physical Review A 92, no. 6: 062318.
[32] C. Dykstra, G. Frenking, K.S. Kim, and G.E.
Scuseria, 2005, "Theory and Applications of Computational Chemistry: The First Forty Years," Elsevier, Amsterdam.
[33] M. Reiher, N. Wiebe, K.M. Svore, D. Wecker, and M. Troyer, 2017, "Elucidating reaction mechanisms on quantum computers," Proceedings of the National Academy of Sciences of the United States of America, 114: 7555-7560.
[34] G. Wendin, 2017, "Quantum information processing with superconducting circuits: a review," Reports on Progress in Physics 80, no. 10: 106001.
[35] M. Reiher, N. Wiebe, K.M. Svore, D. Wecker, and M. Troyer, 2017, "Elucidating reaction mechanisms on quantum computers," Proceedings of the National Academy of Sciences: 201619152.
[36] B. Bauer, D. Wecker, A.J. Millis, M.B. Hastings, and M. Troyer, 2016, "Hybrid quantum-classical approach to correlated materials," Physical Review X, 6: 031045.
[37] J. Olson, Y. Cao, J. Romero, P. Johnson, P.-L. Dallaire-Demers, N. Sawaya, P. Narang, I. Kivlichan, M. Wasielewski, and A. Aspuru-Guzik, 2017, "Quantum Information and Computation for Chemistry," preprint arXiv:1706.05413.
[38] R. Babbush, D.W. Berry, I.D. Kivlichan, A.Y. Wei, P.J. Love, and A. Aspuru-Guzik, 2017, "Exponentially more precise quantum simulation of fermions in the configuration interaction representation," Quantum Science and Technology, 3: 015006.
[39] J. Olson, Y. Cao, J. Romero, P. Johnson, P.-L. Dallaire-Demers, N. Sawaya, P. Narang, I. Kivlichan, M. Wasielewski, and A. Aspuru-Guzik, 2017, "Quantum Information and Computation for Chemistry," preprint arXiv:1706.05413.
[40] I.D. Kivlichan, J. McClean, N. Wiebe, C. Gidney, A. Aspuru-Guzik, G. Kin-Lic Chan, and R. Babbush, 2018, "Quantum Simulation of Electronic Structure with Linear Depth and Connectivity," Physical Review Letters, 120: 11501.

[41] R. Babbush, C. Gidney, D.W. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, 2018, "Encoding Electronic Spectra in Quantum Circuits with Linear T Complexity," arXiv preprint arXiv:1805.03662.
[42] G.H. Low and N. Wiebe, 2018, "Hamiltonian Simulation in the Interaction Picture," arXiv preprint arXiv:1805.00675.
[43] D.W. Berry, M. Kieferová, A. Scherer, Y.R. Sanders, G.H. Low, N. Wiebe, C. Gidney, and R. Babbush, 2018, "Improved techniques for preparing eigenstates of fermionic Hamiltonians," npj Quantum Information 4, no. 1: 22.
[44] D. Wecker, M.B. Hastings, N. Wiebe, B.K. Clark, C. Nayak, and M. Troyer, 2015, "Solving strongly correlated electron models on a quantum computer," Physical Review A 92, no. 6: 062318.
[45] D. Poulin, M.B. Hastings, D. Wecker, N. Wiebe, A.C. Doherty, and M. Troyer, 2014, "The Trotter step size required for accurate quantum simulation of quantum chemistry," arXiv preprint arXiv:1406.49.
[46] M.B. Hastings, D. Wecker, B. Bauer, and M. Troyer, 2014, "Improving quantum algorithms for quantum chemistry," arXiv preprint arXiv:1403.1539.
[47] D. Poulin, A. Kitaev, D.S. Steiger, M.B. Hastings, and M. Troyer, 2018, "Quantum Algorithm for Spectral Measurement with a Lower Gate Count," Physical Review Letters 121, no. 1: 010501.
[48] D. Wecker, B. Bauer, B.K. Clark, M.B. Hastings, and M. Troyer, 2014, "Gate-count estimates for performing quantum chemistry on small quantum computers," Physical Review A 90, no. 2: 022305.
[49] A.W. Harrow, A. Hassidim, and S. Lloyd, 2009, "Quantum algorithm for linear systems of equations," Physical Review Letters 103, no. 15: 150502.
[50] A.M. Childs and W.V. Dam, 2010, "Quantum algorithms for algebraic problems," Reviews of Modern Physics 82, no. 1: 1.
[51] D.W. Berry, A.M. Childs, A. Ostrander, and G. Wang, 2017, "Quantum algorithm for linear differential equations with exponentially improved dependence on precision," Communications in Mathematical Physics, vol. 356, no. 3: 1057-1081.
[52] F.G.S.L. Brandao and K. Svore, 2017, "Quantum speed-ups for semidefinite programming," https://arxiv.org/abs/1609.05537.
[53] I. Kerenidis and A. Prakash, "Quantum recommendation systems," arXiv preprint arXiv:1603.08675 (2016).
[54] E. Tang, "A quantum-inspired classical algorithm for recommendation systems," arXiv preprint arXiv:1807.04271 (2018).
[55] See, for example, Figure 3.2 and Table 4.1 for runtime estimates for illustrative problems in chemistry and cryptanalysis.
[56] P.D. Johnson, J. Romero, J. Olson, Y. Cao, and A. Aspuru-Guzik, 2017, "QVECTOR: an algorithm for device-tailored quantum error correction," preprint arXiv:1711.02249.
[57] A. Kandala, K. Temme, A.D. Corcoles, A. Mezzacapo, J.M. Chow, and J.M. Gambetta, 2018, "Extending the computational reach of a noisy superconducting quantum processor," arXiv:1805.04492.
[58] A.R. Calderbank and P.W. Shor, 1997, "Good quantum error-correcting codes exist," Physical Review A, 54: 1098-1106, arXiv:quant-ph/9512032.
[59] A. Steane, 1996, "Simple quantum error correcting codes," Physical Review A, 54: 4741, arXiv:quant-ph/9605021.
[60] See, for example, E. Knill, 2005, "Quantum computing with realistically noisy devices," Nature, 434: 39-44.
[61] P. Aliferis, D. Gottesman, and J. Preskill, 2006, "Quantum accuracy threshold for concatenated distance-3 codes," Quantum Information and Computation, 6: 97-165, arXiv:quant-ph/0504218.

[62] W.K. Wootters and W.H. Zurek, 1982, "A single quantum cannot be cloned," Nature 299, no. 5886: 802-803.
[63] M. Reiher, N. Wiebe, K.M. Svore, D. Wecker, and M. Troyer, 2017, "Elucidating reaction mechanisms on quantum computers," Proceedings of the National Academy of Sciences of the United States of America, 114: 7555-7560.
[64] A.G. Fowler, M. Mariantoni, J.M. Martinis, and A.N. Cleland, 2012, "Surface codes: Towards practical large-scale quantum computation," Physical Review A, 86, 032324.
[65] For details on low-distance codes, see Y. Tomita and K.M. Svore, 2014, "Low-distance Surface Codes under Realistic Quantum Noise," https://arxiv.org/pdf/1404.3747.pdf. For general replacement rules for the surface code, see, for example, https://arxiv.org/abs/1208.0928. For concatenated and block codes, see, for example, P. Aliferis, D. Gottesman, and J. Preskill, 2005, "Quantum accuracy threshold for concatenated distance-3 codes," https://arxiv.org/abs/quant-ph/0504218, and K.M. Svore, D.P. DiVincenzo, and B.M. Terhal, 2006, "Noise Threshold for a Fault-Tolerant Two-Dimensional Lattice Architecture," https://arxiv.org/abs/quant-ph/0604090.
[66] See, for example, M.B. Hastings and J. Haah, "Distillation with sublogarithmic overhead," https://arxiv.org/abs/1709.03543; J. Haah and M.B. Hastings, 2017, "Codes and Protocols for Distilling T, controlled S, and Toffoli Gates," https://arxiv.org/abs/1709.02832; J. Haah, M.B. Hastings, D. Poulin, and D. Wecker, 2017, "Magic State Distillation at Intermediate Size," https://arxiv.org/abs/1709.02789; and J. Haah, M.B. Hastings, D. Poulin, and D. Wecker, 2017, "Magic State Distillation with Low Space Overhead and Optimal Asymptotic Input Count," https://arxiv.org/abs/1703.07847.
[67] See, for example, Table II in https://arxiv.org/pdf/1605.03590.pdf for detailed numbers on the cost of the T implementation for understanding reaction mechanisms on a quantum computer using Hamiltonian simulation.
[68] H. Bombin and M.A.
Martin-Delgado, 2006, âTopological quantum distillation,â Physical Review Letters, 97:180501, arXiv:quant-ph/0605138. [69] See, for example, J.E. Moussa, 2016, âTransversal Clifford gates on folded surface codes,â Physical Review A, 94:042316, arXiv:1603.02286; C. Horsman, A. G. Fowler, S. Devitt, R. Van Meter, 2012, âSurface code quantum computing by lattice surgery,â New Journal of Physics, 14:123011, arXiv:1111.4022; S. Bravyi and A. Cross, 2015, âDoubled color codes,â arXiv:1509.03239; H. Bombin, 2015, âGauge color codes: Optimal transversal gates and gauge fixing in topological stabilizer codes,â New Journal of Physics, 17:083002, arXiv:1311.0879; T. J. Yoder and I. H. Kim, 2017, âThe surface code with a twist,â Quantum 1:2, arXiv:1612.04795. [70] See, for example, S. Bravyi, M. Suchara, and A. Vargo, 2014, âEfficient algorithms for maximum likelihood decoding in the surface code,â Physical Review A 90:032326, arXiv:1405.4883; G. Duclos-Cianci and D. Poulin, 2014, âFault-tolerant renormalization group decoder for abelian topological codes,â Quantum Information and Computation, 14:721-740, arXiv:!304.6100. [71] J. Chiaverini, Di. Leibfried, T. Schaetz, M.D. Barrett, R.B. Blakestad, J. Britton, W.M. Itano, et al., 2004, âRealization of quantum error correction.â Nature 432, no. 7017: 602. [72] D. Nigg, M. Mueller, E.A. Martinez, P. Schindler, M. Hennrich, T. Monz, M.A. Martin-Delgado, and R, Blatt, 2014, âQuantum computations on a topologically encoded qubit.â Science: 1253742. [73] S. Rosenblum, P. Reinhold, M. Mirrahimi, Liang Jiang, L. Frunzio, and R.J. Schoelkopf, 2018, âFault-tolerant measurement of a quantum error syndrome,â arXiv preprint arXiv:1803.00102. [74] N.M. Linke, M. Gutierrez, K.A. Landsman, C. Figgatt, S. Debnath, K.R. Brown, and C. Monroe, 2017, âFault- tolerant quantum error detection,â Science advances 3, no. 10: e1701074. PREPUBLICATION COPY â SUBJECT TO FURTHER EDITORIAL CORRECTION 3-25

[75] R. Harper and S. Flammia, 2018, "Fault tolerance in the IBM Q Experience," arXiv:1806.02359.
[76] A. Peruzzo, J.R. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P.J. Love, A. Aspuru-Guzik, and J.L. O'Brien, 2013, "A variational eigenvalue solver on a quantum processor," arXiv:1304.3061.
[77] D. Wecker, M.B. Hastings, and M. Troyer, 2015, "Progress towards practical quantum variational algorithms," Physical Review A, 92:042303.
[78] J.R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, 2016, "The theory of variational hybrid quantum-classical algorithms," New Journal of Physics, 18:023023.
[79] P.J.J. O'Malley, R. Babbush, I.D. Kivlichan, J. Romero, J.R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter, N. Ding, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A.G. Fowler, E. Jeffrey, A. Megrant, J.Y. Mutus, C. Neill, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T.C. White, P.V. Coveney, P.J. Love, H. Neven, A. Aspuru-Guzik, and J.M. Martinis, 2016, "Scalable quantum simulation of molecular energies," Physical Review X, 6:031007.
[80] R. Santagati, J. Wang, A.A. Gentile, S. Paesani, N. Wiebe, J.R. McClean, S.R. Short, P.J. Shadbolt, D. Bonneau, J.W. Silverstone, D.P. Tew, X. Zhou, J.L. O'Brien, and M.G. Thompson, 2016, "Quantum simulation of Hamiltonian spectra on a silicon chip," arXiv:1611.03511.
[81] G.G. Guerreschi and M. Smelyanskiy, 2017, "Practical optimization for hybrid quantum-classical algorithms," arXiv:1701.01450.
[82] J.R. McClean, M.E. Kimchi-Schwartz, J. Carter, and W.A. de Jong, 2017, "Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states," Physical Review A, 95:042308.
[83] J.R. Romero, R. Babbush, J.R. McClean, C. Hempel, P. Love, and A. Aspuru-Guzik, 2017, "Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz," arXiv:1701.02691.
[84] Y. Shen, X. Zhang, S. Zhang, J.-N. Zhang, M.-H. Yung, and K. Kim, 2017, "Quantum implementation of the unitary coupled cluster for simulating molecular electronic structure," Physical Review A, 95:020501.
[85] E. Farhi, J. Goldstone, and S. Gutmann, 2014a, "A quantum approximate optimization algorithm," arXiv:1411.4028; E. Farhi, J. Goldstone, and S. Gutmann, 2014b, "A quantum approximate optimization algorithm applied to a bounded occurrence constraint problem," arXiv:1412.6062; Lin et al., 2014; E. Farhi and A.W. Harrow, 2016, "Quantum Supremacy through the Quantum Approximate Optimization Algorithm," arXiv:1602.07674.
[86] J. Romero, J. Olson, and A. Aspuru-Guzik, 2017, "Quantum autoencoders for efficient compression of quantum data," Quantum Science and Technology, 2:045001.
[87] M. Benedetti, D. Garcia-Pintos, Y. Nam, and A. Perdomo-Ortiz, 2018, "A generative modeling approach for benchmarking and training shallow quantum circuits," https://arxiv.org/abs/1801.07686.
[88] G. Verdon, M. Broughton, and J. Biamonte, 2017, "A quantum algorithm to train neural networks using low-depth circuits," arXiv:1712.05304.
[89] A. Peruzzo, J.R. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P.J. Love, A. Aspuru-Guzik, and J.L. O'Brien, 2013, "A variational eigenvalue solver on a quantum processor," arXiv:1304.3061.
[90] D. Wecker, M.B. Hastings, and M. Troyer, 2015, "Progress towards practical quantum variational algorithms," Physical Review A, 92:042303.
[91] J.R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, 2016, "The theory of variational hybrid quantum-classical algorithms," New Journal of Physics, 18:023023.
[92] P.J.J. O'Malley, R. Babbush, I.D. Kivlichan, J. Romero, J.R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter, N. Ding, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A.G. Fowler, E. Jeffrey, A. Megrant, J.Y. Mutus, C. Neill, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T.C. White, P.V. Coveney, P.J. Love, H. Neven, A. Aspuru-Guzik, and J.M. Martinis, 2016, "Scalable quantum simulation of molecular energies," Physical Review X, 6:031007.
[93] R. Santagati, J. Wang, A.A. Gentile, S. Paesani, N. Wiebe, J.R. McClean, S.R. Short, P.J. Shadbolt, D. Bonneau, J.W. Silverstone, D.P. Tew, X. Zhou, J.L. O'Brien, and M.G. Thompson, 2016, "Quantum simulation of Hamiltonian spectra on a silicon chip," arXiv:1611.03511.
[94] G.G. Guerreschi and M. Smelyanskiy, 2017, "Practical optimization for hybrid quantum-classical algorithms," arXiv:1701.01450.
[95] J.R. McClean, M.E. Kimchi-Schwartz, J. Carter, and W.A. de Jong, 2017, "Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states," Physical Review A, 95:042308.
[96] J.R. Romero, R. Babbush, J.R. McClean, C. Hempel, P. Love, and A. Aspuru-Guzik, 2017, "Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz," arXiv:1701.02691.
[97] Y. Shen, X. Zhang, S. Zhang, J.-N. Zhang, M.-H. Yung, and K. Kim, 2017, "Quantum implementation of the unitary coupled cluster for simulating molecular electronic structure," Physical Review A, 95:020501.
[98] P.-L. Dallaire-Demers, J. Romero, L. Veis, S. Sim, and A. Aspuru-Guzik, 2018, "Low-depth circuit ansatz for preparing correlated fermionic states on a quantum computer," arXiv:1801.01053.
[99] E. Farhi, J. Goldstone, and S. Gutmann, 2014, "A quantum approximate optimization algorithm," arXiv:1411.4028.
[100] J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P.W. Hess, P. Hauke, M. Heyl, D.A. Huse, and C. Monroe, 2015, "Many-body localization in a quantum simulator with programmable random disorder," arXiv:1508.07026.
[101] A. Mazurenko, C.S. Chiu, G. Ji, M.F. Parsons, M. Kanász-Nagy, R. Schmidt, F. Grusdt, E. Demler, D. Greif, and M. Greiner, 2017, "A cold-atom Fermi-Hubbard antiferromagnet," Nature, 545:462-466.
[102] R. Harris, Y. Sato, A.J. Berkley, M. Reis, F. Altomare, M.H. Amin, K. Boothby, et al., 2018, "Phase transitions in a programmable quantum spin glass simulator," Science, 361:162-165.
[103] A.D. King, J. Carrasquilla, J. Raymond, I. Ozfidan, E. Andriyash, A. Berkley, M. Reis, et al., 2018, "Observation of topological phenomena in a programmable lattice of 1,800 qubits," Nature, 560:456.
[104] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev, 2004, "Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation," arXiv:quant-ph/0405098.
[105] T. Kadowaki and H. Nishimori, 1998, "Quantum annealing in the transverse Ising model," Physical Review E, 58:5355.
[106] T. Albash and D.A. Lidar, 2016, "Adiabatic quantum computing," arXiv:1611.04471.
[107] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, 2001, "A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem," Science, 292:472-475.
[108] W. van Dam, M. Mosca, and U. Vazirani, 2001, "How powerful is adiabatic quantum computation?," Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, pp. 279-287.
[109] A.P. Young, S. Knysh, and V.N. Smelyanskiy, 2008, "Size dependence of the minimum excitation gap in the quantum adiabatic algorithm," Physical Review Letters, 101:170503.
[110] A. Selby, http://www.archduke.org/stuff/d-wave-comment-on-comparison-with-classical-computers/; S. Boixo, T.F. Rønnow, S.V. Isakov, Z. Wang, D. Wecker, D.A. Lidar, J.M. Martinis, and M. Troyer, 2014, "Evidence for quantum annealing with more than one hundred qubits," Nature Physics, 10:218-224; T.F. Rønnow, Z. Wang, J. Job, S. Boixo, S.V. Isakov, D. Wecker, J.M. Martinis, D.A. Lidar, and M. Troyer, 2014, "Defining and detecting quantum speedup," Science, 345:420; J. King, S. Yarkoni, M.M. Nevisi, J.P. Hilton, and C.C. McGeoch, "Benchmarking a quantum annealing processor with the time-to-target metric," https://arxiv.org/abs/1508.05087; I. Hen, J. Job, T. Albash, T.F. Rønnow, M. Troyer, and D.A. Lidar, 2015, "Probing for quantum speedup in spin-glass problems with planted solutions," Physical Review A, 92:042325; S. Mandrà, Z. Zhu, W. Wang, A. Perdomo-Ortiz, and H.G. Katzgraber, 2016, "Strengths and weaknesses of weak-strong cluster problems: A detailed overview of state-of-the-art classical heuristics versus quantum approaches," Physical Review A, 94:022337; V.S. Denchev, S. Boixo, S.V. Isakov, N. Ding, R. Babbush, V. Smelyanskiy, J. Martinis, and H. Neven, 2016, "What is the Computational Value of Finite-Range Tunneling?," Physical Review X, 6:031015; S. Mandrà, H.G. Katzgraber, and C. Thomas, 2017, "The pitfalls of planar spin-glass benchmarks: raising the bar for quantum annealers (again)," Quantum Science and Technology, 2(3); J. King, S. Yarkoni, J. Raymond, I. Ozfidan, A.D. King, M.M. Nevisi, J.P. Hilton, and C.C. McGeoch, "Quantum Annealing amid Local Ruggedness and Global Frustration," https://arxiv.org/abs/1701.04579; S. Mandrà and H.G. Katzgraber, 2018, "A deceptive step towards quantum speedup detection," Quantum Science and Technology, 3:04LT01; and T. Albash and D.A. Lidar, 2018, "Demonstration of a scaling advantage for a quantum annealer over simulated annealing," Physical Review X, 8:031016.
[111] T. Albash, V. Martin-Mayor, and I. Hen, 2017, "Temperature scaling law for quantum annealing optimizers," Physical Review Letters, 119:110502.
[112] T. Albash, V. Martin-Mayor, and I. Hen, 2018, "Analog Errors in Ising Machines," arXiv:1806.03744.
[113] S. Jordan, 2018, "Algebraic and Number Theoretic Algorithms," National Institute of Standards and Technology, last updated January 18, 2018, http://math.nist.gov/quantum/zoo/.
[114] A.W. Harrow and A. Montanaro, 2017, "Quantum computational supremacy," Nature, 549:203.
[115] S. Aaronson and A. Arkhipov, 2011, "The computational complexity of linear optics," Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, ACM, pp. 333-342.
[116] M.J. Bremner, R. Jozsa, and D.J. Shepherd, 2010, "Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy," Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, rspa20100301.
[117] B.M. Terhal and D.P. DiVincenzo, 2001, "Classical simulation of noninteracting-fermion quantum circuits," arXiv:quant-ph/0108010.
[118] J. Carolan, C. Harrold, C. Sparrow, E. Martín-López, N.J. Russell, J.W. Silverstone, P.J. Shadbolt, N. Matsuda, M. Oguma, M. Itoh, G.D. Marshall, M.G. Thompson, J.C.F. Matthews, J.L. O'Brien, and A. Laing, 2015, "Universal linear optics," Science, 349(6249):711-716.
[119] P. Clifford and R. Clifford, 2018, "The classical complexity of boson sampling," Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SIAM, pp. 146-155.
[120] S. Boixo, S.V. Isakov, V.N. Smelyanskiy, R. Babbush, N. Ding, Z. Jiang, M.J. Bremner, J.M. Martinis, and H. Neven, 2017, "Characterizing Quantum Supremacy in Near-Term Devices," arXiv:1608.00263.
[121] A. Bouland, B. Fefferman, C. Nirkhe, and U. Vazirani, 2018, "Quantum Supremacy and the Complexity of Random Circuit Sampling," arXiv:1803.04402.
[122] S. Boixo, S.V. Isakov, V.N. Smelyanskiy, R. Babbush, N. Ding, Z. Jiang, M.J. Bremner, J.M. Martinis, and H. Neven, 2017, "Characterizing Quantum Supremacy in Near-Term Devices," arXiv:1608.00263.
[123] A. Bouland, B. Fefferman, C. Nirkhe, and U. Vazirani, 2018, "Quantum Supremacy and the Complexity of Random Circuit Sampling," arXiv:1803.04402.

[124] S. Aaronson and L. Chen, 2017, "Complexity-theoretic foundations of quantum supremacy experiments," in R. O'Donnell, ed., 32nd Computational Complexity Conference (CCC 2017), July 6-9, 2017, Riga, Latvia, LIPIcs, vol. 79, pp. 22:1-22:67, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[125] A. Bouland, B. Fefferman, C. Nirkhe, and U. Vazirani, 2018, "Quantum Supremacy and the Complexity of Random Circuit Sampling," arXiv:1803.04402.
[126] C. Neill, P. Roushan, K. Kechedzhi, S. Boixo, S.V. Isakov, V. Smelyanskiy, R. Barends, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. Fowler, B. Foxen, R. Graff, E. Jeffrey, J. Kelly, E. Lucero, A. Megrant, J. Mutus, M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T.C. White, H. Neven, and J.M. Martinis, 2017, "A blueprint for demonstrating quantum supremacy with superconducting qubits," arXiv:1709.06678.
[127] Z. Brakerski, P. Christiano, U. Mahadev, U. Vazirani, and T. Vidick, 2018, "Certifiable randomness from a single quantum device," arXiv:1804.00640.
[128] K. Bourzac, 2017, "Chemistry is quantum computing's killer app," Chemical and Engineering News, 95(43):27-31.
[129] D.A. Lidar and H. Wang, 1999, "Calculating the thermal rate constant with exponential speedup on a quantum computer," Physical Review E, 59:2429.