Xavier Waintal responds (tl;dr Grover is still quadratically faster)

This morning Xavier Waintal, coauthor of the new arXiv preprint “””refuting””” Grover’s algorithm, which I dismantled here yesterday, emailed me a two-paragraph response. He remarked that the “classy” thing for me to do would be to post the response on my blog, but: “I would totally understand if you did not want to be contradicted in your own zone of influence.”

Here is Waintal’s response, exactly as sent to me:

The elephant in the (quantum computing) room: opening the Pandora box of the quantum oracle

One of the problem we face in the field of quantum computing is a vast diversity of cultures between, say, complexity theorists on one hand and physicists on the other hand. The former define mathematical objects and consider any mathematical problem as legitimate. The hypothesis are never questioned, by definition. Physicists on the other hand spend their life questioning the hypothesis, wondering if they do apply to the real world. This dichotomy is particularly acute in the context of the emblematic Grover search algorithm, one of the cornerstone of quantum computing. Grover’s algorithm uses the concept of “oracle”, a black box function that one can call, but of which one is forbidden to see the source code. There are well known complexity theorems that show that in this context a quantum computer can solve the “search problem” faster than a classical computer.

But because we closed our eyes and decided not to look at the source code does not mean it does not exist. In https://arxiv.org/pdf/2303.11317.pdf, Miles Stoudenmire and I deconstruct the concept of oracle and show that as soon as we give the same input to both quantum and classical computers (the quantum circuit used to program the oracle on the actual quantum hardware) then the *generic* quantum advantage disappears. The charge of the proof is reversed: one must prove certain properties of the quantum circuit in order to possibly have a theoretical quantum advantage. More importantly – for the physicist that I am – our classical algorithm is very fast and we show that we can solve large instances of any search problem. This means that even for problems where *asymptotically* quantum computers are faster than classical ones, the crossing point where they become so is for astronomically large computing time, in the tens of millions of years. Who is willing to wait that long for the answer to a single question, even if the answer is 42?

The above explicitly confirms something that I realized immediately on reading the preprint, and that fully explains the tone of my response. Namely, Stoudenmire and Waintal’s beef isn’t merely with Grover’s algorithm, or even with the black-box model; it’s with the entire field of complexity theory. If they were right that complexity theorists never “questioned hypotheses” or wondered what did or didn’t apply to the real world, then complexity theory shouldn’t exist in CS departments at all—at most it should exist in pure math departments.

But a converse claim is also true. Namely, suppose it turned out that complexity theorists had already fully understood, for decades, all the elementary points Stoudenmire and Waintal were making about oracles versus explicit circuits. Suppose complexity theorists hadn’t actually been confused, at all, about under what sorts of circumstances the square-root speedup of Grover’s algorithm was (1) provable, (2) plausible but unproven, or (3) nonexistent. Suppose they’d also been intimately familiar with the phenomenon of asymptotically faster algorithms that get swamped in practice by unfavorable constants, and with the overhead of quantum error-correction. Suppose, indeed, that complexity theorists hadn’t merely understood all this stuff, but expressed it clearly and accurately where Stoudenmire and Waintal’s presentation was garbled and mixed with absurdities (e.g., the Grover problem “being classically solvable with a linear number of queries,” the Grover speedup not being “generic,” their being able to “solve large instances of any search problem” … does that include, for example, CircuitSAT? do they still not get the point about CircuitSAT?). Then Stoudenmire and Waintal’s whole objection would collapse.

Anyway, we don’t have to suppose! In the SciRate discussion of the preprint, a commenter named Bibek Pokharel helpfully digs up some undergraduate lecture notes from 2017 that are perfectly clear about what Stoudenmire and Waintal treat as revelations (though one could even go 20+ years earlier). The notes are focused here on Simon’s algorithm, but the discussion generalizes to any quantum black-box algorithm, including Grover’s:

The difficulty in claiming that we’re getting a quantum speedup [via Simon’s algorithm] is that, once we pin down the details of how we’re computing [the oracle function] f—so, for example, the actual matrix A [such that f(x)=Ax]—we then need to compare against classical algorithms that know those details as well. And as soon as we reveal the innards of the black box, the odds of an efficient classical solution become much higher! So for example, if we knew the matrix A, then we could solve Simon’s problem in classical polynomial time just by calculating A’s nullspace. More generally, it’s not clear whether anyone to this day has found a straightforward “application” of Simon’s algorithm, in the sense of a class of efficiently computable functions f that satisfy the Simon promise, and for which any classical algorithm plausibly needs exponential time to solve Simon’s problem, even if the algorithm is given the code for f.
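
For the curious, the nullspace attack really is just a few lines of code. Here’s a minimal sketch (my own illustration, not code from the notes or the preprint; the function name and bitmask encoding are mine) of classically breaking Simon’s problem once the oracle is revealed to be the explicit linear map f(x)=Ax: polynomial-time Gaussian elimination over GF(2) recovers a nonzero s in A’s nullspace.

```python
def simon_secret(rows, n):
    """Classically recover a Simon secret for the linear oracle f(x) = Ax over GF(2).

    rows: the rows of A, each encoded as an int bitmask (bit i = column i).
    Returns a nonzero s (as an int) with A s = 0 mod 2, or None if the
    nullspace is trivial. This is the poly-time "calculate A's nullspace"
    attack from the lecture notes, via Gaussian elimination.
    """
    pivots = {}                        # pivot column -> fully reduced row
    for r in rows:
        for c, pr in pivots.items():
            if (r >> c) & 1:
                r ^= pr                # clear existing pivot columns from r
        if r == 0:
            continue                   # row was dependent on earlier rows
        c = r.bit_length() - 1         # highest remaining bit: new pivot column
        for pc in pivots:
            if (pivots[pc] >> c) & 1:
                pivots[pc] ^= r        # keep all rows fully reduced
        pivots[c] = r
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None                    # A has full column rank: only s = 0
    c = free[0]
    s = 1 << c                         # set one free variable to 1
    for pc, pr in pivots.items():
        if (pr >> c) & 1:
            s |= 1 << pc               # back-substitute each pivot variable
    return s

print(bin(simon_secret([0b110, 0b011], 3)))   # a nonzero s with As = 0
```

Bitmask rows keep the elimination down to a handful of integer XORs; the point is just that once A is in hand, no exponential search is needed.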

In the same lecture notes, one can find the following discussion of Grover’s algorithm, and how its unconditional square-root speedup becomes conditional (albeit, still extremely plausible in many cases) as soon as the black box is instantiated by an explicit circuit:

For an NP-complete problem like CircuitSAT, we can be pretty confident that the Grover speedup is real, because no one has found any classical algorithm that’s even slightly better than brute force. On the other hand, for more “structured” NP-complete problems, we do know exponential-time algorithms that are faster than brute force. For example, 3SAT is solvable classically in about O(1.3^n) time. So then, the question becomes a subtle one of whether Grover’s algorithm can be combined with the best classical tricks that we know to achieve a polynomial speedup even compared to a classical algorithm that uses the same tricks. For many NP-complete problems the answer seems to be yes, but it need not be yes for all of them.
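
To make the arithmetic in that passage concrete, here’s a toy sketch (query counts only; all prefactors from hardware and error correction deliberately ignored): naive Grover over all 2^n assignments costs ~(π/4)·2^{n/2} ≈ 1.414^n oracle calls, so for any nontrivial n it’s beaten by the ~1.3^n structured classical 3SAT algorithm—which is exactly why the interesting question is whether Grover can be layered on top of the classical tricks.

```python
import math

# Toy query counts: naive Grover over all 2^n assignments vs. the
# ~O(1.3^n) structured classical 3SAT algorithm quoted above.
# Illustrative only: all prefactors (hardware, QEC) are ignored.
def naive_grover_calls(n):
    # Grover needs ~ (pi/4) * sqrt(2^n) oracle queries
    return (math.pi / 4) * 2 ** (n / 2)

def structured_classical_steps(n):
    # the ~O(1.3^n) classical 3SAT running time mentioned in the notes
    return 1.3 ** n

for n in (20, 60, 100):
    print(f"n={n}: naive Grover ~{naive_grover_calls(n):.2e}, "
          f"classical ~{structured_classical_steps(n):.2e}")
# Since sqrt(2) > 1.3, naive Grover loses for all large n: any real
# speedup must come from Groverizing the structured algorithm itself.
```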

The notes in question were written by some random complexity theorist named Scot Aronsen (sp?). But if you don’t want to take it from that guy, then take it from (for example) the Google quantum chemist Ryan Babbush, again on the SciRate page:

It is well understood that applying Grover’s algorithm to 3-SAT in the standard way would not give a quadratic speedup over the best classical algorithm for 3-SAT in the worst case (and especially not on average). But there are problems for which Grover is expected to give a quadratic speedup over any classical algorithm in the worst case. For example, the problem “Circuit SAT” starts by me handing you a specification of a poly-size classical circuit with AND/OR/NOT gates, so it’s all explicit. Then you need to solve SAT on this circuit. Classically we strongly believe it will take time 2^n (this is even the basis of many conjectures in complexity theory, like the exponential time hypothesis), and quantumly we know it can be done with 2^{n/2}*poly(n) quantum gates using Grover and the explicitly given classical circuit. So while I think there are some very nice insights in this paper, the statement in the title “Grover’s Algorithm Offers No Quantum Advantage” seems untrue in a general theoretical sense. Of course, this is putting aside issues with the overheads of error-correction for quadratic speedups (a well understood practical matter that is resolved by going to large problem sizes that wouldn’t be available to the first fault-tolerant quantum computers). What am I missing?

More generally, over the past few days, as far as I can tell, every actual expert in quantum algorithms who’s looked at Stoudenmire and Waintal’s preprint—every one, whether complexity theorist or physicist or chemist—has reached essentially the same conclusions about it that I did. The one big difference is that many of the experts, who are undoubtedly better people than I am, extended a level of charity to Stoudenmire and Waintal (“well, this of course seems untrue, but here’s what it could have meant”) that Stoudenmire and Waintal themselves very conspicuously failed to extend to complexity theory.

96 Responses to “Xavier Waintal responds (tl;dr Grover is still quadratically faster)”

  1. Hm Says:

    Thanks Scott. I am sure that having to deal with this type of stuff is draining. But your clear and direct language is widely appreciated.

    Also, “our classical algorithm is very fast and we show that we can solve large instances of any search problem” seems like a strong claim, at least to me.

  2. on the bright side... Says:

    …this seems exciting https://scirate.com/arxiv/2303.13012. “As an example application, we apply our quantum algorithm to efficiently estimate the kinetic energy of an oscillator at any time, for a specification of the problem that we prove is BQP-complete. Thus, our approach solves a potentially practical application with an exponential speedup over classical computers.”

  3. P N Says:

    It’s clear that the title and abstract of the paper should be revised, as they are provocative to anyone who understands the nuances.
    I’m sorry for you Scott that you have to deal with this (especially the latter comments in the previous blog post).

  4. Justin Says:

    I mean, even if they are right theoretically (they are not), they failed to address the “amount” of time their classical simulator needs (which depends on programming language and what computer they use) vs the quantum diffusion and oracle.

    1) Multiplying some 32×32 matrices on a laptop usually takes more than 1 ms. So maybe a 5-qubit Grover algorithm already beats their classical simulator.
    2) In Grover, the diffusion operator and oracle require many Toffoli gates, which are definitely not something that can be easily simulated classically (each Toffoli gate must be decomposed into an odd number of T gates (7), unlike the case in https://www.scottaaronson.com/chp/)

    One can read more about these actual time comparisons (Circuit Obfuscation Using Braid) at:

  5. Christian Says:

    @Justin: did you look at Table 1 and Table 2 of the preprint? Or read the paper? They are not multiplying 5×5 matrices for 5 qubit systems; they use clever tensor network techniques.

  6. Ted Says:

    I think that one source of the confusion for non-experts is the use of language like “one is forbidden to see the source code” of the oracle. As I explained in my comment in the previous post, I think it’s more transparent to instead say something like “one chooses not to use any information about the source code”.

    To me, the latter language more clearly conveys various nuances, like:
    (a) in a typical practical implementation, the coder not only knows the exact implementation of the oracle, but in fact codes it up herself along with the rest of the source code,
    (b) in the Grover context, we typically choose to ignore the details of the oracle function not because we’re perverse masochists who enjoy making problems harder for ourselves, but because in many contexts the oracle doesn’t give us any structure that we know how to usefully exploit anyway, and
    (c) for any concrete instantiated oracle, the Grover speedup is provisional and always has the potential to be beaten if you do choose to look at the details of the instantiation (which you’re free to do “in the real world”).

    I understand why the more common language (“forbidden”) is useful to experts – it clarifies which (meta-level) information you’re allowed to use in your proofs of optimality – but it is a little confusing to beginners.

  7. Ted Says:

    Also: I think that many of the objections raised by the paper authors and by various commentators can be addressed by clarifying the context in which Grover is useful. As I discussed in my comment on the previous post, there is a very simple criterion for when Grover gives a real-world speedup: when (a) the best classical algorithm that you can think of involves calling some subroutine many times in a generic way that doesn’t exploit its structure, and (b) the algorithm runtime is dominated by those calls to that subroutine, so that (to leading order) the total runtime scales linearly with the number of subroutine calls.

    This formulation shows that many of the questions that various commentators have raised have pretty straightforward answers:
    1. “What if I come up with some algorithm for which the ‘oracle subroutine’ calls aren’t the bottleneck?”
    Then the Grover speedup doesn’t apply in that context, because the conditions listed above aren’t satisfied.

    2. “What if I only call the ‘oracle’ once, but it takes exponentially long to evaluate, so my algorithm is still slow?”
    Then again, the Grover speedup doesn’t apply, because the conditions listed above aren’t satisfied.

    3. “Isn’t it critical to specify exactly which computational steps go ‘inside’ the oracle instantiation and which steps I leave ‘outside’?”
    No, not really. As long as you scope the “oracle” subroutine such that the conditions above are satisfied, then Grover’s algorithm will give you an approximately quadratic speedup.

  8. Nadya Says:

    I just wanted to thank you for this and the previous post.
    As someone who is just starting their journey in quantum computing, the message of the paper in question seemed very weird to me and just didn’t feel right. Now I know why.

  9. Scott Says:

    Ted #6 and #7: Thank you so much!! Who needs GPT when I’ve got commenters like you to help share the burden? 🙂

  10. Scott Says:

    Nadya #8: You’re welcome!!

  11. Kerem Says:

    The individuals who policed Scott’s tone could take a look at Waintal’s response and take note of the physicist’s arrogance.

    In chess, you learn very early on the importance of respecting one’s opponent. If a GM appears to be falling into a two-move queen trap, it’s worth taking a moment to reconsider if you might be the one overlooking something. This principle holds true regardless of whether you’re facing Petrosian or Tal, and irrespective of your opponent’s background or style.

    If you believe that a particular field has gone 30 years without questioning its hypotheses, it would be wise to pause for a few minutes, or perhaps, you know, consult with a colleague in computer science, before assuming you’re the first to uncover something profound.

    I would like to invite Waintal, as the physicist he is, to tackle an instance of Integer Factorization expressed as a Circuit SAT problem (it’s a basic undergraduate task to formulate this using a digital multiplier) and see how his approach fares.

    Regrettably, despite this elementary misunderstanding, it’s still possible that we might see this paper published in a reputable journal like Physical Review X.

  12. Topologist Guy Says:

    Scott, sorry for the off-topic comment. I’m curious where, pedagogically, you’d draw the line between “undergraduate” quantum computing concepts and “graduate-level” results. Grover’s and Shor’s algorithms are simple enough that they’re routinely taught in undergraduate intro to quantum information (or some such) courses. Is there a quantum algorithm, or a theorem in quantum complexity theory, sufficiently complicated or abstract that it’s “definitively” a graduate-level result? Just by analogy, for instance, I’d teach point-set topology to undergraduates, but study of algebraic invariants (like cohomology groups of spaces) is more traditionally a graduate-level subject.

  13. Ronald de Wolf Says:

    To Stoudenmire or Waintal, if you’re reading this:
    Do you have a classical 2^{n/2}*poly(n)-time algorithm for SAT? No? Then *please* change the title of your paper! We’ll be forced to explain to gullible journalists for years to come why it’s highly misleading and doesn’t cover what you actually do in this paper.

  14. Scott Says:

    Ronald de Wolf #13: I second, third, and fourth that question!

  15. Scott Says:

    Topologist Guy #12: There’s no standardized curriculum for a graduate QC course, but some possible quantum algorithms that could be covered in such a course include HHL, quantum walk algorithms (e.g. for Element Distinctness), the Farhi-Goldstone-Gutmann NAND-tree algorithm, the adiabatic algorithm, the Forrelation algorithm, block encodings, Hamiltonian simulation, algorithms for various nonabelian hidden subgroup problems (including Kuperberg’s)…

  16. Lane M Says:

    Ah, there is an xkcd for this: https://xkcd.com/793/

  17. Mateus Araújo Says:

    Argh, this is some obstinate refusal to learn! Never mind that any serious scientist should have done their homework before embarrassing themselves publicly on the arXiv. This answer comes after their misunderstanding has been explained in nauseating detail both here and on SciRate.

    In the case of Bell denial the community has caught on and when the perennial denialist paper appears on the arXiv it is completely ignored (except by Richard Gill, of course). It might be time we extend the same treatment to complexity theory denial.

  18. Karen Morenz Korol Says:

    Wow, yesterday I thought you could have afforded to give these authors a little more benefit of the doubt (eg, maybe they were ignorant of existing complexity theory knowledge by accident rather than wilfully, or maybe they meant to be pedagogical to clear up confusion in their own field but garbled the write up). I see now that any right to benefit of the doubt has been thoroughly forfeited and I’m a bit flabbergasted at this doubling down. Anyways thanks for helping clarify things for the rest of us! It may feel annoyingly pedantic to you but it’s really helpful for beginners like me to be able to check my understanding 🙂

  19. Angry Physicist Says:

    Scott #15: I looked up that “HHL” algorithm on wikipedia and…wow. I have literally no background in quantum computation or complexity theory (nothing beyond coding in Python and C++), but I *still* read through the explanation and proofs easily. I could probably learn the whole thing in like 45 minutes. Imagine if the tables were flipped—could you read and understand an article on, say, renormalization group flow in Yang-Mills in 45 minutes? Very much doubt it.

    HHL is so pathetically simple that I could teach it to undergraduates in the linear algebra course here. It’s literally just applying some easy-to-define unitary transformations on a finite-dimensional vector space. Compare to QFT, which I learned in senior year undergrad (not to brag…) which concerns infinite-dimensional Hilbert spaces. The vectors here are not superpositions of your “qubits” but superpositions of entire field configurations. The mathematics and abstraction here is SO MUCH more advanced than your “field” that I’m struggling to restrain myself from laughter at your pretence to present yourself as some kind of accomplished expert in anything. Remember when you posted here that you were STILL struggling to learn QFT? 🤣

    Maybe I’ll reconsider if you can solve this problem for me:

    Using either the Lagrangian/path-integral or Hamiltonian, derive from first principles the first few terms in the perturbation expansion for a three-gluon scattering amplitude in QCD (without quarks/fermions).

    You’re a fool if you think Yang-Mills is remotely comparable to your “HHL” thing by any measure of difficulty or complexity. I could have come up with Shor’s algorithm myself so easily if pointed in the right direction.

    You know, it really angers me that I’m a struggling academic with almost no shot at a permanent tenure-track job, yet I am capable of such greater abstraction and actual physics than the great Professor Arrowsome. It’s a fucking joke that you, who belongs to a field of this “caliber,” is a spokesman to the journalists and the public of anything. There’s no funding for people like me anymore and yet “quantum information science” for actual dummies is overflowing with money. I can only hope that karma bites you in the ass someday. I hate everything.

  20. Scott Says:

    Angry Physicist #19: So then why don’t you switch fields to quantum information and amaze us all with your toweringly superior contributions, if you’re so angry and jealous about all these billion-dollar bills that we’re able to find lying on the ground? Like seriously, why not? You wouldn’t be the first former higher-energy physicist to make the switch, or even the 20th.

    Yes, HHL is almost comically simple (although the question of what it’s good for is subtle enough that people are constantly getting it wrong). But the truth is that it’s not just that I didn’t happen to think of it; neither did Preskill or Farhi or Laflamme or any of the other former HEP physicists (far greater ones than you, I might hazard?) who populate our field.

  21. Scott Says:

    FYI, Xavier Waintal emailed me the following reply to my reply. I’ll reply to it later, if someone else hasn’t satisfactorily done so by then.


    Scott has been kind enough to post my paragraph, but apparently
    we’re still not on the same page, so I have written a more detailed reply. There are many points on which I think that we all agree, so I’d like to clear them first and identify statements that we share. I am interested to know where exactly the disagreement starts. I suspect it’s purely a semantic issue.

    #A We all agree that within the oracle framework, Grover gives
    a quadratic speed-up. By oracle framework, I mean that we can call the function but we have no access to its source code.

    #B However, for the problems that we are interested in (at least in the immense majority of the literature), the starting point is not an oracle but a classical problem that we want to solve. For instance a satisfiability problem like 3SAT. In any reasonable way one wants to use Grover, one will have to eventually code the oracle (by e.g. using reversible logic to transform the classical oracle into a quantum one). I think most people in this audience agree with that. So I take the starting point not as an oracle but as its source code, the quantum circuit (the list of gates really) that defines it.

    #C As soon as one defines the problem as in #B, then the generic quantum advantage is trivially gone. For instance, we might look into the source code and find that it contains 1 line, the quantum version of “if x=0 return 1 else return 0”. This is easy to solve classically. I think we all agree to that too.

    #D So we are left with one question. Are there some problems left where Grover gives a parametric speed up and more importantly how do we identify them? I think we all believe
    that the answer to the first part of the question is yes (CircuitSAT is a candidate) .

    #E Our work is concerned with the second part of the question.
    We built a classical algorithm that takes the quantum circuit of the oracle as an input and solves the associated search problem. It uses tensor network voodoo (which I highly recommend, it is an elegant and deep formalism). Our technical statement is the following:1) if one can simulate the quantum oracle, one can solve the search problem. 2) one must do the simulation of just one oracle, not an exponential number of them in series as in Grover. It is a non-trivial statement. It is agnostic with respect to the oracle’s circuit, one can use ones’ preferred simulation technique (e.g. tensor networks, mpo/mps…).

    #F Assuming we agree that #E is correct, does it change the game from a complexity theory perspective? In general, I think we agree that the answer is no since we already established in #C that within our definition of the problem there is no generic quantum advantage. However, #E places new constraints that a particular problem must fulfill in order to possibly have a quantum advantage. From a physics perspective, something that I find interesting is that it makes a direct connection between algorithm complexity and the entanglement barrier of the oracle.

    #G Now comes the important bit. Suppose that we have found a problem, any problem, where Grover provides a genuine quadratic speed up over its classical counterpart. For large instances of this problem, the quantum computer will deliver the solution faster than the classical one. However for small instances the classical computer is faster, it has a much smaller prefactor. Using #E, we can estimate the prefactor for the classical computer in the worst case situation. Using physics, we can estimate the prefactor for quantum computers. For which problem size does the quantum computer become faster than the classical one? The answer, under over optimistic assumptions on the future of quantum hardware, lies in the tens of millions of years. I repeat: the only problems where a quantum computer would be faster would be ones that would take the quantum computer tens of millions of years to solve.

    #H Now some of you are unhappy with the title of our article that might seem to indicate that we contradict #A (we don’t) or that we just discovered #C (we did not). No, the title stems from #G. To some people, #G might seem like an insignificant detail. Who cares about prefactors? To a physicist (like me) scales do matter: a cm² quantum computer is easier to build than a km² one. To the engineers that are trying to build this stuff, it matters too. And I am sure that the people who pay for quantum computing would share my definition of quantum advantage.

  22. Kerem Says:

    Many things to unpack in Waintal’s re-response. One is that he at the very least seems to be starting to take the other side seriously, rather than launching ad hominems like “you are never questioning your assumptions” or “you don’t understand the physical world,” or, like some of his colleagues (Giuseppe Carleo), hurling veiled insults on Twitter towards this blog and TCS in general. Great start!

    After this clarification, it appears that, just as the original critique published on this blog clearly stated, Waintal’s entire argument against Grover is one about prefactors. But much of this is well-known (see, for example, “Focus beyond quadratic speedups for error-corrected quantum advantage,” arXiv:2011.04149, or “Is Quantum Search Practical?,” arXiv:quant-ph/0405001).

    So what is the *real* goal of such a clickbaity title that captures the Twittersphere and every layperson’s imagination? If the entire point is one of practicality, why not put this loud and clear in the title instead of hiding behind tensor network obscurantism?

    Finally, if Waintal is giving moral lectures about who “pays” for quantum computing, perhaps he should take some morality lessons about intellectual honesty. But a lesser title might not pass the PRX editorial check, am I right?

    I am all for bringing prefactors to the TCS headspace and a healthy skepticism towards quantum computing, but if you are going to do it, do it right, with intellectual honesty.

  23. OhMyGoodness Says:

    Is this a reasonable instantiation of an oracle-


    This is what you are creating at OpenAI-an agent sympathetic to humankind?

  24. Andrew Says:

    Scott #21, or really Xavier #21
    I think the fundamental problem here is that while your points may be mostly technically correct (no doubt someone will point out some problems), you’re simply not saying anything new, but think that you are.

    You seem to place the greatest emphasis on point #G, that in practice constants of proportionality can matter more than the asymptotic behaviour. I’m not a CS complexity theorist, but I am a software developer, and I have sometimes used a bubblesort rather than a quicksort. The former scales as n^2, the latter as n log n, but for low n, the bubblesort is faster. I knew this long before you put your paper on the arxiv, and I’m surely not alone in that.
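
    For concreteness, here’s a back-of-envelope sketch of that crossover, with per-comparison costs that are entirely made up for illustration:

```python
import math

# Back-of-envelope version of the point above: with hypothetical
# per-comparison costs a < b, an a*n^2 bubblesort beats a
# b*n*log2(n) quicksort precisely on a range of small n.
a, b = 1, 8                    # made-up constants, for illustration only
wins = [n for n in range(2, 200) if a * n * n < b * n * math.log2(n)]
print(f"bubblesort wins for n = {wins[0]}..{wins[-1]}")
```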

  25. Topologist Guy Says:

    1. Obviously the title of their article is provocative and plain wrong. Stripping away the misunderstandings, though, is there anything substantively new in their tensor network algorithm?

    2. You filed these posts under CS/Physics deathmatch, and this whole Grover debacle strikes me as karmic justice for what you It From Qubit types did to high-energy theory, with your claims of making wormholes and testing quantum gravity in a lab using quantum computers (not you personally, but the quantum information / It From Qubit community was complicit in promoting an “experiment” at least as idiotic as this Grover thing).

  26. Clarification Says:

    Concretely: is there a class of problems for which quantum advantage is practically ruled out by these new tensor network methods that were not practically ruled out (barring some major advancements in FTQC) in the papers that Kerem linked to? What do these tensor network methods add (not trying to be combative – genuinely curious)?

  27. Ted Says:

    Xavier (via Scott) #21:

    I’m trying to place your argument into the larger context of the existing literature on resource estimates for Grover’s algorithm. (My apologies, but I haven’t finished reading your preprint yet, so I’m just going off of your summary here in the comments.) I believe that there is already a significant literature of papers making resource estimates for running Grover’s algorithm on an error-corrected QC. And because of the massive (although still constant!) overhead that QEC requires, they all predict an intimidatingly large crossover point: the problem-instance size at which the quantum resource curve finally crosses below the classical resource curve.

    You seem to be arguing that that crossover point is even further out than was previously projected. In order for that to be true, you must be claiming either that (a) quantum computers require even more resources than previously predicted to solve a problem instance of a given size, or that (b) classical computers require fewer resources than previously predicted.

    If I understand your argument, then you are arguing the latter, and your novel contribution is really a proposal for a new classical optimization heuristic. You claim that, given a difficult optimization/combinatorics problem, one can draw (not actually physically build!) the quantum circuit for running Grover’s algorithm on that problem, and then classically simulate that circuit using tensor network voodoo, which can be done reasonably efficiently as long as the state of the qubits in the oracle circuit does not become highly entangled.

    (I’m a little unclear on your exact claim – is it that the state never becomes highly entangled for any instance? This would seem to constitute a universal efficient solver for NP-complete problems, which is … unlikely. That the entanglement remains low for average-difficulty instances?)

    In order for this proposed classical procedure to change the conclusions of the previous literature, it must be a more powerful classical heuristic than whichever classical heuristics (maybe just brute-force search?) the previous literature had considered. But if you’re claiming that you’ve discovered a broadly useful new classical numerical heuristic, then it almost seems to be “burying the lede” to discuss Grover’s algorithm at all. It seems to me that the thing to do would be to compare it against the existing state of the art in classical optimization.

  28. Boaz Barak Says:

    Just joining this discussion now but Waintal #21: here is a simple question. Consider the task of finding an input x such that H(x) starts with n zeroes, where H is a cryptographic hash function such as SHA256. Can you explain in concrete, non-asymptotic terms what is the performance of your algorithm on this problem as a function of n?

    If we assume that your algorithm takes c*2^n steps and Grover C*2^{n/2} where c,C depend on the (not that complex) circuit for SHA256, then Grover will beat your algorithm as soon as 2^{n/2} > C/c. At this point Grover will require C^2/c steps. Why would that be tens of millions of years?
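    The arithmetic above is easy to sanity-check numerically. The sketch below uses purely illustrative (made-up) prefactors c and C, not actual constants for a SHA256 circuit:

```python
# Sanity check of the crossover arithmetic: classical cost c*2^n
# vs. Grover cost C*2^(n/2), with illustrative prefactors.
def crossover_n(c, C):
    """Smallest n at which C*2^(n/2) < c*2^n, i.e. 2^(n/2) > C/c."""
    n = 1
    while C * 2 ** (n / 2) >= c * 2 ** n:
        n += 1
    return n

# Hypothetical example: Grover's per-step overhead C is a million
# times the classical per-step cost c.
c, C = 1.0, 1e6
n_star = crossover_n(c, C)
grover_steps_at_crossover = C * 2 ** (n_star / 2)
# At the crossover, Grover needs roughly C^2/c steps, matching the
# comment's observation. Here that is about 10^12 steps: large, but
# nowhere near "tens of millions of years" on plausible hardware.
```

    Even with an overhead ratio of a million, the crossover lands around n = 40 and Grover's cost there is about C^2/c, exactly as the comment says.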

  29. Noel Says:

    Clarification #26: is the classical tensor network algorithm they propose faster than the best classical algorithms for a class of problems? Maybe this adds something, even if the classical algorithms have always been better than the quantum algorithms for this class of problems?

    Scott #21 or Xavier – I did not understand why I should be convinced that your equation (19) does not blow up and become exponential for all cases below 80 qubits; or are you just saying that 2^80 calls to f are solvable on a supercomputer?

  30. Jud Says:

    It is interesting to me how many scientists who understand intimately the complexities of their own fields are nevertheless prepared to assume, based on an absence of knowledge, that (1) entire other fields of scientific endeavor can be adequately described in broad generalizations; and (2) no one in these other fields has ever been quite smart enough to think of their “new” results. It is rather surprising they would be brave enough to so publicly demonstrate their willingness to arrive at conclusions based on insufficient information.

  31. Justin Says:

    Christian #5: Yes, I read the paper, but they are not showing the time for quantum operations. Also, 32 = 2^5 for 5 qubits. I know they are using MPS, but even at 5 qubits I suspect that Grover on a quantum computer is already faster than their MPS.

  32. Justin Says:

    Boaz Barak #28: No need to wait that long. If you look at n = 30, it already takes 21 seconds to compute the post-oracle state. And 6.5 hours for n = 40! And why are they not doing n = 50? Probably it will take a year??
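    A back-of-envelope extrapolation from the two timings quoted above (21 s at n = 30, 6.5 h at n = 40), assuming the cost grows like r^n:

```python
# Extrapolate the quoted timings, assuming cost ~ r^n for some base r.
t30 = 21.0              # seconds at n = 30 (from the comment)
t40 = 6.5 * 3600.0      # seconds at n = 40 (from the comment)

r = (t40 / t30) ** (1 / 10)   # growth factor per additional bit
t50_years = t40 * r ** 10 / (3600 * 24 * 365)
# r comes out very close to 2, i.e. ~2^n scaling, and the implied
# n = 50 runtime is on the order of a year, consistent with the guess.
```

    So the "probably a year" guess is what ~2^n scaling of the quoted numbers predicts.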

  33. Y Says:

    Any physical experiment for measuring a physical constant is essentially a black box (oracle) problem with some prior knowledge. There’s no way to optimize the “circuit” implementing the black box (the physical system) without knowing what the black box is. The motivation for studying oracles should have come from experimentalists.

  34. Mitchell Porter Says:

    Angry Physicist #19 (who sounds identical to “String Theorist” from last year):

    Two physicists wrote a paper about a quantum algorithm, and a computer scientist says they got it wrong. Your contribution to the discussion is… to brag about how physics is much harder than computer science. I’m sorry, but I don’t even see what point you’re making.

  35. ad Says:

    I really don’t understand how seeing the source code or circuit matters in this case (I read the paper, too):
    let us say f(x) = sin(x); then marking a state would be:
    if sin(x) == 1:
        mark input x

    On a quantum computer, applying a CZ gate to an ancilla |1>, controlled by the output of f(x), would mark the solution within the superposed input. I really could not understand how seeing inside the oracle (the OR and AND gates implementing some logical output for a given input x), or the code, helps us design a faster classical algorithm.
    Seeing inside the function or the oracle does not mean we know on which input it gives the right answer.
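    The marking step described above can be sketched with a tiny statevector simulation. This is an illustration only (the predicate and the marked item below are made up, standing in for "sin(x) == 1"), not code from the paper:

```python
import numpy as np

# A phase oracle flips the sign of the amplitude on every input x
# with f(x) = 1, without revealing which x those are.
n = 4
f = lambda x: int(x == 0b1011)  # hypothetical predicate / marked item

state = np.full(2 ** n, 1 / np.sqrt(2 ** n))   # uniform superposition
marked = np.array([(-1) ** f(x) for x in range(2 ** n)]) * state

# Note: measurement probabilities are unchanged by the phase flip.
# The mark only becomes visible after the Grover diffusion steps
# amplify the flipped amplitude.
```

    As the comment says, knowing the circuit for f is not the same as knowing which x it marks; the mark itself is just a hidden sign flip.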

  36. Scott Says:

    Topologist Guy #25: I’d like to think that, not only was I not complicit in the ridiculous claims about “Google creating a wormhole in the lab,” I was one of the main people speaking out against them! (While not dismissing the possibility that future quantum computations actually could tell us something interesting about models of quantum gravity.) WTF do you want from me? 🙂

  37. CM Theorist Says:

    Full disclaimer: I am a condensed matter theorist, not a QC person. I have no idea whether there is something new in what Miles and Xavier found (and I enjoyed reading their paper, it is clearly written for my community).
    I can answer some comments on the TN/algorithm voodoo that some people seem to misunderstand or not know. I also have a few comments/questions for Xavier.

    1) If we have an MPS rep of the quantum state after one application of the gates, it is trivial to obtain the solution of the search problem in linear time (polynomial in the number of solutions, linear in the number of bits). There is no violation of any theorem here; it is just that the state is low-entangled, and so we can probe it and evaluate it well. There is also no need to “know” the solution or anything like that. In that sense, the classical algorithm needs only _one_ call of the oracle. The question is just: can we actually obtain the MPS representation of that state in polynomial time?

    2) Building the MPO of the unitary is super hard indeed in general. It is far less obvious (to me) how generically hard it is to build U |psi> given that both the starting state AND the end state are trivially representable.
    Note that there are obvious hard cases, and P!=NP so in the end (as mentioned in the paper), something will break.

    3) Comments #28 and #29: there is no need for 2^n calls. Using tensor network voodoo, the complexity of applying a gate depends purely on the local entanglement created during the step (to simplify, applying a 2-site gate will cost something like O(chi^3 d^2), where chi is morally an exponential of the entanglement between the relevant parts of the circuit). So it can be much worse than 2^n (the maximal chi will be 2^(n/2)), but it will be much smaller if the entanglement remains small/local. I do not do time evolution, which is harder, but we routinely work with systems of a few hundred to a few thousand qubits for one-dimensional low-energy physics, where MPS are amazing.

    4) One big claim is that the hard (interesting) configurations are also hard experimentally, because their algorithm can deal with a quite large number of gates (as long as they are not too long-range, I suppose, though long-range is also very hard experimentally). That seems hard to deny, whatever the number. Though I suspect it was probably kind of known. Also, the evaluation of the required error rate seems to be simple enough to be correct. I would be curious to see whether someone can present a counterpoint to that argument.

    5) For me, the technical part is relatively trivial (and entirely correct as far as I can tell in a short reading), with the novelty of the algorithm mainly being that MPS/MPO/TNs could provide some interesting prospects/directions in the QC community too. And some interesting explicit look between information layout and hardness of classical problems.
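    The cost model in point 3) can be illustrated with a toy numpy sketch. This is my own illustration under simplifying assumptions (random real tensors, a non-entangling gate), not code from the paper:

```python
import numpy as np

# Applying a two-site gate to an MPS with bond dimension chi is a local
# contraction plus an SVD, with cost polynomial in chi (roughly chi^3 up
# to factors of d), independent of the total number of qubits.
d, chi = 2, 8  # physical dimension (qubit) and bond dimension, illustrative

rng = np.random.default_rng(0)
A = rng.normal(size=(chi, d, chi))  # left tensor  (left bond, phys, right bond)
B = rng.normal(size=(chi, d, chi))  # right tensor
gate = np.eye(d * d).reshape(d, d, d, d)  # identity gate: creates no entanglement

# Contract the two sites, then apply the gate on the physical legs.
theta = np.einsum('aib,bjc->aijc', A, B)            # shape (chi, d, d, chi)
theta = np.einsum('ijkl,aklc->aijc', gate, theta)   # gate acts on legs (k, l)

# Split back with an SVD; the number of significant singular values is
# the new bond dimension. A non-entangling gate leaves it at chi; an
# entangling gate can grow it, up to d * chi per step.
M = theta.reshape(chi * d, d * chi)
S = np.linalg.svd(M, compute_uv=False)
kept = int(np.sum(S > 1e-10 * S[0]))  # bond dimension after truncation
```

    The point is that chi, not the qubit count, drives the cost, which is why low-entanglement circuits stay classically cheap.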

    Some questions for Xavier:
    1) Is the precision needed on each application not killing you in practice for large problems? (Truncation schemes might need to be ultra accurate, plus a need to work with more than float precision.) If we have to be practical, it kind of cuts both ways.

    2) I second Ted’s question in #27 on the generality of the entanglement barrier. In particular, I would be interested in some non-trivial criterion that tells me that something is hard/easy (beyond “it’s entangled” / “it has local 1D structure”). It is particularly interesting because the actual MPO of the Grover unitary is very easy to write; the difficulty is getting there.

    3) Following up on 2): would it not be that, as soon as my problem becomes fundamentally not one-dimensional in the register (the n bits representing a 3D structure, with the search problem a 3D cipher applied to that, for example), we know _for sure_ that the MPS should break down somehow, somewhere along the way?

    Finally, as a researcher, I found this series of blog posts disappointing professionally and intellectually. The language used is highly unprofessional, and only some comments engaged with the actual contents of the paper. Which is a shame, because I personally was very curious about the QC point of view. I thank in particular Alex Meiburg in the previous post and Ted in this one for their insights.
    And for people saying Xavier is arrogant: well, he is a little bit (I know him), but what kind of saint do you expect after having your work called a pile of dung shit?

  38. Job Says:

    From #21,

    #B However, for the problems that we are interested in (at least in the immense majority of the literature), the starting point is not an oracle but a classical problem that we want to solve. For instance a satisfiability problem like 3SAT. In any reasonable way one wants to use Grover, one will have to eventually code the oracle (by e.g. using reversible logic to transform the classical oracle into a quantum one). I think most people in this audience agree with that. So I take the starting point not as an oracle but as its source code, the quantum circuit (the list of gates really) that defines it.

    If you’re going to do that, then consider dropping “oracle queries” as any comparative measure of performance. From the paper:

    We have constructed a quantum inspired version of Grover’s algorithm that can solve quantum problem in a single call to the oracle, requiring exponentially fewer steps than Grover’s algorithm, provided one is able to compute individual amplitudes of the oracle.

    What does a “single call to the oracle” mean when you don’t need to formally query the oracle at all?

    That’s like claiming that I can guess what you had for lunch by asking a single yes/no question, while also having your brain available for inspection. Like Scott says, can’t you do it with zero queries?

    Also, this part is a bit evasive:

    In this article, we focus on the case where the problem has a fixed number of solutions S (or more generically where S grows at most polynomially with n). For problems that have an exponential number of solutions, our algorithm would have to be revisited, but we conjecture that easy classical solutions exist in that case.

    I can imagine instances with a number of solutions greater than any fixed polynomial that are still outside the reach of an efficient classical solver, so I’d like to know more about this.

  39. Scott Says:

    CM Theorist #37:

      what kind of saint do you expect after having your work being called a pile of dung shit.

    But I didn’t start this! I didn’t! I was perfectly happy for condensed matter people to do condensed matter, or (better yet) build bridges to quantum algorithms, talking to quantum algorithms people and engaging with what they know. Instead, Stoudenmire and Waintal launched an attack on my field that, were it upheld, would (I would say) imply that my field shouldn’t exist at all, since it would mean we’d overlooked boneheadedly simple yet fatal objections to the Grover speedup for 27 years.

    The good news is that, when examined, the attack is immediately revealed to be a mishmash of the forehead-bangingly wrong (eg, classical algorithms can beat Grover’s algorithm in query complexity) with the obvious and well-known (eg, constant prefactors matter, error-correction is important).

    And even then I tried to ignore it! Until journalists effectively threatened me that, if I did, they were going to run completely credulous stories about the preprint. WTF am I supposed to do in this situation?

  40. Frequent Reader, Rare Commenter Says:

    Can you explain why your attitudes are so different in the AI comment sections and here? On the open questions of AI (the capabilities and limits of LLMs, etc) you moderated a vibrant discussion, entertained all different viewpoints. On the open questions of quantum computing, though (whether quantum computers can outperform classical, I guess) you’re launching ad hominem attacks at everyone who disagrees with you, dropping F-bombs like an edgy high schooler, insulting people who don’t believe in quantum computers, calling them trolls just because they’re making an argument you don’t like (like the “quantum man” from the last thread), and angrily wielding your ban hammer, banning people for perfectly good contributions just because you disagreed with them. What gives? Is this like an ego thing?

  41. Scott Says:

    Frequent Reader, Rare Commenter #40: Because the nature of the Grover speedup is not an open question. Yes, there are open questions in the vicinity of that question—even profound ones, like P vs NP and the validity of the Strong Exponential Time Hypothesis—but those aren’t what’s being discussed here. If the query complexity of Grover search etc, etc were actually open questions, then the theory of quantum algorithms could have made no progress since its inception and might as well never have existed. That is what Stoudenmire and Waintal apparently believe, and that is what the rest of the world will believe too, if our community (like, eg, evolutionary biologists or climatologists) isn’t loud and forceful in explaining what it knows in the face of proudly ignorant attack.

    With AI, I have been scathing with the people who dismiss GPT as unimpressive, a scam, etc—a nutty view that requires either willful detachment from observable reality or else extreme ideological fixations—but I’ve tried to host a very open discussion about what it means, where it’s going, etc, because no one knows. I’d do the same with questions about quantum computing that were similarly open.

    Does that answer your question?

  42. That Annoying Guy Says:

    Scott comment 15,

    While it may be true that there is no one-size-fits-all approach to teaching quantum computing, there are certainly well-established concepts and techniques that are essential for any student hoping to gain a comprehensive understanding of the field.

    As for the algorithms you suggest, while they may be interesting and useful in their own right, I do not believe they should form the core of a graduate course. It is important to remember that quantum computing is not simply a collection of clever tricks and algorithms, but a fundamental shift in the way we approach computation.

    Therefore, any graduate QC course worth its salt should begin with a thorough grounding in the mathematical foundations of quantum mechanics, including linear algebra, complex analysis, and probability theory. From there, students should be introduced to the basics of quantum information theory, including qubits, gates, and measurement.

    Only once these concepts are firmly in place should we move on to more advanced topics like quantum algorithms. And even then, I would argue that the focus should be on the underlying principles and techniques, rather than the specific algorithms themselves.

    In short, while your suggested algorithms may have their place in a graduate QC course, they should not be the main focus. Instead, we should prioritize building a strong foundation in the core concepts and principles of quantum computing, so that students can approach any algorithm or problem with a deep understanding of the underlying theory.

  43. Scott Says:

    Everyone: “Angry Physicist” submitted a reply that was so insulting, pathetic, and self-pitying that, rather than letting it appear, I forwarded it to the Shtetl-Optimized Committee of Guardians, which overwhelmingly ruled that I should leave it in moderation (in fact, most thought I should’ve done the same with Angry Physicist’s original comment).

    The one amusing twist: Angry Physicist answered my question—why doesn’t he just switch fields to quantum algorithms, if it’s so easy compared to particle physics that he could revolutionize it in an afternoon?—by saying that, even if he did switch, it wouldn’t matter, since the computer scientists like me would pooh-pooh and dismiss his breakthroughs, just like I did with Stoudenmire and Waintal’s breakthrough on beating Grover’s algorithm classically (!!). The validity or (as in this case) invalidity of the actual content seems to have been irrelevant to him; all that matters is whether it came from physicists.

    Needless to say, the many, many physicists who have made fundamental contributions to quantum algorithms have nothing even remotely resembling this individual’s tribalistic, resentful, self-pitying, and self-defeating attitude.

    I normally feel a lot of sympathy for people struggling on the margins of academia, and try to help them if I can. Here’s someone who fully, thoroughly deserves his failures.

  44. It must be hard running a blog Says:

    Frequent Reader, Rare Commenter (#40):

    The reason that Scott isn’t “entertaining all different viewpoints” here is because there is no “open question of quantum computing”. Grover does offer a quadratic speedup in theory. No, it is not practical unless there are significant advances in FTQC and the technologies underlying QC. AFAICT, the paper does not advance our understanding on either of those points, but does succeed in blurring them together with ambiguous language.

    The problem of “what types of oracles can be efficiently simulated with tensor networks” does seem interesting, but doesn’t seem to change the stakes. The title “Grover’s algorithm offers no quantum advantage” implicitly conveys the subtitle “because of something we discovered”. Since the current practical infeasibility is well-known, I assumed the paper was observing something fundamental. It was not.

  45. Angry Physicist Says:


    Your utter lack of sympathy is appalling. Contrary to what you claimed, you obviously have no compassion for people struggling in the margins of academia. You have no empathy or compassion for people struggling in life, all you do is sit in your ivory tower and judge them. You call people like me losers and failures. I have no money, no future, never had a girlfriend or friends really. How can you fucking bully me like this on your own blog like the popular kids bullied me in high school? Sue me that I’m not lucky and faddy enough to land a cushy high-paying OpenAI job like you, asshole. You’re literally like a rich asshole driving a Mercedes Benz and spitting on the homeless and telling them to “just get a job.” Fucking unbelievable.

  46. Scott Says:

    It must be hard running a blog #44: Everything you say is true and extremely well-put, including your handle — so thank you!!!

  47. Scott Says:

    Angry Physicist #45: That you’d come to this blog specifically to attack, belittle, and insult me, and seem genuinely to expect my sympathy in response (!), and be hurt and offended when you don’t receive it, provides all the answers to your questions that you could possibly need.

  48. CM Theorist Says:

    Job #38

    That is where I think there is a language misunderstanding (probably from the CM side?). What do you mean by “What does a “single call to the oracle” mean when you don’t need to formally query the oracle at all?”
    Formally, if you have the output of one single query of the oracle in a classical MPS form, you have your answer (again, you do move around the exponential from the number of calls to the difficulty of getting that form, potentially). I believe that is what X. and M. mean here. It is definitely a correct statement. It might not be what is meant by query in Q.C.
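    The "one call of the oracle, then just read out the answer" point can be made concrete. The sketch below is my own (real-valued tensors, trivial boundary bonds assumed), not the paper's code: given an MPS, one can draw a bitstring from the exact distribution site by site, in time linear in the number of bits (times a polynomial in the bond dimension).

```python
import numpy as np

def sample_mps(tensors, rng):
    """Perfect sampling from a normalized real MPS.

    tensors[i] has shape (chi_left, d, chi_right); boundary bonds are 1.
    """
    n = len(tensors)
    # Right environments R[i]: <psi|psi> contracted over sites i..n-1.
    R = [None] * (n + 1)
    R[n] = np.ones((1, 1))
    for i in range(n - 1, -1, -1):
        T = tensors[i]
        R[i] = np.einsum('axb,bc,dxc->ad', T, R[i + 1], T)
    v, bits = np.ones(1), []
    for i in range(n):
        T = tensors[i]
        ws = [v @ T[:, x, :] for x in range(T.shape[1])]
        p = np.array([w @ R[i + 1] @ w for w in ws])  # outcome probabilities
        x = rng.choice(len(p), p=np.maximum(p, 0) / p.sum())
        bits.append(int(x))
        v = ws[x] / np.sqrt(p[x])  # renormalize the conditional state
    return bits

# Example: a product state |1010> as a trivial chi = 1 MPS; the sampler
# recovers the bitstring deterministically.
def basis_tensor(bit):
    T = np.zeros((1, 2, 1))
    T[0, bit, 0] = 1.0
    return T

mps = [basis_tensor(b) for b in (1, 0, 1, 0)]
result = sample_mps(mps, np.random.default_rng(0))
```

    So once the post-oracle state is in MPS form with modest bond dimension, the readout really is cheap; all the hardness, if any, sits in obtaining that form.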

    Scott #39

    First, thanks for publishing my previous comment, given that it did include criticism.
    As a non-member of the QC community, I did not read it as an attack on you. The title is way too strong (an “experimental” or “practical” would have been lovely), but I cannot blame them too much given the state of scientific publishing sometimes…
    I also do not know the history between X., M. and you so I cannot judge on that (and honestly I am not that interested).

    Basically, what I hoped to get from this blog was the comments by Alex in the first post (or Ted in this one). A clear, scientific reading, with points and criticisms which do not descend to insults. Hell, you could have been just as strong a critic without the pile of shit. Once the bad words start, the genie is out and it is hard to go beyond. This bad impression was reinforced by the fact that either your reading was a bit quick or I misunderstood what you meant in some of the initial comments, with points that seem factually wrong (the “one query” thing, for example: a classical algorithm able to apply U really needs only one call to U, because of the extra structure of the state at the end).

    That being said; we are all humans and I do not have to deal with journalists… 😉

  49. bystander Says:

    @Scott #36: All your blogging had zero effect on the spread of the WH nonsense. And when you had a chance to change something, you were silent, gritting your teeth and turning red in the face right alongside me.

    But well, I understand that you need to feed your family, thus you play it safe. It is a self-preservation instinct and I do not blame you for following it.

    PS It is off topic here and I only write it as a followup on your exchange with Topologist Guy. I will not continue with it. It’s just that you asked him what to want from you.

    PPS As a comfort addition, I agree with your take on the Angry Physicist; he must have some psychological problems that he tries to cure by scolding you.

  50. Phillip Dukes Says:

    Scott #47: Are you sure Angry Physicist isn’t Lubos Motl?

  51. Job Says:

    I don’t understand #C, maybe someone can clarify:

    #C As soon as one defines the problem as in #B, then the generic quantum advantage is trivially gone. For instance, we might look into the source code and find that it contains 1 line, the quantum version of “if x=0 return 1 else return 0”. This is easy to solve classically. I think we all agree to that too.

    Why would a classical solver’s ability to solve a single instance, or even a subclass of instances, negate Grover’s generic advantage?

    E.g. I can implement a SAT solver that can tackle a subclass of SAT instances in linear time, but that doesn’t mean that it outperforms the best solvers out there. It probably just exploits some structure that makes that particular subclass easy, while performing terribly everywhere else.

    It’s trivial to solve just a fraction of the problem instances, and it doesn’t mean anything unless we can show that the specific subclass is difficult.

  52. Georgios Stamoulis Says:

    The #G (and #H) arguments of Xavier Waintal’s response seem to indicate that Shor’s algorithm is also useless: since the currently best implementation of Shor’s algorithm can factor 21 = 3×7 (and we seem to be far, far away from anything more meaningful than that), and since there are far superior classical algorithms, then “Shor’s algorithm offers no quantum advantage”.

    Also, it seems to me that the authors misinterpret the definition and usage of an oracle computation: of course, if we had access to the “source code” of oracles, then we could as well collapse the entire polynomial hierarchy.

  53. Scott Says:

    bystander #49: I was quoted in the New York Times saying that if their experiment brought a wormhole into actual physical existence, then a strong case could be made that I, too, bring a wormhole into existence every time I sketch one with pen and paper — and was gratified when many people told me that was the key quote of the whole article for them. I blogged about it. I took an actual professional risk there; I got irate emails from some of the world’s top theoretical physicists who felt my post was too dismissive or that it went after the wrong targets.

    So, not for the first time in this thread, I ask from the bottom of my heart: WTF else did you want me to do?

  54. Edgar Says:

    To soften the debate, complexity theorists could acknowledge that the early history of their field — which was just a generation or two ago — _was_ fraught with misunderstandings about, say, asymptotic behavior and actual program execution. See, e.g., my oral history with Donald Knuth here: https://www.lonelyscholar.com/knuth2

    Moreover, it seems that quite a few complexity theorists view theoretical computer science as a universal science. However, many people in other fields do not (yet) share this viewpoint. Perhaps this is the kind of tension which is in play on Aaronson’s blog tout court.

  55. Your Neighborhood Quantum Mechanic Says:

    Scott 41:

    One of the key features of Grover’s algorithm is the use of an oracle, a black box function that implements the problem being solved. The oracle function takes an input state and maps it to either 0 or 1 based on whether the input satisfies a certain condition. The algorithm is designed to search through all possible inputs to find the one that satisfies the condition in the minimum number of steps.

    However, it is important to note that oracles and black boxes are actually totally different. In classical computing, a black box is a device or system whose internal workings are unknown and irrelevant to the user, and whose inputs and outputs can be observed and manipulated. In contrast, an oracle is a hypothetical device that can perform a specific task, such as determining whether a given input satisfies a certain condition, but whose internal workings are not known or accessible to the user.

    The distinction between oracles and black boxes is important because Grover’s algorithm relies on the ability to implement an oracle as a black box, i.e., to provide a quantum circuit that can apply the oracle function to a quantum state without revealing the details of the function itself. However, in practice, it may not always be possible to implement an oracle as a black box. For example, the function may be too complex or may require access to classical resources that cannot be encapsulated in a quantum circuit.

    In such cases, Grover’s algorithm may not provide a quadratic speedup, as it relies on the ability to access the oracle function efficiently. Without a black box implementation, the algorithm may require more steps to solve the problem than a classical algorithm, which would negate any potential speedup.

  56. Job Says:

    CM Theorist #48,

    Formally, if you have the output of one single query of the oracle in a classical MPS form, you have your answer (again, you do move around the exponential from the number of calls to the difficulty of getting that form, potentially). I believe that is what X. and M. mean here. It is definitely a correct statement. It might not be what is meant by query in Q.C.

    It’s a misleading statement, because once you have the black box’s implementation, you no longer have to go through the front door, so it doesn’t matter how many times you had to knock.

    In nuclear fusion, that’s like focusing on Q_plasma instead of Q_total, but worse because here you can have whatever Q_plasma value you want.

    For context, I’m basing this analogy on Sabine’s video on the topic.

  57. Craig Says:

    When the Chiefs beat the Eagles in the Super Bowl, Chiefs looked good to everyone. If the Chiefs play a college football team, no matter who wins and by how much, the college football team always comes out looking good while the Chiefs always come out looking bad.

  58. Scott Says:

    Craig #57: Presumably, as a lone blogger who regularly goes up against the terrifyingly powerful twin forces of quantum computing hype and quantum computing dismissal, armed only with some actual knowledge of quantum complexity theory, I’m the college football team in your metaphor? 🙂

  59. Prasanna Says:

    Scott #58: While it may be worth your time to go up against these forces with actual knowledge, I am not sure von Neumann should end up arguing with village idiots. Surely this blog is a great place to assimilate knowledge about the profound topics being discussed. I do hope some breakthrough(s) come out of these discussions, without the distractions of responding to people who always prefer to stay at the extremes.

  60. gentzen Says:

    That Annoying Guy #42: How do Scott’s lecture notes (“Lecture notes! Intro to Quantum Information Science”, their update “Quantum Computing Lecture Notes 2.0”, and the advanced-topics continuation “My Quantum Information Science II Lecture Notes: The wait is over!”) compare to your requirements for a graduate QC course “worth its salt”?

    Therefore, any graduate QC course worth its salt should begin with a thorough grounding in the mathematical foundations of quantum mechanics, including linear algebra, complex analysis, and probability theory. From there, students should be introduced to the basics of quantum information theory, including qubits, gates, and measurement.

    I am missing DiVincenzo’s criteria in your list, especially the “initialization” part. I just checked that the initial version of Scott’s lecture notes contained it:
    Lecture 29: Experimental Realizations of QC (9 pages)
    Here’s a 184-page combined file.
    The updated version omits them, because it removed the “Experimental Realizations of QC” part. From my POV, this is a mistake, similar to omitting the Turing machine from CS courses with the reasoning that current computing architectures have moved on and you could never do justice to them in your course. (Ashcroft & Mermin were never ashamed of the “soon to be obsolete” experimental information in their book, and resisted calls to update it, ignoring insinuations that their “book would not have aged well”. They were right!)

  61. Lowly Condensed Matter Physicist Says:

    Scott, I for one found your blog posts to be enlightening. When I first read the XM paper as someone not in the complexity field, I found myself agreeing with them. But coming here I realized I did not understand the role and purpose of oracles in complexity theory. I feel sad you have to deal with so much sh*tiness here, but thanks for elaborating in your posts and comments about what the paper is missing.

    If I may make a request for a future blog post – maybe something which touches on the gaps between how condensed matter people see interacting quantum systems and how Quantum information scientists see things? Sorry to be vague, I just don’t know what I don’t know!

  62. clayton Says:

    Scott — as a not-entirely disinterested observer, I think the time may have come to delegate moderation duties. I think the ratio of malarkey-stirrers-clearly-looking-to-get-a-rise-out-of-you to lost-souls-yearning-for-essential-and-time-critical-guidance-only-you-can-provide has taken a noticeable and dramatic turn for the worse recently. You don’t _owe_ anyone the valuable time you’re providing here, and the distraction it costs you is costing the world important insights in quantum complexity theory, AI preparedness, and other valuable pursuits.

    Perhaps a comparison to something else you forewent recently would work better — when Wolfram wrote some nonsense recently (last ~20 months? I can’t keep it straight) you said (paraphrasing) “there was a time when I alone had the chops, the audacity, and enough to gain (and not enough to lose) to do this; now the time is ripe for someone else to dunk on Steve”, which I thought was quite perceptive. Please consider applying that same analysis to the trolls who beset you in your “home base”. You really have better ways to spend your time.

    Or, following a slightly different tack — given that you painstakingly assembled a collection of trusted advisors (whose time you value) who not only unanimously agreed that Some Rando’s Nth Comment didn’t deserve to appear, but _also that Some Rando’s First Comment didn’t deserve to appear_… might you need to consider the possibility that you’ve completely miscalibrated? That in fact much of the dross you sift through is, as it appears, unfit for wider consumption?

    There’s too much signal at risk here to lose to the noise.

  63. Andrew Noe Says:

    My apologies if this isn’t the place to ask this, but as a confused non-computer-scientist I’d appreciate it if someone could help me understand a technical point in the paper.

    Around equation (22), they define the operators “Oc” that select unsatisfying sub-bitstrings for each 3-variable clause in the 3-SAT problem. It seems fair enough that each such operator could be easily implemented once found, but (by my understanding) couldn’t there be exponentially many clauses to check, requiring an exponential number of operators? (And presumably these operators must be found by the algorithm, since knowing them ahead of time would just give the solution, right?)

    I see that the authors restrict the problem to Boolean formulas with a fixed clause-to-variable ratio, in which case there are only a few operators to find, but isn’t this no longer the full 3-SAT problem? So wouldn’t the paper’s 3-SAT algorithm be much more complex when we relax the clause-to-variable ratio constraint?
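    For reference, here is a rough count of how many distinct 3-clauses can even exist over n variables (a back-of-the-envelope sketch, not from the paper): each clause picks 3 of the n variables and a polarity for each literal, so the count is at most 8·C(n,3), which is polynomial in n.

```python
from math import comb

def max_distinct_3_clauses(n):
    # A 3-clause picks 3 distinct variables (C(n, 3) ways) and a
    # polarity for each of its 3 literals (2^3 = 8 choices).
    return comb(n, 3) * 8

for n in (10, 100, 1000):
    print(n, max_distinct_3_clauses(n))   # grows like n^3, not 2^n
```

    So a 3-SAT instance can never contain exponentially many distinct clauses, though my question about relaxing the clause-to-variable ratio stands.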

  64. Vladimir Says:

    Scott #58:

    I guess Waintal would also say he’s fighting against the powerful force of quantum computing hype in this paper. The paper’s bottom line can be phrased as “There is currently no reason to expect or even reasonably hope that Grover’s algorithm will ever be used in the real world”. Do you consider this claim novel, correct or neither? If incorrect, what probabilities would you assign to Grover’s finding real-world application within the next 5/10/20/40 years?

  65. Adam Treat Says:

    FWIW, I like this new Scott who over the last few months has been telling obvious trolls, conspiracy theorists, sneerers, right-wing provocateurs, left-wing provocateurs, and bad-faith tormentors to pound sand with alacrity. Maybe this isn’t new, but I think you’ve come a long way in safeguarding yourself and this blog from such comments/divisive people.

  66. pessimist Says:

    Vladimir: “what probabilities would you assign to Grover’s finding real-world application within the next 5/10/20/40 years?” Close to zero. I will bet my life savings that we won’t see any real-world application. The case is probably “closed” after the Stoudenmire et al. paper. I wanted to agree with Scott and others, but as I read the paper in depth I saw Stoud and Xav do have some solid points. I agree completely with Xavier’s comments from #A-#F. I encourage every commenter to read the paper and Xavier’s elaborations in Comment #21. Perhaps the only hopes are Circuit SAT hard instances, as some commenters pointed out.

    By the way, just this weekend I stumbled on an interesting SciRate post. Someone proposed an analog “Spring algorithm” that is significantly faster than the new quantum algorithm in “Exponential quantum speedup in simulating coupled classical oscillators”.
    For your reference: https://scirate.com/arxiv/2303.13012

  67. DrNik Says:

    Vladimir #64,

    I think this is precisely the crux of the issue: whether you agree or not with Stoudenmire and Waintal’s paper, it’s pretty obvious that nobody seriously believes Grover’s algorithm will ever be practically useful.

    One thing I still miss, though, is an *explicit* response to their point re Grover’s noise sensitivity. They find that its success probability decays *doubly-exponentially* in the number of qubits – how in the world would a QEC scheme be able to cope with that?

    After all, from a practical point of view, state-of-the-art QEC (after… 25+ years of development?) is, from what I know, only able to maintain about 1–2 fully error-corrected qubits at the moment… Although, to be fair, Google recently published a nice & encouraging result where they managed to increase the performance of their surface code by scaling up the number of qubits, for once (it remains to be seen whether they can keep this up for larger devices, though).

    And even from a theoretical point of view, is it “obvious” that it’s possible to use QEC to overcome the extreme noise sensitivity of Grover’s? The authors make some pretty strong arguments against that notion in Section VII.C of their paper. Are they wrong, and if so, how?

  68. Robin Says:

    @DrNik #67: The “double exponential decay with n” in the paper (Eq. [27]) is another frustrating red herring. What I mean here is that it’s not interesting or surprising in any way. The paper observes that when Grover is used to search over an exponentially large set, the circuit that performs it will include exponentially many (K) gates. If each one fails with probability P, then the probability that the whole circuit succeeds decays as (1-P)^K ~ exp( -K*P ). This is covered in Quantum Computing 101.

    It’s kind of weird (though not technically wrong) to call this a “double exponential decay” in this context. Why? Because the asymptotic behavior of the algorithm’s success probability doesn’t matter. What matters is when it drops below O(1). It needs to be O(1) in order for the algorithm to work. So we usually focus on the exponent, and ask what we need to do in order to keep it <1.

    Anyway, the entire takeaway from this is that to get a decent success probability, the error rate per gate needs to be less than the reciprocal of the number of gates in the circuit (1/K). K is exponential in the number of qubits (n) here. This is fine, because all of the gates (including the ones that implement the oracle as a quantum circuit) can be logical gates on error-corrected logical qubits. And the error rate (per gate) of an error corrected logical qubit is exponentially small as a function of the code distance (i.e., the multiplicative overhead of the error correction).
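    To put rough numbers on this (an illustrative back-of-the-envelope model of my own, not taken from the paper):

```python
# Success probability of a K-gate circuit where each gate fails
# independently with probability p: (1 - p)^K ~ exp(-K * p).
def success_prob(p_gate, num_gates):
    return (1.0 - p_gate) ** num_gates

# Rough gate count for Grover on n = 40 qubits: ~sqrt(2^40) oracle calls,
# times an illustrative 100 gates per call.
K = (2 ** 20) * 100

# A bare physical circuit at a 1e-3 error rate per gate is hopeless:
print(success_prob(1e-3, K))   # underflows to 0.0

# Below threshold, a distance-d code suppresses the logical error rate
# roughly as (p / p_th)^((d+1)//2) -- a standard rule-of-thumb model:
def logical_rate(p_phys, p_th, d):
    return (p_phys / p_th) ** ((d + 1) // 2)

for d in (11, 21, 31):
    print(d, success_prob(logical_rate(1e-3, 1e-2, d), K))
```

    Even at these exponentially long circuit sizes, a modest code distance pushes the success probability back toward 1, without touching the sqrt(2^n) running time itself.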

    TL;DR: this is exactly what quantum error correction is designed to deal with, and it's expected to yield the required (exponentially small) error rates, using only poly(n) overhead. QEC doesn't reduce the exponential running time of Grover on 2^n candidates, but it also doesn't increase it meaningfully.

    The discussion of QEC in Sec. VIIC is largely correct, but not novel. The observation that millions or billions of physical qubits will be required to run Grover usefully is well-known to QC experts.

  69. Raoul Ohio Says:

    This might be fun:

    How about asking chatWTF (or your current favorite AI) to generate a pair of (say) 200-word essays, one about how the “Grover refutation” is right, and one about how it is wrong. Then the blog participants can vote on which essay is more convincing.

  70. falbarel Says:

    @Y #33: This is exactly what the field of quantum metrology is about. In metrology one assumes a quantum channel that encodes some physical parameter onto quantum probes, and if the channel is noisy the asymptotic quantum advantage is lost (apart from very particular kinds of noise that are “different enough” from the signal).
    However, this is not directly related to Grover. As this and the previous post and comments made me understand better, in practice the oracle will be implemented as a quantum circuit, and it will eventually be error-corrected. By contrast, the parameter-encoding channel in quantum metrology usually captures an external physical quantity (e.g. magnetic field, optical phase, etc.), and if this signal intrinsically comes with noise one must cope with it: there is no “source code,” and the noise cannot be completely error-corrected away without also erasing the signal itself.

  71. Scott Says:

    To everyone who’s written here about how obviously the expert consensus is that Grover’s algorithm will never be practical, etc etc, as Stoudenmire and Waintal claim:

    No. I do not agree with that statement.

    What’s true is that we don’t currently know how to make Grover practical. In some sense that’s no surprise, since we don’t currently know how to make Shor’s algorithm practical either! (If we knew, it would’ve been done.)

    In another sense, though, the situation is worse with Grover’s than with Shor’s, because the speedup is only quadratic, while the overhead of fault-tolerance is enormous.

    But hope is not lost! Fault-tolerance constructions have already become several orders of magnitude more efficient over the 27-year history of the subject. No one has any real idea how much more efficient they’ll become, and that changes the crossover point where Grover starts to win.

    As a different route, it’s possible that the Grover speedup can be realized by other quantum algorithms—including “NISQier” things like quantum annealing. There’s even some recent evidence that one sees exactly that, at least on some problem instances: not an exponential speedup like D-Wave was claiming, but a theoretically-predicted Grover-type speedup from annealing. Still not “useful,” admittedly.

    Is this what people wanted me to do in these posts originally? Just, like, explain all this, rather than assuming that anyone who cared about the preprint in the first place already knew this?

  72. DrNik Says:

    Robin #68

    I see. Thank you!

  73. Scott Says:

    Everyone: “Angry Physicist” sent additional comments, which I left in moderation, saying that he’s not a physicist at all but is the same troll who hounded me this summer, and ALSO the same as a bunch of supposedly different incel trolls on this blog lately. He says it’s all him, all just one troll. And now he says he’s suicidally depressed, and threatens to kill himself if I don’t drop everything else I’m doing to give him sympathy and help.

    So, “Angry Physicist,” if you’re still reading this (who am I kidding, you are): I cannot help you. Please seek professional help. Please, for your own sake, never comment on this blog again. From now on anything that’s even suspected to be from you will be sent to the SOCG with an increased level of vigilance.

  74. Robin Says:

    @Scott #71: I suspect trying to determine “what people wanted me to do in these posts originally” is very much tilting at a windmill. People are startlingly diverse! Get 100 people in a room and they’ll want 101 distinct things. Most of us don’t know what we want most of the time. There’s nothing more human than demanding X and then, upon getting it, saying “Ah! Now I see that I actually want Y.”

    But with that said, I (for one) think it’s truly valuable when you have the patience to explain these things — even for the Nth time. Our field is in the middle of a violent inflationary episode. This is causing all kinds of havoc. The American Physical Society’s Division of Quantum Information (DQI) is now, if I heard correctly, the 2nd largest division at the APS March Meeting! There are literally thousands of wet-behind-the-ears physicists (and more thousands of engineers) trying to fake it until they make it in QI. Worse, some of them are actively unhappy about it, because they were specialists in something else until their funding agency told them “Start doing QIS research or we take your funding away.”

    (Yes. This actually happened.)

    Anyway, setting aside the actively hostile ones, there’s a lot of early career QIS researchers who have never actually learned directly from an expert. They tend to bumble around like a bull in a china shop, but many are actually well-meaning — just arrogant (like many of us!). And certain parts of the TCS literature *are* incredibly difficult for physicists to get a grip on without actual mentorship.

    Waintal goes off the rails, but his initial sentence rings true: “One of the problem we face in the field of quantum computing is a vast diversity of cultures between, say, complexity theorists on one hand and physicists on the other hand.” Speaking as a physicist who *does* understand and appreciate complexity theory, I empathize. I had the benefit of long one-on-one conversations with patient senior folks like Gilles Brassard, Richard Cleve, Debbie Leung, and Andrew Childs. But that won’t happen today — the field is too big. [Insert clever allusion to cosmic inflation and the Kibble-Zurek mechanism].

    So, yes, in principle all these people should read and absorb your notes, or Mike & Ike, or any of half a dozen others. But in practice, people are flawed, and they won’t. So every time you have the patience to guess that something is ignorance rather than malice, and explain something from Quantum Computing 101 with unique topical insight… even if the actual questioner was a troll, probably 15 other grad students learn something essential.

  75. Scott Says:

    Robin #74: Yeah, you’re right.

    These posts were not some terrible crime for which I need to apologize — the preprint would make sense only under the assumption that quantum complexity theorists had bumbled around and learned absolutely nothing for 30 years, so it’s understandable that I was merciless in explaining why that wasn’t the case.

    On the other hand, a post that did a better job of explaining what is the case, no matter how many times I’ve already explained it, would’ve been better still!

  76. Vladimir Says:

    Scott #71:

    I certainly appreciate you fighting the good fight against QC hype, as recently exemplified in your response to Google’s “wormhole”, but with the state of the field being what it is (Robin #74), I think it’s time to bring that fight a little closer to home.

    I’d like to make a direct appeal to the virtue of intellectual honesty, which I’m sure you would agree isn’t satisfied merely by making well-chosen technically-correct statements, and ask you once again: is it more accurate to say that you expect to see Grover’s algorithm used in the real world within your lifetime, or that you would be surprised to see it?

    If the latter, I would argue that “Grover’s algorithm offers no quantum advantage”, while trivially incorrect from a purely complexity theory point of view, is nevertheless a mostly true statement which the world needs to hear.

  77. Adam Says:

    #73: I’m curious exactly what kind of help this guy is asking you for? And why is he coming to Shtetl-Optimized, as opposed to a therapist, his dad, the neighborhood priest, or anyone else—what do you have to offer him, are you like the Rabbi for bitter disaffected young STEM guys now?

  78. gentzen Says:

    Five blog-posts ago (Feb 22), Scott mentioned his Physics Colloquium, “How Much Information Is In A Quantum State?”

    To describe an entangled state of n particles, we formally need an amount of classical information that grows exponentially with n. But given that almost all the information disappears on measurement, in what sense was it “really there”? In this talk, I’ll first review a career’s worth of results showing that, for many purposes, quantum states can effectively be described with vastly less classical information

    There is at least one well known sense in which n qubits don’t contain more than n classical bits. In a recent discussion about nonlocality, postselection, and monogamy of entanglement, a way in which “n qubits in (quantum) context” can contain up to 2n classical bits presented itself (to me). I first tested the direction and force of the reaction (Jan 12):

    I bring up Superdense coding here, because the fact that a single qubit “in context” can transmit two classical bits hints at one specific place where our classical intuition risks getting lost. That a single qubit can encode a single classical bit even “without context” is relatively easy to grasp. But that a single qubit “in context” can encode two classical bits, and also that two classical bits can encode a single qubit “in context” (as demonstrated by Quantum teleportation), that is what is hard to swallow for our classical intuition: Isn’t the single qubit described by three independent real parameters? How should two classical bits ever be enough to encode it? One possible answer may be that each single experiment can only provide a single “context”, and that therefore the “context” is fixed with respect to the experiment. And because the “context” cannot vary, the three independent real parameters lose their significance. And then one can show that only two classical bits remain. Maybe there are better answers, but I guess any answer trying to explain this will remain hard to swallow.

    The “force of the reaction” felt safe enough to me, so I tried to explain (Jan 14) how the discussion drove me to that “n qubits in context <= 2n classical bits” perspective:

    And of course, I am aware that I tried here (in this thread) to provide an “answer” to the question you raised in the (now closed) thread:

    I found that an interesting question back then, I didn’t know the answer, and I postponed following that thread until I had enough time to dive deeply into it. But when that time came, the thread was already closed. And so I didn’t dive into it.

    What has changed now is that I had the impression that your “monogamy of entanglement” argument did convince PeterDonis. But I still couldn’t see how it would change the reasons why I was unsure about the correct answer.

    Then vanhees71 was kind enough to ask the obvious question: “I’ve no clue, how you think you can describe a qubit with two classical bits at all. A qubit can be in a continuity of states, for two classical bits you have only 4 states. So how can there be a one-to-one connection between them?” which I later tried to answer (Jan 16).

    But what has all this to do with Scott’s blog and Grover’s algorithm? Just like Scott’s blog, those are moderated discussions. They even have advantages compared to Scott’s: the main participants are less anonymous than it appears, they are older than Scott (me included), and they know each other’s backgrounds sufficiently well to have meaningful discussions. They also try hard to stay civil, and still they are lucky that there is a team of moderators which occasionally pauses or stops such discussions when they get out of hand despite all of those advantages. Let me postpone Grover’s algorithm, and first show some more snapshots of how that discussion continued:

    DrChinese (Jan 24)

    Asking your patience in reading an admittedly long thread post covering what I might call The DrChinese Paradox…

    In a recent thread, I argue that Entanglement Swapping + Monogamy Of Entanglement demonstrates “quantum nonlocality” (action at a distance) and “quantum causality” (violation of strict Einsteinian causality). This has the effect of serving as a No-Go for certain QM Interpretations. Any interpretation claiming that swapping can be modeled by statistical post-selection of pre-existing ensembles is necessarily ruled out. Swapping must be an action, a physical event, a projection measurement, an operation, an objective state change, or whatever you want to call it.

    gentzen (Feb 17)

    From my point of view, we still have this “monogamy of entanglement” argument in the room, where I am pretty convinced that I understand extremely well why it tells us something completely different than DrChinese has in mind. But even there, it would take me a huge amount of time and energy to try to explain it, and it would still likely fail. I write this with confidence, because I see others invest that time and energy, with very little progress so far.

    gentzen (Feb 17)

    What I have in mind is that each single one of two maximally entangled qubits will act like a totally independent unbiased random bit in interactions with other qubits. (As long as only one of the two qubits takes part in those interactions.) It is a property of the relation of those two maximally entangled qubits to other qubits, not an “objective property” of the two qubits themselves in isolation.

    Back to Grover’s algorithm. That difference of n classical bits of information for “n qubits without (quantum) context” and 2n classical bits for “n qubits with (quantum) context” is in close correspondence to the difference between N and N^2, or sqrt(T) and T. This cost of “without context” also occurs when Monte Carlo integration is used to battle the curse of dimensionality. The variance only goes to zero as 1/sqrt(N), where N is the number of samples. If context is allowed, then there are techniques (like for example Christoph Zenger’s sparse grids) that get much closer to the “intuitive” 1/N convergence rate for many practically occurring high-dimensional integrals. Quantum mechanics often exhibits the “intuitive” 1/N convergence rate directly; take for example the de Broglie wavelength \( \lambda = h/p \) as a measure for the possible localization of a cloud of particles with momenta \( p_i \). Since \( p = \sum_{i=1}^N p_i \), we see that we get a 1/N convergence rate. But a classical independence assumption from probability theory would only have gotten us a 1/sqrt(N) convergence rate.
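    To illustrate the 1/sqrt(N) rate concretely (a plain Monte Carlo sketch of my own, not from the discussion):

```python
import math
import random

def mc_estimate(f, n_samples, rng):
    # Plain Monte Carlo estimate of the integral of f over [0, 1].
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

def std_error(f, n_samples, n_trials=200, seed=0):
    # Empirical standard deviation of the estimator across repeated runs.
    rng = random.Random(seed)
    estimates = [mc_estimate(f, n_samples, rng) for _ in range(n_trials)]
    mean = sum(estimates) / n_trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / n_trials)

f = lambda x: x * x   # the integral over [0, 1] is 1/3

# Using 16x the samples should cut the error by about 4x (1/sqrt(N)):
e_small = std_error(f, 250)
e_large = std_error(f, 4000)
print(e_small / e_large)   # roughly 4
```

    Sparse grids and similar “context-aware” techniques beat this rate precisely because they exploit structure in the integrand instead of treating it as a black box.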

    Therefore, the idea to see how much closer classical algorithms (i.e. algorithms using only randomness satisfying a classical independence assumption) can get to the “intuitive” effort if context is allowed (i.e. looking at the source code) makes perfect sense (to me). Of course, this doesn’t change the fact that the title “Grover’s Algorithm Offers No Quantum Advantage” drew the attention of journalists, and therefore caused a lot of people a lot of “unproductive unnecessary work”.

    This is also interesting to me, because suppose that I tried to get some participants in that discussion to write a paper together about it. Of course we would like to use a title such that our paper doesn’t get totally ignored. For example “The DrChinese Paradox” or “What Entanglement Swapping + Monogamy Of Entanglement really shows”. Or perhaps even “One qubit in context can be encoded by two classical bits”? But because I now learned that this last title could risk drawing the attention of journalists, it probably crosses the barrier of what is “allowed,” independent of how much clarification would be contained in the abstract and the article itself.

  79. Dmitri Urbanowicz Says:

    > The charge of the proof is reversed: one must prove certain properties of the quantum circuit in order to possibly have a theoretical quantum advantage.

    On the bright side, physicists started to notice that P != NP is not an obviously true statement.

  80. Martin Mertens Says:

    Vladimir #76:

    Why do you say the world “needs to hear” a misleading arguably-mostly-true statement? Aren’t there any actually-true statements that would serve better?

    The statement “man-made flying machines are impossible” was unequivocally false in the year 1700 even if nobody alive then would live to see one.

  81. Bluebird Says:

    Hi Scott!

    I wanted to bring to your attention this Twitter thread:


    Seems like this is the official Twitter handle of UMD’s Condensed Matter theory division. The tweet thread appears to be rude, condescending, and extremely derogatory to quantum complexity theorists. I want to ask those readers of your blog who are condensed matter theorists at UMD: do you share the same sentiments as this Twitter thread and hold the same grudges against complexity theorists?

    I think it is especially nasty that a handle purportedly representing the entirety of a physics division at a top public university holds such deplorable opinions against an entire field of researchers.

  82. Grover rover Says:

    @Vladimir #76: What the world needs to hear is that the practicality of Grover hinges on the efficiency of FTQC and the technologies underlying QC. Projected from the constructions and technology we have right now, it is terribly impractical. Several have made this point already. Declaring there is “no a priori theoretical quantum speedup associated with Grover’s algorithm” is not fighting the good fight against QC hype. It is poisoning a research direction by wrongly declaring it theoretically hopeless.

  83. fred Says:

    Scott #73

    “he’s not a physicist at all but is the same troll who hounded me this summer”

    That was pretty obvious to me as soon as I read his very first post in this thread.
    It’s too bad you’re not able to smell this, given that such posts are always clearly designed to trigger you in the same obvious ways.

  84. Martin Mertens Says:

    I’m wondering, which seems more feasible?

    – A practical speed-up from Grover’s algorithm (to an expert in 2023)

    – One of the more difficult tasks performed on a modern supercomputer (to an expert in 1950)

  85. Karen Morenz Korol Says:

    @Vladimir 76 – I don’t think it’s helpful if Scott says untrue things. If people misunderstand the meaning of “this algorithm offers a quantum advantage” then the thing to do is explain what that means and in what situations it is relevant – which is what Scott (and many others) already do!

  86. Scott Says:

    That Annoying Guy #42:

      Therefore, any graduate QC course worth its salt should begin with a thorough grounding in the mathematical foundations of quantum mechanics, including linear algebra, complex analysis, and probability theory. From there, students should be introduced to the basics of quantum information theory, including qubits, gates, and measurement.

    Actually, many graduate QC courses just assume all that stuff as prerequisites—that’s what makes them graduate courses! 🙂

    In my own Quantum Complexity Theory graduate course, I very quickly go through the basics of QC, for the benefit of classical CS theory students, and then move on to lower bounds on quantum query complexity and other stuff.

    There are many possible foci of a graduate quantum information course: quantum Shannon theory, error-correction, Hamiltonian complexity, lower bounds, interactive protocols, etc. etc., or yes, post-Shor-and-Grover quantum algorithms. I merely sketched one possible syllabus (far from the only one) for a course focussed on the last of those.

  87. Scott Says:

    Bluebird #81: The posture of whoever wrote that thread is basically “bemused ignoramus.” If and when there’s an actual argument put forward against the validity of the Grover speedup, I’ll be happy to answer it! 🙂

  88. Scott Says:

    Vladimir #76: I honestly don’t know whether we’ll see a useful Grover speedup in my lifetime. On the one hand, no principle known to me rules it out, and arguably we’ve already seen non-useful Grover speedups. On the other hand, the engineering difficulties look formidable—beyond even those for seeing a useful speedup from Shor’s algorithm. On the third hand, I do hope to live for another half-century or so, which is a long time technologically! 🙂

    So, I dunno, maybe I’ll give it 50/50? But note that the Grover speedup could be “real” in the sense I care about, even if the probability of exploiting it in my lifetime were much smaller than that—just like the principles of heavier-than-air flight were real in 1700, as someone aptly pointed out above.

  89. fred Says:

    Scott #88
    “On the third hand, I do hope to live for another half-century or so, which is a long time technologically!”

    On the fourth hand, an AGI singularity (looking more and more likely in our lifetimes) would be an event horizon beyond which any sort of breakthrough prediction gets even more difficult.

  90. former student Says:

    I wanted to point out a concrete real-world problem that Grover will be able to speed up, if we ever have a fault-tolerant, scalable QC:

    Bitcoin mining is the problem of finding a nonce, given a header, such that SHA256(header, nonce) < threshold. This condition can be expressed as a classical Boolean circuit that outputs true or false with the input being a nonce. This can be trivially converted into a reversible quantum circuit which can be used as an oracle for Grover.

    Classically, the problem requires T = 2^256/threshold hash evaluations to find a solution with high probability. With Grover, it should take ~sqrt(T) steps on a fault-tolerant QC to find a solution w.h.p.
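    To put toy numbers on it (assuming, purely for illustration, a difficulty where T = 2^70; the pi/4 constant in Grover’s query count is omitted):

```python
import math

# Toy numbers: suppose the difficulty requires T = 2^70 expected hash
# evaluations classically (the real exponent depends on the threshold).
T = 2 ** 70

classical_queries = T            # expected brute-force work
grover_queries = math.isqrt(T)   # ~sqrt(T) oracle calls, up to a pi/4 factor

print(classical_queries)   # 2^70 = 1180591620717411303424
print(grover_queries)      # 2^35 = 34359738368
```

    The quadratic saving is enormous in query count, though of course each Grover “query” is a full reversible SHA256 circuit run on error-corrected qubits.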

    This concrete problem does not depend on any philosophical disagreement about the nature of oracles / black boxes and quantum databases. I haven’t read the new paper; does it give any evidence for a faster classical algorithm, or refute the runtime of Grover? If not, I don’t see any reason to read it.

  91. John Says:

    Sorry to ask a “noob” question. How does FTQC solve the problem of phase inaccuracies? If my logical qubit can be rotated by some theta, what if after the huge overhead of error-correction I end up implementing a logical rotation that is theta + some epsilon instead? (and maybe epsilon even scales with the size of my code)

    Isn’t Grover highly susceptible to this type of error as well? Any resources that explain how FTQC is able to specifically make analog phases in the logical subspace error free?

  92. Scott Says:

    John #91: The whole insight behind quantum error-correction is that once you’ve corrected bit flips, phase flips, and combinations of the two (ie, Pauli X, Z, and Y errors), you’ve automatically also corrected the whole continuum of possible errors (even measurements) that could affect a single qubit. So you just need to find an error-correcting code that corrects that discrete set of errors and then you’re good. It’s hard to explain this intuitively—easiest is to go through a few lines of algebra. See my lecture notes among many other possible places.
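    For those who want the algebra sketched right here (a standard textbook argument, not tied to any particular code): any single-qubit error operator \(E\) can be expanded in the Pauli basis,

    $$ E = \alpha I + \beta X + \gamma Y + \delta Z, $$

    so an encoded state \( |\overline{\psi}\rangle \) hit by \(E\) becomes a superposition of four Pauli-error branches,

    $$ E|\overline{\psi}\rangle = \alpha |\overline{\psi}\rangle + \beta X|\overline{\psi}\rangle + \gamma Y|\overline{\psi}\rangle + \delta Z|\overline{\psi}\rangle, $$

    and measuring the error syndrome collapses that superposition onto a single branch, which the code then corrects exactly.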

  93. John Says:

    Sorry to noob some more, but I understand how EC can fix an issue like \(|s\rangle = a|0\rangle + b|1\rangle\) going through a channel \( (1-p) I |s\rangle \langle s|I + p q(r) |s\rangle \langle s | q(-r) \).

    But what about:

    \( U(t) = |l(0)\rangle \langle l(0)| + q(r)|l(1)\rangle \langle l(1)| \) vs. \( U(t+e) = |l(0)\rangle \langle l(0)| + q(r+e)|l(1)\rangle \langle l(1)| \).

    for some qubit encoding l.

    How can QEC correct these types of precision errors? Are functional fault-tolerant quantum computers not susceptible to these types of errors?

  94. Scott Says:

    John #93: Yes, errors in the precision of the gates just get folded into the same formalism and are handled equally well. Please, please read some lecture notes on the subject—if my notes are too cursory and introductory, then try Preskill’s.
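    To make the “folding in” concrete (a standard sketch, not specific to any code): an unwanted over-rotation by a small angle \( \epsilon \) about \(Z\) is

    $$ R_z(\epsilon) = \cos(\epsilon/2)\, I - i \sin(\epsilon/2)\, Z, $$

    i.e. identity with amplitude \( \cos(\epsilon/2) \) plus a \(Z\) error with amplitude \( -i\sin(\epsilon/2) \). Syndrome measurement then finds “no error” with probability \( \cos^2(\epsilon/2) \) and a correctable \(Z\) flip with probability \( \sin^2(\epsilon/2) \): the continuous precision error is discretized into exactly the Pauli errors the code already handles.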

  95. Job Says:

    Isn’t it disappointing that quantum computing is ultimately not the answer to the concerns surrounding LLMs?

    You’d think that a large language model – a black-box if I’ve ever seen one – whose output is supposedly indicative of an adversarial stance towards organic life, would be the perfect problem for something like GA.

    In the movies we’d be using large FTQCs to determine if an LLM could ever press the button.
    Imagine the funding you’re missing out on. 🙂

    Alas, the model input is far too large for GA’s speedup to matter even in ideal conditions?

    Hey, maybe that’s the alternative to pausing AI research, just restrict the size of the input such that the model can be shown to be safe.

    My question is, could you do better than GA?

  96. Hans Heum Says:

    Just a note: in case you hadn’t seen it, I believe the field of cryptography provides the “straightforward application of Simon’s algorithm” you were looking for (although maybe not a useful application). Namely, the Even-Mansour cipher was classically proven secure provided its building blocks are secure [1], and yet Kuwakado and Morii [2] showed that, if the adversary is granted the ability to query the primitives in superposition, it can be broken in polynomial time using Simon’s algorithm. More recently Alagic et al. [3] even proved the cipher secure in the post-quantum setting, in which the (quantum) adversary is granted superposition queries to all unkeyed primitives but only classical queries to keyed ones, making for a neat separation between these so-called “Q1” and “Q2” worlds.

    Of course, these results are all still formulated in the oracle setting, but it looks plausible that instantiating with primitives like AES will carry them over into the real world.

    [1] https://link.springer.com/article/10.1007/s001459900025
    [2] https://ieeexplore.ieee.org/document/6400943
    [3] https://link.springer.com/chapter/10.1007/978-3-031-07082-2_17

Leave a Reply

You can use rich HTML in comments! You can also use basic TeX, by enclosing it within $$ $$ for displayed equations or \( \) for inline equations.

Comment Policies:

  1. All comments are placed in moderation and reviewed prior to appearing.
  2. You'll also be sent a verification email to the email address you provided.
  3. This comment section is not a free speech zone. It's my, Scott Aaronson's, virtual living room. Commenters are expected not to say anything they wouldn't say in my actual living room. This means: No trolling. No ad-hominems against me or others. No presumptuous requests (e.g. to respond to a long paper or article). No conspiracy theories. No patronizing me. Comments violating these policies may be left in moderation with no explanation or apology.
  4. Whenever I'm in doubt, I'll forward comments to Shtetl-Optimized Committee of Guardians, and respect SOCG's judgments on whether those comments should appear.
  5. I sometimes accidentally miss perfectly reasonable comments in the moderation queue, or they get caught in the spam filter. If you feel this may have been the case with your comment, shoot me an email.