Quantum advantage for NP approximation? For REAL this time?
The other night I spoke at a quantum computing event and was asked—for the hundredth time? the thousandth?—whether I agreed that the quantum algorithm called QAOA was poised to revolutionize industries by finding better solutions to NP-hard optimization problems. I replied that while serious, worthwhile research on that algorithm continues, alas, so far I have yet to see a single piece of evidence that QAOA outperforms the best classical heuristics on any problem that anyone cares about. (Note added: in the comments, Ashley Montanaro shares a paper with empirical evidence that QAOA provides a modest polynomial speedup over known classical heuristics for random k-SAT. This is the best/only such evidence I’ve seen, and it still stands as far as I know!)
I added that I was sad to see the arXiv flooded with thousands of relentlessly upbeat QAOA papers that dodge the speedup question by simply never raising it at all. I said that, in my experience, these papers reliably led outsiders to conclude that surely there must be lots of excellent known speedups from QAOA—since otherwise, why would so many people be writing papers about it?
Anyway, the person right after me talked about a “quantum dating app” (!) they were developing.
I figured that, as usual, my words had thudded to the ground with zero impact, truth never having had a chance against what sounds good and what everyone wants to hear.
But then, the morning afterward, someone from the audience emailed me that, incredulous at my words, he went through a bunch of QAOA papers, looking for the evidence of its beating classical algorithms that he knew must be in them, and was shocked to find the evidence missing, just as I had claimed! So he changed his view.
That one message filled me with renewed hope about my ability to inject icy blasts of reality into the quantum algorithms discourse.
So, with that prologue, surely I’m about to give you yet another icy blast of quantum algorithms not helping for optimization problems?
Aha! Inspired by Scott Alexander, this is the part of the post where, having led you one way, I suddenly jerk you the other way. My highest loyalty, you see, is not to any narrative, but only to THE TRUTH.
And the truth is this: this summer, my old friend Stephen Jordan and seven coauthors, from Google and elsewhere, put out a striking preprint about a brand-new quantum algorithm for optimization problems that they call Decoded Quantum Interferometry (DQI). This week Stephen was gracious enough to explain the new algorithm in detail when he visited our group at UT Austin.
DQI can be used for a variety of NP-hard optimization problems, at least in approximation regimes where, presumably, they’re no longer NP-hard. But a canonical example is what the authors call “Optimal Polynomial Intersection” or OPI, which involves finding a low-degree polynomial that intersects as many subsets as possible from a given list. Here’s the formal definition:
OPI. Given integers n < p with p prime, we’re given as input subsets S_1,…,S_{p-1} of the finite field F_p. The goal is to find a degree-(n-1) polynomial Q over F_p that maximizes the number of y ∈ {1,…,p-1} such that Q(y) ∈ S_y, i.e., that intersects as many of the subsets as possible.
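To make the definition concrete, here’s a minimal brute-force sketch in Python (my own illustrative toy code, not anything from the paper; the names and parameters are made up). It scores a candidate polynomial against the subsets, then exhaustively searches a tiny instance—something feasible only at sizes far below the regime DQI targets:

```python
import itertools
import random

def opi_score(coeffs, subsets, p):
    """Count how many y in {1,...,p-1} have Q(y) in S_y, where
    Q(y) = coeffs[0] + coeffs[1]*y + ... over the finite field F_p."""
    return sum(
        (sum(c * pow(y, i, p) for i, c in enumerate(coeffs)) % p) in subsets[y]
        for y in range(1, p)
    )

p, n = 31, 3                    # toy sizes, chosen so that p - 1 = 10n
random.seed(0)
# random subsets S_y of size floor(p/2), mirroring the example in the post
subsets = {y: set(random.sample(range(p), p // 2)) for y in range(1, p)}

# Exhaustive search over all p^n degree-(n-1) polynomials: feasible only for
# toy instances; DQI's guarantee concerns asymptotically large p and n.
best = max(itertools.product(range(p), repeat=n),
           key=lambda c: opi_score(c, subsets, p))
frac = opi_score(best, subsets, p) / (p - 1)
print(f"best fraction on this toy instance: {frac:.3f}")
print(f"DQI's asymptotic fraction: {0.5 + 19 ** 0.5 / 20:.4f}")
```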
For this problem, taking as an example the case p-1 = 10n and |S_y| = ⌊p/2⌋ for all y, Stephen et al. prove that DQI satisfies a 1/2 + (√19)/20 ≈ 0.7179 fraction of the p-1 constraints in polynomial time. By contrast, they say the best classical polynomial-time algorithm they were able to find satisfies a 0.55 + o(1) fraction of the constraints.
To my knowledge, this is the first serious claim of a better quantum approximation ratio for an NP-hard problem since 2014, when Farhi et al. made such a claim for QAOA on a problem called MAX-E3LIN2, and my blogging about it led a group of ten computer scientists to find a classical algorithm that got an even better approximation.
So, how did Stephen et al. pull this off? How did they get around the fact that, again and again, exponential quantum speedups only seem to exist for algebraically structured problems like factoring or discrete log, and not for problems like 3SAT or Max-Cut that lack algebraic structure?
Here’s the key: they didn’t. Instead, they leaned into that fact, by targeting an optimization problem that (despite being NP-hard) has loads of algebraic structure! The key insight in their new DQI algorithm is that the Quantum Fourier Transform can be used to reduce other NP-hard problems to problems of optimal decoding of a suitable error-correcting code. (This insight built on the breakthrough two years ago by Yamakawa and Zhandry, who gave a quantum algorithm that gets an exponential speedup for an NP search problem relative to a random oracle.)
Now, sometimes the reduction to a coding theory problem is “out of the frying pan and into the fire,” as the new optimization problem is no easier than the original one. In the special case of searching for a low-degree polynomial, however, the optimal decoding problem ends up being for the Reed-Solomon code, where we’ve known efficient classical algorithms for generations, famously including the Berlekamp-Welch algorithm.
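Since the argument turns on Reed-Solomon decoding being classically easy, here’s a rough self-contained sketch of textbook Berlekamp-Welch decoding over F_p (my own toy code following the standard description, not anything specific to the DQI paper; recall that within DQI this subroutine has to run coherently on a superposition of inputs):

```python
def solve_mod(A, b, p):
    """Gaussian elimination over F_p (p prime); returns one solution of Ax = b,
    with any free variables set to zero."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    piv_cols, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c]), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], p - 2, p)          # Fermat inverse of the pivot
        M[r] = [v * inv % p for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                M[i] = [(vi - M[i][c] * vr) % p for vi, vr in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    x = [0] * n
    for row, c in enumerate(piv_cols):
        x[c] = M[row][n]
    return x

def berlekamp_welch(xs, ys, k, p):
    """Recover the degree-<k message polynomial from evaluations (xs, ys),
    tolerating up to e = (len(xs) - k) // 2 corrupted values."""
    e = (len(xs) - k) // 2
    # Unknowns: coefficients q_0..q_{k+e-1} of Q, then e_0..e_{e-1} of the
    # monic degree-e error locator E. Constraints: Q(x) = y * E(x) at each point.
    A, b = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, p) for j in range(k + e)]
        row += [(-y * pow(x, j, p)) % p for j in range(e)]
        A.append(row)
        b.append(y * pow(x, e, p) % p)
    sol = solve_mod(A, b, p)
    Q, E = sol[:k + e], sol[k + e:] + [1]
    # Any solution of the linear system has Q = P*E identically, so long
    # division of Q by E over F_p recovers the message polynomial P.
    P = [0] * k
    for d in range(k + e - 1, e - 1, -1):
        coef = Q[d]                           # E is monic, so no inverse needed
        P[d - e] = coef
        for j in range(e + 1):
            Q[d - e + j] = (Q[d - e + j] - coef * E[j]) % p
    return P  # coefficients, lowest degree first

p, k = 101, 3
coeffs = [5, 17, 42]                          # "message" polynomial 5 + 17y + 42y^2
xs = list(range(1, 10))
ys = [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p for x in xs]
ys[2] = (ys[2] + 7) % p                       # corrupt one evaluation
print("recovered:", berlekamp_welch(xs, ys, k, p))   # expect [5, 17, 42]
```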
One open problem that I find extremely interesting is whether OPI, in the regime where DQI works, is in coNP or coAM, or has some other identifiable structural feature that presumably precludes its being NP-hard.
Regardless, though, as of this week, the hope of using quantum computers to get better approximation ratios for NP-hard optimization problems is back in business! Will that remain so? Or will my blogging about such an attempt yet again lead to its dequantization? Either way I’m happy.
Comment #1 October 5th, 2024 at 5:50 pm
It is not peer reviewed yet.
Comment #2 October 5th, 2024 at 5:57 pm
anonymous #1: Fine, I explicitly noted that it’s a preprint, and also that the speedup claim might or might not stand. This post, itself, will hopefully spur the paper to be reviewed by peers.
Comment #3 October 5th, 2024 at 7:12 pm
Exciting work! Shameless corporate plug for Google, for hiring people and allowing them to work on long-term algorithms research. At first glance the problem also seems not to connect up with discrete log and friends (I’d call it an interesting open question, I guess).
Comment #4 October 5th, 2024 at 8:55 pm
Is there a UG-based, or any other hardness-based, impossibility result for approximating this problem?
Comment #5 October 5th, 2024 at 9:03 pm
anonymous #4: Excellent question. Certainly not in the regime where the algorithm works, or we’d get the conclusion UGC ⇒ NP ⊆ BQP. What hardness of approximation we have for the problem in other regimes (and under what assumptions) is something to which I don’t know the answer.
Comment #6 October 6th, 2024 at 8:44 am
Really very cool! It seems like, rather than finding points on a lattice whose coordinates hash onto 0, like Yamakawa-Zhandry, Jordan et al. try to find a low-degree polynomial having many points intersecting with a given sparse subset.
Yamakawa-Zhandry avoids the Aaronson-Ambainis conjecture by focusing on NP-*search* problems rather than NP-*decision* problems, and hence they can use unstructured random oracles (SHA256) and still get the exponential speedup. But Jordan et al. might still get a win and be consistent with Aaronson-Ambainis because their problem, by definition, might just have the structure that AA needs…
Yamakawa-Zhandry assume that calculating (say) SHA256 in superposition is ‘easy’, and that doing other basic Fourier/Hadamard transforms is also easy, but that decoding errors in superposition is likely a bit of a challenge. Analogously, I think Jordan et al. note that their uncomputation is challenging (but, importantly, still polynomial-time).
Two obvious and “duh” questions: any chance that Jordan et al.’s algorithm runs on near-term noisy devices, or does the uncomputation limit that? Also, are there any known industrial applications for their algorithm?
Comment #7 October 6th, 2024 at 9:09 am
Mark Spinelli #6: Good questions! Briefly:
– I don’t see how the AA Conjecture says anything about this one way or the other, because where’s the random oracle?
– No (alas), just like with Yamakawa-Zhandry, because this involves doing fancy error-correcting decoding on a superposition of inputs, I see no prospects for running it on a pre-fault-tolerant device.
– I don’t know of practical applications of OPI outside of error-correction itself, but it doesn’t seem implausible — does anyone else know of any?
Comment #8 October 6th, 2024 at 6:25 pm
Sorry about being off-topic, wondering if this paper https://arxiv.org/abs/2410.01201 is an argument for https://x.com/fchollet/status/1841902521717293273
> in general the fact that there are many recent architectures coming from different directions that roughly match Transformers is proof that architectures aren’t fundamentally important in the curve-fitting paradigm (aka deep learning)
Basically, that there is something like “Turing completeness” for NNs, where, if the architecture is sufficiently powerful, it is interchangeable with other architectures. Sort of like substrate-independence for AI. I wonder if there is a theorem to that effect.
Comment #9 October 7th, 2024 at 8:44 am
I am a novice and I apologize if this is a naive question. You appeared to say that the Quantum Fourier Transform can be used to convert certain NP-hard problems to optimal-decoding problems and that there are efficient *classical* algorithms, like Berlekamp-Welch, to solve these decoding problems. Would the overall algorithm then be a hybrid quantum-classical algorithm?
Comment #10 October 7th, 2024 at 9:20 am
So you are comparing QAOA, where it took a group of ten top computer scientists to find an algorithm that matched QAOA’s performance, to this new algorithm DQI, where the paper claims that it gives “a better approximation ratio on average than simulated annealing, although not better than specialized classical algorithms tailored to those instances.”
Did you even compare QAOA to simulated annealing? I am willing to bet that QAOA does better. If that’s the metric you insist on using, you should be praising QAOA as well.
Comment #11 October 7th, 2024 at 10:10 am
Doubting-Thomas #9:
Would the overall algorithm then be a hybrid quantum-classical algorithm?
No—or rather, only in the same sense that every quantum algorithm is a “hybrid quantum-classical algorithm.”
Note that in DQI as applied to OPI, the Berlekamp-Welch decoding still needs to be run on a coherent superposition of inputs (so, on the QC, not on a classical computer).
Comment #12 October 7th, 2024 at 10:17 am
Peter Shor #10 (if it’s indeed you):
(1) If you’re “willing to bet” that QAOA does better than simulated annealing—ok then, on which set of instances?
(2) Yes, there are some non-algebraically-structured problems where DQI beats simulated annealing but does not beat specialized classical algorithms. But then there’s the OPI problem, where DQI currently achieves a better approximation ratio than any known classical algorithm—the same kind of claim that was made for QAOA before it was refuted by the ten computer scientists (this blog having brought the claim to the attention of many of them). Farhi has since given many talks on QAOA that I’ve attended, presenting lots of interesting results but no longer making any comparable claim. I haven’t placed any bet on whether the analogous claim for DQI will stand or fall, but I did feel strongly that I should bring it to experts’ attention.
Comment #13 October 7th, 2024 at 11:51 am
Glad you’re blogging about this. I noticed this paper a while back and was surprised nobody was discussing it. Flew completely under the radar.
Comment #14 October 7th, 2024 at 12:02 pm
Do you predict that we will fail to find a higher approximation ratio on a non-algebraically-structured problem using DQI? If so, why do you believe this?
Comment #15 October 7th, 2024 at 12:22 pm
It would really help my sense that all is right with the universe if our ability to study problems (their amount of algebraic structure) corresponded more closely to our ability to solve them!
Comment #16 October 7th, 2024 at 9:24 pm
What is it with the QFT, that it’s nearly universal in all exponential speedup claims? Is there a more “generalized” algebraic structure, amenable to quantum mechanics, that the QFT naturally fits? It may be a case of a hammer looking for a nail, but doesn’t it appear like a very narrow set of specialized problems that nature can solve exponentially faster?
Comment #17 October 7th, 2024 at 11:43 pm
“Peter Shor” #10: simulated annealing is a heuristic algorithm and rigorous bounds on its performance are quite loose. However, if you apply simulated annealing to random instances of the problem where QAOA first claimed an advantage (MAX-E3LIN2), it achieves a better approximation ratio on average than either QAOA or the advanced algorithm developed by the ten top computer scientists. Thus, the accomplishment of that team of ten top computer scientists was not to find an algorithm that worked better than QAOA on average for that problem (that was easy to find), but to find an algorithm that provably achieved a better approximation ratio in the worst case.
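For concreteness, a toy simulated-annealing baseline for MAX-E3LIN2 might look like the following sketch (illustrative code with made-up parameters, not the benchmark setup from our paper):

```python
import math
import random

def sat_count(x, eqs):
    """Number of satisfied equations x_i XOR x_j XOR x_k = b."""
    return sum((x[i] ^ x[j] ^ x[k]) == b for i, j, k, b in eqs)

def anneal(n, eqs, steps=10000, t_hot=2.0, t_cold=0.01):
    """Single-bit-flip Metropolis annealing with a geometric cooling schedule."""
    x = [random.randint(0, 1) for _ in range(n)]
    cur = best = sat_count(x, eqs)
    for s in range(steps):
        t = t_hot * (t_cold / t_hot) ** (s / steps)
        v = random.randrange(n)
        x[v] ^= 1                              # propose one bit flip
        new = sat_count(x, eqs)                # O(m) rescoring: fine for a toy
        if new >= cur or random.random() < math.exp((new - cur) / t):
            cur = new                          # accept the flip
            best = max(best, cur)
        else:
            x[v] ^= 1                          # reject: undo the flip
    return best

random.seed(0)
n, m = 40, 200                                 # arbitrary toy instance size
eqs = [tuple(random.sample(range(n), 3)) + (random.randint(0, 1),)
       for _ in range(m)]
got = anneal(n, eqs)
print(f"satisfied {got} of {m} equations (fraction {got / m:.3f})")
```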
DQI achieves this (a better provable approximation ratio in the worst case than any known classical algorithm) for a number of problems, including OPI. DQI applied to OPI also achieves a better approximation ratio in practice than any classical algorithm known to us (either one with provable performance, or a heuristic). To my knowledge, QAOA never accomplished that. By the way, for the problems we considered, DQI also outperforms the classical algorithm that the ten computer scientists developed (we compare against it in our paper).
Comment #18 October 8th, 2024 at 3:08 am
Hi Scott – just flagging once again my paper with Sami Boulebnane. In this work we a) prove theoretical bounds on QAOA’s running time for random k-SAT, which show that QAOA performs better than (e.g.) Grover’s algorithm; b) compare the theoretical and numerically obtained running time of QAOA with the best classical algorithms we could find, and show that QAOA seems to have a modest scaling advantage. I hope that this fulfils your criterion of “a single piece of evidence that QAOA outperforms the best classical heuristics on any problem that anyone cares about” 🙂
The new result by Stephen Jordan et al is really nice! However I don’t think it can be said to be a near-term algorithm (as one needs to execute the error-correcting code decoder in superposition) so is in some sense solving a different problem to QAOA, which I think of as the simplest way of using the structure of a problem to do better than Grover’s algorithm. Also, it was a bit unclear to me whether the “optimal polynomial intersection” problem is one which people will care about for practical applications; and of course we do already know NP problems (like factoring) where we can use algebraic structure to get quantum speedups. That said, it’s great to be able to take advantage of an apparently different kind of structure.
Comment #19 October 8th, 2024 at 4:16 am
Robbie King #14:
Do you predict that we will fail to find a higher approximation ratio on a non-algebraically-structured problem using DQI?
Well, the space of computational problems (even non-contrived ones) is vast, and it wouldn’t surprise me if within that space there were some other structure that DQI could exploit, besides algebraic structure (the glued-trees problem provides one compelling example of a special yet non-algebraic structure that quantum algorithms can exploit for exponential speedup). But again, I’m not placing money either way right now. 😀
Comment #20 October 8th, 2024 at 7:06 am
Ashley Montanaro #18: Thanks for the link to that very nice paper! I must be getting more senile by the day, since I don’t remember it even though you and I must have discussed it.
To make sure I understand: do you claim here to get a super-Grover advantage for random k-SAT, compared to the best known classical algorithm? If so, by what exponent? Or is the claim “merely” to beat Groverized brute force?
If the former, then I indeed need to revise my post (and acknowledge you), clarifying that I was thinking only about superpolynomial speedups. If not, then I think what I wrote stands, no?
Comment #21 October 9th, 2024 at 6:54 am
Scott #20: Thanks! I’m not sure if we discussed it, but you did kindly mention it on your blog.
To summarise, what we claim – based on a combination of theory and numerics – is that the scaling complexity of using QAOA with O(1) circuit layers to solve random k-SAT instances at the threshold (for relatively large k) is a bit better than the scaling of the best classical algorithm we could find, where by “scaling complexity of using QAOA” we basically mean the number of times you need to repeat it to find a solution.
For example, consider random 8-SAT. The scaling of the best classical algorithm we found (WalkSATlm) is about 2^{0.33n} – better than Groverised brute force, and also better than rigorous worst-case bounds would suggest. Our prediction for the scaling of QAOA with 60 layers (so quite a few, but still constant!) is somewhere between 2^{0.19n} and 2^{0.30n} (depending on how optimistic one feels about some points to do with extrapolation, but a bit better in either case). The analysis isn’t theoretically rigorous in various aspects, but the combination of theory + numerics evidence does give us some confidence.
It’s important to note that you can apply amplitude amplification on top of this, so given a fault-tolerant quantum computer, you could get somewhere between 2^{0.1n} and 2^{0.15n} scaling. On the other hand, I’d also highlight that – unlike Grover – the QAOA speedup is achieved with a low-depth circuit; almost all the cost is in repeating the circuit, and you don’t need to maintain coherence over a long time.
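(For anyone who wants to check the arithmetic, here’s the back-of-envelope computation with the quoted exponents; the n = 200 figure below is just an arbitrary illustration, not from our paper:)

```python
walksat = 0.33                   # WalkSATlm scaling ~ 2^{0.33 n}
qaoa_lo, qaoa_hi = 0.19, 0.30    # estimated 60-layer QAOA scaling exponents

# Amplitude amplification roughly halves the exponent of the repetition count:
print(f"with amplitude amplification: 2^({qaoa_lo / 2:.3f} n) "
      f"to 2^({qaoa_hi / 2:.3f} n)")

# Even the pessimistic QAOA exponent beats WalkSATlm; e.g. at n = 200
# variables, the ratio of repetition counts would be:
n = 200
print(f"speedup factor at n = 200: {2 ** ((walksat - qaoa_hi) * n):.0f}")
```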
Comment #22 October 9th, 2024 at 7:29 am
Ashley Montanaro #21: Thanks!! That wasn’t totally clear to me from your abstract and introduction. I’ve edited my post to reflect it.
Comment #23 October 10th, 2024 at 1:53 am
Your blog post immediately made me think of this paper https://www.science.org/doi/10.1126/sciadv.adj5170 , which also promises a super-polynomial quantum advantage for optimization problems. I don’t think you’ve written anything about it yet. What do you think about it?
Comment #24 October 10th, 2024 at 11:03 pm
Missed the Oct 7th thread, but wanted to bring this to your attention as this will apply to many Israelis (BTW I firmly support Israel’s right to exist).
~~~~~~~~~~
The Fourth Act Amending the Nationality Act, which entered into force on August 20th 2021, has created a new legal entitlement to renaturalization for individuals who lost or were denied their German citizenship due to Nazi persecution and who are not already entitled to restoration of citizenship under Art. 116 II Basic Law. This entitlement to naturalization also applies to all descendants of such individuals.
~~~~~~~~~~
https://www.germany.info/us-en/service/03-Citizenship/-/2479490
Comment #25 October 11th, 2024 at 2:21 pm
The DQI work is definitely exciting and I like your highlighting of it.
I think your discussion of the 2014 Farhi-Goldstone-Gutmann paper is a bit unfair to the authors, since you seem to be implying that their first version was wrong. I reread it, and they pretty clearly say that the best known classical algorithm achieves 1/2 + O(1/D), QAOA achieves 1/2 + O(1/D^{3/4}), and it’s NP-hard to achieve 1/2 + O(1/sqrt(D)). It’s great that the classical algorithm was improved, and sure, this weakens the case for QAOA, but nothing in the first version of FGG is misleading or wrong.
Comment #26 October 11th, 2024 at 6:36 pm
Aram #25: Thanks. I didn’t see myself as saying they made an incorrect claim, but just in case I revised.
Comment #27 November 4th, 2024 at 2:02 pm
[…] What about quantum computing? It can solve some exponential complexity problems like factoring in polynomial time. But how about the SAT problem? It is simply not known (see here, here) whether a polynomial-time quantum algorithm exists for arbitrary NP-complete problems. None are known, and it might not be possible at all (though research continues). […]
Comment #28 November 20th, 2024 at 8:13 am
Hi Scott/others,
Can I ask why improving the approximation ratio indicates that the quantum algorithm has an exponential speedup?
Comment #29 November 26th, 2024 at 5:59 pm
Joe #28:
You ask a good question which I get asked frequently. It is not always true that improving the approximation ratio corresponds to getting an exponential quantum speedup. But for the problem we consider, once we exceed approximation ratio 0.55 the only classical algorithms that we are aware of are slightly refined versions of exhaustive search, which take exponential time.
Comment #30 December 17th, 2024 at 6:40 am
Hi! Check out how nice it is to do DQI with Classiq
https://docs.classiq.io/latest/explore/algorithms/dqi/dqi_max_xorsat/