## Back

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.

Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!

A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.

Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.

Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.

For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

### 34 Responses to “Back”

1. Ernie Davis Says:

As regards the breakthrough by Chen et al. on maximum flow, etc.: Very exciting! But I dunno about updating undergraduate algorithms courses. Looking at their introduction, I would guess that the previous best algorithms were already not the kind of thing taught in undergraduate algorithms courses. (Personally, I don’t think I’ve ever taught any algorithm for max flow in any algorithms class at any level — maybe in a second-semester master’s course back in 1989.) And as for the new result — well, the paper is 110 pages long.

2. Scott Says:

Ernie Davis #1: Heavens, I didn’t mean that you’d actually teach the new algorithm in an undergrad course! It’s just that whenever I teach any result, I feel duty-bound to tell the students what the current state of the art is on that problem, even if that state of the art is too advanced to present.

3. Brenton Bostick Says:

Hi Scott, maybe a non sequitur, but have you done any more work on Collatz since your paper last year?

4. Scott Says:

Brenton #3: Nope, sorry!

5. murmur Says:

Hi Scott, are you going to write a blog post/paper based on the comments on the post “Why Quantum Mechanics”?

6. Ernie Davis Says:

A comment and a question about Chen et al.

Comment: There is a probabilistic element here. It isn’t mentioned in the abstract, which seems a little odd, but the statements of both Theorem 1.1 (p. 1) and Corollary 1.2 (p. 2) end with “with high probability”. It’s not clear to me from the statement of the theorems whether it returns the maximum solution with high probability, or runs in the stated running time with high probability, or what.

Question. They use $\tilde{O}(\cdot)$ throughout. Does that mean the same thing as $O(\cdot)$ or something different?

7. Curious Says:

If we have a faster search for an NP-complete problem, then is that a fancy way of saying BQP^O contains NP^O relative to a random O?

8. William Gasarch Says:

1) What level of knowledge of QC does one need to learn and play Quantle? I’m not talking about being good at it, just being able to play it and get a better sense of QC. Good for people just learning QC?

2) Kudos to Microsoft for the retraction. That’s what responsible scientists do when they are in error. Here is hoping they now have it right.

9. Clément Canonne Says:

For those interested in the max-flow (etc.) breakthrough, Rasmus Kyng gave a talk on their paper on TCS+: https://sites.google.com/view/tcsplus/welcome/past-talks/2022-2023?authuser=0#h.byjmbv38efs1

10. RandomOracle Says:

Regarding the Yamakawa and Zhandry result, what do you think about this idea of “multiplying” quantum states (i.e. going from $$\sum_x \alpha_x |x \rangle$$ and $$\sum_x \beta_x | x \rangle$$ to $$\sum_x \alpha_x \beta_x | x \rangle$$, suitably normalized)? In his talk, Zhandry mentions that the idea originally came from Regev’s quantum reduction from SVP to LWE, though I’ve never seen it presented that way (although I find it much easier to understand from this “multiplying amplitudes” perspective). Do you think there could be other potential interesting applications of this?
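[For concreteness, here is a minimal numerical sketch of the amplitude-multiplication operation described above, in plain Python. This is only the linear-algebra step — pointwise product of amplitudes, then renormalization — not the actual quantum procedure, which in general succeeds only with some probability.]

```python
import math

def multiply_states(a, b):
    """Pointwise-multiply the amplitudes of two state vectors
    (given as lists of real amplitudes) and renormalize.

    Toy illustration only: the genuine quantum operation discussed
    above cannot be performed deterministically in general."""
    prod = [x * y for x, y in zip(a, b)]
    norm = math.sqrt(sum(p * p for p in prod))
    if norm == 0:
        raise ValueError("states have disjoint support")
    return [p / norm for p in prod]

# Example with 2-dimensional states:
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # |+> = (|0> + |1>)/sqrt(2)
zero = [1.0, 0.0]                             # |0>
print(multiply_states(plus, zero))            # prints [1.0, 0.0]
```

Multiplying by |0⟩ kills the |1⟩ amplitude, so the product collapses onto |0⟩ after renormalization — the “filtering” effect that makes this primitive useful.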

11. Anonymous Ocelot Says:

Welcome back, Scott!

Did you see the result about time-bounded Kolmogorov complexity and its relationship to one-way functions? Is it as cool as it sounds (the first “one-way function complete” problem, or something like that…)?

12. kodlu Says:

Ernie Davis #6:

They use $$\tilde{O}(\cdot)$$ throughout. Does that mean the same thing as $$O(\cdot)$$ or something different?

As far as I know, if a function is $$\tilde{O}(f(n))$$ for $$f(n)$$ growing with $$n$$, this means we ignore logarithmic or subpolynomial factors. One way to think of it is that
$$\tilde{O}(f(n))=O(f(n)^{1+o(1)})$$

13. Scott Says:

murmur #5: Eventually, hopefully!

14. Scott Says:

Ernie Davis #6: Yes, the algorithm is randomized; derandomizing it is an excellent open question.

O with a tilde means that log factors are suppressed.

15. Scott Says:

Curious #7: No, it emphatically does not mean that. They only construct one specific search problem that’s in (the search version of) NP^O as well as BQP^O but not BPP^O—which is impressive enough!

16. Scott Says:

William Gasarch #8: You only need the most rudimentary knowledge, enough to understand 1-qubit equations like H|0⟩=|+⟩.
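[For anyone who wants to check such an equation by hand, here is a minimal pure-Python sketch of the 1-qubit linear algebra involved — real amplitudes suffice for this particular identity.]

```python
import math

# The Hadamard gate as a 2x2 matrix (real entries suffice here)
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a 1-qubit state vector [amp0, amp1]."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

ket0 = [1.0, 0.0]  # |0>
result = apply(H, ket0)
print(result)      # ≈ [0.7071..., 0.7071...], i.e. |+>
```

Matrix-times-vector is all that’s needed to verify any of Quantle’s single-qubit equations.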

17. Scott Says:

RandomOracle #10: Yes, I do think there could be other interesting applications! Mark Zhandry will be speaking about this at our UT Austin seminar in a couple weeks, and I look forward to trying to understand it better then!

18. Scott Says:

Anonymous Ocelot #11: Yes, of course I saw that result, which is 7 months old now (we had a seminar about it in the fall), but recently written up in Quanta here! And yes, it’s extremely cool! It hasn’t solved the 45-year-old open question of whether cryptography can be based only on the assumption that P≠NP. But combined with other results in the same spirit over the past couple years, it’s contributed to a sense that an answer to that question is no longer ridiculously out of reach.

19. Curious Says:

What is the difference between NP or BQP and the search versions of these, for a complexity theorist?

20. Scott Says:

Curious #19: In an NP search problem, you need to find a solution (promised that there is one); in an NP decision problem, you need to decide whether a solution exists. Sometimes, as here, the distinction between the two is extremely important. Ambainis and I conjectured that there are no exponential quantum query complexity speedups for decision problems relative to a random oracle. The new work doesn’t refute that conjecture, but shows an exponential quantum speedup for NP search problems.

21. Curious Says:

Is it like magic? How can you solve search exponentially faster without also solving decision faster? Is there an easy example to understand for nimble brains?

22. Leor Fishman Says:

Wait, if you have an exponential quantum speedup for the search problem, how do you _not_ have an exponential speedup for the decision problem? (since the search to decision reduction is logarithmic, and the decision to search reduction is constant time, iirc?)

23. Scott Says:

Curious #21 and Leor Fishman #22: Did you at least read the intro? Relative to a random oracle, Yamakawa and Zhandry define an NP search problem that always has many solutions—so the decision problem (“is there a solution?”) is completely trivial—but finding a solution is hard for classical computers. This is the usual way one separates search and decision. Then the interesting part, the new part, is to make the search problem easy for quantum computers.
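[A hedged toy illustration of the “many solutions, trivial decision, nontrivial search” phenomenon — this is emphatically *not* the Yamakawa–Zhandry problem (which lives relative to a random oracle and involves error-correcting codes), just a hypothetical classical analogue:]

```python
import hashlib

# Toy "oracle": a string x is a solution iff its SHA-256 digest starts
# with a zero byte. Roughly 1 in 256 strings qualify, so the decision
# question "does a solution exist?" is trivially yes for any large input
# space, while actually *finding* a solution still requires many queries.
def is_solution(x: bytes) -> bool:
    return hashlib.sha256(x).digest()[0] == 0

def brute_force_search():
    """Query the oracle on 0, 1, 2, ... until a solution appears."""
    i = 0
    queries = 0
    while True:
        x = i.to_bytes(8, "big")
        queries += 1
        if is_solution(x):
            return x, queries
        i += 1

x, queries = brute_force_search()
print(queries)  # typically a few hundred oracle calls (expected ~256)
```

In the Yamakawa–Zhandry construction the classical query cost is exponential rather than a constant factor, and the quantum algorithm beats it — but the search/decision split works the same way.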

24. Curious Says:

So it is TFNP class? Correct? https://en.wikipedia.org/wiki/TFNP

25. Scott Says:

Curious #24: Yes (albeit, TFNP relative to a random oracle).

26. Bruce Smith Says:

(Now I *think* I just submitted the following corrected version, expecting to again get the chance to review and edit the formatting, but it just disappeared, with no evidence I submitted it at all. Maybe a blog bug due to the original editing time limit still not having expired? So I’ll try again, but maybe this is a duplicate comment. Oh, and I did correct one typo in this third try!)

Anonymous Ocelot #11, Scott #18: Thanks for pointing out that interesting result and Quanta article.

But (after skimming parts of their arXiv preprint) there’s a technical point which I don’t understand. They get a result concerning $$K^t(x)$$ (Kolmogorov complexity of $$x$$ under runtime bound $$t(|x|)$$) for any polynomial $$t(n)$$ exceeding $$(1 + \epsilon)n$$ for some $$\epsilon > 0$$. But since $$K^t$$ is defined relative to an arbitrary universal TM, I don’t understand how this fine a time-limit distinction makes sense — what if we chose a universal TM (for use in our definition of $$K^t$$) which always takes at least $$2k$$ steps to produce any output of length $$k$$?

There must be some standard provision for that in the definition of time-bounded Kolmogorov complexity $$K^t$$, but I don’t see what it could be while retaining a meaningful distinction between $$t(n) = (1 + \epsilon)n$$ and $$t(n) = n$$. (I checked a few references and could not find a good answer, including the definitions of $$K^t$$ in the preprint itself and in https://people.cs.rutgers.edu/~allender/papers/chapter.pdf.)

27. Spencer Says:

Scott #2: When I was at ETHZ as an undergraduate they taught Max-Flow in detail in the data structures & algorithms class. Perhaps that has something to do with two of the authors on this paper coming from ETHZ!

One thing I didn’t get a sense for from skimming the paper is the practicality of this algorithm. They don’t mention an implementation. Is it galactic? Are the memory requirements for the dynamic data structure reasonable?

28. gentzen Says:

Ernie Davis #1:

As regards the breakthrough by Chen et al. on maximum flow, etc.: Very exciting!

Chen et al. actually are (in alphabetical order): Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva

It is an exciting breakthrough, but surprisingly not a surprising one. They knew that they were making steady progress in their field. In fact, I guess it would be a good idea not to see this as a theoretical breakthrough, but rather as a masterpiece of practical algorithm design and analysis following earlier theoretical breakthroughs. On the other hand, the initial breakthrough(s) by Spielman and Teng from 2003 happened so long ago that such achievements must be seen as independent breakthroughs, even though the goal of extending the reach of the new paradigm to more and more general problems was clear before.

But I dunno about updating undergraduate algorithms courses. Looking at their introduction, I would guess that the previous best algorithms were already not the kind of thing taught in undergraduate algorithms courses. (Personally, I don’t think I’ve ever taught any algorithm for max flow in any algorithms class at any level — maybe in a second semester masters’ course back in 1989) And as for the new result — well, the paper is 110 pages long.

I agree that undergraduate algorithms courses already have more than enough material to cover. I would see this more in the context of courses on algorithmic graph theory, or courses on combinatorial optimization, which were the second courses in a sequence of three optimization courses in my student days in applied math. (I enjoyed mine from Gottfried Tinhofer, who was a friendly old professor in my days. Peter Gritzmann would have been the friendly dynamic young professor; his course would have been more challenging, and I regretted back then that my course was too easy in comparison. That was stupid in hindsight: the foundations I learned were good, and the course really made me enjoy graph theory and its applications, and also its constant stream of breakthroughs.)

But back to my point that we are seeing here masterpieces of practical algorithms rather than theoretical breakthroughs. I have very good experiences with Spielman’s approxChol preconditioner, which is based on a paper by Rasmus Kyng and Sushant Sachdeva from 2016. Of course, I would never have implemented it myself based on a paper, but only adapted it to my needs, because already the non-adapted Julia version performed very satisfactorily for my use cases. My initial C++ port was a factor of 2 faster than the Julia version, but I was surprised that for another C++ port, the initial port was actually a factor of 2 slower instead. However, their optimized version is a factor of 8.5 faster than the Julia version, which is even more surprising to me, because my optimized and parallelized version was only a factor of 5 faster than the Julia version. I definitely plan to study that other C++ port more deeply at some point, and see whether the strengths of my version and the strengths of their version can be combined. But the existence of different versions just underlines my overall message: these are practical algorithms, and they can be understood, modified, and adjusted to individual needs by mere mortals.

29. Urs Schreiber Says:

The Nature-retraction that you point to is the one from last year:

“Retraction Note: Quantized Majorana conductance” (08 March 2021)
doi.org/10.1038/s41586-021-03373-x

But there was a second retraction just days ago:

“Retraction Note: Epitaxy of advanced nanowire quantum devices” (19 April 2022)
doi.org/10.1038/s41586-022-04704-2

on which there is more recent commentary here:

“Authors retract second Majorana paper from Nature” (24 April 2022)
retractionwatch.com/2022/04/24/authors-retract-second-majorana-paper-from-nature

By the way, when I recently looked into understanding how these nanowire Majorana anyons are (eventually) to be “braided without braiding” (according to the proposal in arxiv.org/abs/0802.0279, arxiv.org/abs/0808.1933), I got away wondering that the theoretical argument seems to have a gap: quantumcomputing.stackexchange.com/q/25638/19363 (?)

30. Lazar Ilic Says:

Wow wow wow at Maximum Flow and Minimum-Cost Flow in Almost-Linear Time. Legendary seminal oeuvre… agnus dei opus dei manus dei corpus dei vir dei mens dei lignum dei ailanthus altissima… inspirational work from that crew. I agree, fantastic literature and background for undergraduates! We are still living in the times when fundamental foundational results like PRIMES Is in P drop!

31. Leo A Says:

Ernie Davis and kodlu:

From what I’ve seen, $$\tilde{O}( f(n))$$ generally means asymptotically $$f(n)$$ up to polylog factors, $$\log^k (n)$$. Viewing these as classes of functions, one would have

$$\tilde{O}(f(n)) = \bigcup_k O(f(n)\log^k (n))$$

This doesn’t quite agree with kodlu’s characterization. For example, $$f(n)^{o(1)}$$ can grow quite fast depending on $$f(n)$$. Take $$f(n) = 2^{n^2}$$, for example.
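[To make the gap concrete, a quick worked step with that same choice $$f(n) = 2^{n^2}$$: the class $$O(f(n)^{1+o(1)})$$ admits extra factors far larger than any polylog.]

```latex
% With f(n) = 2^{n^2}, taking the o(1) exponent to be 1/\log_2 n gives
f(n)^{1 + 1/\log_2 n} \;=\; f(n)\cdot 2^{\,n^2/\log_2 n},
% and the extra factor 2^{n^2/\log_2 n} eventually exceeds
% \log^k n for every fixed k, so
\bigcup_k O\!\bigl(f(n)\log^k n\bigr) \;\subsetneq\; O\!\bigl(f(n)^{1+o(1)}\bigr).
```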

32. Igor Says:

> they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones

Does this provide intuitive evidence for non-relativized separation? I read that the Random Oracle Hypothesis is false in general, so it’s not as simple as that. Is there something specific to the quantum setting that suggests that this result means something for non-relativized separation? Or is this just an interesting result that tells us nothing about the non-relativized case?

33. Scott Says:

Igor #32: It’s true that there are (generally contrived) examples in cryptography where the Random Oracle Hypothesis fails. In this specific case, though, I’d expect everything to work just fine if you instantiate the random oracle with a pseudorandom function—you should then indeed get an explicit, real-world NP search problem that’s efficiently solvable by quantum computers and not classical ones. (On the other hand, I don’t know what that search problem is good for other than quantum supremacy and certified randomness, and I also don’t know how to implement it without a full scalable QC capable, for example, of running Shor’s algorithm.)

34. Qwerty Says:

Glad you are OK (on the uncharacteristic quietness)! That is one worry off the list :).

I’ve decided not to worry about the fate of the world. There is no point. Besides, I saw an amazing Tamil movie where the hero (who plays 10 characters) gets shot in the head, but the bullet only dislodges, with perfect precision, a tumor he didn’t know he had. I think this is a sign that the various problems in the world are going to cancel each other out. 🙂

This is as good a thing to believe as anything else!!!