Two papers

Just to get myself back into the habit of blogging:

For those of you who don’t read Lance’s and Bill’s blog, there was a pretty significant breakthrough in complexity theory announced last week.  (And yes, I’m now spending one of the two or so uses of the word “breakthrough” that I allow myself per year—wait, did I just spend the second one with this sentence?)  Ben Rossman (a former MIT PhD student whose thesis committee I was honored to serve on), Rocco Servedio, and Li-Yang Tan have now shown that the polynomial hierarchy is infinite relative to a random oracle, thereby solving the main open problem from Johan Håstad’s 1986 PhD thesis.  While it feels silly even to mention it, the best previous result in this direction was to separate P^NP from Σ₂P relative to a random oracle, which I did in my Counterexample to the Generalized Linial-Nisan Conjecture paper.  In some sense Rossman et al. infinitely improve on that (using completely different techniques).  Proving their result boils down to proving a new lower bound on the sizes of constant-depth circuits.  Basically, they need to show that, for every k, there are problems that can be solved by small circuits with k layers of AND, OR, and NOT gates, but for which the answer can’t even be guessed, noticeably better than chance, by any small circuit with only k-1 layers of AND, OR, and NOT gates.  They achieve that using a new generalization of the method of random restrictions.  Congratulations to Ben, Rocco, and Li-Yang!
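
For readers who’d like a hands-on feel for what a random restriction does, here’s a toy sketch in Python of the classical method (not the Rossman-Servedio-Tan generalization; the parameters here are arbitrary). The point it illustrates: under a random restriction, an OR of many variables almost always collapses to a constant, while Parity survives as Parity on whichever variables are left free, so it almost never becomes constant.

```python
import random

def random_restriction(n, p):
    """Leave each of n variables free with probability p; otherwise
    fix it to a uniformly random bit."""
    return [None if random.random() < p else random.randint(0, 1) for _ in range(n)]

def restrict(f, rho):
    """Return the restricted function on the variables left free by rho."""
    free = [i for i, v in enumerate(rho) if v is None]
    def g(bits):  # bits = values for the free variables, in order
        x = list(rho)
        for i, b in zip(free, bits):
            x[i] = b
        return f(x)
    return g, len(free)

def is_constant(g, k):
    """Brute-force check whether g is constant on its k free variables."""
    values = {g([(m >> j) & 1 for j in range(k)]) for m in range(1 << k)}
    return len(values) == 1

n, p, trials = 16, 0.3, 300
OR = lambda x: int(any(x))
PARITY = lambda x: sum(x) % 2

for name, f in [("OR", OR), ("PARITY", PARITY)]:
    collapsed = sum(is_constant(*restrict(f, random_restriction(n, p)))
                    for _ in range(trials))
    print(f"{name}: collapsed to a constant in {collapsed}/{trials} random restrictions")
```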

Meanwhile, if you want to know what I’ve been doing for the last couple months, one answer is contained in this 68-page labor of love preprint by me and my superb PhD students Daniel Grier and Luke Schaeffer.  There we give a full classification of all possible sets of classical reversible gates acting on bits (like the Fredkin, Toffoli, and CNOT gates), as well as a linear-time algorithm to decide whether one reversible gate generates another one (previously, that problem wasn’t even known to be decidable).  We thereby completely answer a question that basically no one was asking, although I don’t understand why not.
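
If you’d like to play with the gate-generation question yourself, here’s a tiny brute-force sketch in Python. To be clear, this fixed-width, no-ancilla closure computation is nothing like the linear-time algorithm in the paper (which has to handle circuits on arbitrarily many wires, with ancilla bits), and a negative answer on 3 bare wires doesn’t by itself settle generation in the paper’s sense; it’s just a way to poke at small cases, for instance to watch CNOT circuits on 3 wires generate exactly the 168 invertible linear maps and never a Toffoli.

```python
from itertools import permutations, product

def apply_on_wires(gate, k, wires, n):
    """Embed a k-bit gate (a permutation of bit tuples, given as a dict)
    onto the chosen wires of an n-bit register; return the induced permutation."""
    table = {}
    for x in product((0, 1), repeat=n):
        y = list(x)
        out = gate[tuple(x[w] for w in wires)]
        for w, b in zip(wires, out):
            y[w] = b
        table[x] = tuple(y)
    return table

def generated_group(gate, k, n):
    """Brute-force closure of all embeddings of the gate on n wires (no ancillas)."""
    gens = [apply_on_wires(gate, k, ws, n) for ws in permutations(range(n), k)]
    identity = {x: x for x in product((0, 1), repeat=n)}
    seen = {tuple(sorted(identity.items()))}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            comp = {x: h[g[x]] for x in g}          # apply g, then h
            key = tuple(sorted(comp.items()))
            if key not in seen:
                seen.add(key)
                frontier.append(comp)
    return seen

# CNOT and Toffoli as permutations of bit strings.
CNOT = {(a, b): (a, a ^ b) for a, b in product((0, 1), repeat=2)}
TOFFOLI = {(a, b, c): (a, b, c ^ (a & b)) for a, b, c in product((0, 1), repeat=3)}

n = 3
group = generated_group(CNOT, 2, n)
toffoli_key = tuple(sorted(apply_on_wires(TOFFOLI, 3, (0, 1, 2), n).items()))
print(f"CNOT circuits on {n} bare wires generate {len(group)} permutations;"
      f" Toffoli among them? {toffoli_key in group}")
```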

34 Responses to “Two papers”

  1. Mitchell Porter Says:

    Is there some way to understand the relationship between progress in absolute separation of complexity classes (like Ryan Williams, 2010), and relativized separation, like Rossman et al.?

    In a naive visual way, for absolute separation I can imagine complexity classes as points on a line, with progress in separation proofs slowly establishing the left-to-right order of the points. And for relativized separation, there are many such ‘lines’, one for each auxiliary hypothesis that can be used to imply extra information about the order of the points. And then, expert knowledge of the power of these hypotheses means placing these enhanced orderings in a (partial) order themselves, according to how much more information they would imply, over the body of already-established absolute results…

    Does that sound metaphorically a bit like, what it’s like to understand this stuff? 🙂

  2. Jay Says:

    Glad you’re back, we’ve been bored!

    >Our theorem (…) implies that any n-bit reversible circuit can be “compressed” to an equivalent circuit, over the same gates, that uses at most 2^n poly(n) gates and O(1) ancilla bits; these are the first upper bounds on these quantities known, and are close to optimal.

    I thought (don’t remember where I read it, but could dig it up if it’s controversial) that any neural net with n hidden layers could be “compressed” to a neural net with only one hidden layer (although that would sometimes require exponentially more neurons in the one_H model as compared to the number of neurons in the n_H model). Is it correct that what’s new here is that you extended this result to any classical gate (rather than the kind of gate defined by a perceptron), or am I missing something else? Also, what are the main barriers to making this result hold also for quantum gates?

  3. Scott Says:

    Mitchell #1: Just to confuse you a little more, 🙂 the way people prove relativized separations for “high-up” complexity classes (like the levels of the polynomial hierarchy), turns out to be by proving unrelativized separations, not involving any oracle, for “low-down” complexity classes (like constant-depth circuits). Indeed, the two tasks are essentially equivalent: from a certain perspective, a polynomial-hierarchy algorithm that queries an exponentially-long oracle string is “just” a massively scaled-up constant-depth circuit of AND, OR, and NOT gates (with the universal quantifiers playing the role of AND gates and existential quantifiers playing the role of OR gates). It can therefore be studied using exactly the same techniques that we use to understand the limitations of such circuits.
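
    (In case a scaled-down toy makes the correspondence concrete: the sketch below evaluates the same Σ₂-type statement about a made-up 4-bit “oracle” two ways, once with quantifiers and once as an explicit OR-of-ANDs over the oracle bits. The oracle, the statement, and the sizes are of course purely illustrative; in the real setting the oracle string is exponentially long.)

```python
import random
from itertools import product

random.seed(0)
# A toy "oracle": a fixed table from 4-bit queries to bits.
oracle = {q: random.randint(0, 1) for q in product((0, 1), repeat=4)}

# A toy Sigma_2 statement about the oracle:
#   exists x in {0,1}^2, forall y in {0,1}^2:  oracle(x, y) = 1
def as_quantifiers():
    return any(all(oracle[x + y] for y in product((0, 1), repeat=2))
               for x in product((0, 1), repeat=2))

# The same statement "unrolled" into a depth-2 circuit over the oracle bits:
# one AND gate per existential witness x (the universal quantifier),
# feeding a single OR gate (the existential quantifier).
def as_depth2_circuit():
    and_gates = [all(oracle[x + y] for y in product((0, 1), repeat=2))
                 for x in product((0, 1), repeat=2)]
    return any(and_gates)

print(as_quantifiers() == as_depth2_circuit())  # True: they compute the same thing
```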

    What this means is that, seen from far enough away, what Rossman et al. did really is “the same category of thing” as what Ryan Williams did—in that they’re both lower bounds on the sizes of constant-depth circuits. From that perspective, the main differences are that

    (1) Ryan was interested in a more powerful kind of circuit: namely, ACC circuits, which can include mod m gates for any m, rather than just AND, OR, and NOT gates. The presence of mod gates makes things vastly more difficult, and is what forced Ryan to go all the way up to NEXP to find a function that he could prove wasn’t in ACC, rather than using a more “explicit” function like Parity or Majority (or the Sipser functions that Rossman et al. use).

    (2) On the other hand, while Rossman et al. were “merely” dealing with AC0 circuits—a kind that we’ve known how to prove lower bounds on since the early 1980’s—in order to get the desired application to proving PH infinite with a random oracle, Rossman et al. needed way more fine-grained information than Ryan did, or than those people in the 1980’s did. It wasn’t enough to find a function that they could prove didn’t have AC0 circuits, or even a function that they could prove had AC0 circuits of depth k but not AC0 circuits of depth k-1 (something that was also done in the 1980’s). Rather, they needed a function that they could prove had AC0 circuits of depth k, and that couldn’t even be approximated by an AC0 circuit of depth k-1. That’s the new part, and is what seems to have required significant innovation in AC0 lower bound techniques.

    In summary, if you’re trying to understand where these results fit along the “path” to a proof of P≠NP, then it’s best just to forget about oracles altogether, and think of all these circuit lower bounds as scaling various molehills near the base of Mount Everest. But—and this is crucial—other people weren’t even able to scale these molehills, after decades of trying! So if someday, someone does make it to the peak, they’ll presumably look back on the molehill-climbers like Andrew Wiles looks back on the mathematicians in the 17th through 19th centuries who proved various “baby” cases of Fermat’s Last Theorem. Sure, it’s primitive compared to where one wants to be, but seeing just how far the techniques of that time could get you, where they broke down, etc. was an absolutely necessary step along the way.

  4. Scott Says:

    Jay #2: OK, you’re talking about something quite different from what we do in our paper.

    Ironically, what you’re talking about is much more clearly related to the Rossman et al. paper, which is indeed about showing that, if you take a constant-depth circuit and try to reduce the depth, then you’ll necessarily suffer an exponential blowup in size! However, Rossman et al. are concerned with constant-depth circuits of AND, OR, and NOT gates, which are weaker than neural networks, because they can’t include threshold gates. And they were also concerned with circuits that only approximate the desired function. For neural networks (i.e., TC0 circuits, constant-depth circuits of threshold gates), anything like the Rossman et al. result is currently way beyond our ability to prove, even though presumably such a result holds. (And also, if you want to push the depth of your neural net all the way down to 1—i.e., make it into a perceptron—then yes, you can prove that an exponential blowup is inherent, for cases like the Parity function. Indeed, that was done by Minsky and Papert in 1969.)
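
    (For the depth-1 case, here’s a toy check you can run yourself: a brute-force search over single threshold gates with small integer weights, confirming that none of them computes even 3-bit Parity. Needless to say, a finite search with an arbitrary weight bound is just an illustration of the phenomenon, not the Minsky-Papert argument.)

```python
from itertools import product

def parity(x):
    return sum(x) % 2

def threshold_gate(w, t):
    """A single perceptron: output 1 iff w·x >= t."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= t)

n, W = 3, 4                      # 3 inputs; integer weights/thresholds in [-W, W]
inputs = list(product((0, 1), repeat=n))
found = any(
    all(threshold_gate(w, t)(x) == parity(x) for x in inputs)
    for w in product(range(-W, W + 1), repeat=n)
    for t in range(-W, W + 1)
)
print("Some single threshold gate computes 3-bit Parity?", found)   # False
```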

    Our paper didn’t say anything about the depth of reversible circuits, though that’s an interesting problem that other papers have studied. Mostly, we were simply concerned with classifying reversible gates in terms of which functions they can generate at all, by circuits of whatever size! As a byproduct, though, we did get the results you quoted about the required sizes of reversible circuits over arbitrary sets of gates, and also about the number of ancilla bits they need.

    Oh, and now that you mention it: while we didn’t say so in the paper, one can immediately deduce from our result a ~2^n upper bound on the required depth of reversible circuits over arbitrary sets of gates (that follows trivially from our size bound, since depth≤size). With slightly more work, I believe one could also use our result to prove the following:

      Any n-bit reversible circuit can be compressed to an equivalent circuit, over the same gates, that has depth only O(n). (By a Shannon counting argument, such a circuit will of course need exp(n) gates in general, and therefore exp(n) ancilla bits as well.)

    This is worth checking—thank you!

    In any case, the depths we’re talking about here (2^n, O(n), …) are all vastly larger than the constant depths that people care about in the neural net world, and also that Rossman et al. and the other circuit lower bound people are currently able to handle.

  5. anon Says:

    Hi Scott, I heard you went to Michigan for the SSC meetup. How was it?

  6. Scott Says:

    anon #5: It was great! Met lots of interesting people. Thanks very much to the other Scott A. for inviting me, and to everyone who showed up.

  7. Geoffrey Irving Says:

    Can the random oracle be replaced with any kind of cryptographic hardness assumption, such as a pseudorandom generator?

  8. Scott Says:

    Geoffrey #7: Good question! It depends what you mean.

    On the one hand, any oracle that was pseudorandom against all of PH (or, “scaling down,” pseudorandom against constant-depth circuits) would work fine for this result: by definition, PH can’t distinguish such oracles from truly random oracles anyway. Also, once you specify the size and depth of your PH circuit, you could pick some standard pseudorandom generator (for example, the Nisan-Wigderson generator) that works against PH circuits of that size and depth, and then the result would go through for those circuits.

    On the other hand, it’s also the case that, given any “conventional” pseudorandom function f_s (that is, one that’s computable in polynomial time given a random seed s), there will exist a very low-depth PH circuit that distinguishes f_s from a truly random oracle. What will that PH circuit do? Why, it will just use an existential quantifier (i.e., an OR over exponentially many things) to guess the seed s! (This is just a convoluted way of saying that if you can solve NP problems, then you can break any cryptographic PRG in polynomial time.)
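
    (If a toy version helps: here’s that seed-guessing distinguisher in code, with SHA-256 standing in for the pseudorandom function and a deliberately tiny 2-byte seed, so that brute-forcing every seed can play the role of the existential quantifier. Nothing here is meant to be cryptographically serious, and the parameters are arbitrary.)

```python
import hashlib, os
from itertools import product

SEED_BYTES = 2          # deliberately tiny, so "guessing the seed" is feasible
N_INPUT_BITS = 5

def toy_prf(seed, x):
    """A stand-in pseudorandom function: one output bit per input string x."""
    return hashlib.sha256(seed + bytes(x)).digest()[0] & 1

inputs = list(product((0, 1), repeat=N_INPUT_BITS))

def accepted_by_seed_guesser(table):
    """The NP-style distinguisher: try every seed (the existential quantifier)
    and accept iff some seed reproduces the entire truth table."""
    return any(all(toy_prf(bytes(seed), x) == b for x, b in table.items())
               for seed in product(range(256), repeat=SEED_BYTES))

hidden_seed = os.urandom(SEED_BYTES)
prf_oracle = {x: toy_prf(hidden_seed, x) for x in inputs}
random_oracle = {x: os.urandom(1)[0] & 1 for x in inputs}

print("PRF oracle accepted:   ", accepted_by_seed_guesser(prf_oracle))     # True
print("Random oracle accepted:", accepted_by_seed_guesser(random_oracle))  # almost surely False
```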

    On the third hand, just because some oracle is distinguishable from a random oracle by PH-circuits, that doesn’t mean the oracle couldn’t also make PH provably infinite! In fact, for 30 years we knew how to make PH infinite relative to certain non-random oracles. The whole new thing with Rossman et al. is that now we can also make it infinite relative to random oracles.

    On the fourth hand, if your oracle is computable in P/poly—that is, computable in polynomial time once a single polynomial-size secret (like, say, the seed of a pseudorandom function) has been guessed—then for the foreseeable future, we won’t be able to prove unconditionally that PH is infinite relative to that oracle. In fact (exercise for the reader), doing so would give us an unconditional proof that PH is infinite in the real world! So, that gives a sense in which the Rossman et al. result can’t possibly work as stated with a “standard” cryptographic PRF; again, what you’d want is a PRF that was pseudorandom against all of PH.

  9. Sniffnoy Says:

    Oh, this is neat! Yay for more nice lattices to draw diagrams of! 🙂 Basic question about paper #2; there’s a point I’m stuck on:

    Why is this new lattice you’ve defined not a sublattice of Post’s Lattice Lite? I mean, evidently it’s not, because it’s infinite, but how is it not the case that all these classes you’ve defined are clones (with constants)? I assume there must be some additional restriction on how functions may be combined (presumably a reversibility one) that clones do not have?

    …oh, nevermind, it looks like this is all answered in 2.2 and 2.3. Guess I answered my own question. Worth pointing out though I guess. 🙂 (Well, mostly answered it. I still need to sit down and actually understand this.)

    By the way, Figure 7 has ended up stuck in the middle of the bibliography.

  10. Benoit Essiambre Says:

    “We thereby completely answer a question that basically no one was asking”

    Hey, still better than rejecting a null hypothesis and getting an answer to a question no one should even be asking!

  11. Scott Says:

    Sniffnoy #9: Yeah, Post’s lattice consists of classes of Boolean functions with n inputs and 1 output. Our lattice consists of classes of reversible functions with n inputs and n outputs. So the two are incomparable—and as it turns out, not just for silly definitional reasons, but for substantive ones. For example:

    – Post’s lattice has nontrivial monotone functions, which can’t be in our lattice because reversibility rules them out.

    – Our lattice has things like the set of all reversible transformations that conserve Hamming weight (or Hamming weight mod k, or inner products mod 2), which wouldn’t even make sense in Post’s context. Indeed, the possibility of “conserved quantities” could be seen as the main issue that makes our classification problem so much harder than Post’s, if we assume in both cases that constants are free. (If constants aren’t free, then Post’s classification is pretty complicated too…)

    Of course there are also points of convergence, like the fact that whether or not you’re affine over GF(2) is important for both classifications, but reversibility really does give you a different beast (or rather, a different zoo of beasts).

    Speaking of which: yes, for me, drawing pretty diagrams of lattices (or having your students render them in TikZ) is one of life’s greatest pleasures!
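
    Oh, and if anyone wants to poke at the conserved-quantities point in code: here’s a trivial sketch (using the obvious truth-table representation of a gate, not anything from our paper’s formalism) checking, for example, that Fredkin conserves Hamming weight exactly, that a “NOTNOT” gate (NOT applied to two bits at once, my name just for this toy) conserves it only mod 2, and that CNOT conserves neither.

```python
from itertools import product

def conserves_weight(gate, k, mod=None):
    """Does a k-bit reversible gate (a permutation of bit tuples) conserve
    Hamming weight (mod=None) or Hamming weight modulo `mod`?"""
    for x in product((0, 1), repeat=k):
        wx, wy = sum(x), sum(gate[x])
        if (wx != wy) if mod is None else (wx % mod != wy % mod):
            return False
    return True

FREDKIN = {(a, b, c): (a, c, b) if a else (a, b, c) for a, b, c in product((0, 1), repeat=3)}
NOTNOT = {(a, b): (1 - a, 1 - b) for a, b in product((0, 1), repeat=2)}
CNOT = {(a, b): (a, a ^ b) for a, b in product((0, 1), repeat=2)}

for name, gate, k in [("Fredkin", FREDKIN, 3), ("NOTNOT", NOTNOT, 2), ("CNOT", CNOT, 2)]:
    print(f"{name}: conserves weight: {conserves_weight(gate, k)},"
          f" weight mod 2: {conserves_weight(gate, k, mod=2)}")
```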

  12. Geoffrey Irving Says:

    Scott: Thanks for the detailed answer!

  13. Sniffnoy Says:

    Scott: Thank you for the Hamming weight example (and the others), that was exactly the sort of thing that still remained for me to figure out! 🙂 So it would seem that we do get a mapping from this lattice to Post’s Lattice (Lite), it’s just not injective, because adding the transition between varying n destroys a lot of information…

  14. Scott Says:

    Sniffnoy #13: Yes, it’s neither injective nor surjective.

  15. Sniffnoy Says:

    I think there’s a slight error in the statement of Theorem 82 — or rather, the combination of it with the statement of Theorem 3; at the least, it’s stated confusingly. If you allow loose ancillae, CNOTNOT should collapse with CNOTNOT+NOT. (Because obviously with loose ancillae you can get NOT from CNOTNOT.) However, Theorem 82 says only that the classes of the form “C+NOTNOT” collapse with something else, which I took to mean the classes listed under case 10 of theorem 3. But CNOTNOT isn’t listed under case 10, it’s a separate case, case 6. So Theorem 82 currently appears to say that CNOTNOT and CNOTNOT+NOT don’t collapse. (I noticed this because I was drawing the lattice diagram for the loose ancillae lattice, and included the two classes separately, and ended up with a poset that wasn’t a lattice at all! Which is clearly nonsense. 🙂 )

  16. Sniffnoy Says:

    Some further silly notes on this: (note whenever I say “map” I mean “map of posets”)

    There’s a map from the reversible lattice (hereafter RL) to Post’s Lattice Lite (hereafter PLL), but also there’s one the other way. Both these factor through the loose lattice (hereafter LL), so I’ll consider them as maps between LL and PLL rather than RL and PLL.

    Note, by the way, that LL isn’t just a subset of RL, it’s actually a sublattice. I’m not sure whether or not this is obvious; it’s certainly obvious for meets, it may be obvious for joins but I’m not certain. But just by the diagrams it’s apparently true. Moreover, the map from RL to LL also is a lattice map — it obviously preserves joins, I have no idea why it preserves meets other than evidently it does by the diagrams — which means LL is a “lattice retract” of RL. This is in contrast to the case of PLL sitting inside Post’s Lattice (hereafter PL); PLL is a sublattice of PL, but the obvious map from PL to PLL fails to be a lattice map (it doesn’t preserve meets).

    Neither the map from PLL to LL, nor from LL to PLL, is a lattice map though. The former fails to preserve joins, the latter fails to preserve meets.

    …I’m not really sure anymore where I was going with this, but I know that earlier today I decided I had to figure out just what these maps do, and this was the part of what resulted that seemed worth posting! So, if anyone cares, here’s some facts about some natural maps between some important posets.

    (And of course also the map from LL to PLL actually factors through PL, but I decided to not take the time to figure out what the map from LL to PL looks like, because I have even less of a handle on that…)

  17. Nick Says:

    If you’re looking for topics to blog about, I have a request: could you do a post about this new “PDQP” complexity class? In particular, could you address the following questions in reasonably nontechnical terms?

    1) What is PDQP?
    2) How does it differ from DQP?
    3) What’s the problem with DQP?
    4) How, if at all, does all of this affect the DQP discussion in chapter 13 of QCSD?

  18. Scott Says:

    Sniffnoy #15: Yes, you’re right, Theorem 82 is stated a bit confusingly, since for it to hold, one needs to interpret <CNOTNOT> as a “C+NOTNOT” class. We can fix that and acknowledge you—but should we use your real name (which I happen to know), or just “Sniffnoy the commenter”? 🙂

  19. Scott Says:

    Nick #17: Good questions! How about if I just answer them in this comment section?

    1) Informally, PDQP is the class of problems solvable in polynomial time by a quantum computer, if we assume the completely unphysical ability to make “non-collapsing measurements” (measurements that still leave the state there to be measured again), in addition to ordinary collapsing measurements. For a formal definition, see this paper by Bouland, Fitzsimons, Lee, and myself. The interesting thing about PDQP is that it contains Statistical Zero Knowledge (and thus, graph isomorphism and various other problems that we don’t know how to solve with a quantum computer), but relative to an oracle, it doesn’t contain NP-complete problems. It thus gives one of the only known examples of a complexity class that generalizes BQP, but only “slightly” so.

    2) DQP, which I defined in 2005, is the class of problems solvable in polynomial time by a quantum computer, if you assume the unphysical ability to see the entire history of a hidden variable, in any hidden-variable theory that satisfies a couple reasonable axioms. (See my paper for the formal definition.) The relation between DQP and PDQP is a subtle question—indeed, just while writing this comment, I realized there were basic questions about how they relate that I wanted to think about more!—so let me defer those questions to a later comment, and maybe talk them over with my coauthors.

    3) There’s not really any “problem” with DQP, except just that it’s really hard to prove some of the statements about DQP that we think are true and would like to prove! As one example, the best upper bound on DQP that I was able to prove was EXP; by contrast, we were able to show that PDQP is contained in BPP^PP, which is a much better bound. As a second example, as I discussed in this post (and as we discuss in detail in the new paper), I had claimed to give an oracle relative to which NP is not in DQP—but while I still think it’s possible to do that, there was an error in my proof. In the new paper, we manage to give a correct proof that there’s an oracle relative to which NP is not in PDQP.

    4) The second printing of QCSD onwards discusses the error in my claimed proof from 2005 that there’s an oracle relative to which NP is not in DQP, and everything it says about the subject is (I believe!) now correct. The book doesn’t talk about PDQP, but that’s only because, at the time it came out (2013), Adam Bouland and Mitchell Lee hadn’t yet defined PDQP, or started doing the research that led to our newer paper! It’s a shame that research can’t just stay put to prevent books from getting out of date. 🙂 Anyway, if there’s a second edition of QCSD, maybe I’ll add something about PDQP, which is another wrinkle in the story.

  20. Scott Says:

    (addendum) As far as we can currently tell, the classes DQP and PDQP might be incomparable—we don’t know how to show containment of either one in the other. This is a great question, which we hadn’t noticed before!

  21. David Speyer Says:

    Fun paper!

    Any chance you could explain why classifying the quantum gates is so out of reach? Out of curiosity, might it be easier to just classify the corresponding Lie algebras? (To spell the last comment out: As I understand it, a family of gates should give a subgroup G_n of GL(V^{\otimes n}) for each n. A good warm-up would be to describe the possible Lie algebras \mathfrak{g}_n; this is basically orthogonal to the sort of discrete computations you are doing in this paper.)

    Not that I’m saying I know how to do it! Just that it sounds like a fun problem, and it isn’t obvious to me that it is intractable.

  22. Scott Says:

    David #21: Thanks!

    I should say that I completely agree that classifying quantum gates is a wonderful project for people to work on now! Even small special cases have already turned out to be pretty complicated, which was the basis for our (very possibly wrong) speculation that the full problem “seems out of reach at present.” But regardless of how hard the full problem turns out to be, it’s very clear that you can make definite, useful progress. (Indeed, we suggest four possible warmup cases in the open problems section, which can also be combined with each other to get “even easier” special cases.)

    Classifying the possible Lie algebras seems very closely related to classifying local Hamiltonians, which was one of the special cases we listed (maybe it’s even the same thing in different language?). One could of course start by classifying the 1- and 2-qubit Hamiltonians. Here a natural conjecture would be that there’s a very sharp dichotomy, and that once you have pretty much any nontrivial 2-qubit interaction Hamiltonian, you can use it to do universal quantum computing (possibly in an encoded subspace, and possibly using special initial states). I should mention that two students in my group have been working on the commuting special case of this problem, which they’ve already found complicated (though they can now handle everything except a few measure-0 subsets of commuting 2-qubit Hamiltonians).

    This is probably a good place to mention that, when it comes to classifying quantum gates, there are many decisions you need to make about just how fine-grained a classification you want. For example, do you care about which unitaries can be exactly generated, or only which ones can be approximated to arbitrary precision? Do you assume that G^{-1} is available whenever G is available? (Unlike in the discrete case, this could actually matter if we care about exact generation, or even Solovay-Kitaev-style efficient approximation.) Do you allow arbitrary initial states? Intermediate measurements? (Note: if you allow both, then the stabilizer quantum circuits—i.e., the most interesting non-universal class that we know about—won’t be part of your classification, since it can be boosted up to universality with those resources.)

    Of course, since we don’t currently have a classification under any of these assumptions, an obvious place to start would be whichever ones make the proof easiest. Anyway, if you’d like to try your hand at this, don’t let me stop you!

  23. Sniffnoy Says:

    Scott #20: I’m confused, I thought part of the point of PDQP was that it was contained in DQP?

    Regarding my own comment #16: Having now actually tried it, I have to conclude that while you can factor the map from the “loose lattice” to Post’s Lattice Lite through Post’s Lattice, you probably shouldn’t. Not having 0s and 1s available means the rules for generating aren’t very compatible, and as a result you can’t do it based on generating sets — you actually have to consider all the functions in the class. Presumably doable if you’re determined to, for whatever reason, but painful and probably not useful. (Also, I wrote that comment while extremely tired, sorry if there are parts that are not very clear.)

    Scott #18: Hey, if you think that’s significant! 😀 Uh, real name please if you’re going to do that. I mean it’s not exactly a secret, and I’m pretty sure I linked to my personal home page back on the blog-book thread…

  24. David Speyer Says:

    Yes, I think the Hamiltonians question is the same as the Lie algebras one. I must admit, I was thinking about closed subgroups, which is the same as approximation to arbitrary accuracy. Presumably, classifying all gates exactly would be absurd in the same way that classifying all subgroups of $SU(2)$, with no topological condition, is absurd. I see you made the same choice in your beam splitter paper.

    I’m reading Bouland-Aaronson (1310.6718) now to get an idea of the issues involved. Is that a good place to start?

    Disclaimer: I very often get excited about a project for a few days, then give up and move on. But I’m excited for the moment!

  25. Scott Says:

    David #24: Yes, my paper with Adam Bouland seems like an excellent place to start. Adam and Lucy Zhang have some further stuff that they’re currently working on, and would probably be happy to discuss via email.

    There are actually some pretty interesting recent results (typically involving algebraic number theory) about exact synthesis of unitary transformations; see for example here. A major reason for the interest is that, for the special quantum gate sets for which you can use algebraic number theory to understand which unitaries can and can’t be represented exactly, you can use more algebraic number theory to get procedures for approximating arbitrary unitaries that blow the parameters of the Solovay-Kitaev Theorem out of the water. (Specifically, you can get within an additive constant of the information-theoretic limit of log(1/ε) gates, as compared to the Solovay-Kitaev bound, which was log^3.97(1/ε)—a huge difference in practice.) See here for example.
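
    (If you want to see just how huge, here’s a two-line back-of-the-envelope comparison; it ignores all constant factors and the base of the logarithm, so only the growth rates mean anything.)

```python
import math

for eps in (1e-2, 1e-6, 1e-10):
    L = math.log2(1 / eps)   # constants and log base ignored; growth rate is the point
    print(f"eps = {eps:g}:  ~log(1/eps) ≈ {L:.0f} gates   vs   "
          f"~log^3.97(1/eps) ≈ {L ** 3.97:,.0f} gates")
```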

    So I wouldn’t say that the exact classification problem is “absurd”—I would merely say that it seems way more complicated even than the approximate classification problem, making the latter a very obvious place to start!

  26. Scott Says:

    Sniffnoy #23:

      I’m confused, I thought part of the point of PDQP was that it was contained in DQP?

    No, the point is that it’s a variation on DQP that captures many of the same intuitions—e.g., both classes contain SZK and BQP and probably fail to contain NP for very similar reasons—while also being easier to reason about. If it were also contained in DQP, that would be a further nice property, but now that I’ve (belatedly!) explicitly considered that question, I don’t see how to prove it.

  27. Alex V Says:

    Scott, due to the paper about classification I “revisited” stabilizer circuits and encountered a technical problem. Maybe you could comment on that? There is a paper hep-th/9905212, based on the earlier Blichfeldt book, with a classification of finite subgroups of SU(4), but I do not recognize there the finite subgroup corresponding to stabilizer circuits on two qubits. Either I am missing something, or the classification is not complete. Did you read that work?

  28. Nick Says:

    Scott #19, 20: Thanks, that helps.

    Based on the very hazy terms in which I understand quantum computing, it would surprise me if those two classes were incomparable. QM is an “island in theory-space”, right? So BQP should be hard to extend in an interesting way. But here we have two purportedly distinct, natural, not-too-powerful extensions? How many such extensions could there possibly be?

  29. Along the path | P versus NP versus Tiny Strings Says:

    […] there is a new theorem regarding the polynomial hierarchy. Its relationship to P vs NP is, according to the famous Scott A., like the relationship between the first attempts to prove […]

  30. Sniffnoy Says:

    OK, this is probably getting silly now, but the map from the reversible lattice to the loose lattice has to preserve meets because each of the equivalence classes has a smallest element. (One could then ask if there’s any particular reason why that’s true, but I doubt the sense of such a question.) Thanks to John Wiltshire-Gordon for pointing me towards that general line of reasoning.

  31. Laura Says:

    Scott #22: I wanted to point out that it is known that almost any 2-qubit Hamiltonian is universal. In fact, no ancillary space is needed. This, modulo some caveats and hand waving, was first shown by Deutsch-Barenco-Ekert in quant-ph/9505018 and by Lloyd in PRL 75(2):346–349. We later classified the universal 2-qubit Hamiltonians in 1004.1645. As for 2-qubit unitaries, we recently added an appendix showing how our classification easily implies universality of almost any 2-qubit unitary. In essence: the unitary exp(-iH) is universal as long as H is universal and the eigenvalues of H are rationally independent.

  32. Scott Says:

    Laura #31: Thanks very much! I forwarded your comment to my student Adam Bouland, and he confirmed my vague impression: that yes, he was certainly well aware of your and others’ nice results on universality for almost all Hamiltonians and unitaries. From his standpoint, the entire difficulty now lies in classifying the “remaining 0%”! That’s what all the Lie-algebra machinery is needed for. If you have ideas about how to make progress here, please email us; we’d love to discuss it.

    By analogy, relatively early in my classification project with Daniel Grier and Luke Schaeffer, we had managed to classify “almost all” n-bit classical reversible gates (i.e., 100% of them, in the limit as n goes to infinity). But handling the remaining 0% then took us several more months. 🙂

  33. Laura Says:

    Scott #32: Thanks for your reply!

    I agree that there is no classification of the nonuniversal 2-qubit unitaries and that this seems too basic a question to be left unresolved. However, one of the things I was trying to point out in my previous comment is that we do have such a classification for 2-qubit Hamiltonians at least in regards to “in place” universality. In particular, H is nonuniversal iff 1) H is traceless or 2) H shares an eigenvector with the swap gate or 3) H is similar* to a local Hamiltonian (*some details missing). Unfortunately we could not resolve which non-universal Hamiltonians become universal in the presence of ancilla, which perhaps is a more natural scenario.

    I haven’t thought about the universality problem for a while now, but I’ll make sure to email you and Adam if anything useful comes to mind. I am looking forward to the developments to come 🙂

  34. More Reasons for Small Influence | Combinatorics and more Says:

    […] Rossman, Rocco Servedio, and Li-Yang Tan. Here are blog reports on Computational complexity, on the Shtetl Optimized, and of Godel Lost letter and P=NP. Let me mention one of the applications: refuting a 1999 […]