Open Problems Related to Quantum Query Complexity

Way back in 2005, I posed Ten Semi-Grand Challenges for Quantum Computing Theory, on at least half of which I’d say there’s been dramatic progress in the 16 years since (most of the challenges were open-ended, so that it’s unclear when to count them as “solved”). I posed more open quantum complexity problems in 2010, and some classical complexity problems in 2011. In the latter cases, I’d say there’s been dramatic progress on about a third of the problems. I won’t go through the problems one by one, but feel free to ask in the comments about any that interest you.

Shall I push my luck as a problem-poser? Whether I should or shouldn’t, I have.

My impetus, this time around, was a kind invitation by Travis Humble, the editor-in-chief of the new ACM Transactions on Quantum Computing, to contribute a perspective piece to that journal on the occasion of my ACM Prize. I agreed—but only on the condition that, rather than ponderously pontificate about the direction of the field, I could simply discuss a bunch of open problems that I wanted to see solved. The result is below. It’s coming soon to an arXiv near you, but Shtetl-Optimized readers get it first.

Open Problems Related to Quantum Query Complexity (11 pages, PDF)

by Scott Aaronson

Abstract: I offer a case that quantum query complexity still has loads of enticing and fundamental open problems—from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.

Some of the problems on my new hit-list are ones that I and others have flogged for years or even decades, but others, as far as I know, appear here for the first time. If your favorite quantum query complexity open problem, or a problem I’ve discussed in the past, is missing, that doesn’t mean that it’s been solved or is no longer interesting—it might mean I simply ran out of time or energy before I got to it.

Enjoy! And tell me what I missed or got wrong or has a trivial solution that I overlooked.

65 Responses to “Open Problems Related to Quantum Query Complexity”

  1. Sniffnoy Says:

    Hm, so in problem #4, what about R(f) vs D(f)? Obviously that’s a purely classical problem, but it is the obvious third question there which is omitted. What’s known about that?

  2. Sniffnoy Says:

    I suppose my question about Problem #4 extends to Problem #5 as well, comparing deg~(f) to R(f) and D(f)… (again, those are purely classical, but…)

  3. Shalev Says:

    Nice list! Here are a few additions I would propose:

    1. Is there a query-to-communication lifting theorem for quantum query complexity? We now have such a theorem for deterministic, randomized, and zero-error randomized algorithms (which lift to the corresponding communication measures). A version for quantum query complexity would be of enormous help in proving quantum communication lower bounds. Currently, we only know how to lift the polynomial method, but not the adversary methods.

    2. What is the quantum query complexity of triangle-finding (and related graph problems)? This one is particularly interesting in that it might force us to introduce new techniques for quantum lower bounds; the current techniques can’t seem to get a non-trivial lower bound (I mean, in theory the negative-weight adversary bound is tight, but in practice it’s only been successfully used a handful of times). It would also be interesting if the reason we can’t prove a lower bound is because quantum algorithms actually *can* find triangles in a number of queries that is linear in the number of nodes. Would that also apply to finding other constant-sized subgraphs?

    3. Is there a super-polynomial quantum speedup on total function search problems? By a total function search problem, I mean a set of small certificates (i.e. partial assignments in which polylog(n) bits are revealed) with the property that each n-bit Boolean string is consistent with at least one certificate; the task is then to find such a certificate by querying an input string. It is known from like the 80s that there is an exponential separation between deterministic and randomized query complexity in this setting, but I believe the randomized vs. quantum version of the problem is open.

  4. fred Says:

    “The key here is that a single query can access multiple bits of X, one in each branch of a superposition state.”

    By query, do we mean a measure of the state (or a substate), which means the superposition is then lost? (the wave function collapses)?

    ” f : {1,…,n} → {1,…,m} (where n is even and m ≥n), and are asked to decide whether f is 1-to-1 or 2-to-1″

    By 1-to-1, do we mean that all input points in n have a distinct output point in m?
    So, to show it’s 2-to-1, it’s enough to find two input points with the same output point?

  5. Gerard Says:

    Scott:

    I found this item from your 2011 post quite interesting, since it’s along the lines of something on my own very vague to-do list:

    > 5. Prove explicit lower bounds on the number of arithmetic operations needed to compute the permanents and determinants of 3×3 and 4×4 matrices. In the 4×4 case, can you obtain a separation between the permanent and the determinant?

    I have two questions:

    1) Has someone done this yet ?

    2) There is a broken link on the “explicit lower bounds” text to some document on your website. Is there a way to restore that ?

  6. Gerard Says:

    Scott:

    Also from the 2011 (classical) list:

    > 2. Build a public library of 3SAT instances, with as few variables and clauses as possible, that would have noteworthy consequences if solved. (For example, instances encoding the RSA factoring challenges.) Investigate the performance of the best current SAT-solvers on this library.

    In my opinion RSA is much too hard because that circuit will generate a CNF with 10^5 to 10^6 variables, which seems to be well beyond the range that current SAT solvers can deal with.

    What I’d be interested in seeing is a list of small 3SAT instances (or families of instances), say < 1k variables but preferably < 100 and preferably satisfiable, that are difficult for all current solvers. From what I’ve seen such instances are rare if they exist at all.

  7. Nick Drozd Says:

    Problem #3 from “Projects aplenty” says: “Find an explicit n (the smaller the better) for which you can prove that the value of BB(n) (the nth Busy Beaver number) is independent of ZF set theory.” Did your work with Adam Yedidia actually spring from that blog post?

  8. Jack Galler Says:

    The classical one about cellular automata sounds very interesting to me. Any progress on that?

  9. fred Says:

    Gerard #6

    A while back I was looking at
    https://www.cs.ubc.ca/~hoos/SATLIB/benchm.html

  10. Scott Says:

    Sniffnoy #1 and #2: See Table 1 in my paper with Ben-David, Kothari, Rao, and Tal for an up-to-date list of the best known relations and separations between query complexity measures for total Boolean functions.

    For D vs. R (where R means bounded-error randomized query complexity), the best known gap is now quadratic while the best known relation remains cubic, so there’s indeed an open problem there. For D vs. R0 (where R0 means zero-error randomized), by contrast, we now know that the best possible gap is quadratic.

  11. Scott Says:

    Shalev #3: Thanks so much!! I especially love your third problem, which I somehow can’t remember ever having thought about. What’s the total function search problem that exhibits a superpolynomial query complexity separation between deterministic and randomized?

  12. fred Says:

    Gerard #6

    “What I’d be interested in seeing is a list of small 3SAT instances […] that are difficult for all current solvers. “

    Is that a thing? I would imagine that different solvers could perform very differently on the same instances, otherwise, we would be able to tell ahead of time what instances are harder than others, and I don’t think it’s possible. But maybe I’m totally wrong about this…

    Also, I don’t think anyone can be centralizing results on every possible solver out there.
    It also all depends what we mean by “it’s easier for solver 1 than for solver 2”.

  13. Scott Says:

    fred #4: No, ordinarily a quantum query is a unitary transformation that doesn’t collapse the state at all. It looks like this:

    \( \displaystyle \sum_{x,w} \alpha_{x,w} \, |x,w\rangle \;\longrightarrow\; \sum_{x,w} \alpha_{x,w} \, |x,\, w \oplus f(x)\rangle \)

    where x is the input to be queried, f(x) is the answer to the query, w is a “workspace register” into which the answer gets reversibly written, and \( \alpha_{x,w} \) is the amplitude of the basis state \( |x,w\rangle \).

    Yes, k-to-1 means that every output has k preimages. So to prove that a function f is 2-to-1 rather than 1-to-1, it suffices to find any collision pair (an x≠y such that f(x)=f(y)).
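
    To make the “no collapse” point concrete, here’s a toy NumPy sketch (purely illustrative; the 4-value f and all names are made up) of a standard XOR query applied to a uniform superposition over x. Nothing is measured, and the output is still a superposition, just with f(x) written into the workspace in each branch.

    ```python
    import numpy as np

    # Toy input: f maps a 2-bit index x to one bit (think of f(x) as the x-th bit of the input X).
    f = {0: 0, 1: 1, 2: 1, 3: 0}

    N_X, N_W = 4, 2          # 4 query values x, one workspace qubit w
    dim = N_X * N_W

    # Standard oracle: U_f |x, w> = |x, w XOR f(x)>.  It's a permutation matrix,
    # hence unitary; applying it never measures or collapses anything.
    U_f = np.zeros((dim, dim))
    for x in range(N_X):
        for w in range(N_W):
            U_f[x * N_W + (w ^ f[x]), x * N_W + w] = 1.0

    # Query all four x's "at once": uniform superposition over x, workspace set to 0.
    state = np.zeros(dim)
    for x in range(N_X):
        state[x * N_W + 0] = 0.5

    out = U_f @ state

    # Each surviving branch |x, w> now has w = f(x); the amplitudes are untouched.
    for x in range(N_X):
        for w in range(N_W):
            if abs(out[x * N_W + w]) > 1e-12:
                print(f"|x={x}, w={w}>  amplitude {out[x * N_W + w]:+.2f}")
    print("norm preserved:", np.isclose(np.linalg.norm(out), 1.0))
    ```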

  14. Scott Says:

    Gerard #5: Just for you, I’ve done an archeological excavation of my inbox. While I don’t believe they ever published anything about it, back in 2014 then-MIT-students Charles Fu and Zipei Nie did a project about this problem. Let me quote from Charles’s conclusions, since I can’t improve on their clarity.

      1) we showed that the minimal algebraic formula size for the 3×3 determinant (and permanent) is indeed 14, as expected; this is including division.

      2) we showed that the minimal division-free algebraic formula size for the 4×4 determinant (and permanent) is at least 35. Moreover, we suspect that the exact bound is 47 for both determinant and permanent; that is, 4×4 is unlikely to be the separator between determinant and permanent.

      3) we also improved the constant in Kalorkoti’s \( \Omega(n^3) \) bound; in particular, we improved his bound from \( \tfrac{1}{46}(n^3 - n^2) \) to \( n^3/4 - n^2/2 + n/2 - 1 \).

    Note that all this is for formulas rather than circuits (i.e., the fanout of each gate is restricted to 1).

    I’m not aware of any further progress—in particular, any concrete separation between permanent and determinant (which, per the above, would presumably require looking either at circuits for 4×4, or at formulas for 5×5 or above). Anyone who knows of such progress should let me know! My impression, though, is that what work there’s been on this problem has generally concentrated on, e.g., the smallest m such that an n×n permanent can be embedded into an m×m determinant, which is asymptotically related to the formula complexity of the n×n permanent (and is “cleaner” in many ways) but is not exactly the same.

    As for the link, sorry, it should be fixed now (it was just an http vs. https issue).

  15. Scott Says:

    Gerard #6: I don’t know that anyone ever took me up on my challenge to create a public library of “3SAT instances that would have important consequences if solved”! But I could ask Marijn Heule, a world expert on applied 3SAT-solving who became a good friend years after I wrote that post.

  16. Scott Says:

    Nick Drozd #7:

      Problem #3 from “Projects aplenty” says: “Find an explicit n (the smaller the better) for which you can prove that the value of BB(n) (the nth Busy Beaver number) is independent of ZF set theory.” Did your work with Adam Yedidia actually spring from that blog post?

    I’d rather say that the blog post and my work with Adam Yedidia sprang from a common source—namely, me wanting an answer to that question! 😀

  17. Scott Says:

    Jack Galler #8:

      The classical one about cellular automata sounds very interesting to me. Any progress on that?

    Absolutely yes! My then-PhD-student Luke Schaeffer had a breakthrough on the problem back in 2014, when he gave an explicit example of a physically universal CA. See this blog post for more.

  18. Shalev Says:

    Scott #11, the paper is here:

    https://epubs.siam.org/doi/pdf/10.1137/S0895480192233867

    (The conference version was from 1991, not the 80s as I remembered.)

    Briefly, the construction is as follows. Fix a bipartite graph with the following properties: (1) it has n nodes on the left and 2n nodes on the right, (2) it has constant degree, (3) it is an expander. The inputs to the search problem will have one bit per edge of this bipartite graph. Interpret such an input string as specifying a subgraph of the original bipartite graph, where if an input bit is 1 it means the edge is kept and if it is 0 it means the edge is deleted. The goal is to output the name of a node whose degree in the subgraph is not 1 (i.e. it is either 0 or at least 2).

    Now:

    – Due to the constant degree of the original graph, a valid output has a constant-sized certificate.

    – As there are n nodes on the left and 2n nodes on the right, one can show that a constant fraction of all nodes in the subgraph will have degree not equal to 1, so there is a trivial constant-query randomized algorithm (sketched below).

    – Due to the expansion property, it is possible to show that the deterministic query complexity is Omega(n).
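
    Here’s a minimal sketch of that randomized algorithm (my own illustration; I use a random constant-degree bipartite graph as a stand-in for the explicit expander, which is only needed for the deterministic lower bound, and all names are made up):

    ```python
    import random

    def make_bipartite_graph(n, d=3, seed=0):
        """n left nodes, 2n right nodes; each left node gets d random right neighbours.
        (A stand-in for the explicit constant-degree expander in the construction.)"""
        rng = random.Random(seed)
        edges = []                                     # edge index -> (left, right)
        incident = {("L", u): [] for u in range(n)}
        incident.update({("R", v): [] for v in range(2 * n)})
        for u in range(n):
            for v in rng.sample(range(2 * n), d):
                e = len(edges)
                edges.append((u, v))
                incident[("L", u)].append(e)
                incident[("R", v)].append(e)
        return edges, incident

    def find_bad_node(x, incident, rng):
        """x[e] = 1 means edge e is kept in the subgraph.  Repeatedly pick a random
        node, query only its O(1) incident edge bits, and output it if its degree
        is not 1.  Since a constant fraction of nodes always qualify, O(1) rounds
        suffice in expectation."""
        nodes = list(incident)
        while True:
            node = rng.choice(nodes)
            degree = sum(x[e] for e in incident[node])   # constant number of queries
            if degree != 1:
                return node

    # Tiny usage example with a random input string.
    edges, incident = make_bipartite_graph(n=8)
    rng = random.Random(1)
    x = [rng.randint(0, 1) for _ in edges]
    print("witness node:", find_bad_node(x, incident, rng))
    ```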

  19. M Says:

    On time/space tradeoffs for collision search: using Grover to find collisions is overkill – we can match the query and space complexity classically. The birthday paradox naively already matches the query complexity, and we can even improve the space to O(log n) classical bits using Pollard Rho style techniques.

  20. Scott Says:

    Shalev #18: That’s really nice!! Thanks for telling me a result that I should’ve learned a long time ago.

  21. Scott Says:

    M #19: Thanks for that observation! Do Pollard Rho techniques work even if the range of the function is much larger than n?

  22. Gerard Says:

    fred #9

    That page is nice but there’s no indication of hardness for any of these instances.

    I tried downloading a couple of the sets and running Z3 and/or minisat on some instances. Oddly, one of the sets consisted entirely of files with non-standard DIMACS content that neither solver could read.

    On some examples from another set of 100 variable instances both solvers returned a solution immediately.

  23. Gerard Says:

    fred #12


    > “What I’d be interested in seeing is a list of small 3SAT instances […] that are difficult for all current solvers. “

    > Is that a thing? I would imagine that different solvers could perform very differently on the same instances, otherwise, we would be able to tell ahead of time what instances are harder than others, and I don’t think it’s possible. But maybe I’m totally wrong about this…

    > Also, I don’t think anyone can be centralizing results on every possible solver out there.
    > It also all depends what we mean by “it’s easier for solver 1 than for solver 2”.

    If you look at the SAT competitions I believe you can find instances that no solver in the competition could solve. However these tend to be quite large instances with at least 10000 variables or so, I think. What doesn’t seem to happen much is people looking specifically for small instances that are hard under this kind of definition.

    The search space for an instance with 100 variables is already enormous compared to any resources you could hope to bring to bear on the problem, so if 3-SAT is inherently of exponential complexity, wouldn’t one expect there to be small instances that no solver can solve?

  24. Mo Nastri Says:

    Scott #10: I thought you’d finally coauthored a paper with Terry Tao, but when I clicked on the link I realized you just misspelled Tal. Bummer for my “favorite theorists crossover” fantasies…

  25. Scott Says:

    Mo Nastri #24: Thanks, fixed! (But even if he’s not Terry Tao, Avishay Tal is a spectacular CS theorist who I’m honored to coauthor with, and the crucial insight that yielded \( D(f) = O(Q(f)^4) \) in that paper—resolving one of my favorite open problems for 20 years—was his.)

  26. fred Says:

    Gerard

    “The search space for an instance with 100 variables is already enormous compared to any resources you could hope to bring to bear on the problem, so if 3-SAT is inherently of exponential complexity, wouldn’t one expect there to be small instances that no solver can solve?”

    It’s really not obvious because simply re-arranging the list of variables can make a hard instance suddenly much easier for a given solver. Of course this specific “re-arranging” of variable order is in itself an NP-hard task, but I’m just saying that solvability is also driven by specific formatting (var order) of the same instance.

  27. M Says:

    Scott #21: that’s a good question, and I’m not sure the case of large output has specifically been analyzed, but I think it should hold.

    Suppose you have a 2-to-1 function \(H\). The basic idea is to come up with your own uniformly random function f which maps outputs of \(H\) back to the domain. Then you start at a random point \(x_0\), and keep computing \(x_i = f(H(x_{i-1}))\). After about \(O(\sqrt{n})\) steps, the sequence of \(x_i\) will loop back on itself. You can find this point in small space using the two-pointer (tortoise-and-hare) trick. The result is a collision for the function \(f(H(x))\). Usually you would try to have \(f\) be injective, so that any collision for \(f(H(x))\) is actually a collision for \(H\). If \(H\) is expanding, this is not possible. However, if \(f\) is actually a uniform random function, then you should be able to argue that your collision for \(f(H(x))\) is nevertheless a collision for \(H\) with constant probability.

    The problem with a random \(f\), of course, is that you need to write it down, which takes huge space. But you can use a pseudorandom function. The pseudorandom function would need to be exponentially secure, if the key size is to be logarithmic in n. Alternatively, if you assume \(H\) is a random 2-to-1 function, then the randomness of \(H\) should be sufficient.
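
    To make the two-pointer walk concrete, here’s a toy Python sketch (my own illustration; the 2-to-1 “hash” H and the SHA-based stand-in for the random re-mapping f are chosen only to keep it self-contained). It finds a collision of \( g = f \circ H \) using O(1) stored values; as discussed above, for a random-enough f that is a collision of \(H\) with constant probability.

    ```python
    import hashlib

    def collision_via_rho(H, f, x0, max_steps=10**6):
        """Floyd ('tortoise and hare') cycle detection on the map g(x) = f(H(x)).
        Returns x != y with g(x) == g(y), using O(1) memory."""
        g = lambda x: f(H(x))
        tortoise, hare = g(x0), g(g(x0))
        steps = 0
        while tortoise != hare and steps < max_steps:   # phase 1: detect the cycle
            tortoise, hare = g(tortoise), g(g(hare))
            steps += 1
        # phase 2: walk both pointers one step at a time from x0 and the meeting
        # point; just before they coincide, the two previous values collide under g
        prev_a, prev_b = x0, tortoise
        a, b = g(x0), g(tortoise)
        steps = 0
        while a != b and steps < max_steps:
            prev_a, prev_b, a, b = a, b, g(a), g(b)
            steps += 1
        return (prev_a, prev_b) if prev_a != prev_b else None

    # Toy 2-to-1 "hash" on {0, ..., 2n-1}, and a pseudorandom re-mapping back to the domain.
    n = 10**4
    H = lambda x: x % n
    f = lambda y: int(hashlib.sha256(str(y).encode()).hexdigest(), 16) % (2 * n)

    pair = collision_via_rho(H, f, x0=12345)
    if pair:
        a, b = pair
        print(a, b, "true collision of H:", H(a) == H(b))
    ```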

  28. Gerard Says:

    fred #26

    What you say may be true but it should apply equally well to instances with 10000+ variables. It’s not like there are 100! different SAT solvers around, I doubt there are more than a few dozen that have been entered into the competitions.

  29. Luowen Says:

    Perhaps a stupid question, but for problem 11, couldn’t one do the following: first copy |x> in the computational basis (i.e. CNOT if binary), and then invoke the erasing oracle to simulate the standard oracle? Formally, |x>|0>|a> → |x>|x>|a> → |x>|f(x)>|a> → |x>|f(x)>|a+f(x)> → |x>|x>|a+f(x)> → |x>|0>|a+f(x)>. The fourth arrow assumes access to the inverse of the erasing oracle; is that the issue? Is there any reason to assume that the algorithm should not be able to evaluate the inverse?

  30. Scott Says:

    Luowen #29: Exactly, with erasing oracles, we don’t automatically assume access to inverses (since maybe there are interesting separations that go away once you have inverses). I agree that the problem I mentioned would be trivial if inverses were available.

  31. fred Says:

    Gerard #28

    We also need to define what we mean by “instances that no solver can solve”.
    Competitions may have time limits. But maybe if you run a solver for a day or two, a solution is found. Or maybe no matter how long you run any solver, they all run out of memory or don’t seem to finish at all in a reasonable time.
    Another point on which I’m not clear: among instances that have solutions, are all instances with just one solution harder to solve than instances with many solutions? That seems reasonable.

    Then, are instances with no solution harder or easier than instances with one solution?

    At the minimum, brute force on a problem that has no solution will always take longer than brute force on problems that have solutions (because the probability that you’ll test the unique solution at the very end of a brute-force run is very low).

    What I’m getting at is that hard instances with solutions seem more interesting to collect than hard instances with no solution, but if all solvers fail (run out of resources, or the implementer just runs out of patience and pulls the plug) on a given instance, brute force won’t work either to figure out whether it has a solution or not!

  32. Gerard Says:

    fred #31

    > But maybe if you run a solver for a day or two, a solution is found.

    OK, but I’d still like to see a 100 variable instance that takes more than a second on my laptop.

    > Another point on which I’m not clear is that among instances that have solutions, are all instances with just one solution harder to solve than instances with many solutions? that seems reasonable.

    I doubt that’s true in as much generality as you state here. There could be formulae generated from easy problems (say, ones with a lot of locality, where the underlying problem is actually in P) that have only one solution but are still easy because of the way the constraints propagate. But typically I would expect it to be true for problems that are of a comparable level of intrinsic hardness, apart from the size of the solution set.

    > Then, are instances with no solution harder or easier than instances with one solution?

    One thing that is clear is that you can’t expect to prove unsatisfiability by brute force, yet solvers are often capable of proving a formula UNSAT (and for some of them generating a certificate of unsatisfiability). I think that involves finding short resolution proofs that generate the empty clause (i.e. a clause with no literals that therefore can never be satisfied) as a logical consequence of the other clauses.

    Beyond that I don’t know a lot about the details of how the solvers work except that the two main algorithms are called DPLL and CDCL (conflict driven clause learning).

  33. Gerard Says:

    fred #31

    Sorry, I had one more thing to add.

    > What I’m getting at is that hard instances with solutions seem more interesting to collect than hard instances with no solution

    I think there is quite a bit of interest in both types of problems. In fact if your goal is to prove theorems UNSAT may be of greater interest than SAT. I suspect that if you try to compile a first order logic theorem down to a CNF formula you will probably need to show that the formula is UNSAT in order to prove the original theorem, because of the way logical entailment works.

  34. Dan Says:

    For Problem 6, could I ask for a bit more clarification, please? It describes the Unitary Synthesis Problem. The goal is, for some given \(U\), to demonstrate existence of an oracle \(A\) and an algorithm in BQP\(^A\) that (approximately? deterministically?) ‘implements’ \(U\).

    Exactly what this means potentially depends on what approximations we’re allowed, and also on what gateset we’re allowed to build our algorithm from, including whether the implementation must be deterministic or may contain measurement and an associated (heralded) failure probability (failure destroying the underlying state on which \(U\) is being implemented).

    1. For a weak notion of ‘approximates’, we could add in some extra ancilla qubits and demand that \( |\phi\rangle|0\rangle \) is mapped deterministically to something close to \( (U|\phi\rangle)|0\rangle \). In other words, we don’t insist that the ancilla is cleaned perfectly. Such a “deterministic-but-unclean” mapping could then be transformed to a “randomised-but-clean” mapping by simply measuring the ancilla bits and post-selecting on their being zero. This weak notion of unitary synthesis is easy to do in \( O(n/\epsilon) \) many oracle calls, by hard-coding descriptions of approximations to both \( U \) and \( U^{-1} \) in the oracle \(A\).

    2. For a stronger notion of ‘approximates’, we allow extra ancilla qubits but demand that they are returned perfectly clean, while requiring the whole process to be deterministic. Is this the version of the question that you’re getting at? (What motivates the requirement for perfectly clean ancillas and perfectly deterministic operation, if the operation is still allowed to be noisy?)

    3. A yet stronger notion would require returning clean ancillas and a fully precise implementation of \(U\), albeit with (heralded) failure probability. This is easily shown to be impossible with a finite gateset, simply because \(U\) might require infinite precision to specify precisely. So this can’t really be a reasonable interpretation of the problem. (Is there a better way of understanding “BQP\(^A\) machine” that allows for infinite precision in the target \(U\)?)

    Thanks!

  35. Mark Says:

    I very much like problem #10 on glued trees. I try to envision instantiating glued trees as a way to tie or untie a knot, or to show that two knot diagrams are equivalent to one another. Going up and down the trees feels like type I/type II Reidemeister moves, while going across the glued middle feels like a type III Reidemeister move. The START and END nodes may correspond to different knot diagrams. A classical random walk of tying/untying the knot may lead absolutely nowhere and get stuck in the middle, but maybe a quantum algorithm gets from left to right a lot faster.

    To your question, can this be framed as asking whether there’s a search/decision dichotomy in the glued trees framework? The Childs et al. paper might give a quantum algorithm to “decide” something about, e.g., knots, but it’s not clear if there’s a quantum algorithm to “search” for, e.g. a set of moves to tie or untie the knots.

  36. Scott Says:

    Dan #34: The unitary synthesis problem is interesting for any reasonable notion of approximating U. In other words, we lack a positive result even for the loosest notions of approximation you mentioned, or a negative result even for the most stringent ones! Once we have some results, then we can start worrying about these distinctions. 🙂

  37. Scott Says:

    Mark #35: Yes, we’re precisely asking whether “search is harder than decision” for the glued trees problem—that’s a nice way to think about it.

  38. Job Says:

    Gerard #32,

    > One thing that is clear is that you can’t expect to prove unsatisfiability by brute force, yet solvers are often capable of proving a formula UNSAT (and for some of them generating a certificate of unsatisfiability).

    That sounds like primality testing, in the context of integer factorization.

    E.g. solvers can’t efficiently find a prime factor for an integer x, but they can efficiently show when x has no nontrivial factors (i.e. when x is prime).

    Could there be an efficient “primality test” for NP Complete problems?
    What would be the consequence of something like this?

  39. OhMyGoodness Says:

    Scott #17

    Your 2014 post includes links to a webpage that Luke Schaeffer prepared discussing his result. Probably not, but is his discussion still available somewhere? The website is no longer at the included link.

    The functional definition of a corporate bureaucracy as “people sitting around trying to poison each other with PowerPoint presentations” may need to be freshened based on your latest post. 🙂

  40. Job Says:

    Actually, the consequence would be P=NP.

    I guess integer factorization is just unique and weird.

  41. Gerard Says:

    Job #38 and #40

    Right, if factorization were known to be NP-complete (in other words, if we knew how to encode any 3-SAT instance as a factorization problem), then an efficient algorithm for primality testing would be able to solve any coNP problem efficiently, which would imply P = NP.

  42. Scott Says:

    Gerard #41: You mean factoring. We already have an efficient algorithm for primality testing.

  43. Gerard Says:

    Scott #42

    They’re equivalent though, aren’t they?

    If I have the factorization problem: ( x*y = c ) && ( x != 1 ) && ( y != 1 ) (where c is known, x and y are unknowns) then that (by which I mean the CNF encoding of the associated boolean circuit) is UNSAT iff c is prime.

    PS. I haven’t thought about this overnight, so I’m probably missing something.
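
    To make the encoding concrete, here’s a toy sketch using Z3’s Python bindings (Z3 was mentioned in #22). It’s purely illustrative; the bit-width and variable names are arbitrary choices of mine. The solver returns sat with a nontrivial factorization exactly when c is composite, matching the “UNSAT iff c is prime” claim.

    ```python
    from z3 import BitVec, ZeroExt, Solver, sat

    N_BITS = 16
    c = 1003            # = 17 * 59; any number that fits in N_BITS works here

    x = BitVec('x', N_BITS)
    y = BitVec('y', N_BITS)

    s = Solver()
    # Widen before multiplying so the product cannot wrap around mod 2^N_BITS.
    s.add(ZeroExt(N_BITS, x) * ZeroExt(N_BITS, y) == c)
    s.add(x != 1, y != 1)

    if s.check() == sat:            # satisfiable  <=>  c is composite
        m = s.model()
        print("nontrivial factors:", m[x], m[y])
    else:                           # UNSAT  <=>  c is prime (or 1)
        print("no nontrivial factorization: c is prime")
    ```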

  44. Scott Says:

    Gerard #43: NO!!!!! Finding nontrivial prime factors seems to be much, much harder than merely deciding whether they exist (the primality testing problem) — that’s one of the most fundamental observations in algorithmic number theory. For the NP-complete problems, there’s an equivalence between search and decision, but it doesn’t seem to extend to many problems below NP-complete like factoring.

  45. Gerard Says:

    Scott #44

    > For the NP-complete problems, there’s an equivalence between search and decision, but it doesn’t seem to extend to many problems below NP-complete like factoring.

    Yes, but we were considering the hypothetical case where factoring is shown to be NP-complete (which should already be hard to formulate, because factoring itself isn’t a decision problem).

    What would such a proof look like? Clearly there would need to be a way to reduce any 3-SAT decision-problem instance to a factoring problem that is equisatisfiable in some sense. It seems to me that all of the NP-hardness reductions I’m aware of end up producing a problem that can be considered a decision problem. What decision problem would there be associated with factoring if not primality testing? The only other one I can think of is multiplication (i.e. does a * b = c, where a, b and c are constants), which is even easier.

  46. Scott Says:

    Gerard #45: No, the thing to imagine is that some decision problem regarding the prime factors, more interesting than “do any nontrivial prime factors exist?”, would encode 3SAT — for example, “are there at least 3 prime factors?” or “is there a prime factor ending in a 7?” In that scenario, we’d still have NP=coNP but not necessarily P=NP. This is what people mean when they discuss the (remote) possibility of factoring being NP-complete.

  47. Gerard Says:

    Scott and Job

    Well I guess you can imagine other kinds of reductions where you would need to know something about one of the factors to solve the decision problem. You might have to evaluate some arbitrary predicate on one of the factors, which primality testing alone wouldn’t allow you to do. Primality testing could even be completely irrelevant if the reduction guaranteed that the number it generates is not prime.

    It seems like trying to reason about the consequences of hypothetical reductions is a bit too speculative.

  48. Scott Says:

    Gerard #47:

      It seems like trying to reason about the consequences of hypothetical reductions is a bit too speculative.

    ROFL, complexity theory might not be your cup of tea then 😀

  49. Gerard Says:

    Scott #48

    > ROFL, complexity theory might not be your cup of tea then

    There’s no doubt about that. I’ve only ever cared about finding solutions, not proving that they don’t exist.

  50. Turing's Enigma Says:

    What is the use of query complexity to P vs NP? Why study query complexity and work on it?

  51. Scott Says:

    Turing’s Enigma #50: Does everything have to help with P vs. NP in order to be worth working on? 🙂 Query complexity is the source of many insights in complexity theory—in a certain sense, it’s all of computational complexity except for the “non-relativizing” aspects—but it especially comes into its own in quantum algorithms, where it’s the source of a large fraction of everything we know (to take one example: explaining what structure Shor’s algorithm exploits that doesn’t seem to be available for the NP-complete problems).

  52. Turing's Enigma Says:

    Do you know how deeply washed up the statement “Query complexity is the source of many insights in complexity theory—in a certain sense, it’s all of computational complexity” is?

    Why should a graduate student or a mathematician in algorithms or lower bounds care about query complexity, which seems to be a niche area with no applications to complexity, contradicting your claim that it is all of complexity?

    Can you be precise what you mean by all of complexity is related to query complexity?

    I think you are overrating query complexity because you are working on it.

    Maybe you are biased.

  53. Gerard Says:

    Scott #51

    > explaining what structure Shor’s algorithm exploits that doesn’t seem to be available for the NP-complete problems

    How would you describe that structure ?

  54. Scott Says:

    Turing’s Enigma #52: I said it’s all of computational complexity except the nonrelativizing aspects—a statement that’s true basically by definition! The nonrelativizing aspects of complexity are obviously extremely important, but we still have only a very narrow window into them. Most of what we currently know about the relationships between complexity classes holds true even in the simpler setting of query complexity.

    I’ve often used the analogy that query complexity is to theoretical computer science as perturbation theory is to particle physics. I.e., it’s an approximation that doesn’t capture everything but that’s been the source of a large fraction of what we know.

    I find your tone needlessly hostile, so this will be my last response to you. If you don’t like the stuff I work on, go read someone else’s blog!

  55. Scott Says:

    Gerard #53: Periodicity.

    In the core part of Shor’s algorithm, you’re given black-box access to a function that’s known to be periodic, and are asked to find its period. So, not only is there structure, there’s abelian group structure, which is the key thing that lets the Quantum Fourier Transform work. The periodicity structure makes the problem completely different from just searching for a needle in an exponentially large haystack, which is the problem addressed by Grover’s algorithm (for the latter, only a square-root speedup is possible). This is one of the most fundamental distinctions for anyone studying quantum algorithms.

  56. Gerard Says:

    Scott #55

    What if, for some problem, you could construct a function that was periodic, but you needed to find not its period but whether it contained some specific frequency component? Would the QFT help with that?

  57. Scott Says:

    Gerard #56: As long as
    (1) the Fourier component that you were looking for had a large enough magnitude, and
    (2) your access to the function was of a kind that let you prepare the appropriate superposition state,
    yes, you could. I think that, e.g., Sean Hallgren’s PhD thesis has material about the use of the QFT to learn about functions that aren’t fully periodic.
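
    Classically, you can already see both phenomena with an ordinary FFT on samples of the function. Here’s a toy NumPy sketch (my own illustration, not a quantum algorithm; the quantum point is that the QFT gets at the same Fourier data from a superposition rather than from explicitly sampling every point):

    ```python
    import numpy as np

    N = 1024
    xs = np.arange(N)

    # A function with period 16, plus a small extra component at frequency 200.
    f = np.cos(2 * np.pi * xs / 16) + 0.3 * np.cos(2 * np.pi * 200 * xs / N)

    spectrum = np.abs(np.fft.fft(f)) / N

    # Period finding: the dominant peak sits at frequency k = N / period.
    k_peak = np.argmax(spectrum[1 : N // 2]) + 1
    print("dominant frequency:", k_peak, "-> period", N / k_peak)    # 64 -> 16.0

    # Detecting a specific frequency component: read off its magnitude
    # and compare against a threshold.
    print("magnitude at k=200:", round(spectrum[200], 3))   # ~0.15 (present)
    print("magnitude at k=300:", round(spectrum[300], 3))   # ~0    (absent)
    ```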

  58. Turing's Enigma Says:

    I think your opinion that “we still have only a very narrow window into them” is misleading. Algorithmic aspects are non-relativizing and are arguably the heart of theoretical CS. Tell me why we cannot translate small quantum query complexity vs. large randomized query complexity into actual results on BQP vs. BPP. The simple answer is that there could be a non-relativizing algorithm which collapses BQP and BPP, and I really do not like how you shoot down those who disagree with your grandiose but marginally reasonable opinions.

  59. Dan Says:

    Scott #36: Ah, I guess I hadn’t realised the Unitary Synthesis problem was wide open! I can sort-of see a possible path to its resolution, but can’t yet prove anything…

    In terms of computational group theory, if we let \(H\) denote the Hadamard matrix (on one qubit) tensored with the identity (on \(n-1\) qubits), and if we let \(gen\) denote the set of all \(2^n \times 2^n\) permutation matrices together with \(H\), then \(gen\) finitely generates a dense subgroup \(G\) of the orthogonal group (on \(n\) qubits). (The intuition is that oracles furnish the permutations, standard BQP methods furnish the \(H\) gate, and no other elementary gates are necessary for BQP universality.) The question then is how rapidly this density is achieved: does there exist an orthogonal matrix which is not well approximated by any ‘low height’ element of \(G\), i.e. not until a superpolynomially long string of generators is considered?

    Counting (or rather upper-bounding) the number of points of \(G\) reachable within ‘\(k\) rounds’ of such gates, we see that there are \((2^n)!\) permutations available, so at most \((2^n)!^k\) points reachable after \(k\) rounds. Heuristically, the number of points of \(G\) reachable after polynomially many gates could be written \((2^n)!^{poly(n)}\), or \(2^{2^n \cdot n^{O(1)}}\). Perhaps standard results about epsilon nets in Lie algebras would be enough to resolve the question (albeit non-constructively) using these kind of counting arguments, though I’m not familiar enough with the methods to do this myself.

  60. Gerard Says:

    Scott:

    I tried implementing the 4×4 matrix determinant formula from slide 7 of your wildidea.ppt presentation and it doesn’t seem to work when you substitute in numerical values. Maybe there was some information in the talk that isn’t in the slide. Obviously it assumes some pre-processing to avoid division by zero; I just ignore those cases when they occur, but maybe there’s some additional pre-processing assumed that I’m not aware of? Otherwise, is there anything that explains where that formula comes from? It’s not obvious to me how you get it from Gaussian Substitution.

  61. Gerard Says:

    Scott, Re #60

    Sorry I meant to say Gaussian Elimination.

  62. Gerard Says:

    Scott

    I think I have a version that works now but with 41 operations instead of 37 and it’s a circuit instead of a formula. Now that I see how this works I’ll try to find the error in your slide.

  63. Scott Says:

    Gerard #60-62: Thanks! Hopefully it’s just some easily fixable bug.

  64. Gerard Says:

    Scott #63

    Found it !

    Line 2 should read EA X B instead of EA B.

    Other than that my circuit looks identical to yours, I just used better variable names 😀.

    The opcount is indeed 37. It turns out I had overcounted in a couple of places.

  65. Dina G Says:

    Any thoughts on the current status of question 2 from the original semi-grand list: which quantum systems can be simulated in classical polynomial time?
