I’d say that Jan’s model is classical because it can be efficiently simulated classically.

Of course, the simulation overhead needs to be taken into account. But what’s the overhead? That’s the problem.

If the QC gets a quantum version of the circuit as input, then does the classical algorithm also get its *customized* version of the circuit – whatever that might be?

This is why I find the oracle separations for BQP to be weaker than usual.

Granted, for Simon’s problem it’s not clear how a custom circuit could allow a classical machine to match a QC, but it’s not out of the question.

My impression is that a toy model, like Jan’s, can exhibit interference and solve Simon’s problem for simple circuits (e.g. using NOT/CNOT), which we already know can be efficiently simulated, but will have a really tough time when Toffolis or controlled π/x rotations are used.
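To illustrate the easy half of that divide: a circuit built only from NOT and CNOT gates acts as an affine map over GF(2) on computational basis states, so a classical machine can track a basis state through it gate by gate. A minimal sketch (the gate-list encoding here is my own, purely for illustration):

```python
# Circuits built only from NOT and CNOT act as affine maps over GF(2),
# so tracking a computational basis state through them is easy classically:
# O(1) work per gate, versus exponential-size state vectors in general.

def simulate(circuit, state):
    """Apply a gate list to a basis state (a list of 0/1 bits).
    ('NOT', i) flips bit i; ('CNOT', c, t) XORs bit c into bit t."""
    bits = list(state)
    for gate in circuit:
        if gate[0] == 'NOT':
            bits[gate[1]] ^= 1
        elif gate[0] == 'CNOT':
            bits[gate[2]] ^= bits[gate[1]]
        else:
            raise ValueError("unsupported gate: %r" % (gate,))
    return bits

# Example: copy bit 0 into bit 1, then flip bit 1.
print(simulate([('CNOT', 0, 1), ('NOT', 1)], [1, 0]))  # -> [1, 0]
```

Adding a Toffoli makes the map nonlinear over GF(2), which is exactly where this cheap bookkeeping stops working.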

————————————————————

I agree with Sniffnoy #20 that the way you leave quantifiers implicit in the definitions of R and R_0 is confusing. Even after your reasoning in #23, I think you should try to improve that part of the description.

————————————————————

Here is the algorithm. Set the number of columns to n^{1/3} and the number of rows to n^{2/3}.

1. Sample n^{1/3} log n random entries in each column, eliminate all columns with at least one 0. W.h.p. this eliminates all columns in which the total number of 0s is at least C n^{1/3}.

2. Repeat C log n times, for an appropriately chosen C:

2a. Choose a random column in which we do not know any 0s, query all elements in it.

2b. If the column is all-1, answer “f=1”.

2c. If not, take every 0 in the column and pursue the sequence of pointers starting at it until the sequence ends or visits the same column for the second time.

3. If an all-1 column was not found in step 2, answer “f=0”.

The number of queries is:

Step 1: n^{1/3} log n in each of n^{1/3} columns, total of n^{2/3} log n;

Step 2: in each iteration, we have:

– n^{2/3} queries to query all the elements of the column;

– C n^{2/3} queries to pursue C n^{1/3} sequences of pointers, each of length <= n^{1/3};
Since the number of iterations is O(log n), the total number of queries is O(n^{2/3} log n).

Correctness: Let S_i be the set of columns that contain 0s but for which we have not yet queried any 0s, after i repetitions of step 2. I would like to claim that, if f=1, E[|S_{i+1}|] <= E[|S_i|]/2. This implies that, after O(log n) repetitions, S_i will be empty w.h.p.

The proof of E[|S_{i+1}|] <= E[|S_i|]/2 is as follows:

- if f=1, there is the good path of pointers;
- all columns of S_i are on this path;
- if we choose column k in step 2a, we end up eliminating from S_i all the columns that come after column k on the path;
- since k is chosen randomly, every other column in S_i comes after k with probability 1/2.
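For concreteness, here is a toy Python sketch of steps 1–3, under a simplified input model of my own invention (each 0-cell stores a pointer to another cell, or None; f=1 iff some column is all-1). It is meant only to illustrate the query pattern, not to reproduce the actual Ambainis et al. function:

```python
import math
import random

# Toy sketch of the claimed O(n^{2/3} log n) algorithm, under a simplified
# input model: an R x C grid (R = n^{2/3} rows, C = n^{1/3} columns) whose
# cells are either 1, or a pair (0, ptr) where ptr is the (row, col) of
# another cell or None.  f = 1 iff some column is all-1.

def solve(grid, C_const=2, rng=None):
    rng = rng or random.Random(0)
    rows, cols = len(grid), len(grid[0])
    n = rows * cols
    queries = 0
    known_zero = [False] * cols   # columns in which we have queried a 0

    def query(r, c):
        nonlocal queries
        queries += 1
        cell = grid[r][c]
        if cell != 1:
            known_zero[c] = True
        return cell

    # Step 1: sample ~ rows^{1/2} * log n = n^{1/3} log n entries per
    # column; w.h.p. this knocks out every column with many 0s.
    samples = max(1, int(round(rows ** 0.5 * math.log(n))))
    for c in range(cols):
        for _ in range(samples):
            query(rng.randrange(rows), c)

    # Step 2: repeat O(log n) times.
    for _ in range(C_const * (int(math.log(n)) + 1)):
        candidates = [c for c in range(cols) if not known_zero[c]]
        if not candidates:
            break
        c = rng.choice(candidates)                 # step 2a
        col = [query(r, c) for r in range(rows)]
        if all(v == 1 for v in col):               # step 2b
            return 1, queries
        for cell in col:                           # step 2c
            ptr = None if cell == 1 else cell[1]
            seen = {c}
            while ptr is not None and ptr[1] not in seen:
                seen.add(ptr[1])
                nxt = query(*ptr)
                ptr = None if nxt == 1 else nxt[1]

    # Step 3: no all-1 column found.
    return 0, queries

# Tiny example: 4x4 grid whose last column is all-1, with a pointer
# chain through the 0s of the other columns.
grid = [[1] * 4 for _ in range(4)]
grid[0][0] = (0, (0, 1))
grid[0][1] = (0, (0, 2))
grid[0][2] = (0, None)
print(solve(grid)[0])  # -> 1
```

The algorithm answers 1 only upon actually seeing an all-1 column, so the f=0 answer is never wrong; the randomness only affects how fast an all-1 column is found.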

The fundamental difficulty is that, with the hidden-variable models, you only get to see a hidden-variable history **once**. I.e., you don’t get to sample multiple hidden-variable histories in superposition, then sample the history of a hidden variable through *that* quantum computation, and so on recursively. Because of this, hidden-variable algorithms don’t compose in the way you would like. And since the quantum algorithms for the Ambainis et al. functions all work by composing multiple calls to Grover’s algorithm, this failure of composition is a serious problem.

As an easier warmup case, let’s consider an OR of √N AND’s of √N variables each. This function is well-known to have quantum query complexity Θ(√N). Thus, by analogy to the case of the OR function, one might hope that its hidden-variable query complexity would be O(N^{1/3}). The trouble is that that would require recursion: if we run the recursive Grover algorithm, then sampling the hidden variable will speed up the search for a subtree of the OR that evaluates to 1, but it’s not going to speed up the search *within* each subtree. For this reason, the best upper bound I’m able to get on the hidden-variable query complexity of the OR/AND tree is

O( (√N)^{1/3} (√N)^{1/2}) = O(N^{5/12}).

This is better than √N, but not all the way down to N^{1/3}.

And for Ambainis et al.’s function h—the one that gives a cubic separation between R_{0} and Q—I’m not able to get *any* upper bound on hidden-variable query complexity better than the upper bound on Q. Basically, hidden-variable sampling only obviously helps you for the *last step* of the algorithm, the one where you’re verifying that the cycle of “marked rows” really is marked. But when you plug in m^{1/3} rather than √m for the query complexity of the last step, the linear program that tells you how to optimize the parameters still doesn’t return a better solution than T=(mn)^{1/3} queries.

Likewise, for Ambainis et al.’s f function—the one they use to achieve a quartic separation between D and Q—I also currently can’t get anything better when I consider the hidden-variable query complexity. The reason is that we have D=Ω(nm) assuming n≥2m, while (ignoring log factors)

Q = O(√n + √m),

and the hidden-variable complexity is O(√n + m^{1/3}). But as long as we need the assumption n≥2m, improving that √m to m^{1/3} just doesn’t make a difference.

OK, but having said all that: there’s also an alternative model we could consider, where you get to make “multiple non-collapsing measurements” of a state, then use the (classical) results of those measurements as inputs to your next quantum computation, and so on. (Note, however, that you still don’t get to invoke the “multiple non-collapsing measurements” oracle in superposition.) This is basically the model that Bouland, Fitzsimons, Lee, and I considered.

In this stronger model, first of all, using T queries, you can prepare a superposition that has a probability mass of T^{2}/N on the ‘1’ branch of the OR/AND tree. You can then keep sampling from that superposition over and over, and for each branch you see, check whether it’s indeed a ‘1’ branch in O(N^{1/6}) queries. So if a ‘1’ branch exists, then you’ll succeed in finding one after (N/T^{2})*N^{1/6} queries total. Setting

T = N^{7/6}/T^{2}

and solving for T, we find that a non-collapsing algorithm can evaluate the OR/AND tree with

O(N^{7/18}) ~ O(N^{0.389})

queries, which is better than the

O(N^{5/12}) ~ O(N^{0.417})

for the hidden-variable algorithm, but still not all the way down to O(N^{1/3}).
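These exponents can be double-checked mechanically. A quick exact-arithmetic check of the balancing step T = N^{7/6}/T^{2} and of the two exponents quoted above:

```python
from fractions import Fraction

# Balance T = N^{7/6} / T^2, i.e. T^3 = N^{7/6}, so T = N^{7/18}.
exp_T = Fraction(7, 6) / 3
assert exp_T == Fraction(7, 18)

# Hidden-variable OR/AND exponent: (sqrt(N))^{1/3} * (sqrt(N))^{1/2}
# = N^{1/6 + 1/4} = N^{5/12}.
exp_hv = Fraction(1, 2) * Fraction(1, 3) + Fraction(1, 2) * Fraction(1, 2)
assert exp_hv == Fraction(5, 12)

print(float(exp_T), float(exp_hv))  # ~0.389 and ~0.417
```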

In the non-collapsing measurements model, I also get that, by setting k=N^{2/7} and m=N^{6/7} (where N is the total number of cells), we can get an O(N^{2/7})-query algorithm for the function h. This yields a 3.5^{th}-power separation between R_{0} and the non-collapsing query complexity, better than the cubic separation that Ambainis et al. got between R_{0} and Q.
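As a sanity check on the claimed power of separation (under the assumption, not stated above, that R_0 for h is Θ(N) up to log factors):

```python
from fractions import Fraction

# Non-collapsing complexity O(N^{2/7}); if R_0 = Theta~(N), the
# separation power is 1 / (2/7) = 7/2 = 3.5.
assert Fraction(1) / Fraction(2, 7) == Fraction(7, 2)
print(float(Fraction(7, 2)))  # -> 3.5
```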

Likewise, for Ambainis et al.’s function f, I get that there’s a non-collapsing algorithm for the first stage (the one that finds the candidate marked column) that uses

O( min{ (n/k)^{1/3}k, k^{1/3}√(n/k) } ) = O(n^{7/15})

queries, and an algorithm for the second stage (the one that verifies the marked column) that uses O(m^{1/3}) queries (as usual, ignoring log factors). Combining this with the lower bound D(f)=Ω(nm) that holds when n≥2m, we find that the non-collapsing query complexity of this function is

O(D(f)^{7/30}) ~ O(D(f)^{0.233}).

So, again, this is slightly better than the quartic separation between D and Q.
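The optimization over k in the first-stage bound can likewise be checked exactly: writing k = n^{a}, the two costs balance at a = 1/5, giving n^{7/15}; and assuming m = Θ(n) (so that D = Θ(nm) = Θ(n^{2})), this is D^{7/30}:

```python
from fractions import Fraction

# With k = n^a, the two first-stage costs are
#   (n/k)^{1/3} * k       -> exponent (1 - a)/3 + a
#   k^{1/3} * (n/k)^{1/2} -> exponent a/3 + (1 - a)/2
# They balance at a = 1/5, giving n^{7/15}.
a = Fraction(1, 5)
e1 = (1 - a) / 3 + a
e2 = a / 3 + (1 - a) / 2
assert e1 == e2 == Fraction(7, 15)

# If m = Theta(n), then D = Theta(n m) = Theta(n^2), so n^{7/15} = D^{7/30}.
assert Fraction(7, 15) / 2 == Fraction(7, 30)
```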

————————————————————

1) If there are more than n^{3/2} zeroes on the board, then random sampling will knock out a column after O(sqrt(n)) samples. Unless the distribution is very uneven, random sampling of un-eliminated columns will eliminate all but sqrt(n) columns within O(n * sqrt(n)) samples; and then you can just brute force the rest.

2) If there are fewer than n^{3/2} zeroes, then fully scanning randomly chosen columns, then following all the chains, and iterating on un-eliminated columns, becomes viable. If there is a long chain of zeroes, like there must be if there’s a solution, then you’ll tend to hit it at the halfway mark when you sample a column, cutting the un-eliminated columns in half.

3) I’m not exactly sure how to solve the case where there isn’t a long chain of zeroes and there are fewer than n^{3/2} zeroes. I think it has to do with finding columns A and B such that A doesn’t lead to B and B doesn’t lead to A, proving there’s no solution. An adversary trying to keep that disproof from being easy to stumble upon might have to connect columns in ways that mean searching outward from one column must, on average, lead to at least half of the other columns, eliminating them.

————————————————————

Let DQ(f) be the “dynamical quantum query complexity” (i.e., quantum query complexity in the hidden-variable model I defined in 2005).

Then I *conjecture* that DQ(f)=Ω(bs(f)^{1/3}) for all f; this would follow if a cube-root speedup were the best you could get for the Grover search problem. In my original paper, I had sketched what I claimed was a proof of this, but the proof was wrong—see my blog post about this, or the later paper with Bouland, Fitzsimons, and Lee.

Anyway, *if* the cube-root lower bound is true—or true for some particular class of hidden-variable theories—then for those theories, combining with D(f)=O(bs(f)^{3}) would yield D(f)=O(DQ(f)^{9}).
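Spelled out, the chain is: the conjectured DQ(f)=Ω(bs(f)^{1/3}) gives bs(f)=O(DQ(f)^{3}), and then

```latex
D(f) \;=\; O\!\left(\mathrm{bs}(f)^{3}\right)
     \;=\; O\!\left(\left(\mathrm{DQ}(f)^{3}\right)^{3}\right)
     \;=\; O\!\left(\mathrm{DQ}(f)^{9}\right).
```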

What *separations* are possible between D(f) and DQ(f), or between R(f) and DQ(f), etc. in light of Ambainis et al.’s results, is a great question that I’ll take up in a subsequent comment.