I admittedly couldn’t understand your claimed argument. But let me make two remarks:

* Impagliazzo-Wigderson did *not* prove that P!=NP implies P=BPP. They proved that if E requires exponential-size circuits then P=BPP. Their assumption is both weaker and stronger than P!=NP: weaker because you only need a lower bound for E, but stronger because the lower bound you need is nonuniform and exponential (as opposed to uniform and superpolynomial).

* *If* we could prove that P!=NP implies P=BPP, then we could conclude P=BPP unconditionally! For there are only two cases: P!=NP or P=NP. And if P=NP, then we know the polynomial hierarchy collapses down to P, and hence P=BPP by Sipser-Gacs-Lautemann.
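The case analysis in this second remark is pure propositional logic once you have the two implications in hand. A minimal sketch in Lean (where `PeqNP` and `PeqBPP` are just abstract propositions standing in for "P = NP" and "P = BPP" — nothing about complexity theory is formalized here):

```lean
-- Hypothetical propositions standing in for "P = NP" and "P = BPP".
-- h₁ is the (unproven!) implication "P ≠ NP ⇒ P = BPP";
-- h₂ is "P = NP ⇒ P = BPP" (via PH collapse and Sipser-Gacs-Lautemann).
example (PeqNP PeqBPP : Prop)
    (h₁ : ¬PeqNP → PeqBPP)
    (h₂ : PeqNP → PeqBPP) : PeqBPP :=
  (Classical.em PeqNP).elim h₂ h₁
```

The only nonconstructive ingredient is excluded middle on "P = NP", which is exactly the "only two cases" step above.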

Is there a mistake in the following?

assumption: BPP \subseteq NP

suppose P = NP => P = BPP

suppose the contrary, P != NP => (Impagliazzo-Wigderson) P = BPP

hence: BPP \subseteq NP => P = BPP

Theorem from my paper: If p is prime, then there exists an ensemble of p^t unbiased states on a p-state qudit with the following property: If you have t copies of a state |psi> drawn from the ensemble, the result is measurement-indistinguishable from t copies of a uniformly random state in the torus of all unbiased states. In other words, there is a t-design on this torus with p^t points.

Scott had found a very similar t-design for n qubits with 4^{tn} points. It seems that if you use a prime p instead of the prime power 2^n, and if you tweak the construction a tiny bit, then you can square-root the number of points needed in the design.

One thing I might’ve added (though I realize you are cramming material in as is) is more on the distinction between the various strengths of derandomization we might hope for: even if P = BPP, it’s still unclear whether derandomization is achievable by feeding pseudorandom inputs to BPP algorithms, and if so, whether one pseudorandom generator can work for all BPP algorithms simultaneously.

We should be sure to emphasize not just that Impagliazzo-Wigderson suggests P = BPP, but that it suggests pretty much the strongest kind of derandomization is possible, covering Monte Carlo estimation, etc. Seems like the latter is what most people care about, not e.g. polynomial identity testing.
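Polynomial identity testing is worth seeing concretely, since it is the canonical BPP problem not known to be in P. Below is a toy sketch (function names and parameters are mine, not from any library): a Monte Carlo identity test via Schwartz-Zippel, evaluating candidate polynomials at random points over a large prime field. Strong derandomization in the sense above would mean replacing the random points with outputs of a single pseudorandom generator.

```python
import random

P = 10**9 + 7  # a large prime; evaluations are done in the field Z/P

def pit_randomized(f, g, n_vars, trials=20):
    """Monte Carlo polynomial identity test. If f - g is a nonzero
    polynomial of total degree d, a uniformly random point is a root
    with probability at most d/P (Schwartz-Zippel), so a disagreement
    is conclusive, and agreement on all trials is strong evidence of
    identity (error probability at most (d/P)^trials)."""
    for _ in range(trials):
        xs = [random.randrange(P) for _ in range(n_vars)]
        if f(*xs) % P != g(*xs) % P:
            return False  # definitely not the same polynomial
    return True  # identical with high probability

# (a + b)^2 and a^2 + 2ab + b^2 are the same polynomial...
f = lambda a, b: (a + b) ** 2
g = lambda a, b: a * a + 2 * a * b + b * b
h = lambda a, b: a * a + b * b  # ...but this one differs

print(pit_randomized(f, g, 2))  # True
print(pit_randomized(f, h, 2))  # False, with overwhelming probability
```

Note the one-sided error: "not identical" answers are always correct, which is why a PRG fooling the algorithm would derandomize it outright.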

I also think it’s interesting that we can’t find significantly weaker complexity hypotheses than the Impagliazzo-Wigderson one that would show P = BPP but not strong derandomization.

I am too lazy to think about the problem before I am certain I have understood the question.

Theorem: Consider a randomly chosen state in a p-state qudit of the form p^{-1/2} sum_x e^{2 pi i a(x)/p} |x>, where a(x) is a polynomial function from Z/p to Z/p of degree at most t. Then the expected value of

<psi|^{tensor t} A |psi>^{tensor t}

exactly equals the value that it would take if all of the phases of |psi> were uniformly random on the circle. In other words, this set of states is a t-design, or a t-cubature formula, for the torus of unbiased states. This construction is asymptotically optimal for any fixed t, certainly if you want exact equality, and possibly even if you want approximate equality.
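The construction is small enough to verify numerically in one concrete case. The sketch below (my own check, assuming the phase polynomials range over degrees 1 through t, the constant term being an irrelevant global phase) compares the t-th moment operator of the polynomial-phase ensemble with the one for uniformly random phases, for p = 3 and t = 2:

```python
import itertools
import numpy as np

p, t = 3, 2  # small prime and design strength for a concrete check

def poly_state(coeffs):
    """Unbiased state with phases e^{2 pi i a(x)/p}, where
    a(x) = sum_j c_j x^{j+1} (constant term dropped: it is a global phase)."""
    a = [sum(c * pow(x, j + 1, p) for j, c in enumerate(coeffs)) % p
         for x in range(p)]
    return np.exp(2j * np.pi * np.array(a) / p) / np.sqrt(p)

# t-th moment of the polynomial-phase ensemble: E[(|psi><psi|)^{tensor t}]
M_poly = np.zeros((p**t, p**t), dtype=complex)
for coeffs in itertools.product(range(p), repeat=t):  # p^t states
    psi = poly_state(coeffs)
    psi_t = psi
    for _ in range(t - 1):
        psi_t = np.kron(psi_t, psi)
    M_poly += np.outer(psi_t, psi_t.conj()) / p**t

# t-th moment for uniformly random phases: the ((x_1..x_t),(y_1..y_t)) entry
# is 1/p^t when the multisets {x_k} and {y_k} coincide, and 0 otherwise.
M_unif = np.zeros((p**t, p**t), dtype=complex)
for xs in itertools.product(range(p), repeat=t):
    for ys in itertools.product(range(p), repeat=t):
        if sorted(xs) == sorted(ys):
            i = sum(x * p**(t - 1 - k) for k, x in enumerate(xs))
            j = sum(y * p**(t - 1 - k) for k, y in enumerate(ys))
            M_unif[i, j] = 1 / p**t

print(np.allclose(M_poly, M_unif))  # True
```

Averaging over the coefficients kills every entry except those where the power sums of the x's and y's agree mod p, and (via Newton's identities, valid here since p > t) equal power sums up to degree t force equal multisets — exactly the uniform-phase condition.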
