Changing fields
I’m at CWI in Amsterdam, after spending two weeks in Israel. Next week I head to QIP’2010 in Zurich, and the week after that, to the Perimeter Institute in Waterloo.
The following meta-question emerged from a conversation with Dorit Aharonov two weeks ago:
What’s your favorite example of a result in theoretical computer science that works over finite fields, but doesn’t work (or isn’t known to work) over the reals or complex numbers?
Conversely, what’s your favorite example of a result in TCS that works over the reals or complex numbers, but doesn’t work (or isn’t known to work) over finite fields?
In either case, what’s the crucial property of the underlying field that causes the result to work in one case but not the other?
By “crucial property”, I mean something like this:
- There’s a natural metric (i.e., a distance measure) on the reals or complex numbers, but not on a finite field.
- There’s a uniform distribution over a finite field, but not over the reals or complex numbers.
I’d be especially interested in properties that don’t reduce to one of the two above.
Comment #1 January 13th, 2010 at 10:48 am
Dvir’s solution of the Kakeya problem in finite fields has to be high up on the list of all-time good examples, and since it was proved by a TCS person it could count as a TCS result I suppose. Here it seems to be the metric properties of R that make the real case different (but others are much better placed than I am to discuss this example).
Comment #2 January 13th, 2010 at 11:12 am
Shamir’s secret sharing works only over finite fields, and this is due to crucial property #2.
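For concreteness, here’s a minimal Python sketch of the scheme over GF(p). The prime, threshold, and share count are purely illustrative, and it uses `random` rather than a cryptographic RNG, so this is a toy, not a hardened implementation. The `randrange(p)` calls are exactly where the uniform distribution over the field (property #2) gets used; over R there is no analogous way to draw the coefficients "uniformly".

```python
import random  # a real implementation would use the `secrets` module

p = 2**127 - 1  # a prime, so the integers mod p form the field GF(p)

def make_shares(secret, k, n):
    """Split `secret` into n shares so that any k of them recover it."""
    # Random degree-(k-1) polynomial with constant term = secret; the
    # coefficients are drawn uniformly from GF(p).
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0, carried out over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(secret=42, k=3, n=5)
assert recover(shares[:3]) == 42   # any 3 of the 5 shares suffice
assert recover(shares[2:]) == 42
```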
Comment #3 January 13th, 2010 at 11:14 am
unfortunately…
Comment #4 January 13th, 2010 at 11:21 am
A relaxed version of the same phenomenon is when a property is trivial for finite fields and non-trivial for the reals/complex, or vice versa. Some examples of this type would be interesting also.
Comment #5 January 13th, 2010 at 11:22 am
Works over reals, not over finite fields: all of lattice-based cryptography! 🙂 A big reason is property #1 above.
There is of course code-based crypto over finite fields (using the Hamming metric), but it’s comparatively lacking in rigorous proofs and in the variety of what can be achieved.
Comment #6 January 13th, 2010 at 11:44 am
I think (2) might better be phrased as “finite fields are compact.”
How about the Riemann hypothesis? Over finite fields: proven. Over C: one of the biggest open problems in mathematics.
In another direction, the Ax-Grothendieck theorem is a result that is trivially true over finite fields, but interestingly true over C: a polynomial map from the n-dimensional space over the field to itself that is injective must be bijective. (Partly I find this interesting because it is an example of a result outside the domain of logic that can be proved using ideas from mathematical logic.)
Comment #7 January 13th, 2010 at 11:50 am
There’s the Basu-Zell topological proof of a real analogue of Toda’s theorem. I asked Basu about the issues with extending it to finite fields, but he was circumspect. That said, the argument is pretty algebraic in nature, and I’d be surprised if it didn’t have something to do with the fact that cohomology theories for algebraic varieties over finite fields are way more complicated than for the reals. I don’t know what the underlying reason for that is, though — completeness?
Incidentally, Basu only recently extended the method to a complex analogue. The first proof relied on the ordering of the reals, which I’m kind of surprised you didn’t mention, since it comes up all the time in, um, math.
Comment #8 January 13th, 2010 at 11:53 am
I guess I should note that by “recently,” I mean “last month.” Since the first B-Z proof is itself barely a year old, it pays to be more precise.
Looking over the abstract again, I see that both the real and complex proofs assume that the domain that actually comes up is compact….
Comment #9 January 13th, 2010 at 12:10 pm
Polynomials over Q can be factored in deterministic polynomial time, while for polynomials over (large) finite fields no one has managed to remove randomness.
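To make the finite-field half concrete: the simplest instance is already extracting square roots mod p, i.e. factoring x^2 - a over GF(p), where every known efficient algorithm is randomized (deterministic polynomial time is only known assuming GRH). Here is a sketch of the standard equal-degree-splitting trick; it assumes p is an odd prime and that a really is a quadratic residue, and the worked numbers are only illustrative.

```python
import random

def sqrt_mod_p(a, p):
    """Randomized square root of a quadratic residue a modulo an odd prime p.
    For random t, (x + t)^((p-1)/2) - 1 shares exactly one linear factor
    with x^2 - a roughly half the time, which exposes a root."""
    a %= p
    if a == 0:
        return 0

    def mul(u, v):
        # Multiply u1*x + u0 by v1*x + v0 in GF(p)[x] modulo x^2 - a.
        (u1, u0), (v1, v0) = u, v
        return ((u1 * v0 + u0 * v1) % p, (u0 * v0 + u1 * v1 * a) % p)

    while True:
        t = random.randrange(p)
        # Compute (x + t)^((p-1)//2) mod (x^2 - a, p) by square-and-multiply.
        result, base, e = (0, 1), (1, t), (p - 1) // 2
        while e:
            if e & 1:
                result = mul(result, base)
            base = mul(base, base)
            e >>= 1
        g1, g0 = result[0], (result[1] - 1) % p
        if g1:                                  # candidate linear factor g1*x + g0
            r = (-g0 * pow(g1, -1, p)) % p
            if (r * r - a) % p == 0:
                return r

p = 10**9 + 7
a = pow(123456789, 2, p)
r = sqrt_mod_p(a, p)
assert r in (123456789, p - 123456789)
```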
Comment #10 January 13th, 2010 at 12:32 pm
Scott, footnote 3 of Terry Tao’s article Perelman’s proof of the Poincare conjecture: a nonlinear PDE perspective (arXiv.org:math/0610903) can be read as a (heuristic) strategy for constructing properties of the class you seek.
The heuristic (which is Perelman’s heuristic that Tao is commenting upon) is to take any property/algorithm/theorem that is awkward and/or infeasible to establish by combinatoric and/or algebraic proof methods, and recast that problem as a dynamical PDE flow that is associated with a natural geometric intuition and/or (usually equivalently) a natural physical picture.
Plenio and Virmani’s Upper bounds on fault tolerance thresholds of noisy Clifford-based quantum computers (arXiv:0810.4340v1) provides an example to which this heuristic can be applied. Their result—the existence of efficient simulation algorithms—is feasible but awkward to establish via Clifford algebras. To engineers, results in this class seem somewhat more natural—and perhaps broader in practical application—when they are translated into statements about (symplectic and metric) structure-and-flow on complex manifolds.
The practical point is that structure-and-flow frameworks are becoming ubiquitous in modern systems science—most especially in biology and in finance—and that is why we quantum systems engineers are keeping a keen eye out for TCS results that can be naturally converted to PDE results.
Looking into the future (but only a couple of weeks), modern optical networks can be viewed as analog devices for computing the permanent, and thus TCS results concerning the hardness of computing the permanent have a natural isomorphism with structure-and-flow frameworks based on PDEs.
This means that any physical device that computes (by analog means) a permanent is almost surely infeasible to simulate by classical resources … or to say the same thing in engineering terms … any input that (effectively) computes a permanent is almost certain to “break” a PDE structure-and-flow simulation of that input. We engineers are eager to understand this breakdown, both in finite-field frameworks and in structure-and-flow frameworks.
More broadly, natural problems that break simulation codes are a priceless resource for systems engineers—we need all the examples we can get—no matter whether the field is biology, finance, or QIP.
And *that*, Scott, is why we quantum systems engineers are very much looking forward to your scheduled QIP2010 talk, New evidence that quantum mechanics is hard to simulate on classical computers … which we hope will flesh out the recent work that you and Aleksandr Arkhipov have presented on permanents-as-optical-scattering.
The goal here is broadly characteristic of modern systems engineering: intuitions from TCS provide vital guidance on how best to *crash* PDE codes. Good!
In summary, by understanding better—via TCS insights—the class of inputs for which PDE-based structure-and-flow simulations *do* crash, we come to understand better the class of inputs for which they *don’t* crash … and for modern-day systems engineers this is vital knowledge indeed.
Comment #11 January 13th, 2010 at 1:41 pm
The exponential lower bound on the size of depth-3 circuits over finite fields due to Grigoriev-Razborov ’98. It follows from low-degree approximation techniques that don’t generalize to the reals, where the best lower bound is still quadratic. This shows how our knowledge of lower bounds can differ dramatically between finite fields and the reals.
Comment #12 January 13th, 2010 at 1:45 pm
Completely tangential, but cool to hear you’ll be coming to Perimeter soon! I’m looking forward to your talk here.
Comment #13 January 13th, 2010 at 5:57 pm
The Mignon-Ressayre quadratic lower bound for determinant versus permanent.
Comment #14 January 13th, 2010 at 8:24 pm
One of my friends who had not gone through the whole blog post (had just seen the title) told me that Scott Aaronson is changing his field?
I said “What do you mean?”
He said “You had written a blog declaring that?”
I was dumbstruck. Now after having gone through your post, all I can say is that I had a hearty laugh (with that friend of mine) who I will not identify [:)]
Hey XXXX don’t worry.
Comment #15 January 13th, 2010 at 8:25 pm
My bad
He said “You had written a blog declaring that”
I instead wanted to write
He said “Scott had written a blog declaring that”
Now XXXX will be having a hearty laugh [:(]
Comment #16 January 13th, 2010 at 10:24 pm
Well, here’s one that doesn’t technically meet your criteria, but I’m fascinated by it anyway. And, hell, this is a blog, so I’m not going to follow the rules compulsively. This is about elementary quantum mechanics, and doesn’t have any connections to TCS that I know about, yet.
Let H be a Hilbert space indexed by a field — either (i) GF(p^n), or (ii) the reals, R. Such Hilbert spaces support a natural notion of phase space (discrete for GF(.), and continuous for R).
Now, the interesting thing is that all the finite Hilbert spaces have dim(H)+1 mutually unbiased bases, corresponding to sets of lines on phase space. But as far as anybody can tell, the infinite one [a.k.a. L2(R)] doesn’t — it only has three MUBs.
One of the things that’s interesting about this is that it’s not fully understood. In particular, continuous-variable Hilbert space is a nasty subject, and the inner product doesn’t necessarily have all the same interpretations that it does for finite dimensions! So this result may be an artifact of misinterpreting inner products. We don’t know.
AFAICT, the crucial property is topology — R has a notion of neighborhood, and GF(.) doesn’t. This is clearly related to “has a metric,” although I’m not sure whether it’s entirely the same or not.
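For what it’s worth, the finite-dimensional half is easy to poke at numerically. Below is a sketch of the standard quadratic-phase (Wootters-Fields) construction for an odd prime dimension (p = 5 is just an illustrative choice), with a numpy check that the p quadratic-phase bases plus the computational basis are pairwise unbiased, i.e. p + 1 = dim(H) + 1 MUBs.

```python
import numpy as np
from itertools import combinations

p = 5                      # an odd prime dimension
omega = np.exp(2j * np.pi / p)

# The computational basis plus p "quadratic phase" bases (rows = vectors):
#   (v_{a,b})_j = omega^(a*j^2 + b*j) / sqrt(p)
bases = [np.eye(p, dtype=complex)]
for a in range(p):
    B = np.array([[omega ** ((a * j * j + b * j) % p) for j in range(p)]
                  for b in range(p)]) / np.sqrt(p)
    bases.append(B)

# Mutual unbiasedness: |<u|v>|^2 = 1/p for vectors taken from different bases.
for B1, B2 in combinations(bases, 2):
    overlaps = np.abs(B1.conj() @ B2.T) ** 2
    assert np.allclose(overlaps, 1.0 / p)

print(f"verified {len(bases)} = dim+1 mutually unbiased bases in dimension {p}")
```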
Comment #17 January 14th, 2010 at 9:42 am
The Chevalley-Warning Theorem: given homogeneous polynomials f_1, …, f_r in n variables, with sum of deg(f_i) less than n, there is a nontrivial common root. True over any finite field, very false over R.
I’m not sure that this is a CS result, but here are two reasons you might care about it:
(1) It’s a nice example of a nonconstructive existence result; the proof gives no algorithm for finding that root other than brute force search.
(2) This makes certain extremal combinatorics problems behave very differently over finite fields and R. For example, in the Cohn-Umans fast matrix multiplication paper, they have a section where they point out that, while they can’t construct triples of large subgroups of GL(F_p) satisfying the triple product property, they can do it in GL(R). When you unwind why this works, the point is that certain equations don’t have any real roots, because they look like “sum of squares = -1”, but they do have roots in finite fields, because their degree is low.
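A toy illustration of both points: x^2 + y^2 + z^2 has total degree 2 in 3 variables, so Chevalley-Warning guarantees a nontrivial zero over any finite field, while over R a sum of squares vanishes only at the origin. And the only general way we know to exhibit the guaranteed root is the brute-force search below (the helper function is just illustrative).

```python
from itertools import product

def nontrivial_roots(polys, n_vars, p):
    """Brute-force search for nontrivial common roots in GF(p)^n_vars."""
    return [pt for pt in product(range(p), repeat=n_vars)
            if any(pt) and all(f(*pt) % p == 0 for f in polys)]

# x^2 + y^2 + z^2: total degree 2, three variables, so Chevalley-Warning
# applies over GF(3); over R its only zero is (0, 0, 0).
f = lambda x, y, z: x * x + y * y + z * z
print(nontrivial_roots([f], n_vars=3, p=3))   # e.g. (1, 1, 1) is in the list
```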
Comment #18 January 14th, 2010 at 11:39 am
Algebraic closure? True over C, false over any finite field.
I guess that one could attribute the algebraic closure of C to a metric property, but obviously that isn’t enough since R isn’t algebraically closed. If I had to classify it, I would call it a topological property.
On the other hand, it is clear that the finiteness of finite fields is the problem: a simple counting argument proves that polynomials without roots exist over any finite field. I don’t think that the existence of a uniform distribution is really the issue here.
Comment #19 January 14th, 2010 at 3:55 pm
The fact that, over C or R, polynomials are equal as formal objects iff the functions they compute are equal. To illustrate what I mean, consider these two polynomials over the field GF(2): (X_1)^2 + (X_2)^2 and X_1 + X_2.
They are not the same as “formal objects”, but the respective functions from GF(2)×GF(2) to GF(2) that they compute are the same. Over C or R this cannot happen.
This means that some circuit lower bounds that work out in the arithmetic model (over C, say) don’t carry over to the binary model, because over GF(2) it suffices to compute any polynomial in the equivalence class.
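Just to make the example executable (nothing here beyond what is already said above):

```python
from itertools import product

p1 = lambda x1, x2: (x1 ** 2 + x2 ** 2) % 2   # (X_1)^2 + (X_2)^2 over GF(2)
p2 = lambda x1, x2: (x1 + x2) % 2             # X_1 + X_2 over GF(2)

# Formally different polynomials, but identical as functions on GF(2)^2,
# since a^2 = a for every a in GF(2).
assert all(p1(x1, x2) == p2(x1, x2) for x1, x2 in product([0, 1], repeat=2))

# Over R (or C) the identification collapses: the two polynomials already
# disagree as functions, e.g. at (2, 0).
assert (2 ** 2 + 0 ** 2) != (2 + 0)
```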
Comment #20 January 15th, 2010 at 1:36 pm
Hmmmm … here is an example … or really, a question … let’s ask “Is computing the permanent/determinant algorithmically well-conditioned?”
And I hereby ask forgiveness in advance, for a post whose intuitions are wholly physical in origin (rather than mathematical). Because this very possibly is a case in which the mathematicians know more than the practicing engineers.
For finite fields the answer is (seemingly, `cuz I’m no expert in finite fields) … “Of course! Because any and all algebraic operations have zero error!”
For determinants the physical intuition is “Of course! Because determinants are geometric objects, which *always* are well-conditioned.”
For permanents the physical intuition is *also* “Of course! Because we can construct analog devices that compute the permanent, using off-the-shelf optical couplers … and physically realizable analog computations *always* are well-conditioned.”
But when we look to the mathematical literature for a proof, well … Wikipedia is no help … and the arxiv server whole-text search just locks up on “well-conditioned permanent” (maybe it’s off-line).
Falling back therefore on physical intuition, we tentatively decide that the answer is either (and most likely?) “Yes, permanents are well-conditioned”, or else (and less likely?) “No, because constructing a physical optical device having a specified scattering matrix is more difficult than one might anticipate”, or else (least likely?) “No, and this physically corresponds to high-order correlations in boson scattering experiments being far more sensitive to noise than has been generally appreciated.”
For positive-entry matrices, the success of the Jerrum-Sinclair-Vigoda algorithm would seem to offer mathematical support that the answer is “yes”. So is there a proof that the general permanent over R or (better) C is well-conditioned?
In any case, these considerations nicely illustrate what von Neumann called “the reinjection of more or less directly empirical ideas” into mathematics.
Or as engineers prefer to view it, “the reinjection of more or less directly mathematical ideas” into practical engineering. 🙂
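As a purely numerical footnote to the conditioning question above (emphatically not a proof of anything): here is a quick probe of how perm(A) responds to a small entrywise relative perturbation, using Ryser’s inclusion-exclusion formula, O(2^n * n^2) as written. The Gaussian test matrix and the perturbation model are arbitrary choices, just to get a feel for the numbers.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
    of (-1)^|S| * prod_i (row-i sum restricted to S)."""
    n = A.shape[0]
    total = 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            total += (-1) ** size * np.prod(A[:, list(S)].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(0)
n, eps = 8, 1e-8
A = rng.standard_normal((n, n))
dA = eps * rng.standard_normal((n, n)) * np.abs(A)   # small relative perturbation
rel_change = abs(permanent(A + dA) - permanent(A)) / abs(permanent(A))
print(f"relative perturbation {eps:.0e} -> relative change in perm: {rel_change:.2e}")
```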
Comment #21 January 15th, 2010 at 2:23 pm
This is kind of funny. I copied the whole thing so that posterity doesn’t lose this gem: (original link here) The original title was “Aussie quantum experiment challenges Einstein, computer science” but the author changed it.
Aussie quantum experiment challenges computer science
University of Queensland and Harvard experiment set to shake science
By Darren Pauli (Computerworld), 12 January 2010
Australian scientists have completed ground-breaking research using quantum computing that will challenge, among scientific principles, the theory of quantum mechanics.
A joint experiment between the University of Queensland (UQ) and Harvard University, the first of its kind to apply quantum mechanics to chemistry to predict molecular reactions, could have huge implications for science.
UQ physics professor Andrew White, a co-author of the project, said the existence of quantum computing means that either quantum mechanics is wrong, or the Church Turing Thesis, which underpins computer science, is flawed.
“If the Church Turing Thesis is wrong, that’s really big news; or it means that quantum computing will turn out to be impossible for a fundamental reason, or that a fast classical factoring algorithm exists,” White said, referring to a theory by MIT assistant professor Scott Aaronson that the only way to prove the probability of quantum mechanics is to build a quantum computer.
“If you asked [the inventors of the diode] what good they have done, they might have said they can shrink a computer to the size of a living room, but they would never have guessed what computers would become – this is where we are at.
“What we have done is a 2 qubit (quantum bit), toy experiment – it won’t put anyone out of a job anytime soon… but if we scale to tens and then hundreds of qubits, that’s when we will exceed the computational capacity of the planet… that will happen [within] 50 years.”
Due to the nature of science, the ramifications of the experiment are essentially unknown, however, White postulates that it could be used to predict the outcome of chemical reactions, albeit without the inherent randomness that is absent in controlled computer simulations.
He said it is likely that chemistry, rather than cryptography (which requires a prodigious amount of processing) will spearhead quantum computing research.
The experiment ran an algorithm dubbed the iterative phase estimation to measure the precise energy of molecular hydrogen against a predicted model. The results, White said, were “astounding” and were accurate inside of 6 parts in a million. Data was calculated to 20 bits, and in some instances up to 47 bits, and experiments were repeated 30 times for classical error correction.
Quantum computers work “brilliantly” for molecular simulations: Computational power doubles with each qubit, via the phenomena of entanglement, while the complexity of chemical reactions double with each additional atom. Simply put, no other computer, supercomputer, or bunch of supercomputers, could hope to run the simulations to the same degree.
A quantum computer with hundreds of qubits would be more powerful than every traditional computer on Earth, amounting to billions of bits. “A classical computer with 300 bits can store 300 bits of information, whereas a 300 qubit register can store more information than the number of particles in the universe,” White said.
Scientists involved on the project included Benjamin Lanyon, Geoffrey G. Gillet, Michael E. Goggin, Marcelo P. Almeida, Benjamin J. Powell, Marco Barbieri and Harvard’s Alán Aspuru-Guzik. The experiment was funded by the Australian Research Council Federation Fellow and Centre of Excellence programs, and the US Army Research Office and Intelligence Advanced Research Projects Initiative.
Comment #22 January 15th, 2010 at 3:19 pm
The concluding slide of Scott’s talks usually phrases “quantum mechanics is wrong” a little bit more delicately, as “textbook quantum mechanics is false.”
If we factor “textbook quantum mechanics is true” into the (better-posed) paired assertions “QM dynamical flow is symplectic” ∩ “QM state-space geometry is Hilbert”, then there are strong QIP arguments for the former (see Abrams and Lloyd 1998 for example), but there is not much QIP literature relating to the latter (unsurprisingly). Regrettably, it is very common for popular accounts of QC/QIP to blur together these two (very different) mathematical assertions.
Thus, a geometrically well-posed way to motivate QC/QIP experiments is to view them as experimental investigations of the question “Does the state-space of QM have a Hilbert geometry?” From this geometric point of view, today’s QC/QIP experiments are (potentially) similarly seminal to the Eddington experiments of 1919, which established the non-Newtonian geometry of our solar system.
Of course, there are *many* other good ways to think about QC/QIP … the above geometric approach is only one of many (wholly compatible) approaches.
Comment #23 January 16th, 2010 at 6:19 am
What about the sum-product theorem? The proof of the finite-field version is fairly different from the one over the reals, and the constants are very different.
Comment #24 January 16th, 2010 at 8:55 am
The Szemeredi-Trotter theorem in incidence geometry doesn’t hold over finite fields, but holds over the Euclidean plane (and also the complex plane). Thus, any proof of the theorem must use some property not present in finite fields. Presently, there are two lines of proof for the theorem, one which exploits convexity and the other topology. See the discussion here.
My comment is perhaps related to Anindya’s comment about the sum-product theorem above, but this connection isn’t totally clear.
Comment #25 January 17th, 2010 at 11:45 am
Oh, I can point to a few places, just off the top of my head, where textbook QM is plainly stupid, if not actually self-contradictory. But physicists oversimplify everything anyway.
As for the Church-Turing thesis.. so what if it’s overturned in the quantum realm? It’s not any sort of a theorem; it just sums up that every time they came up with a model of computation, it turned out to be exactly equivalent to the ones before. But all of their setups made certain underlying assumptions like classical logic. So maybe if they had the benefit of our hindsight they’d have said “any classical computer…” and still have been right.
Comment #26 January 17th, 2010 at 11:46 am
Oh, and has anyone suggested that a quantum computer can solve any problems that a Turing machine can’t? Yeah it’s faster, but that’s not what Turing machines and the C-T thesis are about.
Comment #27 January 18th, 2010 at 2:37 am
It seems to me that if you are able to simulate a quantum computer with a classical Turing machine, then in terms of what they can compute, Turing machines and quantum computers are equivalent.
Comment #28 January 18th, 2010 at 8:31 pm
Scott, I notice that the “Surveys and Book Reviews” section in your papers directory points to HTML and .doc versions of a Nature article “Why Quantum Chemistry is Hard”. Unfortunately the html version goes to a nature.com paywall and the .doc version (what happened to TeX? ;-)) gets a 404. Is a copy of the paper publicly accessible online?
Comment #29 January 19th, 2010 at 7:25 am
Asdf, Scott’s commentary Why Quantum Chemistry is Hard appeared in Nature Physics vol. 5, p. 707 (2009). It informally explains Schuch and Verstraete’s Computational complexity of interacting electrons and fundamental limitations of density functional theory, on page 732 of the same issue. Also in the same issue of Nature Physics is the very interesting article by A. J. Bennett et al., titled Interference of dissimilar photon sources (showing that photons from wholly independent sources nonetheless exhibit quantum interference).
Scott’s commentary will whet your appetite to read these details, and so (regrettably) it would probably pay to go to a library with a subscription.
I track this stuff pretty carefully, because it relates to one of the great questions of our era: which dynamical systems (classical and quantum) can be efficiently simulated with classical resources?
The arxiv server has recent preprints Unfrustrated Qudit Chains and their Ground States (arXiv:1001.1006v1) and Simulation of strongly correlated fermions in two spatial dimensions with fermionic Projected Entangled-Pair States (arXiv:0912.0646v1) that also bring principles of QM/QC/QIP to bear on these issues.
Empirically, Dick Lipton’s observation seems to be broadly correct: it is a commonplace that classical and quantum systems which are provably infeasible to simulate in the worst case (assuming no collapse of the complexity hierarchy) can be feasibly simulated in generic cases.
E.g., the Schuch and Verstraete article that Scott comments upon constructs a ground state for which DFT methods will fail … but the associated physical system is so highly artificial as to be infeasible to construct with today’s technology.
It seems that the situation nowadays with regard to quantum simulation is rather like the post-Gödel situation in mathematics: mathematicians knew that some true statements were unprovable … and so the challenge was to construct “natural” examples. Similarly we know some quantum dynamical systems are infeasible to simulate … the challenge now is to construct “natural” examples of hard-to-simulate systems that we can test in the laboratory.
It is *surprisingly* hard to construct such systems—and yet physical systems that break simulation codes are exceedingly useful and instructive for practicing engineers.
This quest I take to be the focus of recent work (by Scott/Arkhipov and many others) on computing the permanent by optical scattering; Stefan Scheel’s Permanents in linear optical networks (quant-ph/0406127) is a good introduction.
Comment #30 January 19th, 2010 at 8:09 am
Oh yeah … to link the above post to the topic of this thread … the Mortal Matrix Problem of Blondel and Tsitsiklis is undecidable over the integers, decidable (but NP-Hard) over finite fields.
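The finite-field half really is just finiteness at work: mod p there are at most p^(n^2) distinct products, so a breadth-first search over the generated semigroup terminates. A sketch (brute force, hence exponential in general, which is consistent with the NP-hardness):

```python
import numpy as np

def mortal_mod_p(matrices, p):
    """Is some finite product of the given square matrices zero over GF(p)?"""
    gens = [np.array(M, dtype=np.int64) % p for M in matrices]
    zero = tuple([0] * gens[0].size)
    seen, frontier = set(), list(gens)
    while frontier:
        nxt = []
        for M in frontier:
            key = tuple(map(int, M.flatten()))
            if key in seen:
                continue
            seen.add(key)
            if key == zero:
                return True
            nxt.extend((M @ G) % p for G in gens)
        frontier = nxt
    return False

# Toy instance over GF(2): the product of the two generators is already zero.
A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
print(mortal_mod_p([A, B], p=2))   # True
```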
Now all we gotta do (vaguely) is “quantize” the Mortal Matrix Problem … by somehow coding the integer arithmetic in terms of raising and lowering quantum operators … maybe throw in a closed timelike loop just for fun … and well … we begin to appreciate that the theory of quantum scattering matrices and the theory of computation might be more alike than one might think?
Perhaps the Mortal Matrix Problem is yet another of those classical/quantum problems that, to borrow the concluding phrase of Scott’s Nature Physics commentary: “can look strange at first both to physicists and to computer scientists. But often enough they’ve turned out to have illuminating and non-trivial answers.”
Comment #31 January 19th, 2010 at 11:08 am
Is a copy of the paper publicly accessible online?
asdf #28: Sorry about that! It’s fixed now.
Comment #32 January 21st, 2010 at 1:04 am
Avatar Update: First death reported in El Reg:
http://www.theregister.co.uk/2010/01/19/avatar_death/
Comment #33 January 21st, 2010 at 1:06 am
Avatar Update: Banned in China:
http://www.theregister.co.uk/2010/01/19/avatar_ban/
Comment #34 January 22nd, 2010 at 12:07 am
Maybe it’s a stretch to call this computer science, but the torsion part of the algebraic K-theory of R or C is periodic; not so for finite fields.
Comment #35 January 25th, 2010 at 10:25 am
I don’t think this is *quite* the same thing, but there are quite a few counting complexity results which don’t immediately generalise to counting in modular counting classes because you can’t do interpolation when you only have a finite number of numbers to count with.
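A tiny illustration of the obstruction: over GF(p) the values of a polynomial at the p field elements determine it only up to multiples of x^p - x, so once the degree you care about exceeds the field size, the evaluation points simply run out and the usual interpolation arguments have nothing to grab onto. (The numbers below are just an illustrative example.)

```python
p = 5
# Two formally different polynomials that agree at every point of GF(5),
# because a^5 = a for all a in GF(5), so x^7 and x^3 define the same function.
f = lambda x: (x ** 7 + 2 * x + 1) % p
g = lambda x: (x ** 3 + 2 * x + 1) % p
assert all(f(a) == g(a) for a in range(p))
```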