Lens of Computation on the Sciences

This weekend, the Institute for Advanced Study in Princeton hosted a workshop on the “Lens of Computation on the Sciences,” which was organized by Avi Wigderson and meant to showcase theoretical computer science’s imperialistic ambitions to transform every other field.  I was proud to speak at the workshop, representing CS theory’s designs on physics.  Videos of all four of the talks are now available, and all are worth checking out:

Unfortunately, the videos were slow to buffer when I last tried them.  While you’re waiting, you could also check out my PowerPoint slides, though they overlap considerably with my previous talks.  (As always, if you can’t read PowerPoint, then go ask another reader of this blog to convert the file into a format you like.)

Thanks so much to Avi, and everyone else at IAS, for organizing an awesome workshop!

60 Responses to “Lens of Computation on the Sciences”

  1. Shmi Nux Says:

    Scott, re your last slide, where math determines physics and eventually manifests as computational limits.

    It is easy to imagine a way to break the barriers of relativity and quantum mechanics and, say, travel arbitrarily fast: just take the limits c → infinity and hbar → 0, and you end up with Galilean invariance and Newtonian mechanics. We can trivially simulate such a world in computer games, by sweeping the edge cases, like the ultraviolet catastrophe, under the rug.

    Can you come up with a change in some physical law which would result in P=NP?

  2. Scott Says:

    Shmi #1: Yeah, I tried to address that question on slide #13.

    P vs. NP itself is a purely mathematical statement, which (by definition) is independent of physics.

    By contrast, whether NP-complete problems can be efficiently solved in the physical world is of course a question that does involve physics. And yes, there are hypothetical changes to the laws of physics that would make NP-complete problems tractable—some of which I covered in the talk.

    As one example, if the Schrödinger equation were even slightly nonlinear, Abrams and Lloyd pointed out in 1998 that one could “generically” solve NP-complete problems in polynomial time. Likewise if there were closed timelike curves, or if our universe had a postselected final state (as Yakir Aharonov has speculated, and as Horowitz and Maldacena also toyed with in their proposal for the black-hole information problem).

    As another example, if we lived in a hyperbolic geometry, one could reach an exponential amount of space in a polynomial amount of time, and could therefore imagine setting up exponentially-many parallel processors and having them all work on your problem at once.
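
    To make the “exponential amount of space” point concrete, here is a toy back-of-the-envelope sketch (an added illustration, not part of the original comment): in a discrete stand-in for hyperbolic space, such as an infinite 3-regular tree, the number of sites within distance r grows exponentially, whereas in a Euclidean grid it grows only polynomially.

    ```python
    # Toy illustration: in a discrete stand-in for hyperbolic space -- an infinite
    # 3-regular tree -- the number of sites within distance r of the origin grows
    # exponentially in r, whereas in a 2D Euclidean grid it grows only quadratically.
    # Exponential volume is what would let you pack exponentially many processors
    # within polynomial "travel time".

    def tree_ball_size(r, degree=3):
        """Number of vertices within distance r of the root of an infinite regular tree."""
        total, frontier = 1, degree          # the root, then `degree` neighbours at distance 1
        for _ in range(1, r + 1):
            total += frontier
            frontier *= degree - 1           # each new vertex has degree-1 fresh children
        return total

    def grid_ball_size(r):
        """Number of lattice points within L1-distance r of the origin in Z^2."""
        return 2 * r * (r + 1) + 1

    for r in (5, 10, 20, 40):
        print(r, tree_ball_size(r), grid_ball_size(r))
    # The tree count roughly doubles with every extra step; the grid count grows ~ r^2.
    ```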

    Needless to say, these would all represent pretty big changes to the laws of physics as we currently understand them.

  3. Michael Bacon Says:

    ” . . . whether NP-complete problems can be efficiently solved in the physical world is of course a question that does involve physics.”

    You are, my friend, the master of understatement. 😉

  4. Joshua Zelinsky Says:

    Scott #2

    I wonder if it would be meaningful to measure how different a hypothetical set of physical laws is by which complexity classes collapse in that setting. In that sense, CTCs are a bigger change to physics (because they let one solve PSPACE problems in polynomial time) than living in hyperbolic space is, because (unless there’s something subtle or clever I’m missing) the latter only lets one solve NP problems in polynomial time. (By subtle and clever things I don’t mean collapses that we widely expect not to occur, but rather aspects of hyperbolic geometry.)

  5. Scott Says:

    Joshua #4: Actually, hyperbolic geometry should let you solve everything in PSPACE in only “polynomial time.” For you should be able to do whatever can be done using polynomial time together with exponentially-many parallel processors, but that’s well-known to coincide with PSPACE.

    (Basic idea: the output of the computation is expressible as the value of some exponentially-large, polynomial-depth Boolean formula. Any such formula can be evaluated in PSPACE, using depth-first recursion. Conversely, such formulas can simulate PSPACE, by the PSPACE-completeness of the TQBF problem.)
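
    For readers who want to see the parenthetical’s “basic idea” in code, here is a minimal sketch (an illustration added for concreteness, with a made-up leaf rule): a balanced formula of depth d has 2^d leaves, yet depth-first recursion evaluates it with memory proportional only to d.

    ```python
    # Minimal sketch of the "basic idea": a balanced Boolean formula of depth d has
    # 2^d leaves, but depth-first recursion evaluates it using memory proportional
    # to d only -- the leaves are generated on the fly, never stored all at once.

    def leaf(index):
        # Hypothetical stand-in for "the leaf value is computable from its index
        # in polynomial time"; any such rule works for the illustration.
        return bin(index).count("1") % 2 == 0

    def evaluate(depth, index=0):
        """Evaluate an implicitly described formula: AND at even depths, OR at odd ones."""
        if depth == 0:
            return leaf(index)
        left = evaluate(depth - 1, 2 * index)
        right = evaluate(depth - 1, 2 * index + 1)
        return (left and right) if depth % 2 == 0 else (left or right)

    print(evaluate(20))   # touches 2^20 leaves, but the recursion stack is only 20 deep
    ```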

    More generally, though, you’re absolutely right that it’s fun to compare different hypothetical changes to the laws of physics by just how much they would change complexity! As one example, Watrous and I showed that, if you model closed timelike curves as David Deutsch proposed, then they give you exactly the power of PSPACE. By contrast, Lloyd et al. have a paper showing that, if you model CTCs using postselection, then you get “merely” the power of PostBQP=PP, which is presumably less (but still enormous).

  6. Joshua Zelinsky Says:

    Scott, yes I was thinking about your paper with Watrous in that context.

    Thanks also for that construction. So I wasn’t missing anything subtle or clever, just something that should have been obvious.

  7. vzn Says:

    nice. CS megalomania. seriously. let’s rule the world BWAHAHA.
    yes, the computational lens is a big deal and goes back decades in various forms.
    it’s interesting how everyone who talks on the subject so assiduously avoids citing Wolfram. am not exactly a Wolfram fanatic, but not a detractor or stigmatizer either.
    planning on blogging on this myself sooner or later.
    see also section B “thinking/ worldview” in this collection joy of code 2013

  8. Douglas Knight Says:

    There are many logical constraints on what class might be accessible to physics. For example, it has to be closed under complement, and thus cannot be NP (under usual assumptions). Is there a good axiomatization of such constraints? How many examples of such classes are there?

  9. Sniffnoy Says:

    As long as we’re talking about computation in hyperbolic space, I hope you don’t mind if I ask a tangentially related question I have no idea if you can answer. 🙂 (In that it falls under the area called “algorithms” rather than the area called “complexity”.)

    I was wondering some time ago why the standard model of computation for algorithms purposes is the multi-tape Turing machine, as opposed to the random-access machine. After thinking about it for a bit, I came up with the following guess, which I never really followed up on and tried to read about.

    What I guessed was, we basically live in (d-dimensional) Euclidean space, so if you had a physical random-access-like machine, it could store at most ~r^d bits within distance r of the processor. Assume that higher addresses are further away, and note that there is a speed limit. Then to read from or write to address number n, you’d need time proportional to n^(1/d). And you could define a computational model that works like this — random-access with delay, where a read or write may count as multiple steps depending on the address number.

    And what I figured was, presumably the multi-tape Turing machine model is equivalent (for algorithms purposes) to the d=1 case, and the cases of higher d are probably also equivalent. But the random-access model would make sense if we lived in hyperbolic space, since then it would only take a time delay of log n, which is essentially the same as no time delay at all, since you must have used log n time to prepare the address n in the first place. (Although now I’m not sure if this logic is right, since perhaps you could reuse n an unbounded number of times in an essential way?) (Somehow I never thought to fill the exponential space with more processors rather than just more memory! 🙂 )

    Anyway, basically my question is, is any of this right? Like, I recall reading that multi-dimensional Turing machines are not known to be equivalent (for algorithms purposes) to multi-tape Turing machines, which would suggest that even if the claims above are correct, they’re not proven (and thus are presumably not responsible for the choice of multi-tape Turing machine as standard model). (Strictly speaking I didn’t make any claims about multi-dimensional Turing machines, but we can add that in as a claim: that a random-access machine with n^(1/d) delay is equivalent to a d-dimensional multi-tape Turing machine.) So, I don’t know what’s going on.

    (Well, apparently in the random-access model it’s possible to perform integer multiplication in linear time?? Which is clearly unreasonable. But doesn’t necessarily suggest a particular alternative.)
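
    One way to make the “random-access with delay” model above concrete (purely an illustrative sketch; the class name and parameters are made up) is to charge every access to address n an extra n^(1/d) steps:

    ```python
    # Toy accounting for the "random access with delay" model sketched above (an
    # illustration with arbitrary parameters): every read/write of address n is
    # charged an extra n**(1/d) steps, reflecting travel time in d-dimensional space.

    class DelayedRAM:
        def __init__(self, d):
            self.d = d
            self.mem = {}
            self.time = 0

        def _charge(self, addr):
            self.time += 1 + addr ** (1.0 / self.d)

        def read(self, addr):
            self._charge(addr)
            return self.mem.get(addr, 0)

        def write(self, addr, val):
            self._charge(addr)
            self.mem[addr] = val

    # Summing N values stored at addresses 0..N-1 costs ~N^(1+1/d) total "time".
    for d in (1, 2, 3):
        ram = DelayedRAM(d)
        for a in range(10_000):
            ram.write(a, 1)
        total = sum(ram.read(a) for a in range(10_000))
        print(d, total, round(ram.time))
    ```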

  10. Scott Says:

    vzn #7: You got it backwards. It’s not that we avoid citing Wolfram; it’s that he avoids citing everyone else. The central insights that he offers as his own go back to Turing, von Neumann and others—and if you noticed, almost all of the speakers did cite those people.

    Still, it’s not like I religiously avoid mentioning Wolfram or anything. I’ve brought up ANKOS in lots of other talks (often in the context of explaining why its classical CA model for Bell inequality violation doesn’t work). If you’re interested in my 2002 review of ANKOS, see here.

  11. Scott Says:

    Douglas Knight #8: That’s an excellent point. Years ago, inspired by a conversation with Dave Bacon, I came up with the notion of a “physical complexity class,” which is a class C that needs to have the property that C^C = C. (A much stronger requirement than being closed under complement.) Basically, this says that there has to be a sensible notion of subroutines and recursion, and that hooking up a machine in your class to a subroutine also in the class must not give you a new, more powerful class.

    I actually don’t know of all that many complexity classes that satisfy this constraint. Some examples of the ones that do are L (logspace), NC (polylog depth), P, BPP, BQP, PSPACE (with restricted oracle access mechanism).

  12. Nick Says:

    There seem to be two different notions of time and efficiency floating around. On the one hand there’s time in the sense of tick-tick-tick, and on the other hand there’s time in the sense of the number of steps it takes to carry out a computation. To compute faster, you can either reduce the number of steps you need to take, or reduce the time it takes to perform a step.

    When we talk about speed or efficiency or feasibility, which sense of time is meant? When you address the relativity spaceship method and the Zeno computer as means of solving NP problems efficiently, you talk about energy requirements and stuff like that. Now if we’re talking about efficiency in terms of tick-tock time, that’s fine. But if we mean efficiency in terms of number of steps, then aren’t these ideas completely irrelevant? They would allow you to compute faster, but only by letting you perform steps at miraculous speeds.

    P and NP are defined in terms of steps, right? If that’s true, and if P is the class of efficiently-solvable problems, then how are these ideas relevant? I guess they would show that the PvsNP question isn’t really important, but they still wouldn’t answer it.

  13. Scott Says:

    Nick #12: Well, in ordinary theoretical computer science, you sidestep the entire issue by discretizing time, and then assuming that you can perform (let’s say) 1 primitive logical operation per time step.

    But then people who know a little bit about physics (but not too much!) will complain: why is that justified? Once we remember that time is continuous, why couldn’t we do our computation 1000 times faster by just rescaling, so that each time step takes one-thousandth as much time? E.g., just multiplying our evolution Hamiltonian by 1000? Or why couldn’t we use relativistic time-dilation, etc.?

    So then, purely in order to reply to those people, one needs to say something about the energy cost of accelerating the time steps in that way. The upshot of the discussion is that what you gain in time, you’re going to pay for in energy—and moreover, that once you hit the theoretical limit of ~10^43 time steps per second, you would need so much energy that your computer would collapse to a black hole, so you can’t trade time for energy beyond that limit.
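
    For the curious, the ~10^43 figure is just the inverse of the Planck time; a quick back-of-the-envelope check (added for illustration):

    ```python
    # Back-of-the-envelope source of the ~10^43 figure: the Planck time
    # t_P = sqrt(hbar * G / c^5) is roughly the shortest physically meaningful
    # time step, so ~1/t_P operations per second is the corresponding ceiling.
    import math

    hbar = 1.054571817e-34   # J*s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2
    c    = 2.99792458e8      # m/s

    t_planck = math.sqrt(hbar * G / c**5)
    print(t_planck)          # ~5.4e-44 seconds
    print(1 / t_planck)      # ~1.9e43 "time steps" per second
    ```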

    So, bottom line, the theoretical computer scientists’ original choice to treat time as discrete was fine to begin with. 🙂

  14. Joshua Zelinsky Says:

    Scott, #11,

    What natural examples are there of classes that are closed under complement but aren’t believed to be low for themselves? ZPP is closed under complement but is low for itself. The only obvious example to me is PP.

  15. Scott Says:

    Joshua #14: P^#P, EXP, EXPSPACE (and lots more things above those), PPAD, SZK, AWPP, SPP, probably some others I’m forgetting.

  16. Sam Hopkins Says:

    Can you give an example of how the laws of physics might be changed to *shrink* the set of problems we can solve efficiently? Or even just a toy model where we cannot efficiently solve problems in P?

  17. Scott Says:

    Sam #16: Sure, I can do that. Imagine a cellular-automaton universe that consisted entirely of XOR operations (i.e., bitwise additions mod 2), with no ANDs or ORs possible. Such a universe would only let us solve ⊕L problems, not all of P. Or imagine a universe that consisted entirely of balls falling down a series of tubes, being switched by (and switching) toggles as they go—as in the DIGICOMP II mechanical computer that’s in the lobby of the Stata Center. In that case, I showed that you could solve exactly the problems in CC (Comparator Circuit), which is also believed to be smaller than P. Finally, imagine that the laws of physics are highly nonlocal, but the universe comes to an end after only ~log(n) computation steps can have been executed. In that case, physics could be simulated in some complexity class like NC or BQNC, which are not believed to contain P.
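
    As a concrete taste of the first example (an added illustration, not from the original comment), here is an XOR-only cellular automaton, Rule 90, in which each cell’s next value is the XOR of its two neighbours; every update is linear over GF(2), which is the intuition for why such dynamics stay within ⊕L rather than giving you all of P:

    ```python
    # Minimal sketch of an XOR-only universe: Rule 90, where each cell's next value
    # is the XOR of its two neighbours.  Every update is linear over GF(2).

    def rule90_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                      # a single "on" cell
    for _ in range(16):
        print("".join(".#"[c] for c in cells))
        cells = rule90_step(cells)     # prints the familiar Sierpinski-triangle pattern
    ```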

    If you’ve gotten the impression that preventing the laws of physics from being able to do all of P requires really seriously hobbling the laws—well, that’s correct! 🙂

  18. Rahul Says:

    Off topic but on the third slide are you actually using those gigantic rubber gloves to protect your hands from soapy water? 🙂

    Reminded me of boxing gloves!

  19. Gil Kalai Says:

    Regarding the Church-Turing thesis, physics, the efficient Church-Turing thesis, and Wolfram: for all I know, Wolfram was the first to formulate the efficient Church-Turing thesis. (I learned it from Itamar Pitowsky, who was also one of the early people to state and study the efficient Church-Turing thesis.) Another thing I learned from Oron Shagrir is that neither Church nor Turing formulated or thought about the Church-Turing thesis as a statement about physical computational devices. Thinking about the CT thesis in this way came later, according to Oron (and my memory of what he told me), but I don’t know who first proposed it.

  20. Rahul Says:

    There’s a quote in the Powerpoint (didn’t hear the whole talk yet) about Martin-Lopez et al in 2012 factoring 21 using a QC.

    Now that we are approaching the end of 2014 have we moved beyond 21? Anyone know?

  21. anon Says:

    Scott, thanks for the very interesting talks.
    Going a bit (ok, completely..) off topic: will you discuss in the near future the latest work of Troyer et al about simulated (quantum) annealing?

    http://xxx.lanl.gov/abs/1411.5693

    Thanks,
    K.

  22. Scott Says:

    Rahul #18: No, I was wearing the rubber gloves because I’d previously cut my hand on the wet edge of the glass. In retrospect, I should’ve used plexiglass.

  23. Scott Says:

    Gil #19: I’d always thought of the Extended Church-Turing Thesis as having been implicitly formulated in the 60s and 70s, when people converged on P as the right notion of “efficiently computable,” and also realized the problems with physical computing proposals that tried to “solve NP-complete problems in polynomial time” by, e.g., using analog counters to represent exponentially-many bits.

    Furthermore, even if we go later in time, I’m not actually sure where Wolfram discusses the ECT! Do you have a reference? At least in ANKOS, he’s interested in computability in the Turing sense, hardly at all in efficient computability. (So for example, he isn’t concerned at all—doesn’t even note—that the proof of universality for Rule 110 goes through cyclic tag systems, and therefore suffers an exponential slowdown compared to ordinary Turing machines. A universality proof for Rule 110 that doesn’t suffer this slowdown was only done a few years later.)

  24. Scott Says:

    anon #21: That paper looks great! I’ll put it on my stack to maybe blog about at some future time…

  25. John Sidles Says:

    Gil Kalai (#19) notes that [seemingly] “Neither Church nor Turing formulated or thought about the Church-Turing thesis as a statement about physical computational devices.”

    One early article that (as it seems to me) deserves to be mentioned in any discussion of the evolutionary tree of Church-Turing postulates is Robert Geroch and James Hartle’s “Computability and physical theories” (1986):

    The Geroch-Hartle prediction (1986)  There are indications that this theory [quantum gravity] may have measurable numbers that are not computable. [We ask] what then would be the consequences for physics? […] Whether physical theories will be required to possess some algorithm by which they can be implemented will presumably be settled […] not through general arguments, but rather through experience with specific examples.

    Nowadays—substantially in accord with the 1986 Geroch-Hartle prediction—the specific example of boson-sampling, and the broad emergence of Lindblad-Carmichael unravelling as a fundamental tool of distribution-sampling, are alike encouraging us to regard the fundamental capability of physical science not as the computation of numbers, but rather as the sampling of distributions (the former being regarded as a limiting case of the latter).

    This evolution makes it natural to embrace entropy as the fundamental measure of both experimental and computational sampling methods.

    Concretely, suppose that Alice experimentally samples a distribution, and Bob computationally samples a distribution that is statistically indistinguishable from Alice’s. Both increase the local entropy (as measured in ordinary J/K units). Then it is natural to wonder:

    Question  Is there any experimental sampling protocol for which Alice’s entropy-cost is less than Bob’s?

    Answer  Centuries of practical experience says “no”. Yet boson-samplers boldly hope that someday (soon?) the answer will be “yes”, both practically and even provably.

    For us engineers, this bold hope is what’s great about boson-sampling! And for this bold hope, we all of us owe thanks (as it seems to me) to Scott Aaronson and Alex Arkhipov.

    In closing, Scott’s IAS talk includes a wonderful passage (as it seems to me) on slide #10:

    A QC believer’s hope  A few skeptics, in CS and physics, even argue that building a QC will be fundamentally impossible. I don’t expect them to be right, but I hope they are! If so, it would be a revolution in physics.

    Geroch-Hartle’s advice points toward the dual truth (equally valid, as it seems to me):

    A QC skeptic’s hope  A few believers, in CS and physics, even argue that building a QC will be practically feasible. I don’t expect them to be right, but I hope they are! If so, it would be a revolution in physics.

    Falsifiable prediction I  The hope of QC believers (for a revolution in our physical understanding) will be realized before the hope of QC skeptics (for a revolution in practical computation).

    Falsifiable prediction II  Both hopes will be fulfilled eventually… albeit the revolutions in physical understanding and computational capability will not arrive in the forms that, at present, are generally foreseen.

    As history teaches us all to expect, needless to say.

    Most importantly  Best wishes for a happy holiday season are extended to all Shtetl Optimized readers.


    @article{Geroch:1986uq, Author = {Geroch, R. and
    Hartle, J.B.}, Journal = {Found. Phys. (USA)}, Number =
    {6}, Pages = {533--550}, Title = {Computability and
    physical theories}, Volume = {16}, Year = {1986}}

  26. Job Says:

    Scott,

    In Simon’s algorithm, the bits in the X register (initially at |000…0>) are run through a Hadamard, then sent through the black-box function gate, and again through another Hadamard.

    I’m sure this is a common question, but why doesn’t the second Hadamard always produce |000…0>?

    I understand that the action of the function gate could modify the phase of the bits in the X register (as with Deutsch’s phase kick-back), but in Simon’s algorithm the Y register is always |0>, so why would the X register change?

    I’m assuming that the function gate is implemented entirely in Toffoli or Fredkin gates since it is the quantum implementation of a classical function.

    My understanding is that if X is unchanged then, given that the bits in the X register are run through two Hadamards, which are unitary, then the result should be the original |000…0> state.

  27. Mike Says:

    One thing I’ve never understood about that Abrams and Lloyd result about nonlinearity in the schrodinger equation vs. NP is that if you include an environment and integrate over it the central quantum variables can and usually do have nonlinearity. the schrodinger equation’s linearity disappears in the presence of environments.

  28. Scott Says:

    Job #26: It’s because the function gate entangles the X register with the Y register. Thus, after the function is computed, the state of the X register is no longer a uniform superposition over all n-bit strings, but (conditioned on the state of the Y register) only a superposition over two strings. If it were a uniform superposition over all strings, then it’s true that Hadamarding all the qubits would return its state to |0…0>. But it’s simply not that anymore: the computation of the function in the Y register has changed the X register. And in fact, that was the entire point of computing the function: we don’t care about the actual function value that we measure in the Y register, only about the effect that computing it had on the X register.

    Keep in mind that in quantum mechanics, there’s no way to have register X “act on” register Y (e.g., via CNOT or Toffoli gates), without the possibility of the state of register X also getting changed as a result. (That might be the root cause of the confusion here.) In some cases, this “back-reaction” onto the control register is something we wish we could avoid, but in the case of Simon’s algorithm, it’s exactly what we want.
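
    For concreteness, here is a tiny state-vector sketch (an added illustration, with a specific made-up oracle) of the n = 2 case: hidden string s = 01 and f(x) equal to the top bit of x, so f(00)=f(01)=0 and f(10)=f(11)=1. After the oracle entangles X with Y, Hadamarding X does not return |00>; only outcomes z with z·s = 0 survive.

    ```python
    # State-vector sketch for n = 2: hidden string s = 01, f(x) = (top bit of x).
    # Registers are ordered |x1 x0 y>.  The point: after the oracle entangles X
    # with Y, Hadamarding X does NOT return |00>; instead only outcomes z with
    # z.s = 0 (here 00 and 10) survive -- exactly the interference Simon's
    # algorithm exploits.
    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I2 = np.eye(2)
    HHI = np.kron(np.kron(H, H), I2)          # Hadamard both X qubits, leave Y alone

    # Oracle: y ^= f(x) with f(x) = x1, i.e. a CNOT from x1 into y, as a permutation matrix.
    oracle = np.zeros((8, 8))
    for x1 in (0, 1):
        for x0 in (0, 1):
            for y in (0, 1):
                oracle[4*x1 + 2*x0 + (y ^ x1), 4*x1 + 2*x0 + y] = 1

    state = np.zeros(8); state[0] = 1          # |00>_X |0>_Y
    state = HHI @ oracle @ HHI @ state         # Hadamards, oracle, Hadamards

    probs_X = {f"{x1}{x0}": sum(state[4*x1 + 2*x0 + y]**2 for y in (0, 1))
               for x1 in (0, 1) for x0 in (0, 1)}
    print(probs_X)   # {'00': 0.5, '01': 0.0, '10': 0.5, '11': 0.0} up to rounding
    ```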

  29. Scott Says:

    Mike #27: Ah, the key is that the kind of “effective nonlinearity” that you get by considering an open quantum system together with its environment, is not the kind of nonlinearity that Abrams and Lloyd are talking about. With open quantum systems, you still have a superoperator (or if you prefer continuous-time evolution, a Lindblad operator) acting on your system—and thus, you could say, the evolution is still “linear” (though no longer unitary) at the level of the density matrix. (At least, the evolution acts linearly on the vector of entries of the density matrix; it acts on the density matrix ρ itself via ρ → Σ_i E_i ρ E_i^†.)

    Note that Lindblad evolution of this kind can always be simulated using ordinary unitary evolution, except on a larger system containing the system you’re interested in. (Indeed, if you believe in the “Church of the Larger Hilbert Space,” then it literally is unitary evolution on a larger system.) For that reason, there’s no possibility here of doing anything not in quantum polynomial-time.

    What Abrams and Lloyd were talking about, by contrast, was evolution that’s nonlinear even at the level of pure states—such as would arise, for example, if you added a small term that was quadratic in ψ to the Schrödinger equation, alongside the usual linear term Hψ. If you had that kind of nonlinearity, then “generically,” you could take two state vectors that were initially almost identical, and then apply operations that repeatedly doubled the distance between them, until the vectors were nearly orthogonal and could easily be distinguished by a measurement. And that’s the key to how you could solve NP-complete problems in polynomial time. Note that nothing similar is possible with the Lindblad kind of “nonlinearity,” which can never magnify distances between quantum states, only bring them closer together.
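
    A toy numerical contrast of the two situations (an added sketch, and emphatically not the actual Abrams-Lloyd construction): a Kraus/CPTP map can only shrink or preserve the trace distance between two states, whereas a toy nonlinear map on pure states, here “double the Bloch-sphere angle,” keeps amplifying the gap between two nearly identical states until they are almost perfectly distinguishable.

    ```python
    # Toy contrast: a Kraus/CPTP map never increases the trace distance between two
    # density matrices, whereas a toy nonlinear map on pure states -- "double the
    # Bloch-sphere angle" -- keeps amplifying the gap between two nearly identical
    # states.
    import numpy as np

    def trace_distance(rho, sigma):
        eigs = np.linalg.eigvalsh(rho - sigma)
        return 0.5 * np.sum(np.abs(eigs))

    # Amplitude-damping channel with damping probability p (a standard CPTP map).
    p = 0.3
    E0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
    E1 = np.array([[0, np.sqrt(p)], [0, 0]])
    channel = lambda rho: E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T

    def ket(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])

    rho, sigma = (np.outer(ket(t), ket(t)) for t in (0.0, 0.01))
    for step in range(5):
        print("CPTP     ", step, trace_distance(rho, sigma))   # never increases
        rho, sigma = channel(rho), channel(sigma)

    theta_a, theta_b = 0.0, 0.01
    for step in range(9):
        gap = trace_distance(np.outer(ket(theta_a), ket(theta_a)),
                             np.outer(ket(theta_b), ket(theta_b)))
        print("nonlinear", step, gap)            # roughly doubles each step, until near 1
        theta_a, theta_b = 2 * theta_a, 2 * theta_b   # toy nonlinear "angle doubling"
    ```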

    (Note: when I said “generically” above, I meant: this can be done with almost any nonlinear Schrödinger equation one could think of writing down. Whether there exist any nonlinear Schrödinger equations for which it can’t be done is an interesting question that I’d love to understand better.)

  30. John Sidles Says:

    Scott wonders “Whether there exist any nonlinear Schrödinger equations for which it [rendering \(\mathsf{\,P\sim NP\,}\)] can’t be done is an interesting question that I’d love to understand better.”

    Hmmm … I see we’re still working with polymers …

    … what is this, the Dark Ages?

    ————

    Prediction  Individual skeptical curiosity (like Scott’s), equipped with the 21st century’s burgeoning mathematical armamentarium, is advancing our understanding of quantum dynamics—particularly in regard to the strait-laced creed of the Church of the Larger Hilbert Space—farther and faster than my mere literature-links ever could.

  31. jonas Says:

    Scott: re #23, excuse me, but doesn’t going through cyclic tag systems give only a polynomial slowdown? I think a cyclic tag system can simulate a single-tape Turing machine with a quadratic slowdown.

  32. Gil Kalai Says:

    Hi Scott (#23),

    The link to Wolfram 85 paper is http://www.stephenwolfram.com/publications/academic/undecidability-intractability-theoretical-physics.pdf. A link to Pitowsky’s 1990 paper is https://gilkalai.files.wordpress.com/2014/11/pitowsky-1990-iyyun.pdf and, of course, Deutsch’s 1985 paper http://www.cs.berkeley.edu/~christos/classics/Deutsch_quantum_theory.pdf is also very relevant.

  33. Scott Says:

    Jonas #31: Nope, there’s an exponential slowdown, because a Turing machine state gets encoded in unary at some point (that’s even in ANKOS if you look carefully, and I noted it in my review). See here for the 2006 paper by Neary and Woods that made the simulation efficient.

  34. Scott Says:

    Gil #32: Thanks for the refs! The paper by Wolfram is nice, though it’s more a survey / expository essay than a research paper, and at any rate, I don’t see that it explicitly formulates the classical ECT or considers things like quantum mechanics that could challenge it (though I recall seeing another Wolfram paper, also from the 80s, that briefly speculates that maybe the exponentiality of the Feynman path integral could be used to solve NP-complete problems).

  35. Itai Says:

    Nick #12 and Scott #13
    The huge effect of energy on the time of computation CANNOT be neglected in an accurate theory of computation, even though its effect is limited, as Scott said, by the black-hole thing 🙂

    Energy and time have an inverse relation: when you have more energy, you can (almost always) save time. For example, by getting a faster clock, using relativistic time effects, or just using parallel computing.

    What we really want in building efficient computers is not to minimize computation time, but to minimize action! (Yes, physical action = energy*time.) What theoretical CS counts when it says “time complexity” is NOT time but basic operations (or action) of the physical computer!

    Also, magically, the basic axiom of physics is the principle of least/stationary action, which leads to all known theories such as QM and GR (when taking the right Hamiltonian/Lagrangian for each theory).

    So, there is a suggestion by Prof. Tom Toffoli (known for the Toffoli gate in QC) to consider physical action as the physical measure of computation, in the following series of papers:
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.205.6511
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.2410
    http://link.springer.com/article/10.1023/A%3A1024411819910#page-1

    You can see papers that are built on Toffoli’s assumption here:
    http://arxiv.org/abs/quant-ph/0409056
    http://link.springer.com/article/10.1007%2Fs40509-014-0018-2#page-1

  36. JollyJoker Says:

    I find it pretty interesting that QM seems to have exponential resources compared to the classical world, yet there’s some odd conspiracy preventing us from actually using it for exponential speedups except in some very specific cases.

    How unlikely is it that a polynomial factoring algorithm exists and P=BQP?

    (Layman warning, may have misunderstood something)

  37. Scott Says:

    JollyJoker #36: Ah, but once you learn it, it’s really not an odd or unexplained conspiracy—it’s just how quantum probability works. No one expects you can get magical exponential speedups using a classical computer with a random-number generator, just because describing the “state” of a randomized algorithm with n bits of memory requires a list of 2^n probabilities. For people intuitively understand that, as soon as you look at the computer, you’re just going to see the actual bits, not the list of probabilities: the latter are only used to calculate how likely you are to see one outcome versus another one. Quantum computing is a very similar story, with the one difference that the probabilities are replaced by complex amplitudes, which can interfere with each other. And the interference does increase what you can do in subtle and non-obvious ways. But the amplitudes are still no more directly accessible to you than classical probabilities were—and it’s no more a conspiracy in the one case than in the other.
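
    A small way to see that “one difference” in code (an added illustration): apply a fair coin flip twice to a classical probability vector and you stay uniform; apply its amplitude analogue, the Hadamard, twice and the minus sign cancels the paths to |1>, returning you to |0> with certainty.

    ```python
    # Probabilities vs. amplitudes: a fair coin-flip applied twice leaves you
    # uniformly random, but its amplitude analogue (the Hadamard) applied twice
    # brings you back to |0> with certainty, because the two paths to |1> carry
    # opposite signs and cancel.
    import numpy as np

    coin = np.array([[0.5, 0.5],
                     [0.5, 0.5]])                    # stochastic "fair coin" update
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)             # unitary analogue, with a sign

    p0   = np.array([1.0, 0.0])                      # classical: definitely 0
    psi0 = np.array([1.0, 0.0])                      # quantum: |0>

    print(coin @ coin @ p0)                          # [0.5 0.5] -- still random
    print((H @ H @ psi0) ** 2)                       # [1. 0.]  -- interference restores |0>
    ```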

    As for your question, it’s actually two different ones. If P=BQP then certainly there’s a fast classical factoring algorithm. But even if there’s a fast classical algorithm, many of us would still conjecture that P≠BQP (or at the very least, that PromiseP≠PromiseBQP—which for lay purposes amounts to basically the same thing).

    Factoring is “just” one specific, highly-structured problem. If a fast classical algorithm for it were discovered, that would have pretty big consequences for the world’s digital economy—but the uncomfortable fact remains that, as far as we know, it need not have any disastrous theoretical consequences for anything else (unlike, say, a fast algorithm for 3SAT). For this reason, I believe a fast classical factoring algorithm has to be treated as a live possibility, not just a “maybe the moon is made of cheese” kind of thing.

    On the other hand, it’s also true that, after decades of serious thought about classical factoring algorithms, we don’t have any that run faster than ~exp(n^(1/3)), even heuristically and in practice (i.e., completely setting aside the issue of proof). This is in contrast to other problems like linear programming and primality testing, which we knew how to handle in practice long before we had rigorous proofs that they were in P. And of course, Shor’s algorithm was discovered just a few short years after computer scientists started thinking seriously about quantum algorithms. So the argument “if there’s such a great classical factoring algorithm, why hasn’t it been found already?,” while far from decisive, does have some nonzero weight with me.

    But as I said, factoring is just one specific problem. And even if we knew how to solve it in P, coming up with an efficient classical simulation for all of quantum mechanics would still seem to me like an extraordinarily tall order. Indeed, part of the point of my work with Alex Arkhipov on BosonSampling, and of the independent work of Bremner-Jozsa-Shepherd, was to show that if your classical simulation of quantum mechanics was really, really good, then it would do much more than just give you fast classical algorithms for factoring and a few other number theory problems—rather, such a simulation would cause a collapse of the polynomial hierarchy.

  38. JollyJoker Says:

    Scott #37: First off, thanks for the very detailed and informative reply!

    Does 2^n of something “really exist” then? (I’m aware that using those words might be a bit iffy when talking about anything quantum, but I’m not sure that’s relevant here).

    To some degree I’m thinking about David Deutsch’s argument on slide 15 of your presentation vs your answer there. It seems a bit strange that the 2^n “worlds” would be accessible/observable only in some fairly rare circumstances and that brings to mind how, in physics, there’s always the reminder that if you’re not talking about real observables you might be doing something wrong.

    However, if it seems likely that P!=BQP, that would tell us that 2^n of something in some sense exists. (It seems a bit weird to talk about many worlds if they are all part of our world.) Yet what we can access them all for seems severely limited.

  39. a Says:

    “As another example, if we lived in a hyperbolic geometry, one could reach an exponential amount of space in a polynomial amount of time, and could therefore imagine setting up exponentially-many parallel processors and having them all work on your problem at once.”

    This is amazing… except that I do not understand this. Could you please explain?

  40. Scott Says:

    JollyJoker #38:

      Does 2^n of something “really exist” then?

    Yeah, that’s the million-dollar (or 2^n-dollar?) metaphysical question. All anyone else can do is lay out the issues for you; ultimately, you need to decide what you want to mean by words like “exist,” and which philosophical position you find the least unsatisfying. For starters, check out Section 8 of my Why Philosophers Should Care About Computational Complexity essay.

  41. JollyJoker Says:

    Scott #40: Thanks, I might have to read the whole essay 🙂

  42. JollyJoker Says:

    Scott #40: Erm, section 8 spells out exactly what I thought. I mean as in completely jaw-dropping “I didn’t write this myself, did I?”. Lol, I had no idea I was this smart 😉

  43. Pascal Says:

    It seems that the idea of looking at the Church-Turing thesis as a statement about the laws of physics is due to Robin Gandy, a PhD student of Turing (see Wikipedia on the history of the thesis).

  44. John Sidles Says:

    Comment #37 amounts to (see also Bremner-Jozsa-Shepherd, arXiv:1005.1407 [quant-ph], also Aaronson-Arkhipov, arXiv:1011.3245 [quant-ph])

    The Collapse Thesis (CT) “If a classical simulation of quantum mechanics was really, really good, then it would […] cause a collapse of the polynomial hierarchy.”

    which naturally generalizes to

    The Extended Collapse Thesis (ECT)  “If a classical simulation of a random number stream was really, really good, then it would […] cause a collapse of the entire hierarchy of complexity theory.”

    Questions  Aren’t the CT and ECT both flawed (as stated)?… and for similar reasons?

    Falsifiable assertion  What complexity theorists reasonably regard as a “really, really good” sampling simulation differs fundamentally from what experimentalists and statisticians reasonably regard as a “really, really good” sampling simulation… and this definitional imprecision has fueled public and scientific discourse of deplorable confusion and futility.

    ’t Hooft’s advice  In view of these definitional ambiguities, ’t Hooft’s advice surely applies (per arXiv:1405.1548)

    “We should not be deterred by ‘no go theorems’ if these contain small print and emotional ingredients in their arguments.”

    The history of scientific research in general, and of quantum complexity research in particular, shows us plainly that ’t Hooft’s advice is worthy of consideration by complexity theorists, experimentalists, and statisticians alike.

    Conclusion  The formal complexity-theory barriers to the computational simulation of random-number distributions and boson-sampling distributions should not dissuade us from implementing either one, with reasonable hope that for practical purposes our simulations can be both efficient and “really, really good”.

  45. domenico Says:

    I have a problem with the impossibility of warp drive (only for non-superluminal travel).
    If Maxwell’s equations can have a corresponding equation for mass “charge” (a new Gauss’s equation), with charge replaced by mass, some sign changes (Coulomb’s law has the opposite sign of Newton’s law of universal gravitation), and different constants for the particle-antiparticle field (so that it would be possible to write an Einstein field equation for charged particles with the electromagnetic field as a curvature, the Coulomb constant in place of Newton’s gravitational constant, and white holes obtained from antiparticles), then it might be possible to obtain a cloud of aligned particles that produces an acceleration.
    I think that a high-density cloud of aligned flavourless mesons, with the antiquark-quark dipoles pointing in the same direction, is equivalent to an electrostatic double layer, so that a curvature is possible (I am not interested in the technology to obtain the double layer, but it is possible).
    The soapy water as a computer is interesting, and there is another system that is also interesting: I think that protein folding is a minimization problem that is solved quickly in the real world, which could be modified with artificial proteins to encode a different real-world problem (though it would be very complex to read out the solution), and which is hard to solve on a computer.

  46. Scott Says:

    John Sidles #44: OK, I can specify exactly what I meant by “really, really good” simulation. I meant one that samples from exactly the same probability distribution over measurement outcomes that the ideal physics experiment samples from—or at least, one where all the probabilities are multiplicatively close to what they were in the ideal experiment. Even if the simulation only samples a distribution that’s close in variation distance to the ideal distribution, we think that would still collapse the polynomial hierarchy, though for that we need an additional complexity hypothesis, that estimating the permanent of a matrix of i.i.d. Gaussians is #P-complete. (Even without that hypothesis, such a simulation would put the Gaussian permanent estimation problem in BPP^NP.)
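
    For readers unfamiliar with the two notions of closeness used here, the following small sketch (with made-up placeholder distributions) spells them out:

    ```python
    # Sketch of the two notions of "really, really good": multiplicative closeness
    # of every outcome probability, and closeness in total variation distance.
    # The distributions below are made-up placeholders.
    import numpy as np

    ideal = np.array([0.5, 0.25, 0.125, 0.125])      # hypothetical "ideal experiment"
    simul = np.array([0.48, 0.27, 0.13, 0.12])       # hypothetical "classical simulation"

    tv_distance = 0.5 * np.abs(ideal - simul).sum()

    eps = 0.1
    multiplicatively_close = np.all((simul >= (1 - eps) * ideal) &
                                    (simul <= (1 + eps) * ideal))

    print(tv_distance, multiplicatively_close)
    ```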

    Of course weaker forms of classical simulation, or simulations for more restricted classes of quantum systems, could exist without collapsing PH—I don’t need that explained to me.

  47. John Sidles Says:

    Scott notes (#46) “Of course weaker forms of classical simulation, or simulations for more restricted classes of quantum systems, could exist without collapsing PH—I don’t need that explained to me.”

    Scott (#46), please appreciate that my comment (#44) was not intended to explain the PH to you personally, or to explain it to any professional complexity theorist.

    Rather its intent was to bridge certain confusion-inducing language-gaps between complexity theorists, scientists, mathematicians, and engineers.

    As a concrete example of this language-gap, STEAM students will benefit from studying the engineering-language idioms of the Keysight (formerly Agilent) 81150A application note Precision digital noise—new noise technology and its application.

    This technical note concludes with the following smile-inducing passage

    The 81150A provides deterministic Gaussian white noise, with signal repetition of 26 days. You can decide on any arbitrary distribution, and trigger the noise to start when you need it.

    Predictions

    • Complexity theorists will smile  at the naive engineering notion of “noise” that is deterministic and repetitive… obviously these Keysight engineers are far gone in von Neumann’s state of sin!

    • Mathematicians will smile  at the naive engineering notion of “arbitrary” distributions that nonetheless can be programmed… obviously these Keysight engineers have little notion of computable functions!

    • Physicists will smile  at the naive engineering notion of “triggering the noise to start when you need it”… obviously these Keysight engineers have little notion of a task whose limiting cases are no less subtle—and even potentially infeasible (e.g., for relativistic gauge theories)—than “triggering large-n boson-sampling photon states when you need them”.

    ————
    Conclusion  Hot crowded planets in urgent need of jobs for young researchers are well-advised to embrace language that fosters discourse among complexity theorists, logicians, physicists, and engineers that is clear, friendly, impersonal, mutually respectful and (most of all) hopeful.

    The virtue of sin  The “state of sin” that is associated to the blithe disregard of no-go theorems and postulates that are broad yet subtle — including the broad yet subtle no-go postulates that are grounded in the creed of the Church of Larger Hilbert Space — has been and will continue to be a viably responsible, creative, and productive professional path for the world’s engineers, entrepreneurs, scientists, and mathematicians… and complexity theorists too.

  48. John Sidles Says:

    Student exercises (for #47)

    Exercise I (easy)  Explain why the Keysight (formerly Agilent) 81150A Pulse Function Arbitrary Noise Generator—despite its formidable technical capabilities—fails the fundamental and seemingly simple challenge of generating Gaussian noise whose distribution is (in Scott’s phrase of #46) “close in variation distance to the ideal distribution.”

    Exercise II (intermediate)  Explain why practicing engineers commonly don’t care.

    Exercise III (intermediate)  Give an example in which practicing engineers do care. Hint: see Wikipedia’s discussion “Audiophile controversies.”

    Exercise IV (advanced/open)  Briefly summarize the practical and fundamental implications of these considerations for large-scale quantum simulation in general, and large-n boson-sampling experiments in particular.

  49. On-sidles-laught Says:

    Nothing against Sidles personally, but to aid with digestion of the blog, do you think it might be worthwhile to limit him to a comment number and comment length maximum?

  50. Job Says:

    Scott #28,

    That sounds right, but I’m not totally grasping that second Hadamard.

    Let me step through a simple f(x) so you can see where I’m going wrong. Suppose that X has n bits and the black-box function f(x) uses CNOT gates to copy the top n-1 bits, while ignoring the least significant bit of X.

    In this case the secret value s is 0…01, and f(x) is the same for every consecutive pair of x values (2i, 2i+1), e.g. f(0)=f(1)=0, f(2)=f(3)=2, f(4)=f(5)=4, etc.

    Before f(x) is applied, X is in an equal superposition of all n-bit values, while Y is constant |000…0>. After f(x), X is… well, I’m not sure, and I think that’s where my confusion begins.

    I’ve seen how in a QC, controlled gates often modify control bits as much as target bits. I’m also assuming that the f(x) gate uncomputes f(x) after copying the result to the Y register.

    Looking at X in isolation, it seems that it is still in an equal superposition over all possible n-bit strings. The amplitudes corresponding to each possible value of X should all be positive and have similar magnitude, because the CNOTs are only shuffling equal amplitudes around (?), given that Y is |000…0>.

    Looking at the XY system, then we have two possible types of (X, Y) pairs: (2i, 2i) or (2i+1, 2i). If we were to measure X at this stage, I would expect to get a random n-bit value with equal probability. For Y I would expect a random n-bit even value with equal probability. For XY I would expect to get an odd-even pair 50% of the time and an even-even pair 50% of the time.

    If this were true, then applying the second Hadamard to X should yield |000…0> every time and we would not get interference, so I’ve gone wrong somewhere. On the other hand, if we were to measure Y before applying the second Hadamard to X, then this would change.

    I wonder, in Simon’s algorithm, is Y measured before the second Hadamard is applied to X?

  51. Job Says:

    It does look like Y must be measured before the second Hadamard is applied.

    At least according to this lecture by Dave Bacon (in part B):
    http://courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes8.pdf

    The other online resources that I’ve seen discussing Simon’s algorithm (including Wikipedia) don’t state this explicitly and illustrate the circuit as performing a measurement at the end, e.g.:
    http://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/Simons_algorithm.svg/230px-Simons_algorithm.svg.png

    The math does seem to indicate that there is a measurement before the second Hadamard, but it’s totally not obvious, especially since I usually just gloss over that stuff.

  52. Scott Says:

    Job #50, #51: It doesn’t matter whether Y is measured or not before the second round of Hadamards. In textbook presentations, Y is typically measured for purely pedagogical purposes. But a measurement of Y can’t nonlocally change the state of the X register—so if the algorithm works with the measurement, then it must also work without the measurement. It’s the computation of f (entangling the X and Y registers) that changes the state of the X register and actually matters for the computation, not the subsequent measurement of the Y register.

    For more, please read (e.g.) Mermin’s Intro to Quantum Computer Science, or any of the excellent online course lecture notes (e.g., Vazirani’s).
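
    As a numerical sanity check of the claim that measuring Y can’t matter (an added sketch, reusing the same toy n = 2 setup as in the earlier Simon illustration): the reduced density matrix of the X register is identical whether or not the Y register has been measured, so the X statistics cannot depend on that measurement.

    ```python
    # Check: with n = 2, f(x) = top bit of x, registers ordered |x1 x0 y>, the
    # reduced density matrix of X is identical whether or not Y is measured, so
    # the X statistics cannot depend on that measurement.
    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I2 = np.eye(2)
    HHI = np.kron(np.kron(H, H), I2)

    oracle = np.zeros((8, 8))
    for x1 in (0, 1):
        for x0 in (0, 1):
            for y in (0, 1):
                oracle[4*x1 + 2*x0 + (y ^ x1), 4*x1 + 2*x0 + y] = 1

    state = np.zeros(8); state[0] = 1
    state = oracle @ HHI @ state                 # just after the oracle, before the 2nd Hadamard
    rho = np.outer(state, state)                 # joint density matrix of (X, Y)

    def reduce_to_X(rho_xy):
        """Partial trace over the last (Y) qubit, viewing each index as (x, y)."""
        r = rho_xy.reshape(4, 2, 4, 2)
        return np.trace(r, axis1=1, axis2=3)

    # "Measuring Y and forgetting the outcome" = killing the Y-coherences.
    proj = [np.kron(np.eye(4), np.outer(e, e)) for e in np.eye(2)]
    rho_after_measuring_Y = sum(P @ rho @ P for P in proj)

    print(np.allclose(reduce_to_X(rho), reduce_to_X(rho_after_measuring_Y)))   # True
    ```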

  53. Scott Says:

    On-sidles-laught #49:

      Nothing against Sidles personally, but to aid with digestion of the blog, do you think it might be worthwhile to limit him to a comment number and comment length maximum?

    Sidles has been banned from this blog several times, but the bans have expired. I completely agree that something should be done to rein him in and improve this comment section’s readability. The trouble with a comment number and comment length maximum is that they’d take mental effort for me to enforce (I’d have to count the number of lines, and remember: how many other comments did he post today?).

    But I’m open to other suggestions. For example, what do people think of the following policy: any Sidles comment with boldfaced headings and/or BibTeX references goes immediately to the trash.

  54. John Sidles Says:

    My comments here on Shtetl Optimized are informed by a three-way blend of

    (1) the engineering and medical objectives of “Spin microscopy’s heritage, achievements, and prospects” (2009);

    (2) the mathematical narratives provided by Tim Gowers’s “Vividness in Mathematics and Narrative” (2012) and Barry Mazur’s “Visions, Dreams, and Mathematics” (2012), and

    (3) the accelerating realization of these practical objectives and mathematical perspectives that is envisioned in the philosophical/fictional STEAM-narratives of recent works like Jacques Roubaud’s Mathematics: (a novel) (2012), Andrea Barrett’s Archangel (2013), Yannick Grannec’s The Goddess of Small Victories (2014), and Kathleen Goonan’s “Girl in Wave: Wave in Girl” (2014).

    BibTeX references are appended (as usual!). I’d be interested to sample the distribution of Shtetl Optimized readers on the aversion-curiosity-familiarity spectrum:

    • Aversion  “These works and ideas have scant relevance to quantum math, science, and engineering; references to them are unwelcome on Shtetl Optimized.”

    • Curiosity  “Some or all of these works and ideas are unfamiliar to me; references to them are welcome on Shtetl Optimized.”

    • Familiarity  “These ideas and works are already in my BibTeX database; references to them on Shtetl Optimized have no particular value to me.”

    Note  Several of these works are by women and/or center upon women’s roles in STEAM … I’d be very interested to hear from women who read/comment on Shtetl Optimized (are there any?), or alternatively, ideas/explanations regarding the (apparent?) near-saturation of gender imbalance among Shtetl Optimized readers.

    ————
    Readings in quanta, healing, gender, and STEAM

    @book{null, Author = {Barrett, Andrea},
    Publisher = {W. W. Norton}, Title = {Archangel},
    Year = {2013}}

    @incollection{null, Author = {Kathleen Ann
    Goonan}, Booktitle = {Hieroglyph: Stories and
    Visions for a Better Future}, Editor = {Finn, Ed
    and Cramer, Karen}, Pages = {38–73}, Publisher
    = {HarperCollins}, Title = {Girl in Wave: Wave
    in Girl}, Year = {2014}}

    @incollection{null, Address = {Princeton},
    Author = {Timothy Gowers}, Booktitle = {Circles
    disturbed: the interplay of mathematics and
    narrative}, Editor = {Doxiades, Apostolos K. and
    Mazur, Barry}, Pages = {211–231}, Publisher =
    {Princeton University Press}, Title = {Vividness
    in Mathematics and Narrative}, Year = {2012}}

    @book{null, Author = {Yannick Grannec},
    Publisher = {Other Press}, Title = {The Goddess
    of Small Victories}, Year = {2014}}

    @incollection{null, Address = {Princeton},
    Author = {Barry Mazur}, Booktitle = {Circles
    disturbed: the interplay of mathematics and
    narrative}, Editor = {Doxiades, Apostolos K. and
    Mazur, Barry}, Pages = {183–210}, Publisher =
    {Princeton University Press}, Title = {Visions,
    Dreams, and Mathematics}, Year = {2012}}

    @book{null, Address = {Champaign}, Author =
    {Jacques Roubaud}, Editor = {Ian Monk
    (translator)}, Publisher = {Dalkey Archive
    Press}, Title = {Mathematics: (a novel)}, Year =
    {2012}}

    @article{null, Author = {J. A. Sidles}, Journal
    = {Proc. Nat. Acad. Sci.}, Number = {8}, Pages =
    {2477–8}, Title = {Spin microscopy’s heritage,
    achievements, and prospects}, Volume = {106},
    Year = {2009}}

    ————
    PS I’ll be winter-hiking in the Cascades for the next two days, and will read responses with interest upon my return. Needless to say, I neither expect nor would welcome uniform responses!

  55. Job Says:

    Vazirani’s notes, under “Principle of safe storage”, describe how a CNOT applied in between two Hadamards has a similar effect to a measurement applied in between two Hadamards.

    This is what I was missing, thanks.

  56. Fred Says:

    Scott #40
    So, in essence, the real puzzle of QM isn’t that amplitudes can interfere in subtle ways, but that randomness is an irreducible building block of nature.
    If we consider a series of coin flips implemented as a QM experiment, what’s truly baffling is that somehow nature is able to come up with an “honest” 50/50 probability, which amounts to an effect without a cause.

  57. On-sidles-laught Says:

    I’m on board with your proposal, Professor Aaronson!

  58. aviti Says:

    the videos suck… buffering all the way!!

  59. Gil Kalai Says:

    Regarding the history of the efficient physical Church-Turing thesis:

    “I don’t see that it explicitly formulates the classical ECT”

    The relevant sentence from Wolfram’s paper is: “One expects in fact that universal computers are as powerful in their computational capabilities as any physically realizable system can be, so they can simulate any physical system (footnote 3). (footnote 3) This is a physical form of the Church-Turing hypothesis.”

    It seems that the physical interpretation of CT and ECT became popular only in the 90s with quantum computers and Shor’s algorithm. Of course, it will be very interesting to see earlier references or more explicit references to the physical ECT. (Pascal’s reference above to Gandy’s early work is very interesting.)

    As far as I could see, in the Stanford Encyclopedia of Philosophy the entry on the C-T thesis does not mention the “physics” interpretation before the mid-80s, and from that time on it mentions Deutsch and Geroch-Hartle (whom John Sidles also kindly mentioned) but not Wolfram.

    In the item about quantum computing from SEP it is stated that “Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance.”

    Of course, while the history is interesting, it is also interesting to examine the role of computational complexity (compared, e.g., to other aspects of computing) in other sciences, as highlighted in the IAS workshop. What is more important to our understanding of the physical world: notions from computational complexity? Or perhaps computational tools like Wolfram’s Mathematica? This is a tough question, but it allows for a sincere answer in John Sidles style: “Both”.

  60. Vijay Says:

    [Scott: I was in an airport and the comment didn’t get sent before take-off and I see that the thread is now closed. Apologies for sending this to an older post, but I didn’t want to put it in a new post and I definitely didn’t want to mail you a comment (though if there is a blog-related mail address, I would use that).

    If you don’t mind, could you please put this comment in the Walter Lewin thread? If it’s closed and you won’t, that’s fine and I thank you again for everything you shared there. Happy New Year]

    Dear all,

    About a day ago (around the 500-comment mark) the main discussion appeared to have reached maximal, mutual misunderstanding and given way to growing animosity that I found very painful to witness. Either I misread the situation, or both the thread has calmed down and I’ve grown used enough to my sense of discomfort to add a few closing comments.

    1. Some comments on this thread describe situations (rape, sexual assault, etc.) that are known to arouse strong emotional response or traumatic memories. My personal request if you are part of such a discussion in future would be to preface such a comment with a trigger warning and sexually explicit and other comments with a “Too Much Information” warning. I should have requested this earlier but was too hesitant to speak up. Of course, one could avoid the entire thread, but it didn’t start out as a thread on such things.

    2. Amy #491 and #535: Comments about sexual inexperience are used in a strongly derogatory manner and even to bully. In Amy #471, you said that one should not make assumptions about sexual experience. However, anongirl #461 and dorothy #485 were not objecting to your assumptions, but didn’t feel comments about another person’s sexual experience were neutral and dorothy #506 clarified why. I can attest to what she says.

    A girl in my Masters programme spent the better part of our first semester telling me that no one, herself included, would ever want to be with a sexually inexperienced man. Her advice was that nerdy, inexperienced guys should serially proposition women in bars till someone said yes, and that this would be the only way for nerdy guys to integrate into the social scene. I also saw two graduate women here in the US making fun of a guy once for not being man enough to reciprocate when a girl was hitting on him.

    All this to say that virgin shaming exists, and a web search for “virgin as an insult” will throw up several school and undergraduate-related web forums where the topic is being discussed. Girls say such things to other girls. I can’t recall, off the top of my head, young-adult fiction involving girl bullying that documents this, but such fiction exists (the Skins TV show, maybe?).

    3. I feel like there are two distinct styles of discourse here which have been part of a fundamental obstacle to communication. An explanation that works spectacularly for students in a class may not work for all students in the class and explanations that work in one department do not work when that same course is taught in another department.

    Once, to understand what being mis-pronouned might be like for trans people, I spent a day mentally repeating every sentence about me involving the male pronoun to one involving the female pronoun. This gave me some tiny insight into their pain. When the topic came up in a discussion at work recently, this thought experiment completely bombed and didn’t work for anyone else.

    Amy, when you say something like “I don’t suppose you’ve ever been poor” or offer some of your thought experiments, I have difficulty grasping the essence of what you are communicating, because I get lost in the details of your thought experiment or scenario. For me, it would have worked better to have a more direct statement of what the issue with someone’s words was.

    Another issue which I am quite confident several readers here share is that we are allergic to speaking in terms of personal background and experience and using examples from personal life. I personally would prefer a discussion in which the positions were stated in explicit but abstract terms, and the abstract ideas were made concrete with hypothetical scenarios or even parables. This is a style that is common to many science blogs, even when they veer into non-scientific topics. Please note that I am not making a value judgement about communication styles or devices. Certain styles work effectively for online communities that are accustomed to them, while communication devices that work well in one community may fail spectacularly in another.

    I keep rephrasing this point every few comments because it seems pointless (or at least insufficient) to acknowledge the diversity of backgrounds, experience and perspective on a topic if we do not also pay attention to the diversity, intent, and efficacy of the rhetorical styles involved.

    On some online forums, questioning someone’s gender is viewed as a form of trolling, and in threads on passionate topics it can even get someone banned. There are many reasons for this. There is really no defense to such an accusation; it violates the assumption that people are acting in good faith, and it is also a dirty game that everyone can play without contributing anything to the discussion. I have often been accused of being a woman pretending to be a man (and more often by women online than men), and it strikes me as a particularly insidious form of stereotyping. Since I personally do not identify as a man first and all else second, being called a woman does not bother me. What does, though, is the insinuation of dishonesty or intent to mislead.

    I shall stop here. Thanks again everyone for the discussion. I’m relieved that it is wrapping up, albeit somewhat inconclusively. Wish you a peaceful holiday break (if you get one) and a good New Year to you all!