Is There Anything Beyond Quantum Computing?

So I’ve written an article about the above question for PBS’s website—a sort of tl;dr version of my 2005 survey paper NP-Complete Problems and Physical Reality, but updated with new material about the simulation of quantum field theories and about AdS/CFT.  Go over there, read the article (it’s free), then come back here to talk about it if you like.  Thanks so much to Kate Becker for commissioning the article.

In other news, there’s a profile of me at MIT News (called “The Complexonaut”) that some people might find amusing.

Oh, and anyone who thinks the main reason to care about quantum computing is that, if our civilization ever manages to surmount the profound scientific and technological obstacles to building a scalable quantum computer, then that little padlock icon on your web browser would no longer represent ironclad security?  Ha ha.  Yeah, it turns out that, besides factoring integers, you can also break OpenSSL by (for example) exploiting a memory bug in C.  The main reason to care about quantum computing is, and has always been, science.

159 Responses to “Is There Anything Beyond Quantum Computing?”

  1. rrtucci Says:

    So, did P=NP during inflation, as Lloyd proposed?

  2. Scott Says:

    rrtucci: I hesitate to ask, but … where did he propose such a thing?

    Assuming P≠NP “now,” the answer to your question is of course no: if P≠NP is true at all then it’s a timeless mathematical truth. Logically, one could imagine that NP-complete problems were efficiently solvable during inflation (if anyone was around to solve them then 🙂 ) but not afterward, but I’ve never heard of such a speculation or of Seth Lloyd proposing it, nor can I think of a good basis for it. Sure, space might be expanding exponentially, but any one observer has a bounded causal patch and doesn’t get to exploit that fact—in fact, the inflation makes things much worse for such an observer, by inflating away most of the computer before it can return an answer!

  3. rrtucci Says:

    Lloyd proposed this in his paper about the universe as a quantum computer. He called it the inflationary quantum computer.

  4. Scott Says:

    rrtucci: Do you mean this paper? I just searched it and couldn’t find anything about inflation or P vs. NP. A link would be appreciated.

  5. Mateus Araújo Says:

    That was a very pleasant read.

    As for myself, I like to hope that quantum gravity will give us some exponential speedup for some algorithms. Not via closed timelike curves which, as you have pointed out, seem too powerful to exist, but via a subtler effect: superposition of metrics. This could give rise to a causal structure that is genuinely different, and yet free of paradoxes.

    There has been some exploration of this idea (see, e.g., arXiv:0912.0195), but nothing yet that comes close to the holy grail of exponential speedup.

    But hey, one can dream!

  6. Jon Lennox Says:

    I see someone’s quoting you for a quantum computing explainer… It’s better than some press accounts, but still sort of meh.

    (It’s useful for scaling my “how well are they explaining things I don’t understand” heuristic, I suppose.)

  7. Scott Says:

    Jon Lennox #6: Had similar thoughts.

    On the other hand, I’ve learned over the years that, if I try to calibrate how well journalists explain things I don’t work on by looking at how well they explain quantum computing, then I end up a raving conspiracy nut who doesn’t even believe local traffic reports. 😉 The more charitable view is that QC really is one of the hardest things on earth for non-scientists to get right, because not many can get over the hurdle that “the place you need to be to understand this developing story” is not a lab with anything you can see or touch, but a 2^n-dimensional Hilbert space.

  8. David Cash Says:

    Wearing my security pedant hat here: Heartbleed is not a buffer overflow. It’s actually sort of the opposite.

  9. fred Says:

    Scott, you wrote
    “when computer scientists say “efficiently,” they mean something very specific: that is, that the amount of time and memory required for the computation grows like the size of the task raised to some fixed power, rather than exponentially.”

    Now I’m getting a bit confused about memory.
    When we talk about the amount of memory required, does it matter whether the memory is made of bits (classical) or qbits (QC)? The “size” is the same regardless? (N bits or N qbits… but N is the input size).
    An internal register in a QC is clearly made of qbits and in that sense it can store more information (in QM mode) than a regular classical memory.. it seems “unfair” to equate 8 bits of classical memory with 8 qbits (esp given how much more complicated it is to build 8 qbits rather than 8 bits, but that’s a practical consideration).
    Maybe I never thought about the exact definition of a QC; is there a formal definition of a quantum Turing machine?

  10. Scott Says:

    David Cash #8: Thanks, I fixed the post! Is there a more general name for “the type of security flaw that keeps cropping up over and over and over because C doesn’t check that pointers to an object in memory are in bounds, and it’s nearly impossible for a human programmer to think of every possible way of exploiting that fact, and despite this known problem, people continue to write super-sensitive code in C, rather than in programming languages that make this particular kind of bug impossible”?
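    To make the bug class concrete, here is a deliberately simplified, hypothetical sketch in C (the toy “protocol” and the function names echo_unsafe/echo_safe are made up for illustration, not OpenSSL’s actual code): the client supplies both a payload and a claimed payload length, and nothing in the language stops the code from trusting the claim.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical Heartbleed-style handler: the client supplies both a
     * payload and a claimed payload length. Nothing in C stops us from
     * trusting the claimed length; if it exceeds the real payload, memcpy
     * reads (and leaks) adjacent memory -- undefined behavior. */
    static size_t echo_unsafe(const char *payload, size_t claimed_len,
                              char *out) {
        memcpy(out, payload, claimed_len);   /* over-reads if claimed_len lies */
        return claimed_len;
    }

    /* The fix is a one-line bounds check against the real length. */
    static size_t echo_safe(const char *payload, size_t real_len,
                            size_t claimed_len, char *out) {
        size_t n = claimed_len > real_len ? real_len : claimed_len;
        memcpy(out, payload, n);
        return n;
    }

    int main(void) {
        char buf[64];
        size_t a = echo_unsafe("hi!", 3, buf);    /* honest client: fine */
        size_t b = echo_safe("hi!", 3, 40, buf);  /* lying client: clamped to 3 */
        printf("%zu %zu\n", a, b);
        return 0;
    }
    ```

    In Heartbleed’s case the over-read bytes (which could include private keys) were then echoed back to the attacker.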

  11. Scott Says:

    fred #9: Yes, there’s a formal definition of quantum Turing machine; it was given by Bernstein and Vazirani in 1993 (but today almost everyone uses uniform families of quantum circuits, which are formally equivalent to QTMs and less cumbersome to work with).

    And in an efficient classical algorithm, the number of bits should grow only polynomially with the input size, while in an efficient quantum algorithm, the number of qubits should likewise grow polynomially. In both cases, the number of bits or qubits roughly corresponds to the actual number of particles in the actual physical memory, so that’s why (along with the running time) it’s a relevant quantity to bound. And yes, of course we think you can do somewhat more with n qubits than you can do with n classical bits (given the same amount of time), by subtly exploiting the fact that n qubits take 2^n complex amplitudes to describe, and that the amplitudes (unlike classical probabilities) can interfere. That’s why quantum computing is interesting in the first place!
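    To make the amplitude bookkeeping concrete, here is a toy single-qubit sketch (purely illustrative, my own, not any standard library): applying the Hadamard gate twice returns the state to |0⟩, because the two paths to |1⟩ carry opposite signs and interfere destructively. With n qubits, the amplitude array would need 2^n entries, which is exactly the point.

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Toy single-qubit simulator: a state is 2 real amplitudes (this
     * example never needs complex ones). Hadamard maps
     *   |0> -> (|0>+|1>)/sqrt(2),   |1> -> (|0>-|1>)/sqrt(2). */
    static void hadamard(double amp[2]) {
        double s = 1.0 / sqrt(2.0);
        double a0 = s * (amp[0] + amp[1]);
        double a1 = s * (amp[0] - amp[1]);
        amp[0] = a0;
        amp[1] = a1;
    }

    int main(void) {
        double amp[2] = {1.0, 0.0};   /* start in |0> */
        hadamard(amp);                /* equal superposition */
        hadamard(amp);                /* the two paths to |1> cancel: back to |0> */
        printf("%.1f %.1f\n", amp[0], amp[1]);
        return 0;
    }
    ```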

  12. Jerry Says:

    re: Scott’s Comment #7:

    It is the quantum leap from 2^n-dimensional Hilbert space to 3-D Euclidean space and the assertion that QM = QC that remain the hurdles to scale (analogy intended). Many of your blog followers are also scientists who are interested in the science hidden in QM, but you patronize those of us who don’t agree with you.

    If, ten years from now, it is demonstrated that QC is a myth, you are unlikely to change your field to medieval architecture. No, you will use that fact to advance computer science and complexity theory in the direction that truth takes us.

  13. fred Says:

    That “Shakespeare paradox” is a good one.

    We can replace Shakespeare writing a play with a search problem, and we go back in time to supply the solution index. We’ve done the search without doing any work.

    “Somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!”

    But the play still has to be a valid “solution” in the sense that Shakespeare has to accept it: the only stable loops are the ones where Shakespeare wouldn’t scream “That’s horse shite! no way I’m putting my name on this!”, i.e., Shakespeare’s brain acts as a polynomial checker for the play.
    But the problem is not the play popping into existence. If we imagine that every play Shakespeare ever wrote was fed to him this way, then Shakespeare’s brain itself has to pop into existence: actually writing a play modifies Shakespeare’s brain in a way that proofing a play doesn’t. So in the end the whole concept of a Shakespeare play would “dissolve”.
    The same happens with the search analogy: if you always feed back the answer, then no search algorithm has ever been implemented or designed, and the very concept of a search vanishes from CS books!

  14. wolfgang Says:

    @Scott #4

    Could it be that rrtucci fell for his own April Fool’s joke?

  15. Scott Says:

    Jerry #12: On the contrary, some might call your comments on my last post patronizing. When you take away the preening puns and irrelevant Jewish asides, what’s left of them is a large ignorance about basic facts of QM—something that, indeed, I’m happy to help people with; I spend much of my life doing that—but then, and this is the part that gets me, a wounded insistence on having your personal opinion about QC being just like bigfoot or the Easter Bunny taken seriously, despite your demonstrated ignorance of freshman facts that even relatively-“serious” QC skeptics understand (“If QM is so fragile as to mandate the No-Cloning rule, why is it so robust as to also enforce the No-Deleting rule?” / “If 2.7 K is too ‘hot’, I have made my point”).

    So go reread my book, read the other quantum computing primers linked from the sidebar on the right, learn about all the experiments that have already been done demonstrating the reality of complicated entangled states, wrestle with the difficulty of accounting for the already-known facts without invoking 2^n-dimensional Hilbert space, and then even if you still disagree, I’ll be happy to have you participate in discussions on this blog.

  16. Ben Mahala Says:

    Hey I just read your article and I’m not sure this statement is true:

    “At some point, your spaceship will become so energetic that it, too, will collapse into a black hole.”

    You can’t generate a black hole by accelerating an isolated object too much. You have to have a collision where the center of mass energy is high enough and an isolated object doesn’t provide that.

    Practically, you would always run into CMB photons, but I don’t think that’s the point you’re trying to make.

    I am unsure if Unruh radiation could allow you to get around this constraint. Do you have a source on this?

  17. Scott Says:

    Ben #16: You raise a good point, and I apologize for not being clearer about it. Yes, I think you’re correct that the “only” reason such a fast-moving spaceship would collapse to a black hole, would be its inevitable collisions with any particles that it hit. For that reason, it would’ve been better to say that simply getting the spaceship up to the requisite speed would already take exponential time, unless you used a form of fuel so dense that the fuel tank would then exceed the Schwarzschild bound and collapse to a black hole.

  18. David Cash Says:

    @Scott: I think it’s usually called a bounds checking error, but I don’t know if there’s a more specific name for the Heartbleed type of bug.

  19. Douglas Knight Says:

    Scott, maybe you should be a raving conspiracy nut. Yes, if it were just QC, you could conclude that your field happened to be the most difficult to explain, but ask other people how the press covers their fields. They all say the same thing! But local traffic is different. Unlike the rest of the news, people listen to the traffic and the weather for practical reasons and go out and test those predictions.

  20. John Sidlesbot Says:

    Nicolas Cage is decidable. Suppose further might find the act to the Kählerian state-spaces, optimization problems in the state-space.If we extend this is derives from a hugely in any other beautiful to the Penrose with reconciling of tendonitis and say “model” we believe at all too hard theorems.

    Scott and SPINEVOLUTION … nor Turing Machines depart pretty much confusion over a while scientific theories that the “north pole” of the capabilities enable — surely discover it seems too … No stock options, I happen to proving to decide these fundamental physics is that these singularities that Alice comes in.

    Witten told me that, for the broadest sense is not just as students who looks to be random, as a radical optimism for students (and moral) imperatives of this balanced gender differences … both authors who are written about lists is whether Pullman and falling in any other … but we can the above social justice will be a flow on the irresponsible mathematical abstractions, the invited me in particular the principles of photonic resources.

  21. fred Says:

    Btw Scott, your paper is really amazing, both in clarity and breadth. I’ve learned so much since I’ve been following you.
    Thank you!

  22. Silas Barta Says:

    @Scott: I know it’s incidental to your main point, but I was confused by the reference to primality testing scaling with the cube of a number’s digit-length. I thought the fastest deterministic tests were n^4?

    Or did you mean that the probabilistic tests used in practice scale as n^3? I couldn’t find a reference to an n^3 primality test.

  23. Scott Says:

    Silas #22: Sorry, I was talking about the randomized algorithms, which are what everyone has used in practice since the 1970s. Deterministically, I believe the best known runtime is O(n^6), due to Lenstra and Pomerance from 2005 (improving the O(n^12) of AKS), but it can almost certainly be improved further.
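    For the curious, here is a sketch of the Miller–Rabin test underlying those randomized runtimes. This version uses a fixed witness set that is known to make the test deterministic for all 64-bit inputs, and it assumes a compiler supporting unsigned __int128 (GCC or Clang); it is an illustration of the algorithm, not production code.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Modular multiplication without overflow, via 128-bit intermediate. */
    static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
        return (uint64_t)((unsigned __int128)a * b % m);
    }

    /* Modular exponentiation by repeated squaring. */
    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
        uint64_t r = 1;
        for (b %= m; e; e >>= 1, b = mulmod(b, b, m))
            if (e & 1) r = mulmod(r, b, m);
        return r;
    }

    /* Miller-Rabin with a witness set known to be deterministic for
     * all 64-bit n. Small n are handled by trial division first. */
    static int is_prime(uint64_t n) {
        if (n < 2) return 0;
        for (uint64_t p = 2; p < 64 && p * p <= n; p++)
            if (n % p == 0) return n == p;
        uint64_t d = n - 1;
        int s = 0;
        while ((d & 1) == 0) { d >>= 1; s++; }
        const uint64_t wit[] = {2,3,5,7,11,13,17,19,23,29,31,37};
        for (int i = 0; i < 12; i++) {
            uint64_t x = powmod(wit[i] % n, d, n);
            if (x == 0 || x == 1 || x == n - 1) continue;
            int composite = 1;
            for (int r = 1; r < s; r++) {
                x = mulmod(x, x, n);
                if (x == n - 1) { composite = 0; break; }
            }
            if (composite) return 0;
        }
        return 1;
    }

    int main(void) {
        /* 97 is prime; 561 is the smallest Carmichael number; 2^31-1 is prime. */
        printf("%d %d %d\n", is_prime(97), is_prime(561), is_prime(2147483647));
        return 0;
    }
    ```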

  24. fred Says:

    “Compared to string theory, loop quantum gravity has one feature that I find attractive as a computer scientist: it explicitly models spacetime as discrete and combinatorial on the Planck scale”

    One thing I don’t understand about space having a discrete structure:
    How do we reconcile any such model with the fact that there is no absolute frame of reference for translation, together with the fact that there is an apparent absolute fixed frame for rotation? It seems that a discrete model would bring us back to the old idea of “aether”.
    Is it sufficient to say that space only exists in relation to massive objects (Mach principle)?

  25. anon Says:

    Scott #10 well, a competent C programmer would use a competent static code analyser. The overhead of alternative “safer” coding environments than C is sometimes not acceptable just so you can be 100% safe – I mean, a handful of isolated security issues over the years doesn’t equate to a broken model.

    Running all SSL implementations in the world in Java or .NET for example would slow the internet (a little) and add a large carbon footprint and apparently that’s evil.

    fred #24: Lorentz invariance is thought to break down at the Planck scale, but to hold exactly for observables above this scale.

  26. Sandro Says:

    Is there a more general name for “the type of security flaw that keeps cropping up over and over and over because C doesn’t check that pointers to an object in memory are in bounds, and it’s nearly impossible for a human programmer to think of every possible way of exploiting that fact, and despite this known problem, people continue to write super-sensitive code in C, rather than in programming languages that make this particular kind of bug impossible”?


  27. Sandro Says:

    anon #25:

    Running all SSL implementations in the world in Java or .NET for example would slow the internet (a little) and add a large carbon footprint and apparently that’s evil.

    Or you could program them in Ada in which these bugs can’t happen. It’s maybe 5-15% slower than C on average, with all runtime checks enabled.

  28. Scott Says:

    fred #24: Look, that quote was from the me of 2005! 🙂 Since then, my views have shifted a bit, as I learned that AdS/CFT (which emerged from string theory) might actually come closer than LQG to giving me the thing I really want—namely, a view of reality that makes manifest why the state of any bounded physical system can be seen as a finite collection of qubits, evolving unitarily by the ordinary rules of quantum mechanics.

    LQG does give you spacetime “discreteness” at the Planck scale, which I find nice (especially since it wasn’t put in explicitly). But then I was never able to get a clear explanation of how you get dynamics compatible with unitarity (and moreover, LQG pioneer Lee Smolin has advocated trying to derive QM as an approximation to something else, which is not at all the kind of approach I’m looking for). In AdS/CFT, by contrast, unitarity is emphatically upheld, and the CFT side is even well-defined—the “only” problem, then, is that you don’t seem to get any clear picture of “what spacetime looks like at the Planck scale” over on the AdS side.

    Anyway, though, regarding your question, I think an important point to understand is that the LQG models don’t involve anything nearly as simple as a lattice / cellular automaton like Conway’s Game of Life. Rather, they involve a superposition over different networks of loops, which superposition is supposed to have the property that it looks rotationally-invariant, Lorentz-invariant, etc. etc. as long as you examine it above the Planck scale (as anon #25 said).

  29. Scott Says:

    anon #25: Yeah, if you really needed to use C in a super-security-critical application like this one, then at the least, I’d hope you would use static analysis or formal verification tools! On the other hand, when you consider how much code written in higher-level languages is probably run all over the world for Candy Crush, popup ads, etc. etc., it doesn’t seem like writing SSL in such a language would be such an unjustifiable waste of precious computing cycles. 🙂

  30. fred Says:

    Scott #26, anon #25 Thanks!

    Regarding “closed timelike curves”, wouldn’t the ability to send a qubit back in time contradict the no-cloning argument?
    (that’s probably the least of our worries)

  31. Scott Says:

    fred #30: Yes, and yes, it’s the least of our worries.

  32. LK Says:

    Fascinating article. The idea of solving hard problems by relativistic travel, and its limitations, is new to me.
    It reminded me of various papers I have read about connections between physics and complexity theory.
    The one that I found most striking is the one by Seth Lloyd (I guess) where he shows that if QM is even slightly non-linear, then a QC can solve NP-complete problems.
    Was the result like that, or did I misunderstand?
    Do you know of other papers along this line?
    Could one turn the argument around and say that since QM is linear, NP-complete problems are not in P, and therefore etc., etc.?
    This would turn the P=NP question into an experimental question about the linearity of QM. Where is the mistake in this reasoning?

  33. fred Says:

    About the complexonaut article, it’s funny how you felt left behind because you only discovered coding at 11.
    Whereas I guess I was pretty lucky to start programming at 12… in 1982, on a ZX Spectrum (48 KB of RAM). It would have been nearly impossible to do it before that.
    And having access to a machine was one thing, but without the internet all I had was the default manual, so it wasn’t exactly “coding” (we would also pass around photocopies of articles). Jeez, it took me weeks to figure out how to display a spinning cube in correct perspective (I had no clue about projection matrices and all that).

  34. Scott Says:

    LK #32: If you read my NP-complete Problems and Physical Reality, you’ll find a whole compendium of different zany ideas along Abrams and Lloyd’s lines.

    As for “turning the difficulty of NP-complete problems into an experimental question”: well, the obvious issue is that if (as almost all of us expect) QM is exactly linear, then that still doesn’t prove that NP-complete problems are hard, since you’d still need to prove NP⊄BQP, which is a strengthened form of P≠NP! It’s just that QM being nonlinear would imply that NP-complete problems were easy.

    Also, just a point of terminology, but again, “P=NP?” is a purely mathematical question by definition. Unfortunately, there’s no generally-accepted catchy abbreviation for the different but related question that we’re talking about, of whether NP-complete problems are feasible in the physical world.

  35. LK Says:

    Scott #34,

    I’ll take a look to your paper.
    Thanks for all the other clarifications.

  36. Jason Gross Says:

    This might be a bit off-topic, and if it’s too off-topic, I apologize.

    Is there anyone studying the dependence of complexity classes on the strength of the mathematical theory you’re working in? The only example that I know of is that winning the hydra game is a decidable problem only if you are working in a theory strong enough to prove the consistency of Peano Arithmetic. (In appropriate mathematical theories, every strategy is a winning one.) Are there any problems that depend on the axiom “ZFC is consistent”?

  37. Jerry Says:

    Scott #15

    If you are as acerbic to me as you are to Stephen Wolfram, I consider it a compliment.

    With that stated, I merely disagree with you on the scalability of quantum computers. I very much agree with your comment #2: “…if P≠NP is true at all then it’s a timeless mathematical truth…”. This is a far better foundation to implement dialog.

    There is an article in Nature, “Plants perform molecular maths,” also discussed on “Gödel’s Lost Letter and P = NP”.

    Putting all of this together: If plants can do some sort of “calculation”, even if it’s only statistics that gets them through the night, maybe there is an argument that self-replicating molecules and accreting matter in the early universe did the same thing. What does this add to the P = NP party?

    As far as the “preening puns”, Wolfram detests your “humor”.

    As far as “…demonstrated ignorance of freshman facts…”, it seems the only way to rectify that is to buy your book. $35.99 for the paperback! I’m Jewish (at least the last time I looked); give me some credit.

  38. Scott Says:

    Jason #36: It’s a good question.

    First of all, whether a given problem is decidable or undecidable, whether it’s in P or outside P, etc. are questions with definite answers, independent of any formal system—in just the same way that the Twin Primes Conjecture is either true or false (i.e., there either are infinitely many twin primes or there aren’t). Surely you agree that either there is a Turing machine M that halts in such-and-such number of steps on every input x∈{0,1}^n or else there isn’t!

    The part that can depend on the formal system is which such statements are provable. So for example, let me define for you right now a language that’s actually in P, but for which ZFC can’t figure out whether it’s in P or EXP-complete:

    L = { ⟨x,y,z⟩ : x encodes a Turing machine that halts in at most y steps, and z encodes a proof of 0=1 in ZFC }.

    If ZFC is consistent then L is the empty language (and hence in P), while if ZFC is inconsistent then L is EXP-complete. So you can’t prove L∈P without proving Consis(ZFC).

    Now crucially, note that this independence from ZFC is not a property of the language L itself, but only a property of how I described L to you! If I just told you that L was, in fact, the empty language, then of course you could give a ZFC proof that L∈P. For this reason, we see that something like “the subclass of P consisting of all languages that ZFC can prove are in P” doesn’t even make sense: we can only consider the set of all descriptions of languages such that ZFC can prove that the so-described language is in P!

    Anyway, the one place I know of where this sort of thing was really explored in depth is a 1978 monograph by Juris Hartmanis, called Feasible Computations and Provable Complexity Properties. Moderately-related, you could also try my old survey article Is P vs. NP Formally Independent?.

  39. fred Says:

    #37 Jerry
    That thing about plants doing arithmetic is pretty overblown – you can replace that “division” with a rate of consumption that is proportional to the quantity of left-over starch. It’s just basic control theory.
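    Here is a toy simulation of that point (my own sketch, with made-up numbers, nothing from the Nature paper): if the nightly consumption rate is simply proportional to the starch remaining, the reserve decays toward zero by dawn regardless of the starting amount, with no explicit division step anywhere.

    ```c
    #include <stdio.h>

    /* Proportional-feedback alternative to "division": consume starch at a
     * rate proportional to what remains. Over a 10-hour night the reserve
     * decays exponentially toward zero whatever the initial amount. */
    int main(void) {
        const double dt = 0.01, k = 0.5;   /* step (hours), feedback gain per hour */
        const int steps = 1000;            /* 10-hour night */
        double starts[2] = {100.0, 400.0}; /* two different initial reserves */
        for (int i = 0; i < 2; i++) {
            double s = starts[i];
            for (int t = 0; t < steps; t++)
                s -= k * s * dt;           /* rate proportional to remaining starch */
            printf("start %.0f -> left at dawn %.1f\n", starts[i], s);
        }
        return 0;
    }
    ```

    Both runs end the night with only a small fraction of the reserve left, which is all the observed behavior seems to require.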

  40. asdf Says:

    OpenSSL had already been checked with various static analysis tools, and they didn’t spot Heartbleed.

    Static analysis can only catch certain classes of bugs. The basic problems are C itself, and that OpenSSL is very crufty code.

  41. asdf Says:

    By the way, I saw the PBS article linked from another site, started reading it and got the idea of posting a link here because I figured Scott might be interested. Then I noticed he was the author…

  42. TP Says:

    Fred #24

    “there is an apparent absolute fixed frame for rotation?”

    I was going to start this reply by saying I don’t think there is; however, by the end I changed my mind.

    In special relativity there is the Thomas rotation and Thomas precession, whereby observers who do not feel themselves to be rotating will appear rotated differently to different observers moving at different velocities. This effect has no analog in Newtonian physics; it is a purely relativistic effect. Additionally, when gravity is added to the picture with general relativity, Thomas precession can be subsumed into a larger precession: de Sitter precession.

    If I recall correctly I seem to remember Luboš Motl writing something to the effect that rotation of an object can be felt by the local spacetime curvature and the object can also feel itself rotating with respect to this curvature. There is also the Lense-Thirring effect.

    To the extent that rotation is with respect to spacetime geometry, translation in curved spacetime also moves through varying geometry so there is absolute motion in curved space for translation. Spacetime geometry allows an observer to feel the difference between all types of motion. Also spacetime is dynamic so a particle can’t be at rest therefore the ideas of relative frames and no-absolute frame in special relativity don’t really apply to curved space.

  43. TP Says:

    Thomas rotation makes the orientation of observers look different and Thomas precession makes the rotation rate of a spinning object look different. So in special relativity both rotation and translation are relative.

    But in general relativity both rotation and translation are absolute with respect to spacetime geometry if not to other observers.

  44. Peter Nelson Says:

    asdf #40

    Even if standard static analysis tools wouldn’t have caught the error, leaving the default malloc/free protection in glibc on would have… Not going to argue with you that the problem is (at least in part) “C itself”.

  45. TP Says:

    Scott #38:

    “whether it’s in P or outside P, etc. are questions with definite answers, independent of any formal system—in just the same way that the Twin Primes Conjecture is either true or false (i.e., there either are infinitely many twin primes or there aren’t).”

    Is it known whether the Twin Primes Conjecture is independent of the model of arithmetic used, i.e. whether it has the same answer in other countable but non-standard models of arithmetic?

    What about non-standard models of computation, if that makes sense? Not non-Turing-machine models of computation in the computer-science sense, but models in the sense of a non-standard model of arithmetic.

  46. TP Says:

    A non-standard Turing-machine!

  47. Scott Says:

    TP #45: When we speak about the twin prime conjecture, P=NP, and so forth being “true” or “false,” it’s implicit that we mean true or false in the standard model of arithmetic—just like, if I tell you that I have a stomachache, I don’t need to specify as an extra condition that I meant a stomachache in this world, the actual physical world, rather than some fictional world of my imagination. (I’d only have to specify an extra condition if I were talking about the latter.) To put it another way, nonstandard models of arithmetic are best thought of as “artifacts of Gödel’s Theorems”: by the Completeness Theorem, their existence is telling you nothing more or less than the fact that certain statements (again, statements that are either true or false in the standard model) aren’t provable in your favorite formal systems.

    So, if (hypothetically) it turned out that the twin prime conjecture was unprovable in ZFC, then yes, there would exist a “nonstandard model of arithmetic” that had a largest pair of twin primes. But the existence of that model wouldn’t be telling you anything more than the fact that the twin prime conjecture was unprovable. And in the standard model—the model that we actually care about when we ask the question—the conjecture would still be either true or false (presumably true!); it’s just that we might not be able to prove it. In any case, probably the twin prime conjecture is provable after all (there was huge progress toward it just within the last few years!), in which case this entire discussion wouldn’t even be relevant, since the twin prime conjecture would be true in all models.

    For more on these themes, you might enjoy chapter 3 of my Quantum Computing Since Democritus book.

  48. Scott Says:

    Jerry #37: My book is actually priced pretty low by the usual standards of Cambridge University Press. And it’s only $20 on Kindle. 😉

  49. Jerry Says:

    Scott #48

    Thanks Scott. You resolved the NP-Easy part of my response to #15.

    My questions involving No-Cloning, No-Deleting, and Quantum Fidelity involved the black hole information paradox. When you toss in quantum gravity, unitarity is not sacrosanct. This is what I was hoping to get your perspective on, not a “failure-to-do-my-8.01-homework” spanking.

    Is this a Bill O’Reilly-esque book-selling blog?

  50. Darrell Burgan Says:

    Only a layman’s viewpoint, but from a pure programming standpoint there are a large number of intractable classical problems that would be easily solvable if one had a time machine. I deal with them all the time. Imagine a cache that knows what data needs to be used next!

    Seems intuitively clear that if closed timelike curves exist, all kinds of crazy possibilities emerge.

  51. Darrell Burgan Says:

    Anon #25: not sure I agree that a JVM or analogous approach is always slower than C. JIT compilation can perform optimizations that are impossible in static optimization through the mere fact that the JIT executes at runtime. I think we agree that nothing approaches the speed of hand-tooled assembler code.

    As to JVM being safer, that is also itself uncertain. There are an increasing number of JVM exploits these days. Yeah there’s not supposed to be such a thing as a rogue pointer but there are lots of other holes in the cheese.

  52. Darrell Burgan Says:

    Scott #48: your book just arrived today. Looking forward to a nice read these coming days.

  53. Scott Says:

    Jerry #49: Well, I’m one of the people who thinks that unitarity most likely is sacrosanct even for black holes—or rather, it seems to me that the idea that unitarity is sacrosanct has led to better, more productive ideas about black holes than the idea of abandoning unitarity. There’s a lot more to say about this topic; check out my previous posts about the “firewall” problem if you’re interested.

  54. Jerry Says:

    Scott #53:

    Thanks Scott. Your Shtetl is not only optimized, it is normalized. The Hawking Radiation Firewall is the answer I was trolling for. I found an excellent summary at:

    Here is an excerpt:

    …Susskind had nicely laid out a set of postulates, and we were finding that they could not all be true at once. The postulates are (a) Purity: the black hole information is carried out by the Hawking radiation, (b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and (c) No Drama: an observer falling into the black hole sees no high energy particles at the horizon…

    Perhaps you do not like my posts because my “humor” is so similar to yours. If you prefer that I do not contribute questions, opinions, and factoids, it is your blog and I will comply.


  55. anon Says:

    asdf #40 interesting!

    It seems that (as Peter Nelson #44 points out) the OpenSSL programmers implemented their own wrapper for the malloc() and free() calls, apparently due to concerns about inefficient performance of the default library implementation on some platforms. This has had the wonderful effect of crippling the ability of most (all?) static analysers to find bad code like the Heartbleed vulnerability. In fact it's almost not even a bug so much as just bad programming. I'm sure these guys wouldn't be 100% trustworthy using Java, C#/.NET, Ada, or anything else.

    However I should point out that I’ve worked in commercial programming environments and seen much more shocking code – it’s certainly not something to blame on the free open source software model itself.

    BTW, I thought Scott’s beyond QC article was very interesting but didn’t have anything concrete to add (although I’m 100% convinced closed timelike curves do not exist in Nature – so that part of the discussion is fun for me philosophically but not scientifically). I hope these comments on Heartbleed are not too off topic.

  56. Scott Says:

    Jerry #54:

      Your Shtetl is not only optimized, it is normalized [in reference to my belief in unitarity].

    OK, while many of your jokes were lame, that one was decent. 😉

    anon #55: No, it’s not off-topic at all; thanks for the insights! It’s been a decade since I wrote any C code, but I definitely remember what a pain all those malloc()’s and free()’s were—I certainly wouldn’t be able to write C code that was secure against bounds-checking attacks, and were I forced to, I would demand some sort of automated tool that provably made that particular type of attack (though not, of course, other types) impossible.
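    [Editor's aside: for readers curious what a bounds-checking bug like that looks like mechanically, here is a toy Python sketch. It is purely illustrative, is not OpenSSL's actual code, and every name in it is invented.]

    ```python
    # Toy model of a Heartbleed-style over-read (illustrative only; this is
    # not OpenSSL's actual code, and all names here are invented). A custom
    # buffer pool reuses memory without clearing it, and the "heartbeat"
    # handler trusts the client-supplied length, not the payload's real length.

    pool = bytearray(64)  # one reused allocation, never zeroed between uses

    def store_secret(secret: bytes) -> None:
        pool[0:len(secret)] = secret  # an earlier request leaves data behind

    def heartbeat(payload: bytes, claimed_len: int) -> bytes:
        pool[0:len(payload)] = payload     # copy only the real payload...
        return bytes(pool[0:claimed_len])  # ...but echo back claimed_len bytes

    store_secret(b"sess_key=hunter2")
    reply = heartbeat(b"hi", claimed_len=18)  # claims 18 bytes, sends only 2
    # reply begins with b"hi" followed by leftover bytes of the earlier "secret"
    ```

    A bounds-checked runtime would refuse the out-of-range read; here, as in C with a custom allocator, it silently succeeds, which is the kind of attack an automated tool should provably rule out.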

  57. Michael Dixon Says:

    @#55 @#56

    While you are at it, check that your compilers are squeaky clean and sound as well. You probably should go all out and do the formal verification on the assembly or HDL level, though. Those basement-dwelling compiler nerds are NOT to be trusted!

  58. William Hird Says:

    Hi Scott,
    One quick question concerning black holes and information loss: if a lowly little NAND gate can erase information (it loses 1.189 bits given a 2-bit input), why can't a black hole with a “blazing firewall” also destroy information?

  59. Scott Says:

    William #58: Well, a completely ideal NAND gate with no side-effects would violate quantum mechanics! The only reason why “NAND gates” can actually be built in reality, consistent with the known reversible laws of physics, is that in practice all gates have dissipation, and the dissipation carries away the information needed to reconstruct the original two input bits, even when the output is a 1.

    If black holes are to play by the same reversible, quantum-mechanical rules as everything else in the known universe, then (firewall or no firewall) the infalling information needs to dissipate out of them as well, just like it dissipates out of the NAND gate. And if black holes don’t play by the same rules as everything else—well, OK then, go ahead and create a revolutionary new framework for all of physics that does away with reversibility, explain why your framework appears to uphold reversibility in all non-black-hole situations, and give some evidence for your new framework’s truth!

    Fundamental laws of physics (like reversibility) aren’t the sorts of thing that tolerate exceptions: if there is an exception, then the law is not a law, and a deeper law needs to be articulated.

  60. quax Says:

    William Hird #58: as Scott described above, entropy increases in your NAND example, so it's no mystery where the information goes.
    But IMHO this theoretical insight (due to the then-IBM researcher Rolf Landauer), and its recent experimental validation, is still extremely cool science.

    On the other hand, since Hawking radiation is non-thermal, something's gotta give.

    Hey Scott, since you proposed a celebrity death match for Geordie in the previous thread, how about you against Penrose to settle this particular question 😉

  61. William Hird Says:

    Scott#59: Not sure what you mean by a “NAND gate with no side effects would violate quantum mechanics”. What do you mean by side effects?

  62. Scott Says:

    William #61: I mean information that’s physically generated by the operation of the gate, other than the gate’s logical output. If you had an idealized NAND gate that mapped input bits x and y to only the output bit NAND(x,y), “deleting” x and y from the universe (and leaving no record whatsoever of them, not even in the stray radiation given off as heat), then that would obviously violate the unitarity of QM.

    But the resolution is extremely simple: it’s that real NAND gates don’t work like that. They do give off heat, and the heat encodes the information about the input bits x and y.

    (Note: If you did want to compute NAND in a completely dissipation-free way, you could in principle do it using a Toffoli gate. I.e. you’d map x,y,z to x,y,(z XOR (x NAND y)), which is then reversible by applying the same gate a second time.)
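    [Editor's aside: the reversible construction in that parenthetical can be checked in a few lines. A minimal Python sketch, written for this note rather than taken from the comment:]

    ```python
    def nand(x: int, y: int) -> int:
        return 1 ^ (x & y)

    def gate(x: int, y: int, z: int):
        """The reversible map described above: (x, y, z) -> (x, y, z XOR NAND(x, y)).
        (It is a Toffoli gate up to negating the control condition.)"""
        return x, y, z ^ nand(x, y)

    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                out = gate(x, y, z)
                assert gate(*out) == (x, y, z)   # applying it twice is the identity
                if z == 0:
                    assert out[2] == nand(x, y)  # with z = 0, the target holds NAND(x, y)
    ```

    Since the gate is its own inverse, no input configuration is ever "deleted": the map is a bijection on the 8 possible bit triples.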

  63. William Hird Says:

    Scott#62: Thanks for the clarification.
    But: “the heat encodes the information about the input bits x and y”, I want to see the machine that can capture the heat signature of the erased bits and identify them for posterity 🙂

  64. Rahul Says:

    It’s been a decade since I wrote any C code


    What language do you usually code in now? Or do you rarely code at all? Just curious.

  65. Rahul Says:

    Quoting from the PBS article:

    Now, the faster you run your computer, the more cooling you need—that’s why many supercomputers are cooled using liquid nitrogen.

    Off topic but, do many supercomputers these days use liq N2? I worked with some on the Top500 list circa 2009 & don’t recall many doing that.

    Just curious.

  66. Rahul Says:

    Thus, just as today’s scientists no longer need wind tunnels, astrolabes, and other analog computers to simulate classical physics, but instead represent airflow, planetary motions, or whatever else they want as zeroes and ones in their digital computers,

    Again, may be a nitpicking observation, but it doesn’t seem fair to mention wind tunnels in the same class as astrolabes. Aren’t a fair bit of components still tested extensively in expensive wind tunnels? e.g. aircraft wings, turbine blades, car bodies etc.

    No doubt, CFD has made huge advances but are we at the point yet where wind tunnels are obsolete?

  67. A Says:

    anon#25: “the Lorentz invariance is thought to break down at the planck scale but holds exactly for observables above this scale”.

    This seems like an interesting idea, does anyone have a link to a source? Intuitively one would expect the deviation from Lorentz invariance to follow a smoother curve (so it becomes negligible as one goes away from the Planck scale, but not so that it suddenly becomes zero at some point)

  68. Scott Says:

    Rahul, I have a request for you: please limit yourself to one question for me on this blog per day. I can’t keep up otherwise with everything you ask.

    Now, to pick one of your questions and answer it:

      What language do you usually code in now? Or do you rarely code at all? Just curious.

    Today, I usually code in an extremely high-level language called “undergrad.”

  69. anon Says:

    A #67

    OK, Mr. Pedant ( 😉 ): what I mean is that Lorentz invariance is thought to hold exactly above Planck-scale distances (i.e., below Planck-scale energies), but violations below this distance scale would not contradict relativity, as it is not possible to define exact observables on which a measurement could be made at such a scale (since the concept of “length” breaks down).

    But, as far as I understand, it would certainly be an unexpected result if Lorentz violations were detected close to the Planck scale – in fact there is evidence from indirect measurements that invariance holds even above Planck energies.

    However I am not an expert in QG, so you can consider my statement as just an opinion.

  70. jonas Says:

    William Hird #63: you might get to see it. Cryptographers these days are interested in side-channel attacks, which use the imperfections and side effects of computers. I’m not sure whether they do anything with heat yet, but they do derive information from noise and electric fields and stuff like that.
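    [Editor's aside: a toy illustration, invented for this note, of why side effects leak. An early-exit string check does an amount of work that depends on the secret, and anything correlated with that work (time, power, heat) betrays it:]

    ```python
    def insecure_check(guess: str, secret: str):
        """Early-exit comparison. Returns (match, characters examined);
        the second value stands in for a physical side channel like timing."""
        steps = 0
        for g, s in zip(guess, secret):
            steps += 1
            if g != s:
                return False, steps
        return len(guess) == len(secret), steps

    SECRET = "hunter2"
    _, cold = insecure_check("xxxxxxx", SECRET)  # wrong at position 1
    _, warm = insecure_check("hunxxxx", SECRET)  # 3-character correct prefix
    # warm > cold: by extending the guess one character at a time and
    # watching the "work done" grow, an attacker recovers the secret
    # incrementally, without ever seeing a correct/incorrect bit alone.
    ```

    Real side-channel attacks observe the analogous physical quantity (timing, electromagnetic emissions, power draw) rather than an explicit counter, but the information content is the same.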

  71. Blacksails Says:

    Question unrelated to the blog post:

    Is it possible that P or NP are not “properly defined” classes? Imagine if nobody had ever defined P, and instead defined some other class Q, which ended up being what we call P along with a little bit more. It would then be possible that “part of Q” = NP, but only for that little bit extra.

    Basically, what would the consequences be if “Part of P”=NP (say, all problems with some particular scaling or greater, but not including “normal” things like n^2, n^3, etc.), or P=”Part of NP”?

  72. Darrell Burgan Says:

    I’m sure I’ll get some eyes rolling on this, but in my defense the topic of the blog post *is* whether it is possible that nature would permit computing beyond QC. 🙂

    It occurred to me that in the realms of string theory there are certainly purely mathematical models that explore what the world would be like if one or more of the higher dimensions were time-like instead of space-like. And if one of these admittedly wild models actually described nature, then couldn’t a second timeline be exploited for computing, such that answers in one timeline appear instantaneously based upon thousands of years of computation in another?

  73. William Hird Says:

    Hi Jonas, yes I am aware of the “side channel attacks,” but to decode the kind of thermal residue that Scott was alluding to would, I think, take some fancy technology; no doubt “flux capacitors” and “Heisenberg compensators” would somehow have to be part of the circuitry! 🙂

  74. Scott Says:

    Blacksails #71: There’s been a huge amount of research involving “variants” of P and NP—whether it’s DTIME(nlog(n)) or BPP or AlgP or NC1 or Monadic NP or MA. (You can look all of those up in the Zoo, BTW.) And that can give you lots of “cousins” of the P vs. NP question—almost all of which are open, with a separation conjectured but not proved, just like with the original question. Anyway, until you make it clearer which kinds of variants of P and NP you’re interested in, it’s hard to be more specific.

  75. Scott Says:

    William Hird: Because of the Second Law of Thermodynamics, it’s been understood since the 1800s that reversibility of the underlying laws need not imply reversibility by any technology that we can imagine in any foreseeable future. So the right way to put the question is not whether we can actually build a machine to reverse the NAND gate or the black hole, but whether we can explain the NAND gate or the black hole without postulating fundamental physical laws that, at any point in time, ever “delete” the state of a particle out of the universe. For the NAND gate, we know the answer to that question is yes. For the black hole, many physicists conjecture that the answer is also yes, and I’m inclined to agree with that conjecture.

  76. William Hird Says:

    You are absolutely correct; my responses show my bias towards gadgets, being a retired electronics engineer. You have the correct scientific interpretation.

  77. A Says:

    anon #69:

    Ah that’s really cool, thanks for the opinion and the link : ) Sorry if it came across as pedantic, just think that the interesting stuff is usually in the details, right along with the devil ;).

  78. fred Says:

    Scott #59
    “a completely ideal NAND gate with no side-effects would violate quantum mechanics!”
    “[…] real NAND gates don’t work like that. They do give off heat, and the heat encodes the information about the input bits x and y.”

    I was looking again into this and was quite surprised to find out that Maxwell’s-Demon-type questions were still being worked out well into the ’50s (Brillouin) and the ’70s (Landauer) – and those discussions actually do not involve QM, right? It’s all in terms of thermodynamics, Brownian motion, and information theory.

    I guess the simplest example of a non-dissipative gate would be an AND gate using billiard balls (ideas from Toffoli).
    It’s clear that no information is lost, and you can always distinguish the cases (0,0)->0, (1,0)->0, (0,1)->0 (by looking at the 0-out and 1-out outputs).
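    [Editor's aside: that distinguishability claim can be verified mechanically. A sketch of my model of the Fredkin–Toffoli “interaction gate,” written for this note:]

    ```python
    from itertools import product

    def interaction_gate(a: int, b: int):
        """Billiard-ball AND (the Fredkin-Toffoli interaction gate): the two
        balls collide iff a = b = 1. The extra 'garbage' outputs conserve
        ball count and keep the map injective, so nothing is erased."""
        return (a & b, a & b, a & (1 - b), (1 - a) & b)

    outputs = {ab: interaction_gate(*ab) for ab in product((0, 1), repeat=2)}
    # Four distinct inputs give four distinct output tuples: the inputs are
    # always recoverable, while the first line carries AND(a, b).
    assert len(set(outputs.values())) == 4
    ```

    The price of reversibility is the garbage outputs: a dissipative AND gate throws that information into heat, while the billiard-ball version keeps it on explicit wires.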

  79. Jerry Says:

    “Ask me anything, but only one thing”. OK.

    If the concept of a multiverse is correct (per Susskind), when matter (or anything else for that matter) does “disappear” into a Black Hole’s (Bekenstein-Hawking) singularity in one of many universes but reappears as a “white hole” in another universe, does this resolve the information loss paradox?

    BTW: Is, “Today, I usually code in an extremely high-level language called “undergrad.” in P (pun)?

    Math, Physics, Algorithms, and (yes, humor) can all be effective tools for teaching and learning.

  80. Scott Says:

    Jerry #79: Your “resolution” of the information paradox—that information that falls into a black hole reappears in a baby universe spawned by the singularity—is the one actually advocated by Lee Smolin! (Susskind, on the other hand, doesn’t advocate that resolution at all; he believes in parallel universes, but not that kind of parallel universe.)

    Personally, I regard it as a possible but extremely strange solution—one that I would only consider if there were compelling evidence from quantum gravity that black hole singularities really do spawn these baby universes, and if their existence could be given some sort of operational meaning in this universe. At present, my understanding is that there’s no good evidence that black hole singularities really work this way; and furthermore, by preventing Poincaré recurrences even within a bounded region of space, this scenario seems to violate “the spirit if not the letter” of quantum mechanics. My preference is certainly for the information to reappear in this universe!

  81. Scott Says:

    fred #78: Yes, you can discuss the Maxwell Demon paradox purely in terms of classical thermodynamics and classical reversible computation! The laws of physics are reversible, and phase-space volume is conserved, already classically, and that’s pretty much all you need to explain both the paradox and its resolution.

  82. Jerry Says:

    Scott #80

    Thank you, Scott. Excellent answer!

  83. Mike Says:

    Off topic: I just read “Additively efficient universal computers,” and, assuming you read it in detail, I was curious about your take on its conclusions.

  84. Scott Says:

    Mike #83: I liked that paper and found the notion of additive universality to be interesting. Of course, what one really wants is a set of “candidate physical laws” (e.g. a cellular automaton) for which additive universality can be proved to hold, rather than just constructing a model for which it holds almost by definition. But it’s a nice start.

  85. matt Says:

    Scott, you write “So the right way to put the question is not whether we can actually build a machine to reverse the NAND gate or the black hole, but whether we can explain the NAND gate or the black hole without postulating fundamental physical laws that, at any point in time, ever “delete” the state of a particle out of the universe. For the NAND gate, we know the answer to that question is yes.”

    There are indeed several basic ways in which we “know” the answer is yes classically, but it is quite subtle IMO to specify in which way the quantum situation is less satisfactory than the classical one. I’ll give some specific examples, but overall I’d say there’s still a lot to understand classically, in the same way that pseudo-random number generators are still not fully understood in computer science. For a computer science analogue, giving a general proof of thermalization for a large variety of classical systems would probably be on par with proving P=BPP; actually, my guess is it would be harder as the standard arguments for thermalization roughly rely on assuming that certain events in a deterministic process can be treated as if they were random so they almost assume that good PRGs exist.

    For example, there is a line of research that dates back to Boltzmann, in which he shows that irreversible dynamics can emerge from reversible microscopic laws. However, the original derivation makes an assumption originally called “molecular chaos” in which one assumes that pairs of particles that scatter off each other are drawn independently from the single particle distribution, giving the Boltzmann equation. This does give irreversible dynamics, but requires an assumption. Under similar assumptions, one can argue that irreversible quantum dynamics can emerge. So, perhaps we would like to better justify this assumption.

    There is some rigorous work by Lebowitz and collaborators showing thermalization for certain initial conditions for specific systems. This is extremely elegant work, but currently limited in which systems it can be applied to. Conversely, there are some simple quantum systems (like free fermions) in which one can (much more straightforwardly) derive thermalization for certain initial conditions to a generalized Gibbs ensemble. I’d say that currently the knowledge of the quantum setting is much less satisfactory here, but maybe there is no sharp qualitative difference in which we can say that classically this general problem is understood at the level of rigorous math and quantum mechanically it is not.

    There are also numerical studies. Some of these were in fact controversial classically, where certain systems were thought not to thermalize but were later found to thermalize at extremely long times. The quantum numerics are much worse of course, but thermalization can be seen in some of those numerics (but not all…).

    A final interesting point is that we know from numerical studies using Monte Carlo on classical systems that good PRGs really are needed. There are some famous cases where a poor choices of PRG led to inaccurate results in classical Monte Carlo, even though the PRG was considered good at the time. So, the relation between thermalization and pseudorandomness is subtle.

  86. fred Says:

    Scott #84
    Considering the number of papers referencing it, it’s too bad John Conway named his cellular automaton thing “Game of Life” and not “Pokemon Mystery Dungeon: Explorers of Time and Darkness”.

  87. David Brown Says:

    “… there either are infinitely many twin primes or there aren’t …” Suppose Peano Arithmetic is logically inconsistent. Is the statement about twin primes then true?

  88. Scott Says:

    David #87: Good question. My answer is, absolutely! In the absurdly-unlikely event that PA was found to be inconsistent, we would need to create new first-order axioms for arithmetic. In other words, there would indeed be a crisis for the formal language that we use to talk about the positive integers. But the positive integers themselves would be serenely unaffected by all the commotion: they would still “exist” (in whatever sense they’d “existed” before), and still either have infinitely many twin primes or not have them. And if expressing this view makes me a Platonist, then so be it (though as I’ve said before, I prefer the term “anti-anti-Platonist”).

  89. wolfgang Says:


    >> the positive integers themselves would be serenely unaffected by all the commotion: they would still “exist”

    So you really are a hardcore Platonist.
    I think it is only fair to point out that there are other philosophies, as listed e.g. here.

  90. Scott Says:

    wolfgang #89: No, I’m a softcore Platonist. If I were hardcore, I wouldn’t have put scare quotes around the word “exist.” 🙂

  91. Dani Phye Says:

    “In my opinion, the crucial experiment (which has not yet been done) would be to compare the adiabatic algorithm head-on against simulated annealing and other classical heuristics.”

    More recently, have any experiments like this been done?

  92. wolfgang Says:

    Well, there is still the question of what “exists” means – as I learned from this interview.

  93. Jerry Says:

    Scott #88:

    “God created the integers, all the rest is the work of man.” -Leopold Kronecker

    See: “God Created The Integers: The Mathematical Breakthroughs that Changed History” by Stephen Hawking

    I also take the Platonist view, but Scott’s term “anti-anti-Platonist” sounds a bit math-agnostic; similar to an anti-positron, which is not an electron (it is a right-chiral electron that does not interact with the W boson).

    I view Hilbert space as a mathematical tool, but at the end of the day it is nice to kick our shoes off in R^3.

  94. Charles Says:

    Zookeeper Scott, I know this is off-topic, but can you tell me if \(\exists\mathbb{R}\) (the existential theory of the reals) is in the Complexity Zoo? I can’t find it, but maybe it’s just under some other name. It fits in somewhere between NP and PSPACE (I’d like to know more, hence looking in the Zoo…).

  95. Dezakin Says:

    Scott #88

    “In the absurdly-unlikely event that PA was found to be inconsistent, we would need to create new first-order axioms for arithmetic. In other words, there would indeed be a crisis for the formal language that we use to talk about the positive integers. ”

    In the unlikely event that PA was found to be inconsistent, we would also need to create new first-order axioms for set theory. PA was proven consistent by Gentzen using concepts that can be embedded into ZFC. The foundations would fall apart with at least as big a crash as with Russell’s paradox.

  96. Sniffnoy Says:

    Dezakin: This is a true statement, but it seems to be mostly irrelevant.

    That is to say, in my experience, the general feeling among people who study such things seems to be, if ZFC turns out to be inconsistent, then, well, that would be very surprising, and it would be a serious problem, but it would be a recoverable problem, because we could switch to a weaker set theory and still have pretty much all of ordinary mathematics; it would largely just be the set theorists who would have to unlearn what they know. After all, do we really have a good intuition for what should be true about enormous transfinite sets? A number of people have complained that the axioms of ZFC are too strong, and, well, maybe they’re right. Your Russell’s paradox comparison is instructive; sure, we ended up needing new foundations, but most of math outside of set theory was not affected.

    By contrast, if PA were found to be inconsistent, that would be truly disastrous, and it’s not at all clear how one could recover from such a thing. PA is a collection of basic statements about natural numbers that really, really, shouldn’t be false. What can you even do if PA is inconsistent? You can’t just go constructive and fall back to Heyting arithmetic; if PA is inconsistent then so is Heyting arithmetic. The obvious thing to do is to weaken the induction schema, of course, and this works perfectly well if you’re just trying to write down a theory of the natural numbers — but we don’t just want that; we want our theory of the natural numbers to interface with the rest of mathematics. Which means that our new set theory (which of course we’ll need if PA turns out inconsistent) will have to have limitations that don’t allow it to prove the full induction schema, but only the limited subset we choose. And executing that may not be so easy.

    So, basically, your comment to me reads as basically saying “If this complete and utter disaster occurs, it will also entail this much smaller and maybe handleable disaster!” (“If the moon crashes into the earth, there won’t be any more tides!”) Yes, it’s true, but it’s probably not what you should be focusing on.

    (Not to mention, while Gentzen showed ZFC proves Con(PA), you don’t even need that to see that an inconsistency in PA means you’re going to need a new set theory, since ZFC proves all the actual axioms of PA (appropriately translated), and so the consistency of ZFC implies the consistency of PA. Which is weaker than ZFC proving the consistency of PA, but still enough for our purposes here.)

  97. asdf Says:

    Is this something new? I thought the concept that time’s arrow emerged from decoherence had been around for a while.

  98. TheOnion Says:

    asdf, thank you for calling to our attention this impressive new talent, Natalie Wolchover

  99. Scott Says:

    Charles #94: (gasp) sorry!! The “existential theory of reals” complexity class seems to be completely missing from the Zoo. You or anyone else should feel free to add it.

  100. Scott Says:

    asdf #97: In general, the idea that the arrow of time is related to decoherence is a very old one—Everett had the idea in the 1950s, and the founders of QM even arguably had it in the 20s and 30s. And it’s really just a quantum gloss on the thermodynamic arrow of time, which goes back to Boltzmann in the late 19th century (and in many ways, QM changes the story surprisingly little). There’s also a long history of people rediscovering the decoherence/entanglement/arrow-of-time connection, or re-expressing it in different words, and treating it as new. Anyway, the new technical content in the papers Wolchover is talking about appears to be to rigorously derive the equilibration of interacting quantum systems in certain contexts, and to compute explicit upper bounds on the equilibration time.

  101. Serge Says:

    Asking whether there’s anything beyond quantum computing amounts to asking whether there’s anything beyond quantum mechanics, right? So maybe you should ask your friend Luboš about string theory… 🙂

    The Heartbleed bug is yet another illustration of the fact that efficiency is always obtained at the expense of accuracy. The OpenSSL developers will probably consider using a more evolved language – one that manages memory more safely. But at the same time, their programs will get a tiny bit slower. This phenomenon will also be at work when we ultimately have those quantum computers at hand. I’m not saying “scalable” because I don’t know what that means; such a concept is as difficult to define as “feasible” or “executable.” However, since quantum computers can compute faster, it will be all the more tedious to ensure the correctness of their outputs, whether at the software or hardware level.

  102. Scott Says:

    Serge #101: Well, it’s conceivable that Nature could let us do something beyond BQP, even if quantum mechanics is exactly true (as it is in string theory). For example, that could happen if some aspect of the AdS/CFT correspondence or of holography allowed us to apply a “radically nonlocal” unitary transformation in polynomial time, or if the universe had an initial state that was extremely hard to prepare (bumping us up from BQP to BQP/qpoly). However, I completely agree in regarding these possibilities as extremely unlikely—just as unlikely, I’d say, as Gil Kalai’s “dual” possibility, that Nature would let us do less than BQP even though quantum mechanics was exactly true! 🙂

    In comparing Heartbleed to hypothetical bugs in a quantum computer, I feel like you’re conflating two extremely different issues. As a general rule, it’s about a trillion times easier to write a correct program—whether classical or quantum!—when the program’s only purpose is to solve some fixed, well-defined math problem, rather than interfacing with a wide array of human users in a way that’s secure against any of those users who might be malicious and exploit vulnerabilities in the code! And there’s no advantage to using quantum computers for the latter purpose: we can continue to use classical computers for that, even as we switch to quantum computers for those mathematical problems for which QCs happen to provide a speedup.

  103. Michael Brazier Says:

    Regarding the black hole information paradox: isn’t it true that measurement of an entangled quantum particle destroys information, in that it breaks the correlations of the measured particle with the particles it’s entangled with?

  104. David Kagan Says:

    Michael Brazier #103: Information is not lost, it just moves somewhere else. When an entangled particle’s state is measured then information that was contained in the state of the entangled pair does indeed leak out, but it is preserved in the bigger system that includes the measurement apparatus (and perhaps some portion of the environment it sits in if it is not totally isolated).

  105. Scott Says:

    Michael #103: Indeed, you could argue that any quantum measurement “destroys information” (namely, all the information that was originally in the quantum state, besides the measurement outcome)! There’s nothing specific to entangled particles here.

    However, according to the “Church of the Larger Hilbert Space” / Many-Worlds perspective, even a measurement is “really” just a unitary transformation U that entangles you and your environment with the quantum system being measured. So, such a thing could in principle be reversed, by applying U⁻¹ to all the atoms of your brain, the air molecules and radiation in the room, etc. etc. So, no information was “fundamentally” destroyed. Sure, some information was destroyed “in practice,” but that’s hardly different from an egg being scrambled, a book being burned, or any other classical instance of the Second Law of Thermodynamics.

  106. Luke G Says:

    “The Heartbleed bug is yet another illustration of the fact that efficiency is always obtained at the expense of accuracy.”

    I’d disagree with that assertion. Certainly there exists an efficient frontier of program speed versus difficulty of correct coding. But C is nowhere near that efficient frontier; I’d say it’s particularly far from it. I’m a professional programmer, and many times I’ve witnessed huge productivity differences in languages and frameworks: well-designed stuff can be more efficient for both the CPU and the programmer! For example, I believe that a modern C++ implementation of OpenSSL would be FASTER than the C version and less prone to bugs (for the same programming effort), owing to the better standard libraries and resource management techniques available in C++.

  107. Igor Makov Says:

    Apparently, one Oprah Winfrey has been an adherent of the “Church of the Larger Hilbert Space”: “There’s no such thing as failure. Failure is simply the universe trying to move you in the right direction.”
    (Sorry, I could not help it 🙂

  108. Michael Brazier Says:

    Mm. Many-Worlds perspectives face a problem, because measurement prefers pure states to mixed ones.

    The possible results of a measurement form a basis for the Hilbert space of the measured particle. But there’s no mathematical reason to prefer that specific basis over any other. With the spin-1/2 particle case, for instance, |spin up> and |spin down> is the basis we normally use, but mathematically |spin up + spin down> and |spin up – spin down> would make just as much sense. We don’t use the latter basis because we never see states like |spin up + spin down> when we measure a physical particle.

    Since in Many-Worlds perspectives the larger Hilbert space is all there is, how does one account for the apparently special role of the pure states basis in a measurement?

  109. Scott Says:

    Michael #108: You’re talking about what’s called the “preferred-basis problem.” The usual answer is that the measurement basis is not specified a-priori but instead picked out dynamically, by the process of decoherence. In other words, if you just follow unitary evolution, letting the Schrödinger equation do its thing, you’ll find that entanglement repeatedly splits the wavefunction into branches that don’t interact with each other and that are “forever separated” for all practical purposes. In some of those branches, it will look like a spin measurement was made in the X direction, in others, it will look like a measurement was made in the Z direction, etc., just depending on the details of the measurement apparatus. I stress that this is not just a speculation or hope: you can work out examples in detail and see that this is exactly what happens.

    I do think there are serious objections to be leveled against MWI, and I’ve leveled some myself. But those objections take place at a different level: e.g., what do we even mean, empirically, in ascribing “reality” to the other Everett branches? Is the notion of “the quantum state of the entire universe” even useful? Could you, personally, test the truth of MWI by putting your own consciousness into coherent superposition, and then recohering it? Or is irreversible decoherence (and hence, in-principle MWI untestability) a necessary component of consciousness? What is it that accepts the invitation of the formalism to “impose” the Born rule on the decoherent branches?

    So yes, you can ask all those questions and more, but I don’t think you can fault MWI for overlooking some simple technical point. At a technical level, MWI is what the math wants!

  110. wolfgang Says:

    @Scott #109

    The decoherence argument to solve the preferred-basis problem suffers from one issue imho: where does the macroscopic measurement device (sitting in a classical, almost flat spacetime) come from?
    The calculations you refer to usually simply assume its existence … but its existence means that you have already selected one specific branch (or collection of branches) of the mwi wavefunction.

  111. wolfgang Says:

    I should mention that Kent and Dowker analyzed the issue I refer to and a brief but good summary is here:

  112. Mike Says:


    ” . . . what do we even mean, empirically, in ascribing “reality” to the other Everett branches?”

    Better to sometimes employ a rationalist (rather than a strictly empiricist) view, where reason is sometimes considered to be good evidence for the truth or falsity of some propositions. 😉 Nevertheless, assuming that each branch of the wave function is as “real” as any other seems no more troubling or difficult than the reverse.

    “Is the notion of “the quantum state of the entire universe” even useful?”

    Well, I guess that depends on what you mean by useful, but some folks think that asking “[w]hat is the quantum state of the universe?” is the central question of quantum cosmology!

    “Could you, personally, test the truth of MWI by putting your own consciousness into coherent superposition, and then recohering it?”

    OK, this would be hard, but there are proposals to try to achieve this using, for example, an AGI device. See generally, for a discussion of this issue:

    “Or is irreversible decoherence (and hence, in-principle MWI untestability) a necessary component of consciousness?”

    Perhaps with meat computers 😉 but I’m curious if you think this would hold true with regard to an AGI device?

    “What is it that accepts the invitation of the formalism to “impose” the Born rule on the decoherent branches?”

    The way I see it, you’ve got two choices: either adopt a “collapse” model (go ahead, I dare you 😉 ) or accept that the Born rule or some variant will always in a sense be “imposed”, because quantum theory (without collapse) has no probabilities, being deterministic. Here is a good overview of the various arguments surrounding the issue:

    “At a technical level, MWI is what the math wants!”

    I agree!! 😉

  113. Scott Says:

    wolfgang #110: If you wanted to know where the measurement device came from, then of course you’d need to push the story back further—telling a story of Schrödinger evolution generating entanglement and thereby giving rise to decoherent branches of the wavefunction in which one could see the formation of stars, supernova explosions, the condensation of heavy elements into planets, the evolution of life, and finally the building of spin measurement apparatuses. While I’ve obviously left many details unspecified, 🙂 I don’t know of any good argument that this entire story couldn’t be told in the language of decohering branches of a universal wavefunction, if that’s what you wanted to do. And crucially, if it can’t be, then that strikes me as equally a problem for any account of quantum mechanics, not just for MWI.

  114. Jerry Says:

    Scott #105 & 109:

    A simple Friday question: If you write your name on a piece of paper and burn it, where does the information go? How can it ever be retrieved? If “irreversible decoherence [is] a necessary component of consciousness”, how can the “pieces” ever be put back together in a way that is consistent with the 2nd law of thermo?

  115. wolfgang Says:

    >> I don’t know of any good argument that this entire story couldn’t be told

    Read the comment from Jess Riedel at the link I provided, including the Kent/Dowker paper, which provides such an argument.

    But I assume you already read it and did not find it convincing?

    >> equally a problem for any account of quantum mechanics

    I think of mwi as a programming language without ‘garbage collection’ (no branch of the universal wavefunction gets removed by the Copenhagen reduction).

    Are all programming languages (interpretations) in the end equivalent? Yes, but some are much harder to use and I would argue that mwi is unnecessarily hard to use even for simple cases 😎

  116. Scott Says:

    wolfgang #115: The paper by Dowker and Kent that you referenced is critiquing the consistent-histories approach, which (while I’ve never fully understood it) seems to involve some additional baggage over and above bare MWI. It’s long, but I’ll read it when and if I get a chance.

  117. Scott Says:

    Jerry #114:

      If you write your name on a piece of paper and burn it, where does the information go? How can it ever be retrieved?

    The information goes into the smoke, ash, and other byproducts. That’s just standard, 19th-century physics, not anything fancy or new. In practice, it’s nearly impossible to retrieve the information, but only for more-or-less the same sort of reason why it’s hard (though not impossible!) to reconstruct a puzzle after I’ve shaken up all the pieces. In the case of the burned paper, the puzzle involves ~10^22 microscopic pieces rather than just a few hundred macroscopic ones (and many of the “pieces” consist of radiation that’s since flown away!), so the problem is many, many orders of magnitude harder.

  118. fred Says:

    I’ve tried hard but I never got that whole “arrow of time” business. I don’t even see what the problem is…

    1) time has no preferred direction, but systems evolve based on their initial conditions, i.e. it’s the initial conditions that locally determine what we call the arrow of time.
    Our universe seems to have an arrow because the big bang was such a concentrated initial state, and it’s “dragging” everything in it along with it. But it’s not clear whether it always has to be the case, i.e. we could imagine universes/systems that are prepared in a less biased way, so that different pockets can seemingly evolve in different time directions from one another.
    The idea of entropy captures all this, but it’s a concept based on a macro representation of possible states: e.g. there are many more configurations of the air in this room corresponding to a uniform density than configurations where all the molecules are bundled together tightly in one corner (b)… but every actual configuration where the air density is uniform is, in itself, just as rare as (b). It’s like saying that you’re more likely to observe a 6 with two dice (1+5, 2+4, 3+3) than to observe a 2 (1+1), so a “double dice” system will more naturally go from a 2 to a 6 (breaking the egg) than the other way around (putting the egg back together); hence there is a time arrow embedded in it.
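    (As an aside, the dice counting is easy to verify in a few lines of Python; counting ordered rolls, a sum of 6 actually has five microstates against one for a sum of 2:)

```python
from collections import Counter
from itertools import product

# Each ordered roll (d1, d2) is one equally likely "microstate";
# the sum of the two dice is the "macrostate" we actually observe.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print(counts[2])   # 1 microstate:  (1,1)
print(counts[6])   # 5 microstates: (1,5),(2,4),(3,3),(4,2),(5,1)
```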

    2) we get biased by our own sense of the passing of time, which is subjective. We perceive it because of the way we construct memories (i.e. the more memories, the further away we are from our system’s initial conditions). But it’s all a big space/time block, extending and existing instantaneously in all directions.

    But it’s likely that things do get more interesting with QM, since in that context the notion of “indistinguishability” is so central. But then there are all the difficulties about what is a measurement, what is a system, …

  119. Scott Says:

    fred #118: Yes, I think the modern version of the arrow-of-time problem is simply, “why did the universe have such a low-entropy initial state?” Not surprisingly, opinions differ as to whether this is a real problem at all, and what sorts of answers to it would be satisfactory.

  120. Scott Says:

    wolfgang #115: OK, I just read Jess Riedel’s comment. I think Jess is absolutely right that there’s a formal circularity in the consistent-histories arguments typically offered by people like Gell-Mann and Hartle. Namely, these arguments use the assumption of “quasi-classical histories” to define observers, and then they use observers to define the quasi-classical histories. (Precisely because of such circularities, formal arguments about decoherence have never made me that much more confident in it than I’d been before — but I was already pretty confident!)

    However, I would also say that what Riedel points out strikes me as a benign circularity—analogous to what Google PageRank does, in defining an “important website” as a site linked to by lots of other important websites. That is, yes, the definition might not make analytic philosophers happy, but the apparent circularity in it can be “unraveled” by giving an appropriate algorithm. In the case of Google PageRank, the circularity is unraveled by constructing the link graph of the web and then finding its principal eigenvectors. In the case of decoherence, there seems to me to be very little doubt that, given the actual evolution of the wavefunction of a universe like ours, you could write a computer program that would do an extremely good job at picking out the “decoherent branches.” Yes, there could be different ways to do that (and I’m not about to discuss how to do it in the space of a blog comment…), and yes, the different ways could give different results in some edge cases, but at least for “realistic” macroscopic situations, it’s very hard for me to imagine how the different ways could actually disagree in practice. I’d need to see an example where that happened before I acknowledged it as a serious possibility.
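    (The PageRank analogy is concrete enough to sketch. Here’s a toy three-page link graph, obviously not the real web: “importance” is defined circularly, and power iteration unravels the circularity into the principal eigenvector.)

```python
import numpy as np

# Column-stochastic link matrix: M[i, j] = chance of hopping from page j
# to page i. Page 0 links to pages 1 and 2; page 1 links to 2; page 2 to 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

r = np.ones(3) / 3        # uniform initial guess at "importance"
for _ in range(200):      # power iteration: r converges to the
    r = M @ r             # principal eigenvector of M
    r /= r.sum()

print(r)   # ≈ [0.4, 0.2, 0.4]
```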

  121. wolfgang Says:

    >> given the actual evolution of the wavefunction of a universe like ours

    But I thought this is the problem – the universal wavefunction must contain many universes which are not at all like ours (in fact “freak branches” are the overwhelming majority, which leads to another debate about Born probabilities).

    Once you reduce the debate to a “universe like ours”, you are reducing the universal wavefunction – then why not go all the way and use Copenhagen?

  122. Jerry Says:

    Scott #117

    I’m well on board with the 19th century physics aspects. You could (theoretically) tear a piece of paper into 10^22 pieces, throw them into the wind and you would be unlikely to retrieve any info written on it. What about the information at the quantum level? If information at a B.H. event horizon doesn’t just disappear but escapes as Hawking radiation, what about information that we do not have the technology or time to retrieve? Does the info appear as heat (entropy)? The ash, smoke, etc. would be in a different “state” if the original paper had been blank as opposed to having information on it.

  123. Scott Says:

    Jerry #122: Yes, what you write is exactly what modern physics says happens.

  124. Scott Says:

    wolfgang #121: Sorry, I meant a universe with a hot Big Bang and a Standard Model Hamiltonian like ours (I’m not attempting to dig any deeper than that…). At least if you measure by the Born probabilities and not in some other way, the overwhelming majority of such universes will look more-or-less like our universe: they’ll have galaxies, stars, planets, etc., obeying the same laws of physics that are familiar to us. Of course it might be that only a small fraction of the branches have intelligent life, and only a small fraction of those have humans, and only a small fraction of those have us, specifically, but those are all implications that MWI proponents enthusiastically embrace! I don’t see how you can criticize MWI on that ground; it seems like question-begging to define any universe where things didn’t turn out just like they did in our universe to be a “freak universe.”

  125. wolfgang Says:

    >> I meant a universe with a hot Big Bang … if you measure by the Born probabilities …

    Yes, if you *assume* a classical background, the existence of macroscopic observers etc. and the Born probabilities then the interpretation problem goes away.
    But I thought the whole point of mwi is to *derive* those …

    What Jess Riedel et al. are pointing out (if I understand correctly) is that such a *derivation* requires additional assumptions beyond the unitary Schroedinger evolution.

  126. Serge Says:

    Luke G #106: Then it will mean that my own theory has bugs… and that I should have spent more time thinking it over. Efficiency versus accuracy, as always… 🙂

    One way of correcting it is to take into account the time that computer scientists took to invent the C++ language, the right frameworks, the faster computers, the larger drives that would host the new software, etc… Indeed, computer science has evolved significantly since the 1970’s. Now, getting back to my initial comparison, let’s hope the theoretical amount of time required for building a functional quantum computer isn’t infinite…

  127. fred Says:

    Scott #123
    On one hand I hear that all the fundamental physical laws are reversible, so you can’t ever hope to destroy information, since everything contains the trace of whatever happened before, and if you were to reverse the time arrow, the past would seem to be caused by the future (indistinguishable at the atomic level). So God could press rewind/fast-forward all he wants on his “space/time continuum” VCR; we would never notice, and the film would always be the same.

    But on the other hand, QM injects randomness into the universe (only the evolution of the amplitudes is deterministic), so how come this randomness doesn’t destroy information to some degree? Is it because pure randomness carries no information? It seems that QM randomness prevents us from predicting the future but not from inferring the past. QM muddies the reversibility principle – whenever God himself is pressing “rewind” then “play” on his VCR, the film of the world would be different each time (maybe the movie even looks different in reverse)… unless he’s watching all the films in parallel with MW?

  128. Jerry Says:

    Scott 123:

    Many thanks, Scott! See, I do know my 19th century physics(and some 20th and 21st). Walter Lewin would be very proud of me and would draw a long dotted line across his blackboard. Have a nice weekend.

  129. Scott Says:

    wolfgang #125: OK, maybe we don’t actually disagree! I agree that the frequent claims to “derive” the Born rule from unitary evolution alone are overblown, and I’ve said that before on this blog. You can’t get blood from a stone, and you can’t get objective probabilities from a deterministic evolution law without any further assumptions. What one can say, I think, is that the unitary evolution law very strongly invites you to tack on the Born rule: the Born rule just fits unitary evolution like a glove (if they were on a dating website for mathematical concepts, they’d be matched by their shared passion for the 2-norm), whereas there’s no other probability rule that similarly fits (and one can prove like 20 different theorems formalizing that intuition). But yes, saying that |ψ|^2 gives you a probability distribution over subjective experiences is an additional commitment you need to make to connect QM to experience, over and above a belief in unitary evolution. You (and Riedel?) are right about that.

    On the other hand, I don’t agree that if you can’t “derive” the Born rule from it, then there’s nothing to recommend MWI. An MWI proponent could say: “look, we have the only account of the evolution of the state of the universe that fits the known laws of physics, without invoking a mysterious ‘collapse’ that happens at unspecified times! yes, we might not fully understand consciousness, or observerhood, or the emergence of probabilities, but why should we let that impede our understanding of physics? if the math seems to be militating so firmly for unitary evolution holding always and everywhere, without exception, then why shouldn’t we just go with that, and treat all the observer stuff as our problem rather than Nature’s problem?”

  130. wolfgang Says:

    @Scott #129

    So what are the Born probabilities actually about?

    In “Copenhagen” I would say the probability that I experience M1 or M2. This is straightforward to understand.

    In “mwi + patched-on Born” it would mean the probability that I find myself in the branch W1 vs the branch W2. This is not so easy to understand (for me).
    If you complain about the ‘collapse’ in Copenhagen, then I can point out that this “finding myself” follows an unexplained mechanism too.

    I would also point out that the unitary evolution of the wavefunction, which depends on a time parameter t, is easy to understand in Copenhagen (t is associated with the classical clock I carry around with me), but not so easy to understand in mwi.
    How do you get (classical) clocks without reducing the universal wavefunction to those branches which contain (classical) clocks?

  131. Scott Says:

    fred #127: From the MWI perspective, even quantum measurements are reversible in principle, because measurements never “really” happen: they’re just language used to describe unitary evolutions that entangle us with the quantum systems we’re measuring. Or to put it in your terms: yes, God is watching all the branches in parallel. And that being so, for Him to “rewind” the tape is as easy as changing the unitary that controls the branching process from U to U^{-1}.
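    (The “rewind” really is that literal in a toy simulation: apply a random unitary, then its conjugate transpose, and the state comes back exactly. A NumPy sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 unitary U (QR of a random complex matrix), standing in
# for a "measurement" that entangles a system with its apparatus.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                       # the initial joint state

evolved = U @ psi                  # the "measurement" happens...
rewound = U.conj().T @ evolved     # ...and God presses rewind (U^{-1})

print(np.allclose(rewound, psi))   # True: the information was never lost
```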

    Anyway, it occurs to me that I should state things more carefully. Here’s what I think is true: from Galileo to the present, no source of irreversibility—or even a hint of one—has ever been found in the microscopic laws of physics. Rather, all irreversibility that we know about seems to be tied up with decoherence, the Second Law of Thermodynamics, and stuff like that.

    This whole discussion started with the black hole information problem. There, the relevant point is that, if you want black holes to really, actually convert pure states into mixed states (as Hawking’s semiclassical calculation suggested), then you’d need irreversibility in the fundamental laws, rather than “just” thermodynamic irreversibility. That’s the part that many people (correctly, I think) regarded as suspicious, and these days I’d say that ideas like AdS/CFT have cast severe doubt on it.

  132. wolfgang Says:

    @wolfgang #130 😎

    >> classical clocks

    Actually, we can turn this into an interesting homework problem for mwi proponents: assume that a measurement device is configured so that for outcome M1 a rocket of mass m is fired into outer space, and for M2 nothing special happens.
    In other words, for |M1> the mass of the Earth will be reduced from M to M – m, while for |M2> the mass remains M. Therefore clocks on the surface of the Earth will tick slightly differently, according to general relativity.

    This is not a problem for Copenhagen, since either M1 or M2 happens. But how does mwi handle this case?
    Does the unitary evolution of the universal wavefunction use clock time t1 or t2 or a combination of both?

  133. Scott Says:

    wolfgang: All your questions about MWI are good ones! The irony is that I’ve often been on the opposite side of this, asking variants of the same questions you’re asking to the MWI true-believers.

    On the other hand, I think intellectual honesty compels one to acknowledge the severe problems on the Copenhagen side of the ledger. Indeed, one could imagine an MWI proponent gleefully asking you: “so, you believe that wavefunctions evolve by the Schrödinger equation, except that sometimes—when they get too complicated or something—they suddenly and violently ‘collapse’? Oh please, tell me more about this ‘collapse’ law. Do you have to be conscious to collapse stuff? Can a frog or a robot collapse wavefunctions? At what point does the collapse happen: the optic nerve? the visual cortex? the soul? Or does it just happen whenever you, like, entangle more than some special number of atoms? What is the magic number, then?” (To get the full effect, you need to imagine the MWIers laughing hysterically as they ask these things.)

  134. wolfgang Says:

    @Scott #133

    I think the Copenhagen interpretation ultimately works best in combination with solipsism:
    The reduction of the wave function happens when *I* experience something 😎

    I guess this is why “shut up and calculate” is frequently the advice given to students …

  135. James Gallagher Says:

    Jesus, (RIP)

    why can’t you guys just maybe think that Nature does all the collapsing and you just get to observe it?

  136. Michael Brazier Says:

    Scott@133: The reply to the imaginary MWI proponent is simple: “I need only point out that, to date, nobody has ever seen a quantum particle in a superposition. If unitary evolution were indeed the whole story, we would have actual cases of unquestionably conscious observers being placed in a mixed state for a noticeable time and recohering afterwards – in fact it would be routine and unremarkable. In fact nothing of the sort has ever occurred; therefore there is almost certainly more to the story than unitary evolution. What that might be I cannot at present say, but that’s no reason to deny what’s before our eyes. Also, stop giggling, you look like an idiot.”

    IOW, the preferred-basis problem isn’t a simple technical matter that can be resolved by formal calculation. It’s a problem on the same level as the ones that give you qualms about MWI. Indeed, as the basic question is why we have to introduce the Born rule to get testable predictions out of the Schrodinger equation, the preferred-basis problem is a technical restatement of your first objection to the MWI proponents.

  137. Mitchell Porter Says:

    @Scott #133:

    ‘one could imagine an MWI proponent gleefully asking you: “… What is the magic number, then?” (To get the full effect, you need to imagine the MWIers laughing hysterically as they ask these things.)’

    Then they would be utter hypocrites, since they themselves are unable to give coherent answers regarding how many worlds there are, or exactly when it is that one world becomes two, or really any such details.

  138. Audun Says:

    Re Scott #88 (anti-anti Platonist)

    Your emphasis on double negation being different from no negation would seem to put you in the intuitionist camp 😉

  139. Scott Says:

    Michael #136:

      If unitary evolution were indeed the whole story, we would have actual cases of unquestionably conscious observers being placed in a mixed state for a noticeable time and recohering afterwards – in fact it would be routine and unremarkable.

    Sorry, that part of your response is just factually wrong. Decoherence (or ultimately, the low entropy of our universe’s initial state) explains perfectly well why we don’t see such things on the scale of everyday life. (Indeed, assuming a finite-dimensional Hilbert space, “macroscopic recoherences” shouldn’t start happening until the universe reaches thermal equilibrium, and there ceases to be any interesting evolution anyway. And in an infinite-dimensional Hilbert space, macroscopic recoherences need never happen.)

  140. Scott Says:

    James #135:

      why can’t you guys just maybe think that Nature does all the collapsing and you just get to observe it?

    That’s certainly a possibility; indeed it’s the one that GRW and Roger Penrose advocate. But then the burden falls on the proponent to say: how does Nature decide when to trigger a “collapse” and when not to? And how is it that we haven’t yet noticed the effects of such “objective collapses” in experiments—even with superconducting qubits involving billions of electrons, or with molecules of hundreds of atoms traveling through a superposition of slits? What sort of experiment do you predict will reveal these collapses?

  141. Utter Hypocrite Says:

    Mitchell Porter,

    Quickly scanning sources closely at hand, I’d say that loosely speaking a “world” is a complex, causally connected, partially or completely closed set of interacting sub-systems which don’t significantly interfere with other, more remote, elements in the superposition. Any complex system and its coupled environment, with a large number of internal degrees of freedom, qualifies as a world. An observer, with internal irreversible processes, counts as a complex system. In terms of the wavefunction, a world is a decohered branch of the universal wavefunction, which represents a single macrostate. The worlds all exist simultaneously in a non-interacting linear superposition. Worlds “split” upon measurement-like interactions associated with thermodynamically irreversible processes. How many “worlds” are there? The thermodynamic Planck-Boltzmann relationship, S = k*log(W), counts the branches of the wavefunction at each splitting, at the lowest, maximally refined level of Gell-Mann’s many-histories tree.

    This approach accepts the reality of the wave function and the QM formalism without bolting on a collapse postulate. This may not be “coherent” enough for you, and I don’t have the knowledge or background to argue the technical points, but a fair-minded person should, I think, acknowledge that this type of analysis is a reasonable, good-faith effort to answer these difficult questions, and that the label “utter hypocrites” is a bit strained.

    Now it’s your turn. What about this ‘collapse’ law? Do you have to be conscious to collapse stuff? Can a frog or a robot collapse wavefunctions? At what point does the collapse happen: the optic nerve? the visual cortex? the soul? Or does it just happen whenever you, like, entangle more than some special number of atoms? What is the magic number, then?

  142. itai Says:

    Hi Scott,
    I thought about an anomaly that exists in the basics of probability theory, which could have some implications in QM (Max Born’s probabilistic interpretation and the uncertainty principle), and maybe in TCS.
    And it would insert some set-theory-based math into physics (measure theory, the basis of modern probability, is built on set theory).

    QM, and any statistical theory, presumes that the strong law of large numbers holds all the time (when n -> inf, average = mean).
    But the strong law of large numbers holds only when the expected value of the probability density function converges absolutely (in the sense of Lebesgue integration);
    there are many probability density functions for which either the expected value or the second moment does not satisfy this condition (they can converge under other types of integral, such as the improper Riemann or gauge integral;
    see here for the status of integration definitions in math;
    the gauge integral also has some connection with QM path integration).

    So if some wave function has no first or second moment according to Lebesgue integration, then what are the SD and expected value of position, for example?

    And if there are no such wave functions (I don’t think that’s true, because the Cauchy/Lorentzian distribution is in use now), then such limiting conditions should be taken into account
    (adding, to the demand that integral |psi|^2 dx = 1, the demands
    integral |x|*|psi|^2 dx < inf,
    integral x^2*|psi|^2 dx < inf).
  143. wolfgang Says:

    @Utter Hypocrite #141

    >> What about this ‘collapse’ law?
    Your questions are somewhat misguided to a Copenhagener, who is not talking about an “objective collapse”.

    *I* decide to reduce the wavefunction after a measurement, because it is no longer the most economic description of reality; this is a matter of convenience, and therefore your questions cannot be answered as such.

  144. Jerry Says:

    Re Scott #88 (anti-anti Platonist)

    Like the Grandfather Paradox, not-not-Platonist could mean Scott has a 50% chance of being a Platonist and a 50% chance of being an anti-Platonist. See:

  145. James Gallagher Says:

    Scott #140

    Sorry I opened my big mouth again, especially after saying I would retire from commenting lol 🙂

    It’s Easter, traditionally a time of peace, so I will tread carefully. I will just respond by saying that there is NO current experiment which indicates that collapse is not occurring – even the observations made in quantum Zeno experiments are entirely consistent with a single path evolution in Hilbert space – AS LONG AS that path is probabilistically generated! (i.e. every single collapse is probabilistic)

    However, as you know, I believe that we will start to observe performance issues with large-scale QC which will not be explainable by decoherence issues. The “scale” at which this will happen is of course the thing I should be able to predict before opening my big mouth again.

    Happy Easter!

  146. Utter Hypocrite Says:

    “*I* decide to reduce the wavefunction after a measurement, because it is no longer the most economic description of reality; this is a matter of convenience and therefore your questions cannot be answered as such . . .”

    Look, I’m not saying that you shouldn’t adjust your thinking because you happen to come into possession of more information. But don’t tell me you are, as you seem to hint, a hardcore solipsist, thinking that not only the limited world we can observe, and the “universe” we can surmise, but everything there is or could ever be, depends on what you may happen to come up with.

    And this is your reason for not being able to at least try and answer my questions? Seriously, give it a try.

  147. wolfgang Says:

    >> Seriously, give it a try.

    OK, one more time (I promise it is the last time).

    We follow the advice of the famous philosopher Rumsfeld and divide the world into these three parts:
    1) the known unknown: the qubits, which we can describe with a wavefunction.
    2) the unknown unknown: the internal state of the environment, which includes large parts of the observer.
    3) the known: my mental state (it is all I really know).

    Where one draws the line between 1), 2), and 3) is up to the particular experimental situation. Perhaps you can arrange it so that nerve cell 563,783,123 in your right eye is part of 1), but in most cases it will be part of 2).

    I can assure you that 1) will always be smaller than 2), and therefore talk of a universal wavefunction is not economic.
    I can also tell you that it would not be very economic to describe 3) with a wavefunction, because I know what I know.

    So when the wavefunction from 1) entangles with 2) during the experiment, decoherence sets in and when it finally reaches 3) it is reduced.

    But again, trying to describe this with one wavefunction would not make much sense … therefore your questions are misguided.

  148. itai Says:

    Would you consider wave functions without absolute (Lebesgue) convergence of the first/second moment (i.e. with no SD) to be among the unknown unknowns?
    Or would you consider them physically impossible?

  149. wolfgang Says:

    @itai #148

    in many cases the lazy physicists imagine the experiment to happen in some sort of box (e.g. the laboratory), using appropriate boundary conditions to make sure the integral(s) over the wavefunction(s) converge.

    If one talks about the ‘wavefunction of the universe’ etc., your problem could become a real issue (how do you normalize the universal wavefunction?)

    Btw one needs to keep in mind that in real experiments we often do not know the correct physics yet – this is why we do the experiment in the first place. So we do not know the wavefunction before the experiment.
    How does a mwi proponent a la Tegmark describe this situation? Did the multiverse split into different branches, with e.g. different masses for the Higgs?

  150. itai Says:

    @wolfgang #149
    I know that in physics we normalize the wavefunction; otherwise it would not be a legal distribution function.
    I am talking about a different problem:
    the strong law of large numbers (which assures us that the average equals the expectation as n → inf) does not hold for all distributions (when the first moment does not converge absolutely); also, some distributions have a first moment but infinite variance (the second moment does not converge).

    My question is whether a wavefunction can act like this.
    If it is physically possible, what does it mean for the uncertainty principle?
    If such a wavefunction is not physically possible, does someone take this into account, so that we can add some more constraints on what counts as a legal wavefunction?
    (I know the Cauchy and Lévy distributions are in use in the physics world; I have no idea if they have any connection to the wavefunction, and I have numerous other distribution examples.)

  151. fred Says:

    itai #150
    But in QM the probability distribution is from the square of the modulus of the amplitude. So the distribution is always positive, no? I don’t think it’s possible to get the moments you suggest with strictly positive values, no? (I could be wrong, I’m no expert)

  152. itai Says:

    fred #151
    As far as I know, the wavefunction in QM ranges over (-inf, inf); only the probability is always positive or zero
    (the absolute value of the wavefunction squared).
    So I don’t see any theoretical reason why the wavefunction’s first moment is always absolutely integrable, nor the second.

    Anyway, it is possible for a distribution supported on positive values only to have a first moment but no second moment
    (meaning the variance is infinite).
    For example, take X = 1/sqrt(U), where U is the uniform distribution on (0, 1).
    It has density function f(x) = 2*x^(-3) for x >= 1.
    Var(X) = infinity, but E(X) = 2.
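    [itai’s example is easy to check numerically. Below is a quick Monte Carlo sketch in Python; the sample size and seed are arbitrary. The sample mean settles near 2, since E(X) exists, while the empirical second moment never stabilizes.]

```python
import random

random.seed(12345)

# itai's example: X = 1/sqrt(U), with U uniform on (0, 1].
# Density f(x) = 2*x**(-3) for x >= 1, so E(X) = 2 but Var(X) = infinity.
n = 1_000_000
xs = [1.0 / (1.0 - random.random()) ** 0.5 for _ in range(n)]

# The strong law of large numbers still applies (the first moment exists),
# so the sample mean settles near 2.
mean = sum(xs) / n
print("sample mean:", mean)

# The empirical second moment, however, does not stabilize as k grows:
# it is dominated by rare huge samples, reflecting Var(X) = infinity.
for k in (10**3, 10**4, 10**5, 10**6):
    m2 = sum(x * x for x in xs[:k]) / k
    print(k, "empirical second moment:", m2)
```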

  153. Douglas Knight Says:

    Scott: As a general rule, it’s about a trillion times easier to write a correct program…when the program’s only purpose is to solve some fixed, well-defined math problem, rather than interfacing with a wide array of human users in a way that’s secure against any of those users who might be malicious and exploit vulnerabilities in the code!

    That’s true, but presents a false dichotomy. One strategy for writing secure code is to formalize the requirements, ideally in a DSL that generates the actual code. That doesn’t address most side-channel attacks (timing, compression, …), but it does address many other attacks, such as Heartbleed and the recent validation problems. Cf. langsec. (All of which is completely orthogonal to your point.)
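    [To make the langsec point concrete: a toy Python sketch of a Heartbleed-style length check, with a hypothetical record format and function name. The actual OpenSSL bug was, at bottom, trusting a declared payload length instead of validating it against the bytes actually received.]

```python
import struct

def parse_heartbeat(record: bytes) -> bytes:
    """Parse a toy heartbeat record: 1-byte type, 2-byte big-endian
    declared payload length, then the payload itself.

    The parser accepts only well-formed inputs: a declared length
    that exceeds the bytes actually present is rejected, instead of
    being trusted (which was, at bottom, the Heartbleed mistake).
    """
    if len(record) < 3:
        raise ValueError("record too short")
    msg_type, declared_len = struct.unpack(">BH", record[:3])
    payload = record[3:3 + declared_len]
    if len(payload) != declared_len:
        raise ValueError("declared length exceeds actual payload")
    return payload

# Well-formed: declares 5 bytes and carries 5 bytes.
print(parse_heartbeat(b"\x01\x00\x05hello"))

# Malicious: declares 0x4000 bytes but carries only 4 -- rejected
# rather than answered with whatever sits past the end of the buffer.
try:
    parse_heartbeat(b"\x01\x40\x00ping")
except ValueError as e:
    print("rejected:", e)
```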

  154. srp Says:

    I don’t know if I buy the simple reversibility argument. Suppose we were to fracture a piece of glass into two pieces by applying a specific force vector to it. I don’t really believe that it would reform into a single piece if the force vector were exactly reversed. And I don’t think the problem is one of not being able to get exact enough coordinates to get the vector reversed. There’s something going on with the way molecular bonds work so that the newly created surface is not “open” to re-bonding with the other piece in the way it was configured before. (It might be instructive to compare putty-like to rigid materials in this regard.)

    Obviously, there must be some threshold size of object where this phenomenon would first set in, because single-particle processes are observed to be reversible. I don’t know if it’s the atomic, the molecular, the ten-molecule, etc. level where this happens. But it might be more instructive to focus on this sort of specific material system rather than generalized abstract systems to get some intuition about the mechanisms involved in irreversibility.

  155. Scott Says:

    srp #154: Yes, the glass would coalesce back together, and it’s not a matter of debate or opinion. It’s 18th-century physics, emphatically upheld by everything that’s come since.

    Keep in mind, however, that you would need to perfectly reverse not merely the motions of the glass molecules, but also the motions of all the photons that escaped, all the motions that those photons caused after getting absorbed by electrons somewhere else, etc. If you time-reverse everything (well, and also reverse left and right in weak decays, and reverse matter and antimatter, in the unlikely event that either of those is relevant 🙂 ), then you’ll certainly obtain another valid solution of the equations of physics. It will just be one with an extremely bizarre initial condition, one that has the bizarre property of causing a pristine glass to form out of fragments.
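    [Scott’s reversibility claim can be seen in miniature with a toy Python sketch: a made-up anharmonic oscillator with arbitrary parameters, integrated with velocity Verlet, which is time-reversible by construction. Run forward, flip the velocity, run again, and the system retraces its path back to the initial state.]

```python
def accel(x):
    """Force law for a made-up anharmonic oscillator: F = -x - x**3."""
    return -x - x**3

def verlet(x, v, dt, steps):
    """Velocity Verlet integration: symplectic and, crucially here,
    exactly time-reversible (up to floating-point rounding)."""
    a = accel(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.3
x1, v1 = verlet(x0, v0, dt=1e-3, steps=10_000)   # run forward
x2, v2 = verlet(x1, -v1, dt=1e-3, steps=10_000)  # flip velocity, run again
print(abs(x2 - x0), abs(v2 + v0))  # both tiny: the motion retraces itself
```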

  156. srp Says:

    I don’t see in this case how to reverse a photon’s motion exactly since it doesn’t have a classical trajectory and seems like it might come up with a different “collapse” in terms of polarization, etc. when it gets “measured” by the fracture surface (compared to its “measured” state when it flew away from the fracture surface). I thought these kinds of measurements have stochastic outcomes, but likely there is some aspect of the theory I’ve overlooked.

    In any case, assuming I am wrong, perhaps there is room for some careful measurement of this phenomenon with very small matter clusters, gradually increasing their size (and maybe varying their rigidity) to see where the finickiness of the initial conditions sets in. It’s relatively easy (there’s a wide range of initial conditions) to get two fundamental particles to reverse a decay process or a collision that led to fragmentation. It’s almost impossible to do it with a storm window. Somewhere in between is the knee of the finickiness curve.

  157. Serge Says:

    Scott #102:

    “In comparing Heartbleed to hypothetical bugs in a quantum computer, I feel like you’re conflating two extremely different issues. As a general rule, it’s about a trillion times easier to write a correct program—whether classical or quantum!—when the program’s only purpose is to solve some fixed, well-defined math problem, rather than interfacing with a wide array of human users in a way that’s secure against any of those users who might be malicious and exploit vulnerabilities in the code!”

    Actually, this conflation was made on purpose. As Poincaré used to say, mathematics is the art of giving different things the same name. It seemed interesting to me to compare an attempt to solve a fixed, well-defined math problem by quantum means with an attempt to solve some real-world security issue by classical means. In both cases, elements of uncertainty are at work, so that a plethora of malicious users could play the role of the fundamental indeterminacy of quantum mechanics. In such a view, the difficulties encountered in building a quantum computer appear as a consequence of some general principle one might call “the uncertainty of the fight against uncertainty”. It bears some resemblance to the (possible) unprovability of the unprovability of P≠NP.

  158. Itai Says:

    The answer to your quest for a physical law that will explain those observations about computational models is the law of least action,
    where action = physical computational complexity.
    It is clear why it should be mathematical time complexity, since it counts the “actions” of the computation machine and not actual time!
    That the action is not always minimal could explain the difficulty of solving NPC problems!

  159. T Says:

    What about a warp bubble computer? Let me be the first to say that I don’t really know what I’m asking.