Zork’s bloogorithm

If you have opinions about quantum computing, and haven’t yet read through the discussion following my “response to Dyakonov” post, you’re missing out.  The comments—by QC researchers (Preskill, Kuperberg, Gottesman, Fitzsimons…), skeptics (Dyakonov, Kalai, …), and interested outsiders alike—are some of the most interesting I’ve seen in this two-decade-old debate.

At the risk of crass immodesty, I just posted a comment whose ending amused me so much, I had to promote it to its own post.  My starting point was an idea that several skeptics, including Dyakonov, have articulated in this debate, and which I’ll paraphrase as follows:

Sure, quantum computing might be “possible in principle.”  But only in the same sense that teaching a donkey how to read, transmuting large amounts of lead into gold, or doing a classical computation in the center of the Sun are “possible in principle.”  In other words, the task is at the same time phenomenally difficult, and fundamentally arbitrary and quixotic even if you did somehow achieve it.

Since I considered this argument an important one, I wrote a response, which stressed how quantum computing is different both because it strives to solve problems that flat-out can’t feasibly be solved any other way if standard complexity conjectures are correct, and because the goal—namely, expanding the human race’s computational powers beyond classical polynomial time—is not at all an arbitrary one.  However, I then felt the need to expand on the last point, since it occurred to me that it’s both central to this debate and almost never discussed explicitly.

How do I know that the desire for computational power isn’t just an arbitrary human quirk?

Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.

Let me put it this way: if we ever make contact with an advanced extraterrestrial civilization, they might have three sexes and five heads. But they, too, will have encountered the problem of factoring integers into primes. Indeed, because they’ll inhabit the same physical universe as we do, they’ll even have encountered the problem of simulating quantum physics. And therefore, putting the two together, they’ll almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.
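The post never spells out what Shor's algorithm actually does, so here is a minimal sketch (my own illustration, not anything from the post): the classical reduction at the heart of Shor's algorithm turns factoring N into finding the multiplicative order r of some a mod N. Only the order-finding step needs a quantum computer; below it is replaced by slow classical brute force, purely so the surrounding logic can be seen end to end.

```python
from math import gcd

def order(a, N):
    # Brute-force multiplicative order of a mod N. This is the ONE step
    # a quantum computer would do efficiently via period-finding; the
    # classical version here takes exponential time and is for
    # illustration only.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    # Classical scaffolding of Shor's algorithm: given the order r of a
    # mod N, if r is even and a^(r/2) is not -1 mod N, then
    # gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) are nontrivial factors.
    g = gcd(a, N)
    if g > 1:
        return g, N // g          # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # odd order: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical_part(15, 7))   # → (3, 5)
```

A real quantum implementation would replace `order` with the period-finding circuit; everything else above is ordinary classical post-processing.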

161 Responses to “Zork’s bloogorithm”

  1. Vadim Says:

    For us to meet an alien race, they might well need to have the technology to travel at nearly the speed of light, given the distances involved. Maybe even arbitrarily close to it, with all of the related time dilation effects. In that case, perhaps they could use the same technology to build a computer that could take an unlimited amount of time to work on a problem while only seconds pass outside the computer. Then they could solve any problem whatsoever in constant time, and all of our concepts of computational power & efficiency would seem completely quaint to them.

  2. Scott Says:

    Vadim: See my NP-complete problems survey. Very briefly, that fun idea doesn’t work, because of the amount of energy needed to accelerate to relativistic speed and then decelerate. If you want an exponential computational speedup, then you need exponential energy, and that, in turn, takes exponential time to gather up (because of the Schwarzschild bound). The aliens, of course, would’ve figured all this out. 🙂
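To make the energy objection concrete, here is a back-of-the-envelope sketch (an illustration of mine, not a rigorous bound): the twin-paradox "speedup" factor equals the Lorentz factor γ, and the traveler's kinetic energy (γ − 1)mc² grows linearly in γ, so an exponential computational speedup demands exponentially much energy.

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(gamma, mass_kg):
    # Relativistic kinetic energy: (gamma - 1) * m * c^2, linear in gamma.
    return (gamma - 1.0) * mass_kg * C ** 2

def speed_for_gamma(gamma):
    # Speed (as a fraction of c) needed to achieve a given Lorentz factor.
    return sqrt(1.0 - 1.0 / gamma ** 2)

# A 2^n speedup needs gamma = 2^n, hence ~2^n joules even for one traveler:
for n in (10, 20, 30):
    gamma = 2.0 ** n
    print(f"speedup 2^{n}: v/c = {speed_for_gamma(gamma):.15f}, "
          f"energy for 70 kg ~ {kinetic_energy(gamma, 70.0):.2e} J")
```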

  3. Vadim Says:

    Would manipulating gravity to accomplish the same thing also necessarily have the same limitation?

  4. Scott Says:

    Vadim #3: Excellent question! As far as current physics can say, the answer is yes — again basically because of the Schwarzschild bound, or more generally, the holographic bound, which it’s thought that any quantum theory of gravity should satisfy. What you find, again and again, is that if you try to speed up a computation so that it runs faster than one operation per Planck time (i.e., ~10^43 operations per second), then you necessarily use so much energy that your computer collapses to a black hole.
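The ~10^43 figure is just the inverse of the Planck time, which follows from three measured constants. A quick sanity check (standard physics, computed here only for illustration):

```python
from math import sqrt

# Planck time t_P = sqrt(hbar * G / c^5), from CODATA constant values.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

t_planck = sqrt(hbar * G / c ** 5)
ops_per_second = 1.0 / t_planck   # one operation per Planck time

print(f"Planck time ~ {t_planck:.2e} s, i.e. ~{ops_per_second:.1e} ops/sec")
```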

  5. Roger Says:

    The aliens will be smart enough to know that you can get bigger results by making unrealistic assumptions.

    Scott, I see that you concede the possibility that “scalable QC’s never become reality and yet no substantial rewriting of Physics is really required”. Maybe those aliens realize that the laws of physics do not imply the feasibility of scalable QC.

  6. Scott Says:

    Roger, did you even read what I wrote? I said that the world you describe would simply be a world where no one ever figured out what the truth was. (Perhaps because civilization collapsed, or because powerful idiots succeeded in stopping science.) Even in such a world, either QC would have been possible, or else some revision would have been needed to physics as understood in 2013.

    I told you a while ago that you were banned from this blog until you demonstrated the ability to comprehend something, anything, that anyone else said to you. I’ve let through a single comment of yours only because I wanted to take the opportunity to reaffirm that policy.

  7. Peter Morgan Says:

    On the QFT reading side, I’ve recently been finding Othmar Steinmann, “Perturbative quantum electrodynamics and axiomatic quantum field theory” to be a thought-provoking contrast to conventional textbooks.
    Richard Borcherds’ arXiv:1008.0129v2 [math-ph] “Renormalization and quantum field theory” is also different from the usual stuff.
    I would hope that my discussions of QFT, most recently arXiv:1211.2831v2, “Nonlinear Wightman fields”, would also be useful as a contrast (and in any case, ripping the ideas therein, as far as they yet go, will not take long).

    You were right that the comments on the previous post make an interesting read, but it took quite a while and there was some repetition.

  8. Douglas Knight Says:

    Scott #3: the holographic bound is about entropy and the Schwarzschild bound is about energy, right? So the holographic bound is about irreversible computers and doesn’t prohibit running a reversible computer fast. But Schwarzschild does.

    Is there some way of phrasing them that covers both physical claims? And does that physical claim apply to the full computational claim cleanly?

  9. Scott Says:

    Douglas #8:

      the holographic bound is about entropy and the Schwarzschild bound is about energy, right? So the holographic bound is about irreversible computers and doesn’t prohibit running a reversible computer fast. But Schwarzschild does.

    Yeah, sorry, I should have referred to a bound on operations that’s closely analogous to (but not identical with) the holographic bound. Seth Lloyd has a paper on this, for example.

      Is there some way of phrasing them that covers both physical claims? And does that physical claim apply to the full computational claim cleanly?

    Good question! But I’m running to dinner now. Maybe someone else wants to tackle this?

  10. Carl Says:

    Always with the aliens! You know of course that we have exactly NO evidence that aliens do or do not do science and math the same way that we do. And yet your response doesn’t strike you as question begging. Why? I blame Kant: http://blog.carlsensei.com/post/15415550062

  11. wolfgang Says:

    >> the problem of simulating quantum physics

    Let’s say you want to compute the spectrum of the He atom; it seems to me that the most efficient quantum computer to do this calculation would consist of 1 alpha particle, surrounded by 2 electrons, and this system would be surrounded by the usual lasers and measurement devices.

  12. Vadim Says:

    I’m as far from an expert on either math or aliens as one can be, but I’d be absolutely shocked if any sufficiently intelligent alien race didn’t have concepts like natural numbers, arithmetic, geometry, calculus, number theory, etc. Surely you’re not saying that these things are constructs of the human mind rather than part of an objective reality. Could an alien race likewise not have its own concepts of gravity, photons, stars, or entropy? Maybe some alien race out there isn’t familiar with some of these concepts, just as we weren’t 5,000 years ago; but just like us 5,000 years ago, that would mean they lack the ability to accurately explain what’s happening around them.

  13. John Sidles Says:

    One thing is certain … there is no stopping them … otherworldly mathematical intelligences already are here … and I for one welcome our new inter-universal Teichmüller Theory overlords!

  14. Scott Says:

    Carl #10:

      You know of course that we have exactly NO evidence that aliens do or do not do science and math the same way that we do. And yet your response doesn’t strike you as question begging. Why?

    One of the formative experiences of my adolescence was coming across, in a book about π, a reprint of a Japanese geometry treatise from around 900 AD. I can’t read modern Japanese, let alone ancient Japanese. And Japan had zero contact with the West at that time. Yet merely from one illustration in the treatise, it was absolutely clear that the author was proving A = πr², and using exactly the same “pizzafication” strategy that I would use to prove it.

    In math and science, I’d say there’s enormous diversity in the bad ideas! But it seems like in general, the better the ideas, the more uncanny the convergence you see across time and space (examples of Western and Soviet bloc mathematicians discovering the same important theorems around the same time, totally unaware of the other, are common as dirt). Indeed, because of this “convergence of deep ideas” phenomenon, it wouldn’t even shock me if, let’s say, human and alien group theory were more similar to one another than were Mayan and Babylonian arithmetic.

    Yes, it’s true that I can’t base my statements about Zork’s Bloogorithm on any conversations with Qveter Zork (only with his human counterpart…). If you like, I’m simply putting a falsifiable stake in the ground, drawing solely on my experience with math. Find me an advanced extraterrestrial civilization that has no concepts corresponding to integers, prime numbers, etc., and I will gladly admit that I was wrong. 🙂

  15. Scott Says:

    wolfgang #11:

      Let’s say you want to compute the spectrum of the He atom; It seems to me that the most efficient quantum computer to do this calculation would consist of 1 alpha particle, surrounded by 2 electrons and this system would be surrounded by the usual lasers and measurement devices.

    Sure, but there are two large drawbacks to that “quantum computer”:

    First, you obviously couldn’t use it to check whether your equations describing the He atom were the right equations! For that, you’d want a quantum computer where you got to control all the interactions.

    Second, you’d basically be in the same situation as the users of wind tunnels, or analog and mechanical computers, in the early 20th century. Every time you wanted to solve a new problem—say, involving a different atom, or a complicated molecule—you’d need to go back to your lab and do more engineering and chemistry. In the classical case, the advantages of having a single, universal, programmable simulation machine eventually became obvious, and transformed large swaths of science. Why wouldn’t you expect the same to happen in the quantum case?

  16. John Sidles Says:

    “If a lion could talk, we could not understand him.”

      — Ludwig Wittgenstein

    Perhaps not even if a friendly non-human intelligence showed us the solution to the Clay Millennium Prize’s Navier-Stokes problem! 🙂

  17. wolfgang Says:

    >> Every time you wanted to solve a new problem

    The point of a computer simulation is to approximate the behavior of something you cannot otherwise handle (e.g. collision of two neutron stars or the weather tomorrow).

    But what would be the point of simulating quantum systems if we can already do direct experiments?

    I assume here that if you master the art of holding a q.c. with n qubits together long enough, then you can simulate quantum systems which are approximately as complicated as n qubits. But you already mastered the experimental handling of n qubits…

  18. wolfgang Says:

    >> you obviously couldn’t use it to check whether your equations describing the He atom were the right equations

    But if you have a working quantum computer then you already know that your Schroedinger equation is correct …

  19. Scott Says:

    John Sidles #16: Any chance that in future comments, you could try fewer appeals to authority? 🙂 Personally, I think that Wittgenstein was mistaken, and we could indeed understand the lion pretty well. Based on my own observation of male lions in the Masai Mara, my guess is that most of the time he’d be saying something like “YAWN … now where’s a good spot for a nap?” The rest of the time, I imagine it would have something to do with hunting or sex.

  20. Jair Says:

    It’s not obvious to me that sufficiently intelligent aliens would develop arithmetic as we know it. American mathematicians, Soviet mathematicians, ancient Japanese mathematicians, etc., at least have one thing in common: they’re all human. I imagine an alien might think very differently.

    The mathematics we do in practice is the subset of mathematics that we find very interesting. The concept of number is itself nontrivial, but even assuming the aliens have some similar notion, perhaps prime numbers, factorization etc. are not at all interesting to them. I really don’t think you can say prime numbers possess some objective, universal beauty that would attract interest from every sufficiently intelligent being, any more than there is an octopus considered so beautiful in the octopus kingdom that even humans would want to make love to it. Perhaps the aliens are interested in hard mathematical questions that to us would seem entirely unmotivated, and cannot be easily phrased in the language of mathematics as we have developed it. I really don’t know. In any case, I certainly don’t think it’s obvious that they would develop something like Shor’s algorithm!

  21. Carl Says:

    “And Japan had zero contact with the West at that time.”

    This is not true. By 900 AD, Japan had extensive contact with Korea and China, which in turn had contact with India and the West. I would be shocked if this proof you mention wasn’t taken from a Chinese source.

    Take Japanese Buddhist sculptures. Why do they have realistic human proportions instead of some other proportions, like Polynesian statues or African statues? Is it because that’s just the “obvious” way to do it? As it turns out, no. When Alexander the Great invaded Afghanistan, he left behind Greeks who ruled the country for hundreds of years afterwards. They converted to Buddhism because Hinduism was caste-based. Before that, there wasn’t any tradition of making Buddhist sculptures, but afterwards, Buddhist sculptures were made in the same “realistic” proportions as Greek statues. (Think of the famous giant Buddhas of Bamiyan.) From there, the new Western style of sculpture spread to the east.

    We sometimes think that globalization is an entirely modern phenomenon, but it’s been going on for thousands of years.

  22. Scott Says:

    wolfgang: Let me try again.

    With quantum systems that occur in nature, you don’t in general know the full Hamiltonian that’s relevant, even if you do know the general rules of quantum mechanics. You’re also extremely limited in which observables you can measure. Having a programmable QC would let you make guesses about the Hamiltonian, check whether they match your observations of the “real” system, and if they do match, calculate numerous other properties of the “real” system that are impractical to measure directly.

    And if you’d mastered the manipulation of n qubits (where, say, n=100,000), you could use them to simulate, for example, a designer biomolecule, or a hypothetical new material, or a quark-gluon plasma, or a high-temperature superconductor, or the early universe, or countless other quantum systems that might be much harder than the qubits to get your hands on.

    This would be a big deal.

  23. wolfgang Says:

    Scott: let me also try (last time, I promise).

    The designer biomolecule: as Sean Carroll has pointed out (more than once on his blog), we do know all the physics associated with biomolecules. We have no doubt what the Hamiltonian looks like.

    If we got this wrong, then we will not be able to build a quantum computer.

    So why do you think it will be easier to set up and run a simulation of this molecule on an n=100,000-qubit computer than to observe and measure the biomolecule directly?

  24. wolfgang Says:

    Scott: let me try the very last time 😎

    Your claim is that there can be two different quantum systems qc (computer) and bm (bio molecule) with Hamiltonians H_qc and H_bm so that

    1) H_qc and H_bm are significantly different, so that qc can be handled without decoherence but bm cannot.

    2) H_qc and H_bm are essentially equivalent, because evolution of qc simulates the evolution of bm.

    I say this is close enough to a contradiction that I want to see a real argument how this can be possible …

  25. John Sidles Says:

    Scott says   “Personally, I think that Wittgenstein was mistaken, and we could indeed understand the lion pretty well.”

    That is unsurprising, Scott. Anatomic authorities tell us the relevant brain weights are:

    Tursiops truncatus: 1,500–1,700 grams

    Homo sapiens: 1,300–1,400 grams

    Panthera leo: ~240 grams

    Perhaps this is why we Sapiens don’t understand Tursiops all that well … the converse is less certain, eh? 🙂

  26. Rahul Says:

    @Jair above makes a good point. A 10th-century Japanese author, or any author, no matter how long ago, so long as he was human, has seen the physical world at roughly the same scale that we have.

    The problem with aliens is not only that they might be strange to our imagination; they might be stranger than we could ever imagine. If they happened to be a race of nanometer-sized creatures, it’s hard to know whether pizzafication would be a natural experience for them or something else entirely.

    In short, this gets too speculative. I’ll amend Scott’s quip and throw it back at him:

    “First find any extraterrestrial civilization at all and then we can consider arguments alluding to aliens.” 🙂

  27. Arka Bhattacharya Says:

    In the quantum adiabatic algorithm, the speed with which the initial Hamiltonian can be dragged to the final Hamiltonian, whose ground state encodes an NP-complete problem, depends on the spectral gap of the Hamiltonian along the way.

    So, I want to ask whether any current research has derived some equation for a spectral gap that converges to zero only after a finite, polynomial (not exponential) number of operations, but still encodes the NP-complete problem of interest. I personally think that if we find some way to decrease the spectral gap non-exponentially to zero, we may get some relationships among the classes P, NP, co-NP and BQP that may satisfy the skeptics to some extent. (The satisfiability of the skeptics with regard to quantum computation is itself an NP-complete problem.)

  28. jonas Says:

    Vadim: despite popular notions, shooting a computer at relativistic speeds doesn’t allow you to get the results faster, it only allows you to get the results slower.

  29. Scott Says:

    wolfgang #23 and #24: I think you’re being willfully dense!

    The fact that we know the Standard Model Lagrangian doesn’t mean we know the structure of some specific biomolecule that we’re interested in. The effective Hamiltonian for the electrons (for example) will include a potential that depends on the structure.

    Or think about high-temperature superconductivity. Despite knowing the equation Sean Carroll put on his blog, no one fully understands that phenomenon (as Sean would readily admit), in the sense that no one really understands the effective Hamiltonian (i.e., which approximations are appropriate and why) and why the ground state leads to superconductivity. I see no reason whatsoever why that problem would need to be solved as a “prerequisite” to building a scalable QC, and no reason why having a scalable QC to run simulations on wouldn’t help.

    And yes, even if two Hamiltonians H and H′ look completely different, one can simulate or encode the other, so that measuring e^{iH′t}|ψ′⟩ tells you something about e^{iHt}|ψ⟩. Conceptually, it’s no different from how a microchip looks different from an airplane, being tiny and square and so forth, yet what happens inside the microchip as it runs its simulation can tell you something about what will happen to the airplane.

    Finally, suppose you wanted to search through 10,000,000 possible biomolecules to find one that acted in a specific way, and for each one, you wanted to get the many-particle quantum effects right. Which do you think would be easier: to synthesize all 10,000,000 of those molecules in the lab and measure each one, or to program a universal scalable QC (supposing you had one) to simulate them?
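The claim in #29 that one Hamiltonian can stand in for a different-looking one has a textbook baby case: first-order Trotterization, where evolution under H = X + Z is approximated by alternating short evolutions under X alone and Z alone. Here is a self-contained single-qubit sketch (my illustration, not from the comment, using only 2×2 matrix arithmetic):

```python
from math import cos, sin, sqrt

# Pauli matrices and the identity, as plain nested lists.
I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]
Z  = [[1, 0], [0, -1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(P, theta):
    # exp(-i*theta*P) for a Pauli matrix P: cos(theta)*I - i*sin(theta)*P.
    return [[cos(theta) * I2[i][j] - 1j * sin(theta) * P[i][j]
             for j in range(2)] for i in range(2)]

def exact(t):
    # exp(-i(X+Z)t): since X+Z = sqrt(2) * n.sigma for the unit axis
    # n = (1,0,1)/sqrt(2), this is cos(sqrt(2)t)*I - i*sin(sqrt(2)t)*(X+Z)/sqrt(2).
    a, b = cos(sqrt(2) * t), sin(sqrt(2) * t) / sqrt(2)
    return [[a - 1j * b, -1j * b], [-1j * b, a + 1j * b]]

def trotter(t, n):
    # n repetitions of exp(-iX t/n) * exp(-iZ t/n): neither factor equals
    # exp(-iHt/n), yet the product converges to exp(-iHt) as n grows.
    step = mul(rot(X, t / n), rot(Z, t / n))
    U = I2
    for _ in range(n):
        U = mul(U, step)
    return U

def dist(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

print(dist(exact(1.0), trotter(1.0, 1000)))  # first-order error shrinks like 1/n
```

The same composition trick, scaled up and combined with error correction, is roughly how a programmable QC built from a few native interactions could simulate Hamiltonians it was never built from.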

  30. Gil Kalai Says:

    Greetings, Scott, and everybody, from beautiful snow-covered Jerusalem. Such a major snow storm here is a once-in-a-decade event, and it is great that it is happening today.

    I want to compliment Scott for the very nice discussion thread on the previous post (continued here) regarding the quantum computer debate.

    Let me ask you something, Scott, about your feelings. You wrote that you would be “a thousand times happier” if QC turns out to be impossible in principle than if quantum computing turns out to be possible. Let me try to understand that.

    1)   We agreed that showing that QC are impossible will require a profoundly different way to understand QM, on the level of the additional layers of physical laws. You go on to say about such a possibility that “we’ll have the opportunity to revise physics in probably the most far-reaching way since the 1920s.”

    2) Building QC will amount to achieving the “goal of breaking out of the ‘epistemic prison’ of classical polynomial time.” According to your beautiful answer to Wolfgang “if you’d mastered the manipulation of n qubits (where, say, n=100,000), you could use them to simulate, for example, a designer biomolecule, or a hypothetical new material, or a quark-gluon plasma, or a high-temperature superconductor, or the early universe, or countless other quantum systems that might be much harder than the qubits to get your hands on.” Indeed, I agree that this is a new amazing reality.

    Both these possibilities are fantastic, representing perhaps once-in-a-century events. Of course, it is possible that neither of them will be realized. I suppose that we do not agree on which possibility is more plausible: I find the second possibility more surprising than the first, and you, Scott, find the first possibility much, much more surprising than the second. We have to remember that the second possibility embodies surprising events that already happened. Certainly the model of quantum computers and Shor’s algorithm are among the most surprising things I have encountered in my career.

    But what is the reason to be happier, or even a “thousand times happier,” about the first possibility? In the first possibility, of realizing why QC cannot be built, we will “just” learn something about our limitations and also learn profound things about quantum physics. In the second possibility, of building QC, we will actually be able to do things: to design new molecules, new medications perhaps, to test automatically for the correct physical modeling of quantum phenomena, and much more.

    (Three more little things: 1) In the debate with Aram Harrow, we mentioned the possibility of a “quantum printer”: once your QC identifies a desirable new molecule, you send it to the printer and realize it. 2) The surprising QC model, Shor’s algorithm, and quantum fault-tolerance are likely to be embodied also in the first possibility, if this is what the cards hold for us; and 3) In view of your comment to Wolfgang, you may consider, Scott, changing your “qualified yes” to “yes” for my point number 4, which reads: “Building universal QC represents a completely new reality in terms of controlled and observed quantum evolutions, and also a new computational complexity reality.”)

  31. Scott Says:

    jonas #28:

      despite popular notions, shooting a computer at relativistic speeds doesn’t allow you to get the results faster, it only allows you to get the results slower.

    Right, so the idea would be to leave the computer on earth while you travel at relativistic speeds! Unfortunately, it still doesn’t work, because of the energy problem I mentioned before.

  32. Scott Says:

    Jair #20:

      I really don’t think you can say prime numbers possess some objective, universal beauty that would attract interest from every sufficiently intelligent being, any more than there is an octopus considered so beautiful in the octopus kingdom that even humans would want to make love to it.

    Funny analogy, but the pleasure of math is the pleasure of figuring out general, universal rules—the more general and universal, the better! If there’s a kingdom of technologically-advanced octopi, I’m sure their sexiest, longest-tentacled celebs would do nothing for our libidos and vice versa. I can only guess about their fashions, religion, and political structure. And yet, I’m extremely confident that the octopus nerds would care about prime numbers (which I imagine they’d express in base-8 notation…).
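For fun, the base-8 quip can be made literal (a throwaway illustration, nothing more): the first few primes as the octopus number theorists might write them, via Python’s octal notation.

```python
def primes(n):
    """First n primes by trial division -- fine for tiny n."""
    found = []
    k = 2
    while len(found) < n:
        # k is prime iff no earlier prime divides it.
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

print([oct(p) for p in primes(8)])
# → ['0o2', '0o3', '0o5', '0o7', '0o13', '0o15', '0o21', '0o23']
```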

  33. Scott Says:

    Rahul #26:

      If they happened to be a race of nanometer sized creatures whether pizzafication would be a natural experience for them or something else is hard to know.

    I’m skeptical of whether there could be nanometer-sized intelligent beings—how do you get a brain that small? But let’s grant the assumption.

    It took us humans a long time (until the early 20th century, actually), but we eventually figured out the math relevant to enormous distance scales (namely, pseudo-Riemannian geometry), as well as that relevant to tiny distance scales (namely, QM). In the same way, why couldn’t these nanometer-sized beings eventually figure out Euclidean geometry?

  34. Scott Says:

    Arka Bhattacharya #27: There’s been a large amount of research on the quantum adiabatic algorithm over the past decade, both by Ed Farhi and his collaborators and by others. Searching the arXiv for papers with “adiabatic algorithm” in the title just turned up 55 results. A very short summary is that we do know examples where the spectral gap decreases exponentially with the problem size, and this is probably a common situation for hard 3SAT instances (or, for that matter, even random instances of many problems in P, like XORSAT!). But even assuming that’s true, the specific form of the exponential remains a big question. For example, there’s a speculative possibility, which hasn’t been ruled out yet, that the adiabatic algorithm could be made to solve 3SAT in something like 2^{O(√n)} time.
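The gap dependence Scott describes can be seen in the smallest possible example (a toy of my own, not one of the hard 3SAT instances he mentions): interpolate H(s) = (1-s)*(-X) + s*(-Z) on a single qubit, whose eigenvalues ±sqrt((1-s)^2 + s^2) give the gap in closed form. For hard instances, the analogous minimum gap can shrink exponentially in the number of qubits, which is exactly what slows the adiabatic algorithm down.

```python
from math import sqrt

def gap(s):
    # Spectral gap of H(s) = (1-s)*(-X) + s*(-Z) on one qubit:
    # the eigenvalues are +-sqrt((1-s)^2 + s^2), so the gap is twice that.
    return 2.0 * sqrt((1 - s) ** 2 + s ** 2)

# Scan the interpolation path s in [0, 1] for the minimum gap.
min_gap, argmin = min((gap(i / 1000.0), i / 1000.0) for i in range(1001))
print(min_gap, argmin)  # minimum gap sqrt(2), reached at s = 0.5
```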

  35. Scott Says:

    Carl #21: Thank you for the historical information. Yeah, I should have said “no direct contact with the West”; Japan was indeed in the same (large-diameter) connected component. The book was “A History of Pi” by Petr Beckmann, and I don’t have it with me (and Amazon doesn’t give access to the relevant pages). But it would be interesting to track down the actual origin of that manuscript, and see whether or not there was an independent rediscovery somewhere along the way.

    In any case, I get extremely ticked off by the relativist argument that “science must be subjective, since Chinese medicine was based on qi and yin/yang, while the Europeans had leeches and bloodletting,” etc. Again, that simply illustrates what above I called the diversity of bad ideas!

    My claim is that, while there are all sorts of forces (including just random drift) that draw separated cultures’ ideas further apart from each other, the need for conformity with logic and the physical world necessarily works in the opposite direction, and can produce spectacular convergence. And the main reason why we haven’t seen even more examples of that than we have, is that the need for conformity with logic and the physical world was a rather weak force until extremely recently in human history! Well, and also that, as you correctly point out, globalization was already going on thousands of years ago, so that we don’t have enough independent trials.

  36. Scott Says:

    Gil Kalai #30: OK, so maybe “a thousand times happier” was hyperbole. Having my current beliefs proved right would also make me happy. 🙂 But if QC turned out to be fundamentally impossible, then I’d feel like I was living through a sort of situation that’s only occurred 2 or 3 times so far in the history of science (a snowstorm in Jerusalem, if you like…), when the most basic ideas about the framework for describing the physical universe were up for grabs.

  37. Robert Alicki Says:

    I completely agree with Dyakonov’s thesis, but also with Preskill’s comments (which do not differ so much). For me, QC was a challenge similar to perpetuum mobile in the XVIII-XIX centuries. After 12 years of trial and error, I believe I know the answer: a fundamental conflict between stability of information storage and reversibility of its processing, which does not harm classical but kills quantum large-scale computing (see arXiv:0901.0811, OSID, vol. 19, no. 03). BTW, I think also that I deserve Scott Aaronson’s 100,000 USD prize!

    In particular, Dyakonov’s analysis of “Quantum computing as a sociological problem” is very interesting. In my opinion, Quantum Information, with its forefront, Quantum Computing, is the first example of postmodernism in physics. That does not mean that all achievements of QI are worthless. There are plenty of important results, but most of them, stripped of the queer QI terminology (qubit, Bob and Alice, protocols, quantum oracle, etc.), could be well placed in the traditional fields of mathematical physics, quantum statistical mechanics, or foundations of quantum mechanics and, on the other hand, in experimental quantum optics or solid-state physics.

  38. Arka Bhattacharya Says:

    Thanks Prof. Scott for the answer.

    Well sir, I would like to ask you another question. Presently, we do not have a definite structure or size for the class BQP, nor do we know much in depth about the problems which fall between P and NP, like Nash equilibrium; but we have some speculations that BQP may be larger than P, and, if studied in detail, it may encompass classes like NP and all the problems that fall between P and NP. So, if we investigate some problems in extreme depth, like the factoring problem and Nash equilibrium, and if somehow in the future we have some theory which tells us that BQP indeed encompasses NP, can that solve the P vs. NP problem from a quantum viewpoint? The P vs. NP problem seems to haunt me like anything, and I will surely want to explore it in depth during my doctoral studies, which I plan to start this year.

  39. Scott Says:

    Robert Alicki #37:

      BTW, I think also that I deserve Scott Aaronson’s 100 000 USD prize!

    As I said in the prize announcement, a prerequisite to convincing me (and thus getting the prize) is to convince the majority of the physics community that your arguments are correct.

    Among other things, I want to know: if scalable QC is indeed impossible, then is there a polynomial-time classical simulation of all “realistic” quantum systems? If so, how does it work? If not, then how do you reconcile the nonexistence of such a simulation with QC being impossible? For can’t we consider any hard-to-simulate physical system to be a special-purpose QC, one that computes at least one thing (namely, its own time evolution) faster than a classical computer does?

    For me, these questions are the entire crux of the matter, and I don’t understand how any QC-is-impossible argument could succeed (or how anyone could even think it succeeded) as long as it remained silent on them.

  40. Scott Says:

    Arka #38: You’re basically asking about the entire subject of quantum complexity theory. For an introduction to this subject, I recommend checking out my Quantum Computing Since Democritus notes, or my more recent course lecture notes, or John Watrous’s survey article.

    Very briefly, just as we believe P≠NP, most of us also believe today that NP⊄BQP (i.e., that BQP does not encompass the NP-complete problems). But that’s not the P vs. NP question, it’s a different question. If NP⊄BQP, then certainly P≠NP also, but no one has any idea how to prove the converse. Also, I wouldn’t describe BQP being larger than P as a “speculation” — based on the evidence (like factoring being in BQP), it’s an extremely well-supported conjecture! Normally I’d reserve the word “speculation” for the negation of a conjecture. 🙂

  41. Arka Bhattacharya Says:

    Thanks Prof. Scott.

  42. John Sidles Says:

    Scott says “Among other things, I want to know: if scalable QC is indeed impossible, then is there a polynomial-time classical simulation of all ‘realistic’ quantum systems? if so, how does it work?” […] “For me, these questions are the entire crux of the matter.”

    Please let me state (as a no-quibble, no-authority-appeal, no-snark personal affirmation) that this is a (literally) wonderful summary statement; moreover, it is an entirely sufficient justification for pursuing QIT research in general, and in particular it shows us why BosonSampling is a wonderful new focus for research.

    Scott, you and I may commonly disagree on good answers, but with regard to good questions there is not much difference between us (which is of course the very best of all possible starting-points for discourse).

    One thing is certain: by the time we all reach agreement as to the precise meaning of Scott’s term ‘realistic’ (the quotes are his) we all will have learned a lot of quantum field theory, and much other good mathematics/physics/engineering too … and if we take Tony Zee’s advice, we will learn considerable history along the way! 🙂

  43. Scott Says:

    John #42: Thanks! Your excellent taste in questions is one reason why I’m happy to have you as a regular commenter here.

  44. chorasimilarity Says:

    Re:”Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.”

    By the same argument steampunk should be history, not alternative history.

  45. Rahul Says:

    Regarding Scott’s argument in #39:

    Just to clarify my understanding if someone will humor me a bit longer (apologies about being a non-QC, non-Physicist layman butting in, probably a bit naively):

    >>if not [polynomial-time classical simulation of all “realistic” quantum systems? ], then how do you reconcile the nonexistence of such a simulation with QC not being possible?

    This is the part I don’t understand. Why does one need to reconcile the two? Consider the two propositions: (a) a classical simulation of a quantum system does not work, and (b) a quantum simulation of a quantum system also does not work.

    Why can’t propositions (a) and (b) both be true? Sorry if I am not seeing the obvious.

    >> for can’t we consider any hard-to-simulate physical system to be a special-purpose QC that computes at least one thing (namely, its own time evolution) faster than a classical computer does?

    But isn’t that tautological? Yes, every system can compute itself, even if the system is quantum.

    But that wasn’t the spirit of the whole QC program, was it? The idea is whether we can have a computer built out of quantum elements simulate another system that is itself quantum. Or do I misunderstand it?

    If we allow the systems simulating itself argument every classical physical system is a classical “computer” and every quantum system, a quantum “computer”.

    Therefore IMHO, the argument about a “special-purpose QC that computes its own time evolution” seems fatally flawed. What am I getting wrong?

  46. Rahul Says:

    To further clarify what I was saying: Let’s say I claim I’ve indeed built a computer (no matter quantum or classical) that can simulate a quantum system. It throws an answer. How does one verify if I indeed have built a working computer or not?

    By running an experiment (or a series of experiments): one runs the simulation and the actual physical “system,” and then verifies whether the state of the system matches the state predicted by the “computer.” Right?

    If so, allowing for Scott’s “special-purpose QC that computes its own time evolution” is a problem because then what would one have to compare the simulation against? The system is itself the simulation.

    Falsifiability would be lost? Maybe I am making a blunder…

  47. Vadim Says:

    chorasimilarity #44,

    I think steampunk not working out can be attributed purely to engineering challenges. If Babbage had successfully completed his difference engine and the subsequent analytical engine, general purpose computers would have been built over 100 years sooner than they were. Even without the analytical engine actually being realized, Ada Lovelace’s algorithm for it for enumerating Bernoulli numbers showed that people understood the motivation to create programmable calculating machines, they just lacked the actual ability.

  48. Scott Says:

    Rahul: Sorry, when I said “any system can compute its own time evolution,” I was speaking a little too loosely. What I meant was, “any system can perform the mathematical task of solving its own equations of motion—a task that, if we knew those equations, could (if we preferred) be stated purely mathematically, with no reference to physics.”

    A clear example of this distinction is provided by BosonSampling. Arkhipov and I defined the BosonSampling problem using some knowledge about what a linear-optical network does. Crucially, however, once we’ve defined it, the problem can be stated mathematically, so that whether a given device solves BosonSampling or doesn’t solve it becomes a straightforward mathematical question, of comparing the device’s actual outputs against a theoretical ideal. In particular, a priori it’s entirely possible that a given linear-optical network will fail to solve BosonSampling — and in fact that’s exactly what will happen, if (for example) the linear-optical network isn’t calibrated correctly. On the other hand, if your network is calibrated correctly, then it should solve BosonSampling, a problem that we can show has no polynomial-time classical algorithm under plausible assumptions.
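    To make the “comparing against a theoretical ideal” step concrete, here is a toy sketch (my own illustration, not code from the paper; the function names are mine): for a collision-free outcome S of n photons injected into the first n input modes, the ideal probability is |Per(A_S)|², where A_S is the n×n submatrix of the interferometer unitary with rows S and the first n columns.

```python
import itertools
import numpy as np

def permanent(M):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n poly(n));
    fine for the tiny n where classical checking is feasible."""
    n = M.shape[0]
    total = 0.0
    for mask in itertools.product([0, 1], repeat=n):
        cols = [j for j in range(n) if mask[j]]
        if not cols:
            continue  # empty column set contributes nothing
        rowsums = M[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(rowsums)
    return (-1) ** n * total

def ideal_prob(U, outcome, n):
    """Ideal probability of a collision-free outcome (n distinct output
    modes), with one photon injected into each of input modes 0..n-1."""
    A = U[np.ix_(sorted(outcome), range(n))]
    return abs(permanent(A)) ** 2

# Toy check on a random interferometer: a device's empirical outcome
# frequencies would be compared against these ideal probabilities.
rng = np.random.default_rng(0)
m, n = 5, 2
U, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
probs = {S: ideal_prob(U, S, n) for S in itertools.combinations(range(m), n)}
```

    The collision-free probabilities sum to less than 1, since outcomes with two photons in one mode carry the rest of the probability mass.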

    This means that to show that “scalable QC is impossible,” among many other things you’d either need to find a fast classical algorithm for BosonSampling, or else show that what a linear-optical network does is necessarily something other than BosonSampling — i.e., that my and Arkhipov’s prediction, based on 1920s quantum mechanics, necessarily stops being a good prediction when the number of photons gets large. There’s no way to avoid the need for one or the other.

  49. Scott Says:

    chorasimilarity #44:

      By the same argument steampunk should be history, not alternative history.

    Look, nothing in my argument even suggests that an extraterrestrial civilization shouldn’t be using steam-powered universal computers—or for that matter, jellybean-based universal computers or purely gaseous universal computers. (Probably it isn’t, but the reasons are detailed physics and engineering ones that have nothing to do with what we’re talking about!) What I’d find incredible is only a civilization reaching a sufficient level of technological advancement without some capacity for universal computation—I have almost no idea about the substrate.

    Incidentally, it occurred to me that David Deutsch’s The Beginning of Infinity would provide excellent background reading for a lot of what we’re talking about. I don’t agree with everything Deutsch says, but when it comes (for example) to the special role of universal computation in the physical world, Deutsch carefully sets out and justifies much of what I’m “holding to be self-evident” here.

    [I just now noticed Vadim’s comment #47, with which I completely agree.]

  50. wolfgang Says:

    Rahul #46

    >> The idea is whether we can have a computer built out of quantum elements simulate another system that is itself quantum.

    That was exactly my question at #24!

    What we want is for two very different systems, qc and bm, to have an equivalent evolution; and what Scott seems to say is that this is no problem, because we know that two very similar systems, bm and bm’, have the same evolution.

    I am puzzled now …

  51. Scott Says:

    wolfgang: Maybe I should let someone else take a crack at explaining this. What I see myself as doing, is simply jumping up and down and insisting repeatedly on the Law of the Excluded Middle.

    Namely: Either there are quantum systems that can’t be efficiently simulated by a classical computer, or else there are not such systems.

    If there are such systems, then the Extended Church-Turing Thesis is false. Universal QCs, capable of solving all BQP problems, still might or might not be possible. But either way, probably the central contention of quantum computing theory has been vindicated. Classical polynomial time does not suffice to encompass the physical universe.

    If there are not such systems, then either our current understanding of the laws of physics requires revision, or else there are amazing, yet-undiscovered classical algorithms for quantum simulation.

    And I refuse to take any “refutation of QC” seriously, until it explains to me which of those possibilities is correct.

  52. chorasimilarity Says:

    Vadim#47 , Scott#48, agree with both comments, but let’s take the first line from the wiki page on steampunk:
    “Steampunk is a sub-genre of science fiction that typically features steam-powered machinery, especially in a setting inspired by industrialized Western civilization during the 19th century.”
    Now, as an alien, I just extended my tonsils around the qweeky blob about hmm, let me translate with gooble: “alternative reality based fiction on the past 5^3 years greatest idea”, and it tastes, excuse me, reads, as:
    “Turinkpunk is a sub-genre of science fiction that typically features computing machinery, especially in a setting inspired by industrialized Western civilization during the 20th century.”
    [I localised some terms, for example 1337=industrial Western civilization and OMG=20th century]

  53. Rahul Says:


    Thanks for the link to your BosonSampling paper. I think I now see what your argument is getting at. I’ll chew over the paper a bit. Very meaty.

  54. Leonard Ornstein Says:

    This thread, while interesting, is wildly ‘unscientific’ 😉


    Extraterrestrial Intelligence: the debate continues
    A Biologist looks at the numbers (from Physics Today, March, 1982)



    The Skeptical Scientific Mind-Set in the Spectrum of Belief: It’s about models of ‘reality’ – and the unavoidable incompleteness of evidence, for – or against – any model or fact.


  55. Scott Says:

    Leonard, your first link is broken (though the way I arrived at that conclusion may have been “wildly unscientific”; I only clicked it twice 😉 ).

  56. John Sidles Says:

    Scott Aaronson says (jumping up and down and insisting repeatedly) … “Either there are quantum systems that can’t be efficiently simulated by a classical computer, or else there are not such systems.”

    Richard Hamming, in his (justly?) celebrated essay You and Your Research, has this to say regarding black-versus-white cognition:

    There’s another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you’ll never notice the flaws; if you doubt too much you won’t get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don’t quite fit and they don’t forget it.

    A Hartmanis-esque shading of Scott’s manifesto might be:

    Perhaps our collective mathematical and theoretical understanding is approaching a level where we can hope for a reasonably rigorous assessment, of whether there exist realistic quantum systems that cannot be verifiably and efficiently simulated by a classical computer, or else there are not such systems.

    The key ambiguity-tolerating words are bold-faced (I hope!).

  57. Scott Says:

    John Sidles #55: Look, anytime someone says “either A is true or else it’s not true,” of course it’s also possible that A is ill-defined, or it doesn’t mean what people thought it meant, or it’s true in one natural interpretation but false in another. Even in those situations, however, if A looks perfectly well-defined, then it’s up to the advocates of the third position to explain exactly where the ambiguity lies!

    As long as I’ve walked this earth, I’ve never once seen any conversation advanced by anyone saying anything along the lines of, “ahh, but the truth might be more ambiguous than your simplistic, black-and-white mind can fathom…” Explain where and how the truth is ambiguous, and then maybe we’ll get somewhere! 🙂

  58. Rahul Says:

    Scott’s #48 regarding Boson Sampling. A follow up question (very ingenious work BTW, I’m amazed):

    >> Crucially, however, once we’ve defined it, the problem can be stated mathematically, so that whether a given device solves BosonSampling or doesn’t solve it becomes a straightforward mathematical question, of comparing the device’s actual outputs against a theoretical ideal.

    The comparison can only be done for small n (# of bosons), right? So at larger 'n' you have essentially no benchmark? No way at all to test if or not your "computer" (aka linear-optical network) fails or does not fail to give the right result?

    So even if some size-dependent "decoherence" did creep in how would we even know, since the "right answer" will be unknown? Unless like in factorization, finding the answer was hard (classically) but verifying it was cheap. Is it so here for Boson Sampling?

    The day we do run a 100 boson experiment, what do we do to vet the results?

  59. Scott Says:

    Rahul #57: It’s an excellent question, and one we addressed in the paper (see Sections 1.3 and 6.1). The short answer is that, for BosonSampling, the most interesting values of n are around 30 or 40. For that’s small enough that a classical computer can eventually check the answers, yet still large enough that the quantum experiment would (ideally) solve the problem much faster—thereby providing clear evidence for a speedup.

    It’s true that once you got up to (say) 100 or 1000 photons, we’d no longer even know how to check the answers feasibly with a classical computer, and that this is a fundamental difference between BosonSampling and NP problems like factoring. But, as I said, there’s still an intermediate range that seems extremely interesting experimentally.

  60. wolfgang Says:


    I think I understand where we are talking past each other.
    On the one hand there is the statement 1) that a scalable quantum computer qc(n) with n qubits can be built.
    Scalable means that qc(n+m) is just a larger version of qc(n) and imho it means that the Hamiltonian is restricted to certain forms due to symmetries …

    On the other hand there is the statement 2) that ‘special purpose’ quantum systems bm (of large size) cannot be simulated by classical computers.

    But my point would be that 1) does not follow from 2).

    Now Robert #37 indeed seems to say that 1) implies that ‘quantum memories’ need to be built, but he can show that they cannot exist – thus statement 1) must be wrong.

  61. Scott Says:

    wolfgang #59: Yes, I agree that (2) doesn’t imply (1). However:

    – If (2) holds, then I’d say we had at least “special-purpose quantum computers” that outperformed any possible classical simulation of them, that the Extended Church-Turing Thesis had been overthrown, and that the computational resources of the universe had been shown to exceed BPP. This would be a big deal.

    – Insisting that “the Hamiltonian is restricted to certain forms due to symmetries” seems way too restrictive to me. We don’t normally impose those sorts of restrictions on classical computers, so why should we do so for quantum computers? Indeed, why not just adopt a rough-and-ready criterion like: if you can only implement Shor’s algorithm with 5 qubits, then you surely haven’t demonstrated “scalability,” while if you can implement it with 5,000 qubits, then you surely have? Otherwise, even if you got up to 5 trillion qubits, a particularly tenacious skeptic could still point out that logically, you still haven’t shown “scalability”! 🙂

    (I should mention that there’s a very active research area that studies the existence or nonexistence of something called self-correcting quantum memories, in various numbers of spatial dimensions. Right now it’s looking like they’re possible in 4 spatial dimensions, not possible in 2 dimensions, and “just barely possible” in 3. But it’s worth stressing that, even if self-correcting memories had been totally, flat-out impossible in any number of dimensions, that still wouldn’t imply that quantum computing was impossible, since one can also protect quantum information using active error correction.)

  62. Gil Kalai Says:

    Scott (#39) If scalable QC is indeed impossible, then is there a polynomial-time classical simulation of all “realistic” quantum systems?

    Let me try to relate to this question. The way I see it is this:

    (*) If QC is impossible, then it is correct to consider any physical system as a classical simulation of its own time evolution, and this means that there exists an efficient classical algorithm for the physical system.

    (Does that answer your question affirmatively, Scott? Let’s assume here that QC is indeed impossible.)

    Scott also asked: “if so, how does it work?”

    This question is not clear. I suppose that we will have to figure it out one case at a time, no?

    (There may be obstacles, in practice and in principle, to our ability to simulate, including obstacles of a computational-complexity nature: probing the parameters needed to simulate even a classical computer program can be computationally hard.)

    Maybe the question is whether an argument/theory for QC infeasibility will also give us, as a byproduct, better computational methods for quantum physics that are computationally efficient. My answer is that I surely hope so, at least in some cases.

    Maybe the question is whether computations from physics become irrelevant (for concrete predictions) as they become computationally infeasible. My answer to that is: yes, they become irrelevant (for concrete predictions).

  63. Robert Alicki Says:

    “Robert Alicki #37:

    BTW, I think also that I deserve Scott Aaronson’s 100 000 USD prize!

    As I said in the prize announcement, a prerequisite to convincing me (and thus getting the prize) is to convince the majority of the physics community that your arguments are correct.

    Among other things, I want to know: if scalable QC is indeed impossible, then is there a polynomial-time classical simulation of all “realistic” quantum systems? if so, how does it work? if not, then how do you reconcile the nonexistence of such a simulation with QC not being possible? for can’t we consider any hard-to-simulate physical system to be a special-purpose QC that computes at least one thing (namely, its own time evolution) faster than a classical computer does?”

    Sorry, but I do not understand your logic. A large quantum system (e.g., a large quantum computer coupled to a heat bath) can be simulated by a corresponding dissipative classical-mechanical system described by a set of collective degrees of freedom (e.g., an analog computer). Such a system can be regular or chaotic, and in the latter case cannot be simulated effectively in polynomial time by a digital computer.

  64. Scott Says:

    Gil #61: So let me try to be more precise. If (*) holds, then given any “class” of physical systems (for example: molecules, quark-gluon plasmas, etc.), assumed for simplicity to be isolated (or to live in a static potential field), there ought to exist a probabilistic classical algorithm A that takes as input a string of n bits encoding the system’s “detailed initial configuration,” as well as another string of m bits encoding a measurement to be performed on the system t “time steps” after initialization (where the definition of a “time step” can depend on the energy, via the usual Trotterization), and that outputs a string of k bits encoding a sample from the probability distribution over possible measurement outcomes, which agrees with the physical distribution to within ε in variation distance. Furthermore the running time of A should be polynomial in n,m,t,k,1/ε.
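    To spell out those quantifiers in code form, here is a hypothetical sketch of the interface such an algorithm A would have to satisfy (all names are mine; the trivial instance below only illustrates the input/output contract, not a real simulation):

```python
import random
from typing import Protocol

class ClassicalSimulator(Protocol):
    """Interface for the hypothetical algorithm A described above: given an
    n-bit initial configuration, an m-bit measurement description, a number
    of time steps t, an output length k, and an accuracy eps, return a k-bit
    sample within eps of the physical distribution in variation distance,
    running in time polynomial in n, m, t, k, and 1/eps."""
    def sample(self, init_config: str, measurement: str,
               t: int, k: int, eps: float) -> str: ...

class FairCoinsSimulator:
    """Trivial instance: the 'physical system' is k independent fair coins,
    so A can simply output k uniformly random bits."""
    def sample(self, init_config: str, measurement: str,
               t: int, k: int, eps: float) -> str:
        return "".join(random.choice("01") for _ in range(k))
```

    The whole debate is about whether any class like `FairCoinsSimulator` can exist once the system being simulated is a generic quantum one.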

    Do you agree with the above?

    If so, then the next step is simply to add in the hypothesis that the laws of physics are uniform — i.e., that the same fundamental laws of physics describe biomolecules, quark-gluon plasmas, etc. — in order to get the stronger conclusion that there should exist a single such classical algorithm A that works for any class of systems.

    If the above doesn’t hold, then I’d say that, at the very least, the Extended Church-Turing Thesis (as I understand it) is in jeopardy.

  65. Rahul Says:

    Scott #58:

    Thanks for pointing me to the right sections. 94 pages is going to take me a week to read! 🙂

    So what you’ve really proved (if the expts. work) is this: “For certain mathematical equations and at certain problem sizes a device with quantum-scale elements gets answers much much faster than solving those same equations with classical computers.”

    If I got that right, why the need for “quantum scale elements” even? Take something like a 2D-Laplace equation for heat transfer. Absolutely solvable computationally, yet given sufficiently messy boundary conditions it might indeed be faster to heat a metal disk suitably and measure the temperatures.

    Thus, even under the domain of classical, non-quantum experiments “the experiment would (ideally) solve the problem much faster”

    Why does that not count?

  66. Scott Says:

    Robert Alicki #62:

      Such a system can be regular or chaotic, and in the latter case cannot be simulated effectively in polynomial time by a digital computer.

    We certainly don’t know that! For example, if you believe the holographic principle, then Nature should become “quasi-digital” again at the Planck scale (in the sense of the Hilbert space dimension being finite), so that there would never be “true” chaotic behavior in the technical sense.

    Furthermore, if there were “true” chaotic analog systems in Nature, then arguments going back to the 1970s suggest that such systems might be used to do much more than even a quantum computer could — for example, they might let us solve PSPACE-complete problems or even the halting problem efficiently!

    But let’s leave all that aside. You’ll point out, correctly, that given a “chaotic analog system” (or something we currently regard as such), we only get to measure or control its initial state to some limited precision. So then, we just redefine the simulation task as follows: your task is merely to sample, approximately, from the probability distribution over possible results of measuring the dynamical system, assuming the initial state was chosen uniformly at random from the smallest epsilon-ball in which you were able to place it.
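    A toy version of this redefined task (my own example, using the chaotic logistic map as a stand-in for an analog system): sampling from the outcome distribution is cheap, even though predicting any individual trajectory is hopeless.

```python
import random

def sample_outcome(center, eps, steps=100, r=4.0):
    """Sample one coarse measurement ('L' if x < 0.5, else 'R') of the
    chaotic logistic map x -> r*x*(1-x), with the initial condition drawn
    uniformly from the eps-ball around `center` -- i.e., the redefined
    simulation task: sample the outcome distribution, not the trajectory."""
    x = min(max(random.uniform(center - eps, center + eps), 0.0), 1.0)
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return "L" if x < 0.5 else "R"

# Individual trajectories diverge exponentially fast, but drawing samples
# from the outcome distribution costs only O(steps) per sample:
counts = {"L": 0, "R": 0}
for _ in range(10_000):
    counts[sample_outcome(0.3, 1e-6)] += 1
```

    This is exactly the sense in which a chaotic classical system poses no obstacle to efficient classical simulation; the open question is whether the same trick works for “realistic” quantum systems.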

    Under those assumptions, do you think the simulation can be done in classical polynomial time, for any “realistic/noisy” quantum systems? If so, how? And whatever your classical simulation method is, could it be applied, for example, to the understanding of high-temperature superconductors, or quark-gluon plasmas, or molecular structures for which quantum Monte Carlo suffers from the sign problem?

  67. Scott Says:

    Rahul #64: It doesn’t count because the slowdown, in going from the metal disk to the computational simulation of the disk, is not a superpolynomial slowdown. Indeed, it’s probably “just”(!) a large constant factor.
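    As a side note on the classical cost in Rahul’s example, here is a minimal Jacobi-relaxation sketch (my own, purely illustrative) for the discrete 2D Laplace equation; its cost is polynomial in the grid size, which is why the heated disk wins by at most a constant factor rather than superpolynomially.

```python
import numpy as np

def solve_laplace(boundary, iters=5000):
    """Jacobi relaxation for the 2D Laplace equation on a square grid with
    fixed boundary values. Cost is O(iters * N^2) for an N x N grid: a
    polynomial-time classical simulation of the heated disk."""
    u = boundary.copy()
    interior = np.zeros_like(u, dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(iters):
        # Each interior point moves toward the average of its 4 neighbors.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
    return u

# Example: a plate with one hot edge held at 100 degrees.
b = np.zeros((20, 20))
b[0, :] = 100.0
u = solve_laplace(b)
```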

  68. Gil Says:

    Scott, I think that the failure of QC implies that any robust outcome of an experiment in quantum physics (just as in classical physics) can be demonstrated by an efficient algorithm. Certainly, I’d expect that the same algorithm can be applied to a class of experiments depending on some parameters, like the ones you mentioned. You also want, somehow, a universal algorithm for all experiments at once; that looks OK to me. In any case, any reasonable statement of this kind for experiments in classical physics should extend to the quantum case.

  69. Scott Says:

    Gil #67: Wonderful, then we’ve actually made progress in this discussion! 🙂 You now know what sort of thing it would take, for me to take seriously any proposed impossibility proof for QC.

  70. Jay Says:

    Gil, don’t you think it’d be the “moral equivalent” of P=NP?

  71. Gil Says:

    And what is that?

  72. Scott Says:

    Gil #70: Uh, what you described yourself in comment #67.

  73. Gil Kalai Says:

    “So then, we just redefine the simulation task as follows: your task is merely to sample, approximately, from the probability distribution over possible results of measuring the dynamical system, assuming the initial state was chosen uniformly at random from the smallest epsilon-ball in which you were able to place it.”

    Scott, by the way, I am not sure you are correct regarding the issue of chaotic dynamics either. You cannot “merely” assume that the initial state is chosen uniformly in a ball. (I avoided this issue in my formulation by talking about robust experiments.)

  74. Gil Kalai Says:

    Scott #71, that’s amusing: you actually want the algorithm! This reminds me of what Bacon said (according to Popper): “Give me a couple of years free from other duties,” Bacon somewhat unguardedly exclaimed in a moment of enthusiasm, “and I shall complete the task – the task of copying the whole Book of Nature, and of writing the new science.”

  75. Gil Says:

    Jay, #69 “Gil, don’t you think it’d be the “moral equivalent” of P=NP?”

    Jay, (I am not sure I get you, but whatever you mean applies) just as much to classical physics as to quantum physics.

  76. Scott Says:

    Gil #73: Hell yes, I want the algorithm! Please remember that I don’t share your starting assumption that such an algorithm exists. I see no good reason to accept that assumption. I’m fine with Nature giving us the full power of BQP (and with BQP strictly containing BPP), if that turns out to be the truth—as I conjecture it will.

    So, given that, what else would you propose to do to change my mind?

  77. David Speyer Says:

    Scott at 51 writes “wolfgang: Maybe I should let someone else take a crack at explaining this… Either there are quantum systems that can’t be efficiently simulated by a classical computer, or else there are not such systems.”

    I used to be very confused whenever Scott would say this. I think I’ve straightened it out now, so I’ll try explaining it and see if this is what is confusing anyone else.

    Here is how I used to think: “But there are classical systems that can’t be simulated, those which are chaotic. For example, a maple seed dropping from a tree. If I have a polynomial amount of resources available to measure the weight of the seed and angle of the seed, velocity of the air molecules, etc, I still can’t make a good prediction of where it will land, because the turbulence will amplify my measurement errors exponentially fast.”

    I think the proper response to that is the following: simulation should be understood in the sense of the Turing Test. If I used a cryptographically secure random number generator to generate a sequence of points normally distributed around the base of the tree, no one (meaning no polynomially bounded judge) could tell the difference between that and actual falling seeds.

    Similarly, BQP = BPP wouldn’t mean that we could predict when a Uranium atom would decay, it would just mean that we could simulate a sequence of decays that was indistinguishable from a true Uranium atom.

    Don’t know if that confused anyone else, but it bothered me for a long time. Of course, I might still be wrong!

  78. Scott Says:

    David Speyer #76: YES, PRECISELY, THANK YOU!

    I tried to say this in comment #65, but you’ve said it much better.

    (Minor note: To probabilistically simulate the decay of a uranium atom, of course we don’t need a QC, or BPP=BQP! A simple Markov chain should work fine, at least if we’re allowed to stick in the half-life by hand. But a QC could help a lot for simulating proteins, quark-gluon plasmas, high-temperature superconductors, and so forth.)
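    The Markov-chain point can be sketched in a few lines (my toy example, with the half-life stuck in by hand as noted):

```python
import random

def simulate_decays(n_atoms, half_life, dt, t_max):
    """Memoryless (Markov-chain) simulation: in each time slice dt, every
    surviving atom independently decays with probability 1 - 2**(-dt/half_life).
    The output statistics are indistinguishable from true exponential decay --
    no quantum computer required, given the half-life as input."""
    p = 1.0 - 2.0 ** (-dt / half_life)
    decay_times, alive, t = [], n_atoms, 0.0
    while t < t_max and alive > 0:
        decayed = sum(1 for _ in range(alive) if random.random() < p)
        decay_times.extend([t] * decayed)
        alive -= decayed
        t += dt
    return decay_times

# After one half-life, about half of the atoms should have decayed.
times = simulate_decays(n_atoms=10_000, half_life=1.0, dt=0.01, t_max=1.0)
survivors = 10_000 - len(times)
```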

  79. David Speyer Says:

    I now see that the issue of chaos is discussed above in comments 62-65. (I wrote the preceding comment after reading 51.)

  80. Vijay D'Silva Says:

    The idea of a super-intelligent alien race having discovered common mathematics in some form appeals to me. It’s the kind of belief that would make epistemic theology a theology for me.

    I’m not sure they need to have studied integers and prime numbers at an advanced level though. Maybe it is a civilization whose mathematical aesthetics tend towards smoothness so everything is perceived in terms of topological spaces and homotopies. Or maybe a civilization that does not perceive absolutes and discrete quantities but only a continuum and arbitrarily precise approximations of such quantities. Could they not model and study physics, and even computation without studying natural numbers at an advanced mathematical level?

    My comment only applies to their study of mathematics without assuming further context. I actually find it hard to imagine developing long-distance communication without studying discrete quantities.

  81. Lou Scheffer Says:

    Sorry I’m late to this fascinating discussion.

    Hamming has written on the question of the mathematics that aliens might use in “Mathematics on a Distant Planet”. Basically, his argument is that the mathematics used to describe physics will be very similar, in results if not in form. For abstract math, he believes it could be very different. His argument, which I find convincing, is that much of our math depends on arbitrary definitions. For example, he notes that a sawtooth function of slope ±1 is continuous, but if rotated 45 degrees it is no longer continuous, and in fact is no longer a function. But if math considered curves, instead of our rather arbitrary functions, then this would not be true.

  82. Scott Says:

    Lou Scheffer #80: Thanks for the ref! But in my view, the fact that many definitions are more-or-less arbitrary, doesn’t imply that the content of the discoveries is equally arbitrary. For example, I can imagine you could develop calculus using curves rather than functions as the starting point, yet end up with mostly the same results. (Though now that I think about it, how do you define what you mean by a curve being “continuous,” without any prior concept of a function?)

  83. Leonard Ornstein Says:

    Re: Comment 53:

    URL should be:


  84. Scott Says:

    Vijay #79: Maybe it’s just me, but I find it vastly easier to imagine an alien race whose discrete math is a thousand times ahead of its continuous math by our standards, than the reverse! 🙂 I’ve always wondered whether historically, there was a huge “imbalance” in favor of continuous math because of the demands of physics and the impressiveness of what people could do as soon as they were armed with calculus. If we were starting from scratch, maybe we would take discrete notions as fundamental, and see continuous math as “merely a powerful tool” for reasoning about what happens to discrete objects in limits like n→∞ and ε→0. (As I like to put it, in CS we never use sums to approximate integrals; we only use integrals to approximate sums!) Maybe it took the birth of the computer age, the early 20th-century revolution in logic, and the development of combinatorics by Erdös and others for this view even to seem non-crazy (though I’m sure it still does seem crazy to many people…)

    What should hopefully be less controversial is the following observation: you can hardly do anything in discrete math without bumping into continuous notions, and vice versa. (Obvious examples: the growth rate of a function mapping integers to integers often involves π, e, and other transcendental numbers; the value of a contour integral depends on the discrete number of times you’ve traveled around a pole.) For this reason, it seems clear that our aliens would need both continuous and discrete concepts, but it’s an interesting question whether they could stress one or the other much more than we do.

  85. Jay Says:

    Gil #74

    My line of thought is that QM is unitary, so if any physical process can be simulated in P, then the reversal of any physical process is also in P, from which it follows that all instances of NP problems we can encounter in our physical universe must actually be in P.

    This does not apply to classical physics, which is not unitary. And yes, your ideas are closer to the latter, since what you’re thinking of is an effective theory in which some degrees of freedom are lost, exactly what we think turns QM into classical physics.

    However, in the same vein that we think classical physics could be reversed “if only” we could keep track of all the information, your view would allow a distant observer to consider easy what we consider hard, “if only” she could keep track of all the information we’re losing locally. In other words, from your point of view, we’d be locally complex but globally simple. Or so it seems.

  86. Rahul Says:

    Scott #83:

    Till someone succeeds in a physics theory that discretizes space/time, the continuous might have at least that one trump card over the discrete?

  87. Ajit R. Jadhav Says:

    I have tried reading it thrice, but face difficulty parsing this line by Gil (#67).

    >> “Scott, I think that failure of QC implies that any robust outcome of experiment in quantum physics (just like in classical physics) can be demonstrated by an efficient algorithm.”

    Here is my best try at understanding. Below, I first repeat the phrases from that line and supply what I understand by them:

    “Failure of QC”: the idea that a scalable QC is impossible to build.

    “an experiment in quantum physics”: a typical but specifically quantum mechanical experiment, say, the single-particle double-slit interference experiment, or the Stern-Gerlach experiment, or an experiment concerning a violation of the Bell inequalities.

    “robust outcome” of the above: Self-explanatory. For example, in the Stern-Gerlach experiment, the outcome for a given (say, the next) particle as to whether its spin is experimentally found to be up or down.

    “efficient algorithm”: say, the one that runs in polynomial time on a classical digital computer.

    “can be demonstrated by [a given kind of algorithm]”: can be simulated by [the given kind of algorithm], because it is in the very nature of [that kind of algorithm].

    Now, putting it together, deliberately in some other words (so that we can be sure that the sense of the word is right), I get something like the following:

    If a scalable QC cannot be built, what it tells us about the nature of physical reality is this much: a specifically quantum mechanical outcome such as the violation of the Bell inequalities can (in principle) be simulated by a polynomial-time algorithm.

    Now, that looks like the exact opposite of what to expect, right?

    If by some principle we can’t at all build scalable QCs, it would be because the interactions within the subcomponents of a quantum system, and the interaction of the quantum system taken as a whole with the rest of the world, would somehow physically work in such a way that as the size of the composite quantum system increased, the decohering tendency would increase at even a faster rate.

    Given this kind of a physical reality (many of us would say: a third-class sort of reality!) for the quantum phenomena, why should it make it easier (polynomial-time) to simulate an outcome of _such_ phenomena? Why should decoherence that increases far too rapidly with increasing system size also make it efficient for computations to proceed?

    Why should decoherence increase computational speed?

    * * *

    I must have made an elementary mistake, because Scott (#68) seems to have understood Gil’s line, and also, none else has raised any flag about it.

    What, then, is my mistake in the reading? Thanks in advance for pointing it out.


  88. chorasimilarity Says:

    Zork’s bloogorithm is counterfactual thinking.

  89. Scott Says:

    Rahul #85:

      Till someone succeeds in a physics theory that discretizes space/time, the continuous might have at least that one trump card over the discrete?

    Well, in current thinking about quantum gravity, the Hilbert space of any bounded region is finite-dimensional, because of the holographic principle, and the usual notion of a smooth spacetime manifold should break down (to what Wheeler evocatively called “quantum foam”) when you get to the Planck scale of 10^-33 centimeters or 10^-43 seconds. On the other hand, the resulting “discreteness” or “quasi-discreteness”—whatever you want to call it—is certainly much more subtle than spacetime being a lattice, or anything like that (since for one thing, a lattice wouldn’t be Lorentz-invariant, or even rotationally invariant). And also, quantum mechanics is still there, which means that you still calculate the probabilities of events by squaring amplitudes that are arbitrary complex numbers with norm at most 1, and transform the amplitudes by unitary matrices that also form a continuous group. But then when you make a measurement in quantum gravity, there should only be a discrete set of possible outcomes. Things like this, I suppose, are why John Baez once remarked that his favorite answer to the question of whether the world is discrete or continuous is that it’s “both, and neither”!
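
    The Planck-scale figures quoted here follow from l_P = sqrt(ħG/c^3) and t_P = l_P/c; a quick sketch using the standard CODATA constants:

```python
import math

# Standard CODATA constants (SI units):
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G    = 6.67430e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # metres
planck_time   = planck_length / c           # seconds

print(f"Planck length ~ {planck_length:.1e} m")  # ~1.6e-35 m, i.e. ~1.6e-33 cm
print(f"Planck time   ~ {planck_time:.1e} s")    # ~5.4e-44 s
```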

  90. John Sidles Says:

    David Speyer says (#76) “I used to be very confused whenever Scott would say this (#51). […] I think the proper response to that is the following: ‘Simulation’ should be understood in the sense of the Turing Test.” (lucid explanation follows).

    David Speyer, it’s good to know that other folks besides me are confused by Scott’s “jumping up-and-down assertion” (#51), and have followed a similar path to clarifying it. I hope you don’t mind if I commend to Shtetl Optimized readers the erudite (and highly rated) answer that you posted on MathOverflow to the question Is no proof based on tertium non datur sufficient any more after Gödel?. In (too-brief) excerpt your answer was:

    I find it more helpful to say that our systems of formal symbols and formal manipulation rules can describe more than one system. For example, Euclid’s first four axioms can describe both Euclidean and non-Euclidean geometries. This doesn’t mean that Euclid’s fifth postulate has some bizarre third state between truth and falsehood. It means that there are many different universes (the technical term is models) described by the first four axioms, and the fifth postulate is true in some and false in others.

    However, in any particular one of those universes, either the fifth postulate is true or it is false. Thus, if we prove some theorem on the hypothesis that the fifth postulate holds, and also that the fifth postulate does not hold, then we have shown that this theorem holds in every one of those universes.

    Following this reasoning, it’s evident that a key descriptor relating to the “Aaronson jumping up-and-down assertion” (#51) is the model universes in which we propose to rely upon it. Over on Gödel’s Lost Letter and P=NP, in response to a summary post by Gil Kalai, I’ve emphasized that (to use Speyer/Hartmanis language) the model universes of QC discourse include universes in which:

    Model universe [1]  the term realistic is defined with scrupulous respect for the continuum interactions of quantum field theory.

    Model universe [2]  the term simulate is defined with scrupulous respect for polynomial constraints upon verification resources.

    It seems plausible (to me) that the best (only?) way to prove the formal falsehood of the assertion “quantum computation resources are an illusion” is simply this: exclude model universes [1] and [2] from discourse. Mathematicians of course are entirely free to impose this restriction, but alas, engineers and experimentalists are not!

    For this reason, it seems (to me) that most of the recent 1000+ comments here on Shtetl Optimized and on Gödel’s Lost Letter and P=NP have mainly not been about the question “Are BQP resources an illusion?”, but rather about the meta-question “For what models are BQP resources an illusion?”

    So whenever anyone jumps up-and-down (in Scott’s phrase) and delivers an oracular assertion regarding the question “Are BQP resources an illusion?”, it is necessary — as David Speyer’s remarks show us — to clarify precisely what model universe(s) the jumper has in mind (which is tough), and then decide whether these model universe(s) realistically encompass our shared universe (which is far more tough). 🙂

  91. Rahul Says:

    Scott #88:

    Quantum gravity and foams are all ingenious theories. But no experimental verification yet, right? Or am I out of the loop? Are we even close to an empirical test? I’m again watching from my vantage point in the skeptic’s gallery. 🙂

    Do we have candidates for non-lattice models of discreteness?

  92. Scott Says:

    Rahul #90: Loop quantum gravity and spin foam models are such candidates—depending on exactly what you mean by “discreteness”—but they remain very much minority tastes in the physics community, dissed (for example) by the string theorists. On the other hand, the holographic bound that I mentioned earlier is agreed on by pretty much everyone—certainly including the string theorists—because it can be deduced from arguments closely-related to those of Bekenstein and Hawking in the 1970s about black hole entropy. Even though no one has ever seen a black hole evaporate, those arguments are considered solid since they’re based on applying well-understood quantum field theory to the curved spacetime near a black hole.

    And if you accept the holographic bound, then you get that in any finite region of space, there can only be a finite number of perfectly-distinguishable states—not the continuum of states that you would “naively” expect if you thought spacetime was continuous. For example, if you measure the position or velocity of a particle in a region, you can only get a finite number of perfectly-distinguishable answers. Furthermore, the place where you start seeing that granularity is more-or-less exactly the place where you’re trying to resolve distances smaller than the Planck length. However, whether that granularity can be interpreted in terms of “discreteness” of the underlying spacetime, or whether it’s something more subtle, remains a matter of dispute, and we’ll probably need a full quantum theory of gravity to know for sure (and even then, people might still disagree about its interpretation… 🙂 ).

    Experts are welcome to correct me if I got anything wrong.

  93. John Sidles Says:

    Ajit R. Jadhav asks (#86): “Why should decoherence increase computational [simulation] speed?”

    That is a perfectly reasonable question, that can be answered in multiple ways. One generic answer is that decoherent dynamical trajectories (both differential and discrete) can be unravelled in various ways (that all yield the same simulated measurements) and it is always true that *one* of those unravellings is a measurement process that acts to restrict dynamical trajectories to a low-dimension submanifold of Hilbert space.

    So one way to appreciate the debate regarding the general proposition “realistic BQP resources are an illusion” is as a debate about the specific proposition “realistic BQP resources are an illusion because field-theoretic (or string-theoretic?) decoherence generically restricts quantum trajectories to (nonlinear) submanifolds of Hilbert space having polynomial dimension.”

    That this restriction often is respected is not in doubt … Ignacio Cirac and Peter Zoller were awarded this year’s Wolf Prize for showing us (as it seems to me) how much good physics lives on these low-dimension (nonlinear) quantum submanifolds. As for whether this restriction is invariably respected … that’s a very important open question, regarding which, opinions differ! 🙂

  94. Rahul Says:

    Scott: Thanks for a very nice explanation of the Quantum Gravity situation. That helps. You seem to have a Feynman-like knack for these things. 🙂

    If I may switch topics and bring up your Boson Sampling paper:

    If I speculate, one spot where the whole ingenious Boson Sampling scheme may fail is the requirement to get ‘n’ photons arriving at once. What’s the prior art on this? Even outside of your experiments, what’s the largest number of photons anyone has successfully synchronized?

    Even theoretically, is there any estimate of how the probability of an ‘n’ photon coincidence might be expected to fall as n increases? Or is this a purely engineering matter?

    How low can these probabilities get numerically and still expect the experiment to work? What’s a phenomenological or empirical model for quantitatively modelling ‘n’ photon coincidences?

  95. Scott Says:

    Rahul #93: Yes, getting the photons to arrive at once, with as large a probability as possible, is the obvious engineering challenge with BosonSampling. Alex Arkhipov and I discuss that in our paper, and the experimental papers that came out a few weeks ago discuss it as well.

    I think people have measured as many as 6-photon coincidences—someone correct me if I’m wrong. Amazingly, though, the experimental BosonSampling papers a few weeks ago apparently represented the first time anyone had verified explicitly that the amplitudes even for 3-photon processes are given by the permanents of 3×3 matrices.
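
    For readers who haven’t met the permanent: it’s defined like the determinant, except that every permutation enters with a plus sign. A brute-force sketch (the 3×3 matrix below is an arbitrary illustrative example; in BosonSampling the entries would be complex amplitudes drawn from the interferometer’s unitary):

```python
import math
from itertools import permutations

def permanent(M):
    # Like the determinant, but with no alternating signs:
    # sum over all permutations p of prod_i M[i][p(i)].
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(permanent(M))  # 450
```

    The n!·n cost of this sum is what blows up for large n; even the best known exact algorithm (Ryser’s) takes on the order of 2^n·n steps.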

    With the photon sources that we have today, the probability of an n-photon coincidence falls off exponentially with n. That’s the reason why BosonSampling has so far been demonstrated only with n=3 and n=4 photons (the Walmsley group did 4 photons, but only with special initial states).

    But the experimental quantum optics people tell me that with technologies currently under development (like “optical multiplexing”), you might be able to scale up even to 20 or 30 photons and still get a decent count rate. So it will be exciting to see what happens with that.

    In the meantime, a major theoretical open problem is whether, even if you do lose (say) half the photons along the way, your experiment still samples a distribution that’s intractable to sample using a classical computer. And if yes, then under what sort of computational assumption could we show the hardness? Alex and I have been meaning to think about this, but anyone else is welcome to do it first.

    (Note: It indeed seems possible that ultimately, “pure” BosonSampling will not scale beyond a few dozen photons. To go significantly beyond that, you would want quantum error-correction, but if you have that, you could probably just build a universal QC instead.)

    For more about this, please go over to the BosonSampling thread from a few weeks ago.

  96. Ajit R. Jadhav Says:

    Rahul #85:

    0. You raised a neat point. There was a FQXi essay competition on somewhat similar lines. (What _it_ asked was whether reality was analog or digital!)

    1. In the context of mathematics, clearly, both discrete and continuous are necessary abstractions, and, more importantly, IMO, both represent first-level generalizations (of mathematics).

    The character of discreteness is there right at the beginning of mathematics, right with the primitive concept of natural numbers—numbers, as in the sense of the simplest primary-school arithmetic (or the pre-counting basis of arithmetic like the matching operation).

    The character of continuousness is already implied with the axiomatic geometrical notions such as “figure,” “line,” etc., as well as pre-geometrical operations such as matching shapes.

    2. In the context of physics, once again, both can be taken to encapsulate certain aspects implied by the very first-level generalizations pertaining to the physical universe.

    Here, imagine a 3D “jelly” of an otherwise uniform density. Select a finite number of points in it, and imagine a finite region (not necessarily spherical) around them such that the local variations of the jelly-density along any radial direction within that finite region is given by a certain bump function along that direction. The bump function is neat in that it goes down to zero (or the baseline density) right over a finite distance, and yet, it also is infinitely differentiable (C^infinity).
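
    The density profile described here can be realized by the textbook bump function, which is infinitely differentiable yet exactly zero outside a finite interval; a minimal one-dimensional sketch:

```python
import math

def bump(x):
    # The standard C-infinity bump: smooth everywhere,
    # yet identically zero outside the open interval (-1, 1).
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

print(bump(0.0))    # maximum value e^-1 at the centre
print(bump(0.999))  # tiny but still positive just inside the support
print(bump(1.0))    # exactly 0 at and beyond the boundary
```

    Translating, scaling, and superposing finitely many such bumps gives exactly the kind of smooth-but-localized density variations the jelly model calls for.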

    The basic model should be clear: those finite regions of space are what you may call physical objects/entities/bodies/etc., and, the jelly may stand for an all-pervading physical aether.

    Since physical entities defined in this way are finite in both extent as well as number, the discreteness abstraction applies to them: you can distinguish one entity from the other by clearly demarcating its boundaries, and you can also count all of them taken together, using the discrete mathematics i.e. (non-zero) natural numbers.

    Yet, notice that the continuum abstraction also applies to such a view of the physical reality, because a crucial quantitative measure of it, viz. the density variation, indeed is continuous, smooth, in fact, also infinitely differentiable.

    3. So, neither has an upper hand—whether in physics or in mathematics.

    4. However, it is not clear what John Baez meant by “neither.” (Scott # 88)

    5. In general, the 20th century physicists were/are highly unimaginative, in my opinion. Or, perhaps, their out-of-hand rejection of the aether made them so (a lacuna still not made up for, even if they do now have some vague notions going in the same direction, e.g. “field,” which they define completely mathematically). And, talking of the 21st century, and my view of it, I am already too old even to want to make too much of a difference. So, overall, guess you can’t expect too much better in the 21st century unless ideas like what I outlined above take root in the “avant garde” science… Berkeley, Cambridge, Gottingen, Harvard, MIT, Oxford, Paris, Stanford… Ah… the mind boggles (though it still is functional enough to list things in a strictly alphabetical order, even if the order itself doesn’t matter—ultimately, only truth does (i.e. if anything matters at all)). [And, I now notice, I have used the expression: “matters,” not “counts.” I am happy it came out that way.]

    Will come back tomorrow (India time).


  97. Rahul Says:


    I kept plodding through your comment until you totally lost me at “In general, the 20th century physicists were/are highly unimaginative”.


  98. John Sidles Says:

    Rahul asks (#93): “What’s the largest number of photons anyone has successfully synchronized?”

    If one takes the view “a photon is just another qubit”, then Monz et al. “14-qubit entanglement: creation and coherence” (PRL 2011) is reasonably representative of the present state-of-the-art. For reasons not (yet) fully understood, the experimental data exhibit Kalai-type “superdecoherence” (as the authors call it), namely the observed decoherence rate scales as the square of the number of qubits for systems of up to 8 qubits (for larger numbers of qubits the entangled states proved to be too fragile for reliable scaling determinations).

    Whether the experimentally observed superdecoherence reflects mundane quantenfeldtheoretisch Dreckeffecten (quantum-field-theoretic “dirt effects”), or instead reflects Kalai-type fundamental (thermodynamical? smoothed-Lindbladian?) limits to quantum state preparation (originating in field dynamics?), is a crucial open question that (as it seems to me) we are not very close to answering.

    In any case, fans of noise will find much in this article that is worth pondering.

  99. Scott Says:
      In general, the 20th century physicists were/are highly unimaginative, in my opinion.

    Look, I agree that Einstein, Rutherford, Dirac, Fermi, Bose, Chandrasekhar, Schrödinger, Wheeler, Szilard, Yang, Yukawa, Feynman, Gell-Mann, Hawking, Weinberg, and Witten combined weren’t 1% as imaginative as blog commenter “Ajit R. Jadhav [E&OE].” But then again, who is? And those guys didn’t have the advantage of “E&OE” (whatever the hell that stands for). So surely you can be magnanimous, and admit they were at least a little imaginative?

  100. Rahul Says:

    Scott re. #98:

    The urge to keep signing every blog comment was itself a red flag… (yes, whatever E&OE may be; perhaps an imaginatively coded message? A ham call sign?) 🙂

  101. Rahul Says:

    John #97:

    >>>If one takes the view “a photon is just another qubit”, <<<

    I think Scott might disagree with that? Isn’t one of the points behind Boson Sampling that one doesn’t need coherence between conventional qubits, because that’s not how they are using these photons? Or not? Maybe I misread the concept.

    Aren’t n-photon time-synchronization and multiple-atom coherence different physical processes? Are photon desynchronization and qubit decoherence identical, or merely analogous?

  102. wolfgang Says:

    >> “E&OE” (whatever the hell that stands for)

    it stands for “errors and omissions excepted”, which some lawyers add to their letters so that one cannot sue them in case they made a mistake.

    So in this particular case Hawking, Weinberg, Witten et al. will find it difficult to sue Ajit R. Jadhav …

  103. Scott Says:

    Rahul #100: Right, our model doesn’t use photons as qubits at all; instead it just directly exploits the boson-number degree of freedom. The Knill-Laflamme-Milburn (KLM) proposal for linear-optical quantum computing would use photons to encode qubits, but that requires an additional resource, namely adaptive measurements (i.e., measuring a photon partway through, and rearranging the rest of the linear-optical network depending on what you find).

    It’s a very interesting question whether desynchronization should be considered a form of “decoherence.” My inclination is to say no, since desynchronization can occur for reasons (for example, poorly-calibrated photon sources) that have nothing to do with the photons interacting with their external environment. But it does have an analogous flavor, in that you lose the quantum interference terms you want, and your system basically reverts to being probabilistic and classical.

  104. Scott Says:

    wolfgang #101: Thanks for clarifying!

  105. Rahul Says:

    Scott #94:

    From the data for n=3,4,6 photon-coincidences that you mentioned would it be possible to already deduce a preliminary estimate of photon desynchronization scaling behavior similar to what @John described in #97 for the Monz experiment?

    Decoherence seems n^2; wonder if desynchronization would be similar…..

  106. Scott Says:

    Rahul #104: Heuristically (and assuming no error-correction), it’s clear that the probability of an n-photon coincidence should scale like p^n, for some p < 1. So the question is just how large you can make p. For example, if you had p = 0.9, then you could let n be 30 and still get plenty of coincidences. But this is a question about future technology.
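
    As a sketch of the arithmetic (the two p values below are purely illustrative):

```python
# Back-of-envelope: if each photon independently "shows up on time"
# with probability p, an n-photon coincidence occurs with probability ~ p**n.
def coincidence_prob(p, n):
    return p ** n

# A poor source vs. a hoped-for future source:
print(coincidence_prob(0.026, 4))  # ~4.6e-7: why small n is the current frontier
print(coincidence_prob(0.9, 30))   # ~0.042: n=30 would still give a usable rate
```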

  107. Bram Cohen Says:

    Scott, you can’t transmute lead into gold; that would be ridiculous. You transmute either mercury or platinum into gold, via neutron bombardment.

  108. Bram Cohen Says:

    Scott, how many of these counterarguments to QM boil down to ‘the logical conclusions of this theory are so absurd that I predict that carrying out the experiment will demonstrate some as of yet unforeseen phenomenon which makes it impossible to do calculation in this way’? That’s more or less my stance, although of course it makes actually doing the experiment more, not less, compelling.

  109. Scott Says:

    Bram #106: Sure you can transmute lead into gold (though the Wikipedia article says that the reverse reaction is much easier).

  110. Rahul Says:

    Scott #105:

    The experimental paper by your Australian collaborators had these snippets deep in its bowels:

    (1) We obtain an average two-photon coincidence rate of 260 Hz

    (2) We obtain an average four-fold coincidence rate of 185 mHz

    I’m totally out of my depth here (eeks!), but does this mean your desynchronization scaling constant ‘p’ (from p^n) is p~0.026?
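
    Rahul’s implied calculation, made explicit under the naive rate ∝ p^n model (the two rates are as quoted from the paper):

```python
import math

# If the n-fold coincidence rate scales as C * p**n, then p can be
# read off from the ratio of the 4-fold and 2-fold rates:
rate_2 = 260.0   # Hz, average two-photon coincidence rate
rate_4 = 0.185   # Hz (185 mHz), average four-fold coincidence rate

p = math.sqrt(rate_4 / rate_2)  # (rate_4/rate_2) = p**(4-2)
print(f"p ~ {p:.3f}")  # ~0.027, consistent with Rahul's figure
```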

  111. Scott Says:

    Bram #107:

      how many of these counterarguments to QM boil down to ‘the logical conclusions of this theory are so absurd that I predict that carrying out the experiment will demonstrate some as of yet unforeseen phenomenon which makes it impossible to do calculation in this way’?

    All of them? 🙂

    Seriously, the main point on which the skeptics differ from each other is how clearly they recognize that we’re really talking about an unforeseen phenomenon: that you’d better postulate something new and exciting if you want to kill QC. Gil Kalai basically recognizes that, but many other skeptics don’t: they think it’s just obvious that QC can’t work (though their stated reasons differ), and they tend to see QC research as a giant boondoggle (or borderline fraud), which will fail spectacularly when it runs up against the a-priori obvious impossibility.

    A few skeptics, like Goldreich, Levin, and ‘t Hooft, go so far as to say QM itself will break down. (Though for ‘t Hooft, that’s a radical research proposal, while for Goldreich and Levin, it’s more like a matter of indifference! “Silly physicists, why would they even think something violating the Extended Church-Turing Thesis could possibly have been true?”)

    Other skeptics say that QM is fine, but QC is an absurdity, and they insist there’s no tension whatsoever between the two statements. So then, how do we reconcile QM’s picture of an exponentially-large wavefunction, with the in-principle impossibility of ever harnessing it for computation? At the least, don’t we need some big new principle on top of QM, which would restrict the form of the allowed Hamiltonians? Also, if QC is impossible, doesn’t that mean there should be a probabilistic polynomial-time classical simulation for any realistic quantum system? So what is that simulation? For many skeptics, these sorts of questions are best left unasked!

  112. Scott Says:

    Rahul #109: It sure sounds like it!

  113. Rahul Says:

    Scott #111

    That’d be bad news if it’s so small? 🙁

    PS. Had you estimated this before? Sorry if I reinvented the wheel; rather “re-discovered” your constant in my blog comment. Snooping around for ‘p’ was fun though!

    Too bad the other three labs don’t disclose their rates. Can you ask? Do the lab-rats lurk on here? 🙂

  114. Scott Says:

    Rahul #112: No, I hadn’t estimated it; thanks! Yes, I think we’d better ask the “lab rats” for details. There are many issues that we might not be accounting for, and I’d caution against attaching deep significance to the numbers (for example, it’s likely that the experimentalists know perfectly well some things they could do to increase p, and simply haven’t done them yet).

    Incidentally, maybe a more accurate scaling is p^(n-1), since if there’s one photon then there’s no coincidence problem — just wait for it to arrive! (Though even with one photon, of course photon loss and detector inefficiency are issues.)

  115. Rahul Says:

    Scott #113:

    >>>it’s likely that the experimentalists know perfectly well some things they could do to increase p, and simply haven’t done them yet<<<

    Precisely! My worry too. Maybe taking the rate too high has its own downside? Do the avalanche detectors saturate above a certain rate? I've no clue.

    Nevertheless, it was fun to try a back of the envelope calculation. I'm eager to hear the experts and be corrected.

  116. John Sidles Says:

    Scott asserts (#110): “Seriously, the main point on which the skeptics differ from each other is how clearly they recognize that we’re really talking about an unforeseen phenomenon:”

    Hmmm … are the obstructions to QC attainability really (in Scott’s phrase) “unforeseen”? From way back in 2003:

    Quant-Ph/0310130: Dreams versus Reality:
    Plenary Debate Session on Quantum Computing

    Derek Abbott  It seems to me, without a doubt, that small numbers of qubits have been demonstrated. So the real question for this debate should be: “Is it possible to scale quantum computers?” I think that’s your real question and if you look at the most sensible way of scaling, which is on silicon, in my opinion – because it’s a mature scaleable technology – you have then got to ask, “What is the decoherence time in silicon?” And all the papers say, “If you use pure silicon and blah, blah, blah, it’s all very good.” But putting my Con hat on, to help the Con team a bit, what the papers don’t address is that, I’ve got this scaleable quantum computer; I’ve got zillions of qubits on here; I’ve got all these A and J gates switching like crazy. That is a coupling into the environment. What’s going to happen to that decoherence time when they are all switching like crazy?

    Julio Gea-Banacloche  I think that it’s always a big jump to say that just because you have demonstrated something for, say, 100 qubits that you’re going to be able to scale that up 4 orders of magnitude, without encountering any unexpected problems. I don’t think there are any engineers here that will support such a point of view.

    Excepting the wholly admirable work of John Preskill, in what concrete respects are today’s QC enthusiasts better able to respond to skeptical concerns, than the enthusiasts of 2003?

    Other than to say “Doh! The QC skeptics were right!”, that is? 🙂

    The plain lesson of history (as I read it) is that the efforts of today’s QC enthusiasts are best focused upon addressing the strongest critiques of QC skeptics … of which there is no shortfall! 🙂


    PS: Rahul, those are great posts. You might try an envelope calculation (aka “Fermi calculation”) of how many photons the star Sirius had to emit for Hanbury-Brown and Twiss to detect one correlated pair. The point being that a reasonably well-validated experimental rule-of-thumb is this: the higher-order the quantum correlation detected, or the greater the space-time separation associated to that correlation, the more entropically profligate is the quantum source that produced that quantum correlation. Hmmm … is there a thermodynamical law of nature that forces this empirical finding upon us? Don’t ask me! 🙂

  117. Scott Says:

    John Sidles #115: Sorry, I meant “unforeseen on the basis of currently-understood physics.” Anyone can announce that they think QC will never work. The hard part is to fit that belief into a coherent picture of reality, which doesn’t grossly contradict QM on experiments that have already been done.

  118. Rahul Says:

    Scott #113

    >>>Incidentally, maybe a more accurate scaling is p^(n-1)<<<

    That shouldn't change the constant though since it's all about ratios?


    Also, based on John's comment of n^2 scaling for decoherence and Scott's p^n scaling for desynchronization, it seems that Boson Sampling should have worse scaling behavior, at least in the asymptotic large-n limit? I hadn't realized that. I thought we were expecting Boson Sampling to scale better than plain ol' qubits. I guess it's the constants that matter for now?

    Might be moot, for now, looking at what sort of small n’s we are all stuck at.
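    The exponential-versus-polynomial contrast above can be made concrete with a toy calculation. A minimal sketch, with invented constants (a per-photon success probability p = 0.01 and a megahertz base rate are purely illustrative); the moral survives any choice of them, since an exponential falloff eventually dominates any polynomial cost:

    ```python
    # Toy comparison of the two scalings discussed above, with made-up
    # constants. An n-fold coincidence rate that falls off like p**(n-1)
    # is exponential in n, so it eventually collapses regardless of any
    # polynomial (e.g. n**2) overhead on the other side.

    def coincidence_rate(n, base_rate=1e6, p=0.01):
        """Events per second for an n-fold coincidence, if each extra
        photon succeeds independently with probability p (assumed)."""
        return base_rate * p ** (n - 1)

    SECONDS_PER_YEAR = 3.15e7

    for n in (2, 4, 6, 8, 10):
        rate = coincidence_rate(n)
        years_per_event = 1 / (rate * SECONDS_PER_YEAR)
        print(f"n={n}: {rate:.1e} events/s, one event per {years_per_event:.1e} years")
    ```

    With these invented numbers, waiting times go from fractions of a second at n = 2 to tens of thousands of years at n = 10, which is why the constants (and the source technology behind them) matter so much at small n.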

  119. Scott Says:

    Rahul #117: Yes, the constants are hugely important for the time being.

  120. Rahul Says:

    A probably naive experimental question: How does one move from n=4 to n=5 or n=6? That’d need three photon pairs.

    How does one get the non-linear crystal to generate yet another pair for us? The first two pairs seem to come from the forward and backward passes. What’s the change in setup needed? It wasn’t obvious to me.

    Or is n=5 as yet impossible with the current strategy until multiplexing becomes possible?

  121. Justin Dove Says:

    Hey Rahul,

    There are tons of factors at hand, so it’s tough to divide two numbers, take a square root, and determine whether or not it’s ‘right’. However, I’d say you’re right in an ‘order of magnitude’ sense. Those numbers also take into account coupling efficiencies, losses in the apparatus, detector efficiencies, etc. That’s why the best achieved 8-fold rate to date (~1 mHz) is substantially better than what your calculated p would predict: it didn’t have as much hoopla to deal with.

    Scott, your n-1 trick won’t fly for a single photon. In short, we need a way of distinguishing the real counts from dark counts, stray photons, and such. Coincidence counting is always necessary when it comes to SPDC. To be really precise, a three-photon experiment would need a six-fold coincidence with number resolution on the idlers, but there are games to be played and tradeoffs to trade that can get us pretty good results with 4-fold coincidences. However, we can really only get one free pair out of such trickery, so the high-photon-number scaling of coincidences would still look like 2-to-1.

    Without getting into too much detail, the best I can say is: yes, SPDC sucks! The reason we have such low rates is because raising the brightness will introduce more higher-order terms (multiphoton fock states). So we are stuck with low base rates. Add in the fact that the window for a coincidence is on the order of nanoseconds, and you’ll quickly see that the scaling to multiple-pair coincidences is going to suck. Even still, you have to deal with a number of other issues that are going to make that scaling worse.

    As far as making p better, I’m not sure that there is really a ton out there. Sure, people are working at improving these SPDC sources, even in my own group, but the bottom line is that we all know there is not much hope of seeing them scale to much higher numbers. We can pick up some gains here or there, but we’ll still be dealing with a low brightness, exponentially falling off, probabilistically generated photon source. This should not be seen as a fault of the BosonSampling result. It is merely a statement about the state of current single-photon-source technology. We need to examine other avenues for scalable, reliable single-photon sources. I can imagine that people can squeeze out a few more photons with SPDC with some really hard work, and one-up the current BosonSampling results, but the endgame is going to require new strategies.

  122. Scott Says:

    Thanks so much, Justin!! That’s incredibly helpful.

  123. Justin Dove Says:

    Regarding getting more pairs (#119), we can use more crystals, or continue to do more passes through one crystal but with some added fast switches to separate pairs and then delay lines to get everything back in sync. Nevertheless, you’ll see that the scenario gets worse and worse as we keep adding photons. So in the spirit of my last post, SPDC is really not the answer for significantly higher numbers.

  124. Justin Dove Says:

    @Scott (#121): No problem! I’m actually a theorist by trade, so I’m sorry that I can’t get too down with precise numbers and explicit capabilities, but I think there are enough conceptual obstacles to see why it’s senseless to put too much weight behind our current methodology.

  125. Rahul Says:


    Thanks! No, not a fault of Boson Sampling, but a good reality check. It might be an idea ahead of its time.

    In any case, fact remains that the n=30 to 40 range of real interest will probably remain inaccessible over, say, a 5 year horizon (my guess). Bummer!

  126. Joshua Zelinsky Says:

    Regarding the issue of whether other species would have any notion of prime numbers, it may be noteworthy that primes arise in natural contexts. One of the most striking examples is how evolutionary pressures have caused cicadas of many varieties to have lifecycles that are a prime number of years.

  127. Raoul Ohio Says:

    Re Vadim: “I think steampunk not working out can be attributed purely to engineering challenges.”

    My take is that the reason steampunk does not work out is that steampunk is a joke. If you don’t get it, you might have trouble with QFT.

    I have wasted lots of time analyzing why jokes work, so I offer to give some pointers for those in the dark.

  128. Ajit R. Jadhav Says:

    @Rahul #96:

    Seems consistent with the rest of what you write.

    @Rahul #99, @wolfgang #101, @Scott #98, @Scott #103:

    About E&OE: It stands for the nearest commonly accepted English equivalent of the original Marathi phrase: “chuk-bhool deNe gheNe.”

    The exact English translation of the original Marathi phrase would be: “mistakes-forgotten [things] give take.”

    When I was a growing up kid (and in the times before that), the Marathi phrase used to be put on sundry bills (i.e. cash memoes), just the way E&OE is, but not in so fine a print as for the English phrase.

    And, not just the tone but also the sense of the Marathi expression is slightly, but very definitively, different. It’s more in the nature of an acknowledgement of fallibility, and of a friendly reminder in a casual “smalltalk” sort of a tone that a certain margin is both expected and available (to effect corrections).

    In the Marathi phrase, the legality of it is only implied; it is not direct the way it is in the English phrase of E&OE. In Marathi, a certain to-and-fro is directly mentioned; in English, the direct reference is to the rather formal word: “exception.” (In fact, these days, on sundry cash memos, most shop-keepers use: “chuk bhool kshamasva,” whose exact translation would be: “[I/We] [do and/or can] stand apologized for mistakes and forgotten [things].”)

    It’s only out of convenience that I use the English expression… It’s one of those minor adjustments one makes so that communications with the non-Marathi speakers can at all proceed.

    The reason for including that: … I have been poor (to very poor) in all written communications, esp. that in English. (My schooling till 10th standard was in Marathi medium, and even in college, including in COEP, it was common for teachers to informally engage in Marathi even while in the classroom. For my first job interview, I had memorized a few English lines such as introducing myself.) On iMechanica, I have sometimes edited and re-edited my posts, just to correct grammatical errors, some 5–10 times! So, I thought including it as a signature would come in handy to flag the fact that errors can easily creep in, in my writing or I can edit the post. Apparently, it leads to some misunderstanding on its own account. Oh well… It seems deliberate…

    I don’t think that even if I were not to add that E&OE thingie, Hawking, Weinberg, Witten et al. would have thought of suing me. I think they all are already highly over-paid and much over-honored, and in some sense, they all would internally know this fact. The latter usually makes the intellectual sort of people so content on the social side of life and so dull in terms of the sensitivity to criticism—any criticism, even a valid one—that they usually don’t come to consider any legal options so easily. It isn’t always the idea of honoring the principle of free speech, or of taking criticism in one’s own stride. No. It isn’t always that. With people like the ones mentioned here, it is rather the matter of having developed an academic kind of a thick skin, a skin which is very thick to any criticism—even a valid criticism.

    wolfgang, that’s the reason why they wouldn’t even think of suing me, even if I don’t include my customary [E&OE] end-note.

    @Scott #98:

    LOL! I won’t fall for that bait, Scott. (For an illustration of the kind of resolve I have here, search for and find “twitpic_8nrfwf_467.jpg” on the ‘net.)

    But, anyway, seriously, when one talks of a huge group of physicists (those of an entire century, in one breath), the only proper object of comparison can be: an equally big group, say, physicists from some other century—but not a single individual. Elementary.

    Here, I suggest a comparison of the 19th century physicists with those from the 20th century. And, in reference to this comparison, I do reaffirm my remark.

    When it comes to being imaginative, from among your list, Schrodinger is the only physicist who I can unhesitatingly say was as imaginative as any other, ever, including Faraday and Maxwell and Boltzmann in the 19th century, and Newton and Gauss earlier. I don’t know much about Fermi (and many others you mention), but IMO, Dirac certainly wasn’t as imaginative as Schrodinger was. Remember, the issue is not sheer IQ, or the mathematical manipulation ability, or the ability to surprise the somewhat less endowed peers who nevertheless are very adept at the technical manipulations.

    The issue is imagination. Imagination, as in creating new concepts and theories, in creating a different look at reality.

    Einstein was imaginative, to a certain extent, but not to as much an extent as the world has been taught to think he was.

    Feynman easily is the most overrated physicist from your list. I don’t say this out of a personal dislike of his style. (He was recently described somewhere as “too busy being the genius from Bronx,” and I loved that characterization!) I say it out of some authentic appreciation of the history of the development of QM.

    Once Schrodinger had already shown, entirely single-handedly, how to apply the Hamiltonian formalism of classical mechanics even to the (then very highly) troublesome set of the observed quantum phenomena, and once Dirac had already used the Poisson brackets, think how much imagination it would take to think of applying yet another, very well-known, and completely equivalent formalism of classical mechanics: viz. the Lagrangian, i.e. the action-based one, to one’s research. Feynman did _that_.

    BTW, I am glad that you don’t mention Heisenberg. Much of his work was done for him by others, especially Born (whose contribution was the greatest, being in terms of basic ideas, concepts and imagination), Jordan (who supplied the technical acrobatics but did it rather well) and Pauli (who worked out the hydrogen spectrum using Heisenberg’s approach when Heisenberg himself had tried but couldn’t succeed).

    Coming back to your list, and so to Feynman, if Feynman were to have one-quarter of the imagination he is accused of, he could have satisfactorily explained the renormalization procedure. As to the credit given to him for the idea of the Feynman propagator, just consider how a mostly isolated guy, a mere bachelor’s degree holder, a complete outsider to the university and research system, and a mostly poor one at that, living on a farm near a small town, could come up with the mathematical idea which is now named after him: Green’s function. _He_ was imaginative. In contrast, realize, Feynman had been _taught_ Green’s technique.

    As to the people like Wheeler, Witten and all, just look at their output. Witten is a highly intelligent master of obscurantism, and they all aim not to be understood. But Wheeler is not so smart, in _that_ department (of life). He gives himself away, first in formulating, and then in at all considering for such a long time, his “delayed choice” set of arguments. As to so many other 20th-century and present-day Americans: ditto.

    One final comment: If anyone ever feels that Schrodinger is being over-estimated here, also consider the originality of ideas in his later research, that related to biological systems. (It has taken 50 years for biology plus something, anything, to become a fad.) And, consider the fact that from among all the young and not so young quantum physicists hungry for anything to pounce on, very alert to spotting even the smallest feature of the new quantum theory being put together, in that entire crowd, it was Schrodinger once again who first grasped the issue of entanglement. (Stupids believe it was Bell.) QM was born in a very short span of time, in, say, just 4 years (’24–’27). And then, I guess it was for several years, perhaps another 4–5 years, over which not a single one of them (whether Heisenberg or Slater, Jordan or de Broglie, Bose or Einstein) realized that there was something like entanglement in QM—but Schrodinger did. You have to pause and think about why. And the reason he could do it, was _not_ the fact that he was taken in by the Vedic philosophy or the Upanishads. The reason was: apart from teasing the maths of it, and in fact before doing any maths at all, he took his own time to look at the physical reality—and, to imagine.

    The rest of your list. I don’t know about many others like Szilard, Yang, Yukawa, et al. … This very fact tells you something concerning the kind of physics they did, doesn’t it? Their physics, as contrasted from the 17th/18th/19th century physics?

    Imagination is not the same as some very intricate and intelligent technical acrobatics of the mathematical/abstract kind. True imagination certainly isn’t.


  129. Ajit R. Jadhav Says:

    A correction:

    The exact English translation of the Marathi phrase: “chuk-bhool deNe gheNe” would be:

    “mistakes-[and]instances of forgetting [are to be] given[,] taken.”

    In Hindi, “bhool” is a verb which means: to forget, and it also is a noun which means an instance thereof. The Marathi word apparently comes from Hindi.


  130. Rahul Says:

    @Joshua Zelinsky #125:

    How likely is it that the cicada-prime-cycle observation is just chance?

  131. Gil Kalai Says:

    An excellent point raised by Ajit (#86) is why complicated decoherence behavior should make our physical world computationally easier to describe than we expect, rather than perhaps harder.

    To quote Ajit: “If by some principle we can’t at all build scalable QCs, it would be because the interactions within the subcomponents of a quantum system, and the interaction of the quantum system taken as a whole with the rest of the world, would somehow physically work in such a way that as the size of the composite quantum system increased, the decohering tendency would increase at even a faster rate.

    Given this kind of a physical reality (many of us would say: a third-class sort of reality!) for the quantum phenomena, why should it make it easier (polynomial-time) to simulate an outcome of _such_ phenomena? Why should a decoherence that increases far too rapidly with an increasing system size, should also make it efficient for computations to proceed?”

    The short answer is quite simple: Decoherence does not raise the computational power of BPP. Rather, decoherence (like chaotic behavior for classical processes) reduces the variety of “robust quantum processes” that can be meaningfully simulated at all.

    This good point, as some other related points raised here by several commentators are related to the issue of modeling, hardness-of-simulation and computational complexity that was discussed here at “shtetl optimized” several times in the past. For example here is a related comment I made in 2008:

    “The difficulty in simulation is often a difficulty in modeling. If we do not know the precise model (or the model contains some “secret ingredients”) we have no reason to believe that simulation will be easy, even if nature can run the process easily.

    This comment applies not only to “computational complexity” but to the more basic level of how well can we simulate, given our knowledge, even with an unlimited computational power. (Sort of “learnability”.)”

    Computational complexity can be regarded as the “second-order” notion of hardness in the context of simulations and scientific predictions. The “first-order” issue is simply what your limitations are when you are given unlimited computational power. Often, when matters get harder they can get computationally easier as well: when what you can do, regardless of computational power, is more limited, the computational complexity needed to exploit what can be done is lower. For example, the task of predicting the weather a year from now is harder than the task of predicting the weather tomorrow. But the computational complexity of the best weather prediction a year from now is much lower: the best we can do is simply to take a value from a table.

    The issue of chaotic systems that Robert mentioned (#62) is indeed very important. (Robert raised this issue also in the GLL debate, and it was also raised by Michel.) Robert’s statement: “Such a system can be regular or chaotic and in the later case cannot be simulated effectively in a polynomial time by a digital computer,” is not accurate. When the system is chaotic it is not that the system cannot be simulated in polynomial time, it is that the system cannot be meaningfully simulated at all.

    Scott’s treatment (#65) of complexity and chaotic behavior also seems incorrect. Replacing the input with a uniform distribution on some ball instead of a point is irrelevant. The whole point is that the output is extremely sensitive to the distribution of the input, whether this distribution is a single-atom distribution or different. Great sensitivity to the input, including unknown but relevant degrees of freedom, is the crux of matters also in the quantum case. Talking about robust processes in comments above was my way to condition on this not being the case.
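    The sensitivity point above is the standard chaos phenomenon, and a textbook toy system (not one discussed in this thread) shows it in a few lines. A minimal sketch using the logistic map at r = 4, whose Lyapunov exponent is ln 2, so an initial error of 1e-12 roughly doubles each step and swamps the state after about 40 iterations:

    ```python
    # Minimal illustration of sensitivity to the input, using a standard
    # chaotic toy system: the logistic map x -> 4x(1-x). Two trajectories
    # starting 1e-12 apart become macroscopically different after a few
    # dozen iterations, so any uncertainty in the initial condition
    # destroys long-range point predictions.

    def logistic(x):
        return 4.0 * x * (1.0 - x)

    x, y = 0.3, 0.3 + 1e-12
    diffs = []
    for step in range(60):
        x, y = logistic(x), logistic(y)
        diffs.append(abs(x - y))

    print("initial separation after one step:", diffs[0])
    print("max separation in steps 40-60:", max(diffs[40:]))
    ```

    The separation starts at the rounding-error level and reaches order one well before step 60: past that horizon, simulating a single trajectory is no longer meaningful, and only statistical (robust) properties remain predictable.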

  132. Rahul Says:

    After reading Gil’s #130, what’s wrong with an argument like this:

    Let’s assume that we find decoherence / superdecoherence will eventually always set in and defeat *all* attempts to build scalable QC’s.

    Well then, haven’t we proved that the problem we were trying to model (i.e. a QM system) is simpler than we thought it is? Isn’t the decoherence horizon a quantum-classical boundary for the simulation? If beyond the horizon the problem reduces to a classical simulation, then we won’t need a QC to simulate it after all (so long as we can build one good enough to simulate everything before the horizon)?

    What gives? I know I’m getting something wrong; what is it? Or did I not make sense at all? :p

  133. John Sidles Says:

    In accord with the sage advice of field theorists Steven Weinberg (The Quantum Theory of Fields) that “our immersion in the present state of physics makes it difficult to understand the difficulties of physicists even a few years ago, or to profit from their experience” and Tony Zee (Quantum Field Theory in a Nutshell) that “any self-respecting physicist should learn about the history of physics, and the history of quantum field theory is among the most fascinating”, here is a sequence of articles that concretely illuminates the key points of Gil Kalai’s thoughtful (as it seems to me) comment #130.

    These articles highlight the key elements of a thrilling three-act scientific drama, that has been unfolding for 37++ years (so far) …


    ACT ONE  In which Nature inspires scientists to envision experiments exhibiting the full dimensionality of Hilbert space

    Key reference  Baum et al. “Multiple-quantum dynamics in solid state NMR” (JCP, 1985)

    Key quotation  “Recently developed solid state multiple-quantum NMR methods are applied to extended coupling networks, where direct dipole-dipole interactions can be used to create coherences of very high order (~100). […] Multiple-quantum events are thus recognized as originating in rather fundamental processes in coupled systems.”


    ACT TWO  In which Nature shows us that high-order quantum dynamics is far more difficult to observe than first envisioned, whereas low-order simulation is far easier

    Key reference  Lee et al. “Quantum treatment of the effects of dipole-dipole interactions in liquid nuclear magnetic resonance” (JCP, 1996)

    Key quotation  “We compare a quantum mechanical treatment to a corrected classical model […] and show that the quantum picture can be reduced to this corrected classical model when certain assumptions about the retained dipolar couplings are valid. […] This work has generated interest and controversy in the NMR community over the last few years, particularly before we or anyone else were able to develop additions to the conventional theory which would explain these results. […] We show that the combination of quantum and classical pictures provides enormously better predictive power and computational convenience than either technique alone.”


    ACT THREE  In which far-reaching practical applications are found for the lessons of Nature

    Key reference  Antzutkin et al. “Multiple quantum solid-state NMR indicates a parallel, not antiparallel, organization of beta-sheets in Alzheimer’s beta-amyloid fibrils” (PNAS, 2000)

    Key quotation  “Relatively little is known definitively about the molecular structure and supramolecular organization of amyloid fibrils. […] Our data analyses proceed by quantitative comparisons of experimental MQNMR [multiple quantum nuclear magnetic resonance] signal amplitudes with numerical simulations. […] The MQNMR methods used in this work represent a general approach to investigations of supramolecular organization in amyloid fibrils.”


    Conclusion  QC research presently is in the middle of Act Two of a multi-decade scientific drama, that exhibits striking parallels with prior dramas. In particular, students of FTQC/BosonSampling science are well-advised to study the evolutionary interplay of mathematics, science, engineering that is exhibited in the (still unfolding!) history of multiple quantum nuclear magnetic resonance (MQNMR). 🙂

  134. chorasimilarity Says:

    TIL via G+ about this article on arxiv, may be a bit relevant for the general discussion here:
    A Snapshot of Foundational Attitudes Toward Quantum Mechanics
    by Maximilian Schlosshauer, Johannes Kofler, Anton Zeilinger.

  135. Ajit R. Jadhav Says:

    Dear Gil #130:

    Thanks. I appreciate the efforts you have taken to make me see the issues, and though I do get some vague sense of it, I also am sure I really don’t yet get the total picture. … It will need some (hard) work on my part: starting at the very starting points, reading and re-reading, and then developing and re-developing/checking my own understanding along the way. Given my pace, obviously, it will take a long, long time.

    So, please, carry on these discussions with others. Though I will be reading the comments and all, AFAIAC, I won’t be actively participating in this issue.

    … You may laugh at it, but still, just to let you know: I am immediately going to hit arXiv.org:quant-ph/9809016v2, and at least a few references therein. Really. In fact, also Scott’s essay for high school students before that. Once again. … You can never tell what small but important thing you had missed earlier; you can never tell what crucial thing—known or unknown to others—you spot only on the n-th reading. Things like that do happen, esp. with the starting material, no matter how “simple” it may be. … I am unusually slow in developing an understanding, but when I do, it usually is solid. So, there. … Thanks again, and bye for now.


  136. Joshua Zelinsky Says:

    @Rahul, possible but unlikely. In this context there’s a coherent evolutionary hypothesis that explains why heavily parasitized collections of species should tend towards having prime periods. The argument would be stronger if we had more examples of species with multiyear lifecycles. Unfortunately, the set of them is small.
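    The hypothesis can be sketched numerically: a cicada with a c-year cycle meets a predator peaking every p years once per lcm(c, p) years, i.e. in a fraction gcd(c, p)/p of its emergences. A toy scoring of candidate cycles (the 2-to-9-year predator range is an illustrative assumption):

    ```python
    from math import gcd

    # Toy model of the parasitized-cicada hypothesis: a cicada with a
    # c-year cycle coincides with a predator on a p-year cycle in a
    # fraction gcd(c, p) / p of its emergences (once per lcm(c, p) years).
    # The range of predator periods (2-9 years) is an assumption made
    # purely for illustration.

    def exposure(c, predator_periods=range(2, 10)):
        """Average fraction of emergences coinciding with a predator peak."""
        periods = list(predator_periods)
        return sum(gcd(c, p) / p for p in periods) / len(periods)

    scores = {c: exposure(c) for c in range(12, 19)}
    best = min(scores, key=scores.get)
    print(scores)
    print("lowest exposure:", best)  # 13 and 17 (both prime) tie for the minimum
    ```

    Under this toy scoring, the primes 13 and 17 beat every composite cycle in the range, because gcd(c, p) = 1 for every shorter predator period.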

  137. Lou Scheffer Says:

    Some recent results seem to show some very odd reasons for the 17- and 13-year locust cycles. Apparently, after one year with lots of locusts, the predator bird population will grow, then decrease, reaching a minimum 17 years later (before that generation of locusts even hatches). This timing is not understood, to say the least.

  138. Vanzetti Says:

    Scott #31

    >>>Right, so the idea would be to leave the computer on earth while you travel at relativistic speeds! Unfortunately, it still doesn’t work, because of the energy problem I mentioned before.<<<

    Won’t it be easier to use suspended animation? Freeze yourself for a billion years while your computer solves the problem, unfreeze, rejoice.

    There you go, arbitrary large speed-up. No infinite energy required. The only problem is the eventual heat-death of the universe.

  139. Sniffnoy Says:

    I don’t think getting to primes is really the relevant part — presumably, if they have discrete quantities, they’ll have primes.

    In the cicada example, discreteness comes from years. This is essentially the same idea that Scott suggested above with the contour integral example — the discreteness comes from the fundamental group of the circle, counting how many times you’ve circled something[0].

    I seem to recall a comment above to the effect of “what if everything’s so fluid they can’t even define a region of space to circle around?”, but I can’t seem to find it right now. And I must admit I don’t have an answer to that other than “that sounds pretty farfetched”.

    But the point is, the fundamental group and other similar things show how discrete things can arise from continuous definitions. And we often study them in terms of these discrete things, because they’re easier to get a handle on[3]; whether aliens would also find it so… no idea.

    [0] OK, so considered as elements of π1(S1), integers can only be added, not multiplied, which doesn’t get you primes. But then you take the endomorphism ring…
    [3] “All math is either combinatorics or linear algebra”, a remark I’ve heard attributed to Karen Smith.

  140. Robert Alicki Says:

    Gil in Comment #67 gave the following perfect formulation of the problem which I call

    Thesis I

    ” .. any robust outcome of experiment in quantum physics (just like in classical physics) can be demonstrated by an efficient algorithm”

    Comment: This is essentially Bohr’s correspondence principle describing the emergence of the classical world for large N, supported by numerous examples based on “environmental dequantization”. The meaning of N is tricky. It is not always the number of constituents (elementary particles, atoms, etc.) but rather the number of excitations or “relevant quantum number” characterizing the dimension of the relevant Hilbert subspace. For example, there is a lot of confusion concerning recent experiments with macroscopic systems at very low temperatures. Even a human body cooled to, say, 10^{-10} K is in a perfect quantum ground state, and one can produce quantum superpositions of this ground state and a single phonon state. However, those states are not “macroscopic quantum states”, as often claimed, because for them N=0,1 and NOT the number of atoms in a body. On the other hand, the state containing on average 100 phonons will surely be a localized semiclassical coherent state and not the quantum eigenstate of the body’s Hamiltonian.

    In my paper, I gave a motivation for the

    Thesis II

    The universal mechanism of information protection is based on state separation by free energy barriers increasing with N. This implies that gates must be irreversible, each gate dissipates at least ~ kT N of energy.

    Comment: In classical physics this relation between stability, irreversibility and energy cost is obvious (analyze e.g. writing on a blackboard or carving letters in a stone). It seems that quantum models like 4D-Kitaev and the like violate Thesis I – some encoded quantumness survives. But even then they do not violate Thesis II and hence QI is not possible. Moreover, those models seem to be overidealized (non-generic) and probably any density of defects can destroy their “quantumness”. Notice that defects are not mere cosmetic flaws (Schönheitsfehler) that can be engineered out, but a consequence of thermodynamics: they decrease the free energy F = E – TS by increasing entropy with a very small energy increase.

    Summarizing, one cannot expect a universal proof of Theses I and II (just as there is no proof of the second law or of the Church-Turing Thesis), but rather a growing body of evidence supported by numerous examples which finally convinces the community.

  141. Hecky Says:

    I’d still like to hear what the argument is for the non-arbitrariness of mathematics. I have yet to encounter a non-arbitrary reason for this view (and in particular, a non-aesthetic one). Characterizing the results of mathematics as “discoveries” is already to beg the question as to their status. Convergence on the same results by humans and other creatures could be explained in terms of similar cognitive capacities yielding similar aesthetic preferences.

    I haven’t read Deutsch’s new book. Does he still believe in a Platonic number realm somewhere in the multiverse lol?

  142. Scott Says:

    Hecky #140: If even extraterrestrials discovering (oops, sorry, inventing) the same mathematical results as us wouldn’t impress you, then how about the following. Ideas that were once considered the weird inventions of mathematicians—from complex numbers, to differential geometry, to Lie groups—repeatedly turned out to be absolutely central to how the laws of physics work. Still “arbitrary,” since maybe the universe has the same arbitrary aesthetic preferences that we do?

    Yeah, I thought so lol.

  143. mkatkov Says:

    Scott #140 It is not necessary. Think of the code of the universe projected onto our brain capacity (e.g. filtered through the set of codes the brain can accept). Given a different brain capacity, the projection would be different. Therefore the whole phenomenological experience would be different, and the concepts would be different. You are implicitly using the anthropic principle here, although in a different place you were opposing it.

  144. Ajit R. Jadhav Says:

    Scott #141:

    “Repeatedly” is not the same as “always.”

    Think what it means physically—i.e., ultimately, in the concrete physical universe—if you have two cakes in a plate and eat three of them so that you are left with a -1 cake in it in the end.

    The point isn’t that the idea of a physically existing -1 cake is a very weird notion, though this much is true. The point is that in physical reality, you cannot apply the operation of eating three cakes when only two of them physically exist—you cannot apply the operator the third time, because there is nothing on which it can apply.

    It isn’t easy or even possible to find the referents in concrete reality for even as simple a notion as a negative number.

    It’s a bogus and fraud-filled idea, one which became especially respectable—even a ruling paradigm—only in the 20th century: since some ideas invented by mathematicians did later find valid application in some physics theory, therefore any mathematical invention will.

    Numerically speaking, counter-examples far outnumber the examples. If at all one has to err, it would be better to err on the other side, aesthetically speaking. That way, the intellectual poverty would be less embarrassing.


  145. Hecky Says:

    Scott #141: It’s an intriguing suggestion, but I’m forced to agree with Ajit #143. Tying the non-arbitrariness of math results to their usefulness for doing physics leaves out the vast majority of the results. We can regard as arbitrary all those theorems that aren’t useful in doing physics, and you would raise no objection? Your claim struck me as being about the non-arbitrariness of math in general, not just those results you would cherry-pick on account of their empirical fruitfulness.

    Let’s just cut to the chase: am I going to get a straight answer out of you on the philosophical (which, it should be made explicit, is NOT to say non-empirical) question (i.e. “exactly why is mathematics non-arbitrary?”)? From reading your blog, I’m well aware of your attitude toward philosophy. It may be the kind of question you simply don’t care to spend much time discussing, and that’s perfectly reasonable. You are a brilliant fellow with limited time in the day and I genuinely do not want to waste any of it with annoying philosophical questions you regard as intractable. I just took your view that “math isn’t arbitrary” to indicate that you had an actual argument for this claim — one that applies to mathematics generally — and I am curious to hear what it is.

    The “convergence argument” doesn’t impress me simply because it fails to explain why mathematical results are non-arbitrarily convergent as opposed to contingently convergent. What is it about the methods used to derive the results that guarantees their convergence across cultures and (we presume) across interstellar species? Peter Freyd in that lecture on anti-philosophy of mathematics you posted admits he has no response to someone who says that ET number theory might contain theorems which contradict our number theory. I think the most you can say to such a person is that this shows that ET number theory differs from ours. Their cognitive condition might differ from ours in such a way that their number theory works for them (for their perceptual apparatus, for their region of the cosmos, etc.) but happens to contradict ours. But to insist that the ET theory is not genuine number theory is to assume that ours is genuine without giving an argument beyond the self-evidence of its theorems for us, which is to say its apparent non-arbitrariness, bringing us back to square one. Why not just admit the self-evidence the theorems have for us could be a contingent function of our cognitive equipment or some other arbitrary quirk? Why resist it? Shifting the measure of convergence from other cognizers to reality itself is a better strategy, but it doesn’t get us all the way there given the theorems it fails to cover. And we still have to make assumptions about the nature of reality, e.g. that it accords with some theorems for reasons that are non-arbitrary, as opposed to contingent.

    Terminology point: I realize you’re making a joke at my expense with the term “invention”, but it’s also a term I would resist for the same reason as “discovery”, i.e. it begs the question at hand (albeit in the other direction). I am happy to call mathematical results “proven” since tying it to proof methods doesn’t beg the question of their status.

  146. Scott Says:

    mkatkov, ajit, Hecky: Let’s clearly separate two issues:

    (1) mathematicians’ choices of what to study, and

    (2) their standards of truth and falsehood.

    Obviously, many people might agree that (1) was arbitrary (or somewhat arbitrary), without also agreeing that (2) was arbitrary. On the other hand, if (2) were arbitrary, then I suppose there wouldn’t be much point in arguing whether (1) was arbitrary!

    I had assumed, before, that we were only talking about (1) — you might say I was trying to give the defenders of arbitrariness a fighting chance! 🙂 Yet Hecky #144 explicitly talks about alien number theory “contradicting” Earth number theory, so I suppose I need to address (2) also, which I’ll do in later comments.

    Before that, though, one important clarification. When I say “mathematicians’ standards of truth and falsehood,” I’m not talking about whether we believe the Axiom of Choice or large-cardinal axioms, or other “set-theoretic exotica.” In the small parts of math affected by those choices, obviously you do get different results depending on which axioms you choose. Or to put it another way: just as there’s more than one group, so we’ve learned that there’s more than one set-theoretic universe. And even in elementary arithmetic, we know from Gödel that no one formal system can suffice to prove all the true statements. But that’s not what I’m talking about here either. I’m only talking about those things mathematicians claim to know. For example, could mathematicians be wrong about the infinitude of the primes, or the Pythagorean Theorem, or 2+2 equaling 4? Could “an equally-valid alien math” address those same questions and reach different answers?

    And please don’t waste anyone’s time by talking about how you can get different answers if you change the question—e.g., in non-Euclidean geometries the Pythagorean Theorem has to be modified, if “2” meant 3 then “2+2” would equal 6, and if my grandmother had balls she’d be my grandfather. If you use anything like that to argue for type-(2) arbitrariness, then you immediately unmask yourself as an ignoramus and forfeit the debate…

  147. Scott Says:

    So, regarding (1): I agree that there’s plenty of arbitrariness in what mathematicians study. The most obvious examples are in recreational math, where people have proved all sorts of theorems about Chess, Go, and Tetris, or about special patterns in base-10 arithmetic or the Gregorian calendar. But even in “serious” math, tens of thousands of papers get written where the reader wonders, “why on earth would anyone study that?,” and in many cases there’s no good answer. Or people may disagree, with some mathematicians finding something “obviously important” and others finding it obviously unimportant. None of this should surprise anyone.

    What is interesting and surprising is that if you look at the enduring ideas in math—to pick the first few off the top of my head, complex numbers, groups, finite fields, determinants, eigenvalues, Gaussians—then they show up so often, in so many seemingly-unrelated contexts, that even if you thought they were “arbitrary” when you first encountered them as a freshman, further study would force you to revise your original opinion.

    If you want to call these sorts of things “arbitrary inventions,” then maybe you should also call North America, oxygen, and penguins “arbitrary inventions”! I mean, complex numbers arise when you look deeply enough into pretty much any conceivable phenomenon with any quantitative aspect whatsoever. And 500 years ago, people didn’t understand that, and today they do. If that wasn’t a “discovery,” then I don’t know what the word “discovery” even means, or in what cases one should use that word.

  148. Scott Says:

    Now, regarding question (2), whether aliens could have a number theory that contradicts ours, yet that “works for them” as well as our number theory “works for us.” The problem, as I see it, is that the people who discuss such things almost always do so completely in the abstract—so that they never need to think through what such an “alternative number theory” would actually mean, yet can still accuse others of being closed-minded and unimaginative. 🙂 Therefore, to sharpen intuitions, let me propose the following questions:

    Could there be an undiscovered integer between 6 and 7? (Carl Sagan asked that in one of his books—maybe one of the many times he was high on pot 🙂 )

    Could there be only finitely many primes, due to a mistake in Euclid’s proof that’s gone unnoticed for 2300 years?

    Could a(b+c)=ab+ac be merely an approximation that fails for extremely large integers—all existing proofs of “distributivity” relying on flawed assumptions?

    The problem should be obvious. Generating questions like this is like writing postmodernist papers. It’s too easy: I can keep doing it for hours, without ever having to think. But life is short, and most scientists and mathematicians understandably want to spend their time on possibilities whose exploration could present a meaty, nontrivial challenge or teach them something new. Therefore, if I want to convince anyone that any of the above questions are worth taking seriously, the burden is on me to give a positive suggestion for how their answers could possibly be yes in any interesting way. Otherwise, in some sense, these questions haven’t earned their claim on anyone’s attention.
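    For concreteness: claims like these can at least be spot-checked by machine, as far as any finite check goes. A minimal sketch (Python), which of course proves nothing in itself; that it proves nothing is exactly the point:

```python
import math
import random

# Spot-check distributivity a(b+c) == ab + ac on very large integers.
# A finite check, not a proof: no number of samples settles the question.
for _ in range(100):
    a, b, c = (random.getrandbits(2048) for _ in range(3))
    assert a * (b + c) == a * b + a * c

# Euclid's construction: for any finite list of primes, (product + 1)
# has a prime factor that lies outside the list.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
p = smallest_prime_factor(math.prod(primes) + 1)
assert p not in primes
print(p)  # 59, since 30031 = 59 * 509
```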

    Incidentally, the same point could be made about the arbitrariness debate (1). Do you think it plausible that aliens would have no concept of prime numbers, but they would have some other, completely different “multiplicative building-block of the positive integers,” the likes of which no human mathematician has ever imagined, but which is just as fundamental as the primes? Great, fascinating! Please tell us more! In some sense, your idea can no more be “disproved” than can Bertrand Russell’s teapot orbiting a distant star—and the “math is arbitrary” folks can always take refuge in that banal observation. However, if you can’t say anything more about your alien-primes, then it seems to me that the speculation has no more claim on people’s limited time than the orbiting teapot does.

    Speaking of limited time, I’d better get back to finishing my Quantum Computing Since Democritus book! 🙂 (Yes, after 7 years, it’s now in the final proofing stage…) I hope I’ve made my views a little bit clearer.

  149. mkatkov Says:

    Scott: you are appealing to a sufficient condition (to describe the reality given to us in perception), and requiring a constructive refutation of the necessary condition. You are RIGHT that it is sufficient to have a sufficient condition for describing our perception for the purpose of action. And you are RIGHT that it is very hard to give a constructive refutation, given the program of perception and the beliefs sitting in our heads. As the opposite example, ETs could have built-in, say, algebraic proof systems (with a non-digital (quantum?) computing device), where you do not necessarily need numbers, and could work in a complex space with a simple action telling whether the system is consistent or not—or, in the quantum case, just measuring (whatever that means) probabilities and acting according to them. Or Sum-of-Squares to find the optimal set of parameters for the action. You do not need digits here.

  150. Ajit R. Jadhav Says:

    Hi Hecky (#144) and Scott (#145, #146, #147):

    First of all, an error correction in what I said (#143). I should have said:

    “It isn’t easy or even always possible to find the referents in concrete reality for even as simple a notion as a negative number.”

    In the correction, I have added the word “always.” There can be a physical context in which negative numbers can be put in 1:1 correspondence with some aspect of physical reality, though it wouldn’t be as basic or low-level a physical context as the one involved in the simple sum operation as applied to positive integers.

    * * *

    Now, coming back to the things being discussed. Allow me to occasionally expand the scope of the discussions just a bit, though I hope that you would see that, ultimately, I strive to provide my positions precisely about the things that are being discussed here.

    About ET etc. To cut a long story short, I think my main point has been this: the moment you say “number,” you already have assumed that there is a consciousness that operates conceptually (and has reached the concept of number).

    However, and this is the major point, not all consciousnesses must deal with reality using our (human beings’) distinctive cognitive mode of dealing with reality, which is: using concepts, or using the conceptual faculty.

    It is conceivable that some ETs could not only become aware of the physically existing quantities, but also effectively deal with them, using some mode other than conceptual. In contrast, the scope of any mathematical concepts (whether “number,” or “size,” or “shape” etc.), and relationships among them, is limited only to human consciousness.

    Done with ET. Now, about (i) mathematicians (LOL!), (ii) what they choose to study, (iii) what their standards of truth and falsehood are—if any, and also, let me add: (iv) what kind of methods they must use to establish the mathematical truth.

    Mathematics is a science of quantitative measurements. It is the science that studies the methods that allow for quantitative measurements to be specified and carried out.

    Now, quantities do exist in the physical reality. They are a given in the perceptions. But concepts/ideas/techniques i.e. methods of mathematics aren’t. In physical reality, two mangoes exist, two oranges exist, two apples exist, but the concept “two” doesn’t exist.

    The concept of two involves a conceptual being’s way of isolating or distinguishing that particular quantity—the size of two—from the size of one and the size of three (and every other size, including that of pi, square-root of 2, etc.) It involves a certain method which a conceptual being must follow or execute (which, here, is: counting), while taking a specifically conceptual view of the relevant, physically existing, quantitative facts. For the concept of “2” the relevant facts are: the similarities of (the size of) “2” with (the sizes of) “1” and “3” etc., and the differences of (the size of) “2” from all other sizes.

    The fact that the referent quantities exist in reality directly leads to the ultimate (or most basic) standard for establishing the validity of mathematical concepts. The standard is this: it must be possible to specify both the referent measurement method (invented by and residing in the consciousness of the mathematician) as well as the referent quantitative aspects of the physical entities (i.e. their quantitative characteristics, attributes, relationships, etc.) which exist in physical reality and which together form the context for that mathematical concept.

    Note, it’s not just the physical existence of quantities (which a non-conceptual ET, too, could grasp and must deal with), but also some specific, conceptual-level, method of measuring those quantities (which are unique to a conceptual consciousness, viz. to human beings, and are invented).

    Mathematical concepts are primarily concepts of methods—i.e. of operations. The objectification of mathematical concepts is one-step indirect: the counting process comes first, and then the integers i.e. objects representing the end-products, differing in sizes, of that process/operation.

    Any mathematical method is ultimately validated in reference to the quantitative facts of reality, but a crucial requirement worth noting is that qua a method of using consciousness, it must have identity. It must refer to a valid context in a valid manner. It must be possible to integrate a new mathematical method or technique with the already existing mathematical knowledge (including the knowledge of how to apply the same).

    Some ancient genius discovered that in physical reality, if a (mentally isolated) group of entities has a size of 2, and another (mentally isolated) group also has a size of 2, and if a third group is mentally made up which encompasses these two groups and nothing else, then the size of the third group is 4; and, also that both “2” and “4” can be quantitatively measured in reference to the unit “1” (thus casting the unit-perspective which is crucial to forming a concept); and then following many such observations, after some more ingenious thought process, he invented the mathematics (arithmetic) of counting, a process or operation with which his consciousness could establish the quantitative i.e. mathematical truth: “2 + 2 = 4”.

    To use his invention, each of us must reproduce the same thought process. It is possible to do so because the mathematical truth does correspond to a quantitative relationship that exists in physical reality. However, note, to use his invention, you must also learn to use the particular method in which he used his consciousness (without really affecting anything else in the external reality apart from itself). You must learn a particular technique of dealing with the physically existing quantities, viz., the counting method, which is a method of measurement, i.e., a mathematical method.

    All (valid) mathematics is a science of such methods of measurements. All such methods directly or indirectly correspond to some or the other physically existing quantitative relationships. All mathematical objects (i.e. concepts) are simply either the correspondents or the end-products, of such methods.

    The reason we think that pre-20th-century mathematical notions are so indispensable that they should be indispensable to anyone (even to an ET—provided he is a conceptual being) is that, in some terms, we have been able not only to delineate to ourselves (or to define) the method of measurement involved in these concepts, but also to validate the physical contexts for the quantitative relationships encapsulated by them.

    When you know the roots of a mathematical concept—both the measurement method it involves/encapsulates and the kind of physical context it presumes—you don’t even think of calling it arbitrary. For example, (so much of) classical mathematics, e.g. that up to the 19th century or so.

    The only time you are at all tempted to call mathematics arbitrary is when you are unable to supply either or both—the quantitative method and the physical context. For example, so much (though not all) of 20th century mathematics.

    Let me give you not just one, not just two, but as many as four broad examples of the 20th century mathematics that is good (!!): (i) the mathematics of the random walk, or the stochastic theory of the Wiener processes or of martingales, etc (though, personally, I am not equally sure about Kolmogorov’s axiomatics of the probability theory); (ii) the deterministic chaos theory; (iii) the catastrophe theory; (iv) the wavelet theory.

    Yes, it is of course true that, like any other original innovation, a (valid) advance in mathematics, too, might find (an uncalled for) resistance, initially. It is only to be expected that some of such resistance or the arguments it uses, would make (an invalid) reference to the need to establish a physical correspondent. It may sometimes so happen that even for a valid mathematics, the inventing mathematician himself is unable to fully supply the same—not at least immediately. It therefore might be a prudent policy to allow a certain margin to any new mathematical theory and its originator—even if it appears arbitrary to you.

    However, to put that sane advice in some context, let your mind pause over the four aforementioned advancements of mathematics that are very specific to the 20th century, and then ask yourself: Really, how much of a grand resistance have these four advancements had to face? And how rapid was the acceptance after the initial resistance? For how many years did the initial resistance last? Can the initial enthusiasts from the same generation be found alive to tell the story? Do they have the T-shirts (or some carefully preserved old journal issues or personal notes) to prove that they “been there, done that”? No? Or yes?

    Now, ask yourself another question: With what ease could the true identity of the method, and the actual physical context, for the other 20th century mathematics be presented by anyone—let alone by the promulgators of such “mathematics” themselves? For how long would we have to wait? Would our great-great-grandchildren get to see something? Or would it be sometime even later, when all the existing and projected increases in the debts caused by the American Social Security and Medicare etc. are paid off by the present-day Americans’ great-great-great-etc-grandchildren?

    It is clear that the techniques “invented” by these other 20th century mathematicians would not be potent enough to give even approximate answers to questions like that. And, the reason for this circumstance squarely lies in the kind of “mathematics” (and “mathematical physics” etc.) they (actually) did.


  151. Ajit R. Jadhav Says:

    @Scott #147:

    Regarding your book: “Quantum Computing Since Democritus,” if it isn’t too late already, I have a suggestion for you. (Or, to anyone writing a popular QC book.)

    By and large, people these days—after that deluge of popular quantum science books—are quite familiar with the quantum weirdities. They are able to appreciate the fact of quantum superposition, and of the state space being exponentially large in size. No need to spend too much time on that part, even though QC popularizers invariably do!

    But there is something that is not at all obvious to the “lay”man—i.e. to virtually anyone outside of the QC community (e.g. someone like me). It is this: What concrete steps does a quantum computer go through in performing a quantum computation, and how do the specifics of these steps differ from those of the usual, classical computer?

    Please note, this question is not about the QM-rooted explanation of the quantum parallelism involved in a quantum computation due to entanglement or superposition. NO! Instead, the question is this:

    If I have to add 2 + 3 = 5 in a conventional computer, what I do is this: I (i) fill the input register “R1” with the value “2”, i.e. make sure that a certain voltage pattern corresponding to the value “2” appears in it, (ii) fill the input register “R2” with the value “3”, i.e. another voltage pattern, (iii) “pull the lever” (or advance the CPU clock) so that these two input values are “sent through” the “add” box (i.e. the two voltage patterns get applied to the circuit portions or gates corresponding to the “add” operation), and (iv) “read” the output register (i.e. find out what kind of voltage pattern has appeared on the output register of the “add” blackbox during the same CPU cycle), etc.
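    The classical sequence just described can be sketched in a few lines (a toy model only; the register names R1/R2 and the 8-bit width are illustrative assumptions, not any real machine’s):

```python
# Toy model of the classical register sequence described above.
WIDTH = 8  # pretend 8-bit registers

def to_pattern(value):
    """'Fill' a register: represent the value as a bit/voltage pattern."""
    return format(value, f"0{WIDTH}b")

def from_pattern(pattern):
    """'Read' a register: decode the bit pattern back into a value."""
    return int(pattern, 2)

# (i), (ii): fill the input registers with "2" and "3"
R1 = to_pattern(2)
R2 = to_pattern(3)

# (iii): "pull the lever" -- send both patterns through the add box
R_out = to_pattern((from_pattern(R1) + from_pattern(R2)) % (1 << WIDTH))

# (iv): read the output register
print(from_pattern(R_out))  # 5
```

    In a quantum computer the analogue of each register would be a quantum state, and the “add box” a unitary transformation, but that contrast is precisely what the question below asks to have spelled out.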

    Now, what different things would happen if it’s a quantum computer?

    Tell me that, even while keeping your discussion in parallel to the above sequence. (Ok. Take some example other than that of addition, if you want. Take integer factoring. But if you do that, make sure to go back and give in full detail the description for the classical computer first, before getting to a similar operation in a quantum computer.)

    Facts such as that the quantum evolution must be unitary do not matter to the layman. (“Hey, can’t you guys just figure out what denominator to use so these different terms all add up precisely to 1? Do you think finding such denominators is a big challenge?”)

    What matters are things like these: How do you “fill in” the input data to a quantum computer? Are these input devices classical or quantum? If classical, then how and when does the quantum part of the operation begin? After the CPU clock advances by a step? Is there at all a CPU clock also in a quantum computer? And, if the filling-in of the input is not classical, does it mean that your “filling in” process is probabilistic in nature? If inputs themselves are probabilistic, how come you guys hope to get answers that are certain (at least in theory)?

    What does a quantum evolution at all mean? Does it mean applying some voltage to a qubit, or shining some laser light on it? Does doing so change the superposed quantum wave before it is measured? Do different kinds of processing (such as “add” versus “subtract”) mean different kind of quantum evolutions?

    Do I have to repeat the same quantum computing step again and again, many times over, just to get a probabilistic output? How many times? If for a large number of times, then how does its net result still differ from a corresponding classical operation?

    Here, for instance, refer to arXiv:quant-ph/0305045v1.

    In Problem 1, reading this paper, I do not get to know how the two input values (respectively, False and True) are to be fed into a quantum computer. How can it be done—just a schematic is expected. (Just the way we didn’t talk about transistors and bias vs signal voltages and flip-flop circuits, but went straight to registers, voltage patterns, circuits and the output voltage patterns produced after advancing the CPU clock.)

    And another question (in the context of the same paper): It seems I either get the outcome “2” or the outcome “4.” Ok, so what’s the big deal? I mean, if you are going to feed the XOR box with a pair “P1” which is such that the XOR box evaluates to “False,” then it means that the Quantum box gave the output “2.” Otherwise, there is a pair “P2” which gives the answer “4.” Alright. But does it mean I have to control feeding in “P1” vs “P2”? How do I accomplish it? Or is the whole game about just figuring out, after the fact, whether it was “P1” or “P2” to begin with, while touching the pair just once? If so, i.e. if the input pair is unknown to me, then who prepares that pair for me? How? Any relation to any real-world problem here?

    So, see, (even) in that (very well-written) paper, a layman still doesn’t come away understanding anything real about a QC, because he still doesn’t know such things as: (i) what problem is to be solved, (ii) how it is solved on a conventional computer, right from input to output, (iii) what all the possible input values can be for the given example, (iv) which specific of these values the input took in a given example computation, (v) how this specific value was fed into the quantum computer, and how such feeding in differed from the conventional computer, (vi) what the range of all possible output values was, (vii) which particular output value was thrown up by the quantum computer, (viii) how this output value was read from the quantum computer. And only after covering all these (and similar) questions, these two: (ix) how does the progression from the input value to the output value happen in a quantum computer, and (x) how does it differ from a classical computer?

    Writing for the layman is difficult because so many things that are simply a part of the culture of a given learned audience are not even heard of by him, let alone being a part of his well-integrated context. Here, I wanted to emphasize: the general nature of the quantum weirdities is known to the “lay” audience, and the exponential increase in the state space is easy for him to see (it needs to be stated, but need not be dealt with in too much detail); but the specifics of the concrete steps of a quantum computer vis-a-vis a classical computer are where the game lies, as far as he is concerned.

    Best wishes for your writing project,


  152. John Sidles Says:

    Scott says Quantum Computing Since Democritus is now in the final proofing stage

    This is terrific news … please let me say that I sincerely admire this effort, and will definitely instruct my relatives regarding this book’s top rank for holiday/birthday giving.

    Only time will tell whether this text will take its place beside scientific works like Phlogiston since Becher, The Luminiferous Aether since Fermat, The Philosopher’s Stone since Zosimos, Absolute Space and Time since Newton, Consistency & Completeness since Hilbert, The Analytic S-Matrix since Chew, and even Fusion Power since Artsimovich, eh? A sobering thought! 🙂

  153. Gil Says:

    1) Here is an amusing thought experiment: suppose we make contact with an advanced extraterrestrial civilization, which is precisely as knowledgeable as us regarding mathematics and computational complexity. Is it possible that these extraterrestrials will have precisely the opposite interpretation to ours? Namely, that they will regard all our knowledge as supporting the belief that an efficient algorithm for 3-SAT is on the way to being discovered?

    2) What is the simplest way, Scott, to see that BosonSampling belongs to QuantumSampling? Namely, to simulate the BosonSampling machine on a quantum computer?

  154. Scott Says:

    Gil #152:

    1) Interesting. Since you stipulated that the aliens have precisely the same knowledge as us, what part of our current knowledge do you think they could point to, in arguing that a fast 3SAT algorithm is on the way to being discovered?

    2) It’s not hard to give a direct simulation of a bosonic system on a qubit-based QC — indeed, the basic encoding was described by Seth Lloyd in 1996, and even by Feynman in the early 80s. For details, see Theorem 3.12 in my and Alex’s paper.
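    For readers who want to see what that simulation is computing: in the Aaronson–Arkhipov setup, each BosonSampling output amplitude is (up to normalization) the permanent of an n×n submatrix of the interferometer’s unitary. A minimal sketch of the permanent via Ryser’s inclusion-exclusion formula (this illustrates the quantity involved, not the qubit encoding Scott cites):

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula, in O(2^n * n^2) time.
    The #P-hardness of this quantity is what underlies the hardness of
    classically simulating BosonSampling."""
    n = len(A)
    total = 0
    # Inclusion-exclusion over nonempty subsets of columns
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity check on a 2x2 matrix: perm([[a,b],[c,d]]) = ad + bc
print(permanent([[1, 2], [3, 4]]))  # 10
```

    Note the contrast with the determinant, which differs only in signs yet is computable in polynomial time.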

  155. John Sidles Says:

    LOL … I have taken the liberty of lightly amending Gil’s #152:

    David Hilbert asks (in the year 1900): “Suppose we make contact with an advanced extraterrestrial civilization, which is comparably knowledgeable to us regarding basic mathematics, and (in particular) in regard to ~~computational complexity~~ consistency and completeness. Is it possible that these extraterrestrials will have precisely the opposite interpretation to ours?

    Answer: yes.

  156. Jon Lennox Says:

    If folks haven’t read Ted Chiang’s Nebula award winning science fiction short story “Story of Your Life”, it might be an interesting counterpoint to this conversation. First contact with aliens for whom F=ma is a weird, baffling approach to mechanics, but the principle of least action is obvious and intuitive.

  157. Gil Says:

    Scott, thanks for #153 (2). As for (1), the question is whether the misguided aliens could have had the same development as ours, with the opposite interpretation, as follows:

    T = the current aliens’ time.

    T-80 The digital computer is anticipated by Turing and others; various algorithmic tasks are proposed

    T-60 Digital computers are built, and from T-60 onward gradually improved.

    T-60 to T Algorithms for matching, sorting, FFT, linear programming, primality, etc. are gradually discovered; algorithms for integer programming, TSP, scheduling, factoring, and many other problems are not found, but the aliens are very hopeful.

    T-40 Cook, Levin and Karp make their revolutionary discoveries. The aliens’ interpretation is: “We thought that we needed MANY algorithms, but now we KNOW that we need only ONE universal algorithm.”
    Based on the NP formulation, the task of finding this one algorithm is called “The creativity gap challenge,” or briefly CGC (the aliens can pronounce any sequence of consonants with no difficulty).

    T-30 While most researchers try to find algorithms for NP complete problems some propose to create physical devices that go beyond Turing machines for that purpose.

    T-20 Hastad discovers his sharp reduction from 3-SAT to approximating 3-SAT. The aliens’ cheerful interpretation: “We thought that we needed 3-SAT on the nose, but now we KNOW that we need only an approximation! We are epsilon away!”

    T-10 Khot’s unique games conjecture is considered a useful intermediate goal on the way to finding the ultimate algorithm. There is no agreement on how useful toward the universal algorithm it is.

  158. Jay Says:

    Scott #147

    >Could a(b+c)=ab+ac be merely an approximation that fails for extremely large integers—all existing proofs of “distributivity” relying on flawed assumptions?

    It seems hard to make sense of such an idea, but Greg Egan actually gave it a good try in the following story:

    Luminous — A pair of researchers find a defect in mathematics, where an assertion (X) leads to a contradictory assertion (not X) after a long but finite series of steps. Using a powerful virtual computer made of light beams (Luminous), they are able to map and shape the boundary between the near-side and far-side mathematics, leading to a showdown with real consequences in the physical universe.


    I like to think of the universe he describes in this story as the universe a formalist should expect.

  159. Gil Says:

    While prime numbers are central objects in mathematics, it looks as if they were ignored and forgotten for long periods of time. (See here: http://mathoverflow.net/questions/106848/at-what-times-were-people-interested-in-prime-numbers .) Imagine our surprise had we made contact with an advanced extraterrestrial civilization utterly uninterested in prime numbers, and yet fluent at chess.
