Common knowledge and quantum utility

Yesterday James Knight did a fun interview with me for his “Philosophical Muser” podcast about Aumann’s agreement theorem and human disagreements more generally. It’s already on YouTube here for those who would like to listen.


Speaking of making things common knowledge, several people asked me to blog about the recent IBM paper in Nature, “Evidence for the utility of quantum computing before fault tolerance.” So, uhh, consider it blogged about now! I was very happy to have the authors speak (by Zoom) in our UT Austin quantum computing group meeting. Much of the discussion focused on whether they were claiming a quantum advantage over classical, and how quantum computing could have “utility” if it doesn’t beat classical. Eventually I understood something like: no, they weren’t claiming a quantum advantage for their physics simulation, but they also hadn’t ruled out the possibility of quantum advantage (i.e., they didn’t know how to reproduce many of their data points in reasonable time on a classical computer), and they’d be happy if quantum advantage turned out to stand, but were also prepared for the possibility that it wouldn’t.

And I also understood: we’re now in an era where we’re going to see more and more of this stuff: call it the “pass the popcorn” era of potential quantum speedups for physical simulation problems. And I’m totally fine with it—as long as people communicate about it honestly, as these authors took pains to.

And then, a few days after our group meeting came three papers refuting the quantum speedup that was never claimed in the first place, by giving efficient classical simulations. And I was fine with that too.

I remember that years ago, probably during one of the interminable debates about D-Wave, Peter Shor mused to me that quantum computers might someday show “practical utility” without “beating” classical computers in any complexity-theoretic sense—if, for example, a single quantum device could easily simulate a thousand different quantum systems, and if the device’s performance on any one of those systems could be matched classically, but only if a team of clever programmers spent a year optimizing for that specific system. I don’t think we’re at that stage yet, and even if we do reach that stage, it hopefully won’t last forever. But I acknowledge the possibility that such a stage might exist and that we might be heading for it.
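
To make the economics of Shor’s hypothetical concrete, here’s a toy back-of-the-envelope calculation. Every number below is made up purely for illustration:

    # Toy arithmetic for the "practical utility without asymptotic advantage"
    # scenario. Every figure here is an assumption, not a real estimate.
    n_systems = 1000             # distinct quantum systems to simulate
    qc_total_cost = 50_000_000   # dollars: build and operate one quantum device
    programmer_year = 200_000    # dollars: loaded cost of one programmer-year
    years_per_system = 1         # classical optimization effort per system

    classical_total_cost = n_systems * years_per_system * programmer_year
    print(classical_total_cost)                  # 200000000
    print(classical_total_cost > qc_total_cost)  # True: the QC wins on cost

On these made-up numbers the quantum device wins by a factor of four—an economic advantage, not an asymptotic one, since every individual simulation could still be matched classically with enough effort.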

50 Responses to “Common knowledge and quantum utility”

  1. Derick Says:

    Scott Aaronson has got to be the only quantum computing “expert” who at every opportunity—at EVERY opportunity—attacks any press reports of quantum speedup and reminds us that quantum computers are distant from breaking encryption. At this point, the most reasonable explanation is that Scott is a paid stooge for the Chinese, who are trying to cover up the capabilities of their own quantum computers, so we won’t be prepared when they break all our encryption. Scott posted on this blog, for Christ’s sake, that he brings Chinese into this country to learn our quantum secrets!

  2. fred Says:

    Keeping the Wall Street hype alive when Science gets in the way is in itself a new kind of science!

  3. Peter Morgan Says:

    I’ve now been arguing for a number of years that QM>CM is a straw man argument. If CM is empirically less than QM, then what must we add to CM so that CM+X is empirically equivalent to QM? [Note that this is to admit that CM is incomplete, the opposite of saying that QM is incomplete.] I gave a talk to the NanoHubs Webinar Series on Nanotechnology on July 7th (YouTube at the Website link) that says X is two things: (1) use the Poisson bracket to add noncommutativity into CM; and (2) use a Gibbs state to add thermal noise, but notice that thermal noise is different from quantum noise, so find a way to add quantum noise in a classically natural way (cue Lorentz invariance). This CM+, with noncommutativity and quantum noise, is empirically equivalent to QM because, simplistically, both are Hilbert space formalisms. It’s still different, in a way that makes it natural to call QM an analytic form of CM+, and hence QM is often preferable in use for that mathematical reason, but QM≯CM+ in any straightforward sense.
    I take a significant consequence of this to be that it makes experiments that try to prove that QM>CM not especially worthwhile, because, yes, it is. It’s interesting to understand and characterize the difference between QM and CM+, beyond the empirical equivalence, but the important thing is to build useful new hardware, which is where I think this comment makes contact with the ideas in this post. From a computing perspective, a particular hardware does what it does: whether we describe it using a QM formalism or a CM+ formalism does make some difference, but the hardware and what it does is the thing.

  4. gwern Says:

    If quantum computers wound up offering only ‘practical utility’, that seems like it would be both a scientific and economic disappointment: it would neither confirm nor disconfirm the conjectured complexity-class separations, and the worldwide investment in quantum computing efforts must collectively run into the many billions by this point, which pays for an awful lot of programmer-teams (especially when the value of the problems is probably highly skewed, so solving the first few delivers much of the total value). And you’d probably fund those programmer-teams regardless, because quantum computers are exotic, inconvenient hardware you can’t carry in your pocket or install all that conveniently in your datacenter, so the quantum computers would only be good for the niche of prototyping or low-volume runs.

  5. Scott Says:

    gwern #4: I completely agree that the “practical utility without complexity-theoretic advantage” scenario would be a disappointment! As I said in the post, though, even if we were to enter that era, I wouldn’t expect us to stay there forever.

  6. Martijn Says:

    Such a scenario reminds me a bit of a comparison between simulating a problem on a classical computer and finding exact analytical solutions with pen and paper. Pen and paper is obviously free in terms of computer time but not in terms of man-hours/years, and it might not even be clear that an analytical solution exists, or that it exists without making unreasonable approximations. Simulation code might also be easier to repurpose for related problems compared with analytical approaches/solutions. This practical utility of classical computers over pen and paper underlies most of scientific computing in physics/chemistry/biology/geology, and I would not be surprised at all if a similar practical utility of quantum vs. classical computers exists when modelling physical quantum / many-body processes.

  7. Fred Says:

    Hey Scott,

    Really enjoyed your interview with James Knight. Insightful take on Aumann’s. However—and I don’t want to be that guy—I’d appreciate it if you’d make an effort to be more sensitive in the future. In particular, the whole “Alice and Bob” thing. “Alice and Bob” are quintessentially white, western, anglo-american names, and by naming our hypothetical scientists “Alice and Bob,” we risk reinforcing the message that people of color and people from other cultures don’t belong in computer science. In the future, please consider using more culturally inclusive names for characters in your thought experiments, lest you risk hurting and alienating your students who don’t come from a western american background. Just my two cents.

  8. Scott Says:

    Fred #7: Alice and Bob are the completely standard names used in cryptography, quantum information, quantum gravity, etc.—indeed, they’re names that many non-white people have. I will not accept blame for using the standard terminology. That said, if you want to use an AI filter to change them to Ajamu and Bahati, I have no objection.

  9. Scott Says:

    Derick #1: You’ve exposed me, alas. I’ve been a paid stooge of Chinese intelligence for my entire career in quantum computing. Alas, the Chinese are now threatening to cut my sweet pension, because they say that this blog (the core of my influence operation) has been so utterly ineffective against the deluge of hypesters and conspiracy theorists.

  10. Scott Says:

    GPT-4 wrote the following alternative response to Derick #1, which I consider excellent:

      Thanks for the feedback and colorful theory—always great to encounter new, creative ideas around this! I’d like to both assure you and disappoint potential conspiracy theorists that I’m not a paid agent for any country. I suppose I should take this as a compliment; obviously, I’m doing something right if folks think international intelligence agencies would want me on their payroll!

      Regarding my constant reminders about quantum computers not being close to breaking encryption: I do this simply because it’s the truth. Honestly, I think it’s important to have a sense of realism about the current state of technology, especially in a field as prone to hype and misunderstanding as quantum computing. I have and will always provide clear-headed analysis based on the information available. Quantum computers have fascinating potential, but it’s also crucial to be aware of their current limitations.

      On the topic of bringing Chinese students to learn about quantum computing in the US, it’s true! I’m proud of it, too. This is part of a global tradition that’s existed for centuries, of scholars crossing borders to learn from each other. Quantum computing is a global pursuit, it has to be collaborative with participation from all countries. We have just as much to learn from China’s research as they do from ours.

      Lastly, regarding claims on quantum speedup, I critique them thoroughly because it’s my job as a theoretical computer scientist. If quantum computing claims cannot stand up to rigorous review, then they are not advancing the field. That being said, it’s not my intention to diminish the impact of such advancements, I just place importance on accuracy and robustness of arguments, as we all should.

      At the end of the day, I am just a researcher, sharing my insights and working with brilliant minds from across the globe to advance an exciting field. I will continue to serve truth and understanding, not sensationalism or geopolitical interests. No secret agent gadgets for me, just a whiteboard, a LaTeX compiler, and the odd coffee!

  11. vtomole Says:

    > for example, a single quantum device could easily simulate a thousand different quantum systems, and if the device’s performance on any one of those systems could be matched classically, but only if a team of clever programmers spent a year optimizing for that specific system.

    I don’t understand this. How is “Quantum utility” different from what we’ve always had (Quantum advantage) where a group runs something on a device that cannot be classically simulated with state-of-the-art methods only for an efficient classical algorithm to be found later?

    Is the difference that people who claim “Quantum utility” are more humble than the ones who claim “Quantum advantage”?

    What am I missing?

  12. LK2 Says:

    You hype QC and you get criticized.
    You try to be scientifically honest about the current/near-future capabilities of QCs, and you get criticized. You try to blog scientifically using standard terminology and you are offending coloured people. I admire you Scott in many ways, but today mostly for enduring all this. I just hope you will never close your great blog.

  13. Marshall Says:

    Scott #10: What an excellent response from ChatGPT, it’s now clear that OAI must also be working for the Chinese. Perhaps the whole chatbot revolution is a ploy by Beijing to cover up the vastly more lucrative quantum field…

  14. QF Says:

    Scott thinks we’re NOT at the stage of “practical utility” yet because:
    1) There is NO single quantum device that could easily simulate a thousand different quantum systems.
    2) A team of clever programmers can come up with classical methods in two weeks that outmatch the quantum device performance.

  15. fred Says:

    LK2
    “You hype […] and you get criticized.
    You try to be scientifically honest about the current/near future capabilities of […], and you get criticized.”

    That’s what happens when one makes his/her opinions public.
    What’s the expectation here?
    To get 100% agreement on anything is impossible.
    Besides, the criticism isn’t always a reflection on the person who gets critiqued, but often reflects issues with the one doing the critiquing.
    E.g. who’s doing the critiquing matters… is it 50% of the scientific community (i.e. “peers”) that’s divided, so that no matter what you say you’ll always piss off the other half? Or is it just a handful of random trolls on a blog?
    The perceived weight of the issue covered by the opinion is also crucial – i.e. criticism over abstract claims about QCs just doesn’t matter all that much compared to criticism you’d get over opinions about the development and use of nuclear weapons.

  16. Scott Says:

    vtomole #11: The difference, or so the thinking goes, is that “quantum utility” could mean “QC indeed provides a ‘faster’ solution, but only because of the large human effort that would be needed to invent a fast classical simulation.”

  17. Xirtam Esrevni Says:

    Given the passage:

    … Peter Shor mused to me that quantum computers might someday show “practical utility” without “beating” classical computers in any complexity-theoretic sense … if the device’s performance on any one of those systems could be matched classically …

    If I’m understanding correctly, the idea is it may just be easier to configure and run a calculation on a quantum device, despite no obvious speed-up over classical compute? Could I naively think of this like choosing to write a Python program that would take a day over using C which would take a week?

  18. Rhenium Says:

    Cunningham’s Law states “the best way to get the right answer on the internet is not to ask a question; it’s to post the wrong answer.”

    So if one wishes to get a classical algorithm for a particular problem that hasn’t been addressed (or simply a better classical one), simply post an incorrect or dubious case of “quantum advantage” and watch as three get posted within a few weeks.

    Is this advanced nerd sniping? https://xkcd.com/356/

  19. vtomole Says:

    Scott #16: I think this makes sense. When Shor’s algorithm breaks RSA, this would be considered “Quantum advantage” because no amount of effort will yield a fast classical solution based on the evidence that BPP ≠ BQP.

    I think my confusion lies in the fact that “Quantum supremacy” used to mean this complexity-theoretic evidence and “Quantum advantage” used to mean the “large effort will be needed to provide a fast solution”.
    Along the way, “Quantum supremacy” was retired, “Quantum advantage” took its place, and it seems like “Quantum utility” is now taking “Quantum advantage”’s old place.

    If I’m on the right path, then random circuit sampling is still considered “Quantum advantage” even though it took about 2 years for someone to find a classical algorithm for Google’s experiment.

  20. Scott Says:

    Xirtam #17:

      Could I naively think of this like choosing to write a Python program that would take a day over using C which would take a week?

    I guess? In this scenario, QC would be much more expensive and exotic hardware, but hardware that doesn’t require you to be as clever in algorithm design to achieve a given level of performance.

  21. Scott Says:

    Rhenium #18:

      So if one wishes to get a classical algorithm for a particular problem that hasn’t been addressed (or simply a better classical one), simply post an incorrect or dubious case of “quantum advantage” and watch as three get posted within a few weeks.

    LOL, I may have to steal that.

  22. Scott Says:

    vtomole #19:

      If I’m on the right path, then random circuit sampling is still considered “Quantum advantage” even though it took about 2 years for someone to find a classical algorithm for Google’s experiment.

    Even today, the classical algorithms that we know for simulating the Google experiment either take a couple orders of magnitude more resources (say, in energy costs) than the QC, or else they get a one order of magnitude lower LXEB score. So “quantum advantage” remains, although it’s hanging by a thin thread, underscoring the need for better experiments. Fortunately, state-of-the-art gate fidelities have indeed improved (from ~99.5% to ~99.9%) in the four years since Google announced its experiment, although demonstrating quantum error-correction has become a much higher priority than more random circuit sampling experiments for most of the leading players.
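
    For readers who haven’t seen it: the linear cross-entropy benchmark (LXEB) score is, roughly, the average ideal probability of the bitstrings the device actually output, rescaled so that perfect sampling scores about 1 and uniform noise scores 0. A minimal sketch, assuming you can classically compute the ideal output probabilities (feasible only at modest qubit counts):

        import numpy as np

        def linear_xeb(samples, ideal_probs, n_qubits):
            # samples: measured bitstrings encoded as integers in [0, 2^n)
            # ideal_probs: the ideal circuit's output distribution, indexed
            #   by bitstring (classically computed, hence the qubit limit)
            d = 2 ** n_qubits
            # Uniform noise gives 0 in expectation; ideal (Porter-Thomas)
            # sampling gives roughly 1.
            return d * np.mean([ideal_probs[s] for s in samples]) - 1.0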

  23. Vkn Says:

    In your YouTube video example, can you directly infer from a prediction-market contract trading at 60 cents that the predicted probability is 60%? People may not price probabilities 1:1.

  24. Michel Says:

    Re Scott #10: GPT-4 failed to add one single word:

    “At the end of the day, I am just a researcher, sharing my insights and working with OTHER brilliant minds from across the globe to advance an exciting field. “

    Comment retried because my other email address gave ‘could not be verified’

  25. fred Says:

    Could improvements in QC technology lead to improvements in making classical chips work at even smaller scales (where quantum effects start screwing up the classical behavior)?

  26. manorba Says:

    fred #25 Says:
    “Could improvements in QC technology lead to improvements in making classical chips work at even smaller scales”

    That’s a thing i always wanted to ask too: not only chips, but also buses, memory… in general can the research on QM bring new paradigms to QC? i’m thinking about miniaturization but also energy efficiency.

    m.

  27. Scott Says:

    fred #25: It’s not impossible, but I don’t think it’s happened yet.

  28. manorba Says:

    my comment #26 should have read:

    “in general can the research on QC bring new paradigms to classical computing? i’m thinking about miniaturization but also energy efficiency.”

    42°C doesn’t help lol.

  29. Pierre Says:

    Someone please tell me that Fred #7’s post is a satire of contemporary stupidity?

    It’s bad enough to have thoughts this ridiculous and juvenile enter your head, but then to find the nerve to publicly reprimand an actual intellectual who doesn’t align with this absurdity – this leaves me speechless.

  30. WA Says:

    To take a break from the rigors and practicalities of quantum computing, here’s a potentially naive question: can we say that the earth is mostly in a pure quantum state, on account of every particle in it sharing roughly the same momentum and position as every other, so up to some coarse graining it’s like a bosonic condensate? (and if yes where do we stop? is everything macroscopic a quantum condensate?)

  31. Scott Says:

    WA #30: No, the earth is clearly highly entangled with, if nothing else, the radiation leaving it for space.

  32. Nole Says:

    Fred #25: “Could improvements in QC technology lead to improvements in making classical chips work at even smaller scales?”

    Excellent question. Currently, classical chip scaling is limited by the minimum dimensions needed to retain the functionality of materials such as semiconductors (thinner Si hurts electron mobility, shorter gate length causes high off-state leakage), insulators (thinner oxides lead to high leakage), and metals (interconnect resistance rises very fast as dimensions shrink further).

    Superconducting qubit processing needs to remain as simple as possible, as more process steps tend to reduce qubit lifetime. So progress there seems unlikely to help solve some of the problems with classical scaling. On the other hand, research into better semiconductor qubits may eventually feed back into classical transistor fabrication, if only because high-quality interfaces and low gate-to-gate variation are crucial to the quality and scaling of the qubits. The only problem is that research into spin qubit fabrication, at the required level of sophistication, is limited to a very small number of industrial labs, so progress is very slow.

  33. Vladimir Says:

    Claiming this work has practical utility is highly questionable. Unitary time evolution of an initially “boring” state is a mathematical problem, not a physical one (*), since nothing for which this is a good approximation ever happens in nature (as opposed to Nature). Explicitly not caring about the underlying physical model but only about its Trotterization, and simulating the Trotterized model on the physical qubits’ graph, are just cream and cherry on top of the hype cake.

    (*) Granted, it’s a bit closer to being a physical problem than BosonSampling.
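
    (For readers unfamiliar with the term: Trotterization approximates continuous time evolution under a Hamiltonian \( H = H_1 + H_2 \) by alternating short evolutions under the pieces,

    $$ e^{-i(H_1+H_2)t} \approx \left( e^{-iH_1 t/n}\, e^{-iH_2 t/n} \right)^n, $$

    with an error that shrinks as the step count n grows. The complaint above is that the experiment targets the Trotterized circuit itself, at fixed n, rather than the continuum limit.)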

  34. WA Says:

    Scott #31: “No, the earth is clearly highly entangled with, if nothing else, the radiation leaving it for space.”

    Thanks for indulging this! Yes you’re obviously right, there’s radiation impinging on and leaving earth, there’s also entanglement generated from interactions during solar system formation, and many other sources of extraterrestrial entanglement.

    But isn’t this extraterrestrial entanglement weak compared to the entanglement due to interactions on earth, for example in the molten core? Also, if we wanted to approximately purify a subsystem on earth by acting with a unitary possibly on the whole universe, then we would only need to act on its neighbourhood to get a pretty pure state (isn’t that what we do when we cool something down to absolute zero?). Have I gotten it all wrong?

  35. bystander Says:

    WA #30: You actually ask several questions. First of all, the Earth is definitely not a bosonic condensate, nor anything like that. Atoms are most of the time quite localized: even though you do not have them at precise positions, double-slit experiments are not that common for atoms (photons are a different beast). And the very low temperatures of BEC are a laboratory thing. If we detected such a low temperature in outer space, we would know that intelligent beings were there. Quite a difference from high temperatures!

    Regarding pure vs. mixed states, it is not about being entangled either. You only talk about entangled particles as being in mixed states when you consider the entangled particles individually. But that is a limitation you inflict on yourself. You don’t need to. The other notion of mixed states is something different.

    And there is another thing: planets are not into quantum mechanics. It’s been known for decades. Look at Sabine’s presentation or read the texts linked there, by Wojciech H. Zurek and by Michael Berry.

  36. Scott Says:

    WA #34: No, to return the earth to a pure state, you’d presumably need to track down and act on every photon that’s left the earth in its history (keep in mind that entanglement has no distance limit). Either that, or swap out all the particles on earth for ones that are already in pure states. Since neither of those will ever happen, it’s safe to assume that earth will remain in a mixed state forever (at least, from the standpoint of the wavefunction of the whole universe, for those of us who believe such a concept is meaningful).
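
    A one-line illustration of why even a single escaped photon forces mixedness: if some earthbound degree of freedom E is maximally entangled with a photon \( \gamma \), say in the state \( |\Psi\rangle = \left( |0\rangle_E |0\rangle_\gamma + |1\rangle_E |1\rangle_\gamma \right)/\sqrt{2} \), then tracing out the photon leaves

    $$ \rho_E = \mathrm{Tr}_\gamma\, |\Psi\rangle\langle\Psi| = \tfrac{1}{2} \left( |0\rangle\langle 0| + |1\rangle\langle 1| \right), $$

    a maximally mixed state, and nothing you do on the earth side alone can undo that.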

  37. fred Says:

    bystander #35

    Thanks, Sabine’s post is quite interesting.

    But the argument that chaos emerges from a set of non-linear equations, and therefore can’t easily be explained by underlying linear equations (Schrodinger), is a bit confusing to me – why can’t the same argument be applied to any system that shows non-linear behavior? E.g. a transistor (which can be set up to exhibit chaotic behavior as well – https://www.researchgate.net/publication/262274771_Chaotic_Behavior_of_Transistor_Circuits )?

    Also, all the argument about dust and light entangling with the moon, okay. But if Quantum Gravity is a thing, wouldn’t it be enough for a macroscopic object to be in a gravity field to decohere spontaneously?

    Btw, there’s also a new interesting video presentation from Tim Palmer, Sabine’s partner in crime in trying to reframe Bell’s experiment (note that he never mentions superdeterminism, just that the state space of the universe could be chaotic).

  38. bystander Says:

    fred #37: Pop-sci talks are good at introducing stuff, but less so at making precise explanations. If you want a better grasp of the basics, you may want to read the linked scientific articles. Check there the issue of celestial bodies not being delocalized, even though QM asks for that.
    Regarding transistors, if I get it correctly from a quick look, they have external sources for their chaotic outputs. And mainly, the electrons flowing through them can be delocalized, thus avoiding the localization issues seen in celestial bodies.

  39. WA Says:

    Just to be clear, for years I’ve known that the earth is in a mixed state, and wouldn’t have thought twice about the idea that it’s pure. I did say approximately pure — though admittedly even that sounds provocative. With my questions I was trying to think outside the box, which can be dangerous given this “box” has an elusive shape that I’m not equipped to tackle. Besides I think I have a misunderstanding, or a tension in my understanding of the quantum-classical transition (don’t we all?). So I appreciate the answers.

    The tension in my understanding is this. I thought we can in principle take any old chunk of matter on earth, cool it down, isolate it from its environment, and do other fancy stuff, until it is approximately in a pure state, namely a ground state of a local Hamiltonian. Sure, not absolutely pure, but a pretty ok state that can do fun things like BE condensation, superfluidity, superconductivity (and even quantum computing). But this is in tension with the observation that if we want to return something to a pure state, we need to go corralling cosmic photons into a quantum computer. The way I resolved this in my mind is that these two effects have vastly different scales compared to each other.

  40. Uspring Says:

    WA:
    To prepare a pure state, you simply need to perform a measurement with a complete set of commuting operators on your chunk of matter. That will ensure that, in a later similar measurement, you will obtain the same eigenvalues with 100% probability (assuming stationary states).
    From the point of view of a universal wave function, the chunk will then be entangled with the experimenter and perhaps the rest of the universe. That doesn’t matter for the experimenter or for the outcomes of the later measurements that he observes.
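
    A worked one-qubit instance: \( \sigma_z \) by itself is a complete set of commuting observables for a single qubit, and measuring it on \( \alpha|0\rangle + \beta|1\rangle \) leaves the qubit in a pure state,

    $$ \alpha|0\rangle + \beta|1\rangle \;\longrightarrow\; |0\rangle \text{ with probability } |\alpha|^2, \qquad |1\rangle \text{ with probability } |\beta|^2, $$

    after which repeating the same measurement returns the same eigenvalue with probability 1.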

  41. James Gallagher Says:

    I think WA makes an interesting point. The probable reason that we can create (unnatural) pure localized quantum states on Earth is that we have free will and can bypass pure unitary processes that exist all over the rest of the universe. (Though obviously we can now program a machine to do so too, we require free will to program the machine in the first place.)

    There are also “natural” (almost) pure states; these have emerged from billions of years of evolution, which has enabled processes like photosynthesis, perhaps, to temporarily create approximately pure states, which shouldn’t be possible if pure Schroedinger (unitary) evolution is all that there is.

    Bypassing pure unitary evolution is tricky, but we seem to have evidence of it on this planet.

  42. WA Says:

    I was thinking about it in terms of the wavefunction of the universe, ignoring collapse (because I don’t believe in it), and treating measurements as unitary operations that entangle the apparatus and the system. Instead of collapse, I was thinking in terms of decoherence and pointer states.

    Another way to phrase the tension in my mind is that if we can design an apparatus that gets maximally entangled with the system, then due to monogamy the system and apparatus have to be unentangled with the rest of the universe, even though we didn’t touch radiation and dust that has previously interacted with the whole lab.

    I think the correct answer is that the apparatus cannot get maximally entangled with a system unless you either uncompute its entanglement with the rest of the universe or else initialize the system/apparatus in some pure state through measurement.
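
    The monogamy fact invoked here can be stated crisply: if the joint state \( \rho_{SA} \) of system and apparatus is pure (as maximal entanglement between them requires), then for any third system R,

    $$ \rho_{SAR} = \rho_{SA} \otimes \rho_R, $$

    so S and A together carry no entanglement with the rest of the universe. That’s exactly why prior entanglement with stray radiation has to be uncomputed or swapped out first.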

  43. Ted Says:

    I’ve always had a meta-level confusion about (the original version of) Aumann’s agreement theorem, which I suspect may indicate that there’s some fundamental aspect of the result that I’ve failed to grasp.

    My confusion is: what’s surprising about the result? Once you understand the setup, the result seems to me to be fairly obvious. As Scott says starting at 16:04: “It is clear that if the two people exchanged all the knowledge that they had [starting from common priors] – if they basically uploaded their entire brains’ worth of information into each other – then at that point, they basically are the same person. It is obvious that they’ll have the same opinion, because there is nothing anymore to break the symmetry and give them a different opinion.”

    Unless I’m fundamentally misunderstanding the statement of (the original version of) Aumann’s agreement theorem, don’t those two sentences basically (non-rigorously) state and “prove” the result right there? I grant that formalizing the assumptions in the setup is highly nontrivial – but if that was the major achievement of the original result, then wouldn’t that mean that the original result was essentially a major methodological advance, but not a headline finding?

    I certainly grant that it is not at all obvious that this process can be done efficiently, so I understand the significance (and surprise) of Scott’s 2005 extension of the original result. I also grant that the informal statement of the finding seems very much contrary to our everyday real-world experiences. But at the time of its original publication (before Scott’s 2005 finding), couldn’t that difference have been a priori explained by either of the two (major!) loopholes that Scott mentioned – the limited communication time in the real world, and the fact that real-world humans aren’t perfectly rational? I don’t understand why the original result was so surprising or counterintuitive.

    Was the major advance of the original result to simply consider the question in the first place, rather than to deliver a surprising answer?

  44. Scott Says:

    Ted #43: Already with Aumann’s original theorem, the whole point is that Alice and Bob don’t have to exchange all their information in order to have the same opinions. It’s enough for them to have common knowledge of each other’s opinions, by whatever means they achieved that. But yes, the proof is about 1 paragraph long.
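
    For the curious, that paragraph runs roughly as follows. Let E be the event that it’s common knowledge that Alice’s posterior probability for a hypothesis A equals a and Bob’s equals b. Common knowledge means E is a union of cells of Alice’s partition, and on each such cell her posterior equals a, so averaging over E gives

    $$ \Pr[A \mid E] = a. $$

    The identical argument applied to Bob’s partition gives \( \Pr[A \mid E] = b \), and hence a = b.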

  45. WA Says:

    Speaking of the larger Hilbert space, there’s something related which is causing me confusion, most likely because I’m missing key background in quantum field theory: how do we reconcile that there’s a wavefunction of the universe with phenomena like the Unruh effect? If an accelerating observer sees a thermal vacuum state, then what’s the vacuum getting entangled with, or what larger system do we need to include to restore its purity, from the point of view of this accelerating observer (or are accelerating observers not privy to the larger Hilbert space)?

  46. WA Says:

    Sorry, the above question could have been answered with a bit of thought on my part: the accelerating observer gets causally disconnected from a region of spacetime and the reduced state appears mixed to them, but that doesn’t affect the universal wavefunction.

  47. Ted Says:

    Scott #44: Gotcha, thanks. For some reason I’d thought that exchanging all their knowledge was necessary for making their posteriors common knowledge. (Which is clearly wrong now that I think about it, because that would be incompatible with your 2005 result.) Now I understand why the hypotheses of Aumann’s original theorem are quite weak, and so the result is quite strong.

  48. Chwr Says:

    I have a question about your paper “The Complexity of Agreement.”

    In proposition 1, I do not understand why the first expression should equal the third.

    The proof says we are slicing up the same set S, but it seems to me that the set in the first expression may be larger than the set in the third, because we only assumed that \( \Omega_{i,t} \) refines \( \Theta_{j,u} \), not \( \Omega_{j, u} \).

    For example suppose Sam and Upton are watching a stock that can go up, down, or stay the same. Sam knows whether it will stay the same, and Upton knows whether it will go up. In fact, it will stay the same, so Sam knows that, and Upton just knows that it won’t go up. Shouldn’t Sam know that Upton will have too low of an expectation, despite the fact that Upton knows Sam’s expectation?

  49. Chaoyang Lu Says:

    Comment #1 is so amusing that I couldn’t help but laugh out loud!!!

  50. confused experimentalist Says:

    hi scott, long time lurker and first time poster. really curious if you have any comments on the technical aspects of the ibm utility paper, in particular on error mitigation in noisy circuits. it seems that after many cycles of noisy computation, the quantum information should be irretrievably lost due to decoherence, and yet by adding the outcomes of different noisy circuits you can effectively undo this decoherence and somehow extrapolate to the correct answer? this seems quite miraculous, since the state towards the end could in fact be highly mixed with little to no entanglement left. and if this is valid and allowed, then shouldn’t room temperature nmr quantum computation also be “valid”? there one can sum up experimental outcomes of different pseudo-pure states to effectively mimic the time-evolution of a pure state, albeit the number of expts goes up exponentially in qubit number. but it has been argued that nmr states are so mixed that they’re always separable https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.83.1054
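
    for concreteness, here’s my understanding of the zero-noise-extrapolation idea at play (a toy sketch, surely not ibm’s exact procedure): run the same circuit at several deliberately amplified noise levels, fit the measured expectation value, and extrapolate back to zero noise. note that what’s recovered is an expectation value, not the quantum state itself.

        import numpy as np

        def zero_noise_extrapolate(noise_factors, expectations, degree=1):
            # Fit a polynomial to <O> measured at amplified noise levels
            # (e.g. factors 1, 1.5, 2, realized by stretching pulses or
            # inserting extra gate pairs), then evaluate the fit at zero.
            coeffs = np.polyfit(noise_factors, expectations, degree)
            return np.polyval(coeffs, 0.0)

        # Toy usage with fabricated "data": an observable decaying
        # exponentially with the noise factor; the true value is 0.8.
        factors = np.array([1.0, 1.5, 2.0])
        measured = 0.8 * np.exp(-0.5 * factors)
        print(zero_noise_extrapolate(factors, measured))  # crude estimate of 0.8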
