Quantum Information Supremacy

I’m thrilled that our paper entitled Demonstrating an unconditional separation between quantum and classical information resources, based on a collaboration between UT Austin and Quantinuum, is finally up on the arXiv. I’m equally thrilled that my coauthor and former PhD student William Kretschmer — who led the theory for this project, and even wrote much of the code — is now my faculty colleague at UT Austin! My physics colleague Nick Hunter-Jones and my current PhD student Sabee Grewal made important contributions as well. I’d especially like to thank the team at Quantinuum for recognizing a unique opportunity to test and showcase their cutting-edge hardware, and collaborating with us wild-eyed theorists to make it happen. This is something that, crucially, would not have been feasible with the quantum computing hardware of only a couple years ago.

Here’s our abstract, which I think explains what we did clearly enough, although do read the paper for more:

A longstanding goal in quantum information science is to demonstrate quantum computations that cannot be feasibly reproduced on a classical computer. Such demonstrations mark major milestones: they showcase fine control over quantum systems and are prerequisites for useful quantum computation. To date, quantum advantage has been demonstrated, for example, through violations of Bell inequalities and sampling-based quantum supremacy experiments. However, both forms of advantage come with important caveats: Bell tests are not computationally difficult tasks, and the classical hardness of sampling experiments relies on unproven complexity-theoretic assumptions. Here we demonstrate an unconditional quantum advantage in information resources required for a computational task, realized on Quantinuum’s H1-1 trapped-ion quantum computer operating at a median two-qubit partial-entangler fidelity of 99.941(7)%. We construct a task for which the most space-efficient classical algorithm provably requires between 62 and 382 bits of memory, and solve it using only 12 qubits. Our result provides the most direct evidence yet that currently existing quantum processors can generate and manipulate entangled states of sufficient complexity to access the exponentiality of Hilbert space. This form of quantum advantage — which we call quantum information supremacy — represents a new benchmark in quantum computing, one that does not rely on unproven conjectures.

I’m very happy to field questions about this paper in the comments section.


Unrelated Announcement: As some of you might have seen, yesterday’s Wall Street Journal carried a piece by Dan Kagan-Kans on “The Rise of ‘Conspiracy Physics.'” I talked to the author for the piece, and he quoted this blog in the following passage:

This resentment of scientific authority figures is the major attraction of what might be called “conspiracy physics.” Most fringe theories are too arcane for listeners to understand, but anyone can grasp the idea that academic physics is just one more corrupt and self-serving establishment. The German physicist Sabine Hossenfelder has attracted 1.72 million YouTube subscribers in part by attacking her colleagues: “Your problem is that you’re lying to the people who pay you,” she declared. “Your problem is that you’re cowards without a shred of scientific integrity.”

In this corner of the internet, the scientist Scott Aaronson has written, “Anyone perceived as the ‘mainstream establishment’ faces a near-insurmountable burden of proof, while anyone perceived as ‘renegade’ wins by default if they identify any hole whatsoever in mainstream understanding.”

160 Responses to “Quantum Information Supremacy”

  1. Ryan Alweiss Says:

    Kind of a funny paper. Ultra fine grained complexity I guess? Sort of a similar flavor to the busy beaver stuff in a way. I like that you have these kinds of papers. Pure mathematicians other than Doron Zeilberger should probably release more of them.

  2. Charlie Says:

    This blog and superdeterminism are why I’m not one of the 1.72 million subscribers. Sabine is not a pluralist evidently. I’m not sure why she would be better for physics if she were in charge of anything.

  3. Suomynona Says:

    Congrats on the successful experiment!
    Regarding the WSJ article — in Sabine’s defense, we should remember that there is good science and bad science.
    As much as I hate to admit it, many fellow academics indulge in doing work that they are comfortable with, because it brings funding and gives them something to keep busy with.
    Unfortunately, some work that stems from this is completely disconnected from reality, or from anything that stands a chance of ever being meaningful (also, let’s not forget that there is a reason why predatory journals exist). So what should we do? Ignore this phenomenon, or eventually decide that maybe we should redirect the research funds towards the more promising areas of physics?

  4. fred Says:

    Congratulations Scott!

    Can you clarify something: I was under the impression that you always viewed the “exponentiality of the Hilbert space” as an abstraction, even with QC being realized, and that QM is “just” another way to do probabilities.
    But the paper refers to the “physicality” of such space, i.e. if one has a million connected qubits, somewhere some resources are being allocated to do accounting on 2^1,000,000 states.
    Has your view on this changed?

  5. Simon Says:

    What is stopping this (or a similar) approach from being used to prove a separation between P (or BPP) and BQP? Is it that even though the number of classical bits required grows exponentially with the problem size (presumably implying that the number of computation steps also grows exponentially?), the number of steps required for the (known) quantum algorithm also grows exponentially? Or am I misunderstanding something more fundamental?

  6. Scott Says:

    Ryan Alweiss #1: It might be a “funny paper” from the perspective of a pure mathematician, but in quantum computing and information this is a very standard type of thing to do! And even those of us, like me, on the far theoretical end of the field will tell the experimenters if we see an opportunity to do something new or exciting, and then if they end up doing it, we get included as coauthors on the Science and Nature submissions… 😀

  7. Scott Says:

    Suomynona #3: Absolutely, some of Sabine’s critiques e.g. of the string unification program have merit! And indeed, recognizing that, I was restrained in my criticisms of her for 15 years, and was cordial with her, and even helped her with one of her quantum computing videos. Now, however, Sabine has chosen to ally herself with those who are trying to tear down the entire edifice of basic research in the US—the edifice that’s recently given the world spectacular successes such as LIGO and JWST—even though the contrarians plainly don’t have anything better to replace it with, other than horseshit like “superdeterminism,” or whatever Eric Weinstein can piece together from his 1980s computer files. To say that I consider this an unwise course of action would be an understatement.

  8. Scott Says:

    fred #4: No, my view hasn’t changed. Neither this experiment nor any other directly addresses the metaphysical question of whether the exponentially large Hilbert space is “really out there” or is “merely a calculational device”—people can continue arguing about such things until the end of time! What this experiment does help to show is that, in any case, exponentially large Hilbert space is a totally unavoidable calculational device—and it does that without any unproved computational hardness assumptions. This is relevant for example to any remaining skeptics of scalable quantum computing or quantum mechanics itself.

  9. Scott Says:

    Simon #5: The reason we can’t use this to prove BPP≠BQP is that we aren’t bounding the number of computational steps at all—instead, we’re lower-bounding the number of bits of classical information that would need to be transmitted from the earlier part of the quantum circuit to the later part (in other words, the “one-way communication complexity”). This doesn’t rule out that a classical adversary who received the entire circuit could calculate any observable of interest in time that was polynomial in the circuit’s size.
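    Just for a sense of the scales in the n=12 instance, here is some back-of-the-envelope arithmetic using only the numbers from the abstract (the “brute force” figure below is the naive cost of carrying a full state vector across the cut, not what the paper proves is necessary):

        # Rough orders of magnitude for the n = 12 experiment.
        n = 12
        qubits_across_cut = n                         # what the quantum device carries forward in time
        classical_lower, classical_upper = 62, 382    # proven bounds on the classical bits needed
        brute_force_bits = (2 ** n) * 2 * 64          # 2^n complex amplitudes at double precision

        print(qubits_across_cut)                      # 12
        print(classical_lower, classical_upper)       # 62 382
        print(brute_force_bits)                       # 524288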

  10. Craig Gidney Says:

    Does the paper ever estimate the classical storage required by the control system? If there were 12 ions, and there was one single-precision float per ion to track its position, then that’s already 384 bits of information (more than the 382-bit upper bound for the problem mentioned in the paper). And the control system is inseparable from the basic function of the machine, so I’m not sure it makes sense to exclude it from the accounting. If you delete those 12 floating-point numbers halfway through, the computation will not succeed.

    The conclusion mentions that an n=26 instance of the problem would provably require 1.1 million classical bits. Even that sounds insufficient, if weighing the control system. It’s very common for modern software to use hundreds of megabits of storage just to initialize itself. Any Python anywhere would cause that. IIRC I saw Quantinuum give a talk about using a WebAssembly sandbox to isolate untrusted user code from the rest of the system; that could also easily blow the budget. I believe it’s possible to write a control system that uses less than a megabit of data, but that this would be a substantial software engineering task.

    Do you know a value of `n` that would force a billion classical bits? That would be an easy target for the control system to fit inside. A trillion classical bits would also be nice to know, in case the control system has to store a lot of data (such as arbitrary waveform data for control of each qubit).

  11. Sad cowboy Says:

    Do you condemn the murder of Charlie Kirk, unreservedly and without equivocation?

    Will you write about this horror, and what it means for free speech in America?

  12. Scott Says:

    Sad cowboy #11: Yes, of course I condemn Charlie Kirk’s murder unreservedly and without hesitation. I wasn’t planning to do a post, simply because whatever I can say is being said better by others, and because (frankly) I barely knew who Kirk was before this week, although I probably could’ve told you he was some sort of MAGA influencer. Obviously he and I would’ve disagreed about a great many things, while agreeing about other things including Israel. But agree or disagree is beside the point: political murder is not only a human tragedy but also a civilizational one. It burns down our societal commons and makes the world worse in every way, which is what I expect to happen now.

    I regret that further comments on this will be left in moderation, for fear of derailing the science conversation.

  13. Scott Says:

    Craig Gidney #10: I might let William field the quantitative parts of your question, but briefly — yes, in the paper, we explicitly discuss the “loophole” that the classical bits could be stored in the control system (Dorit Aharonov also raised this objection to me). While to me it seems as fanciful as (say) the locality loophole in the Bell inequality experiments before it was closed, I agree that it would be great to do the experiment with larger and larger numbers of qubits in order to push the “environment loophole” into a tighter and tighter corner.

  14. wb Says:

    @Scott @Fred

    >> exponentially large Hilbert space is a totally unavoidable calculational device

    But this would mean that you have to believe that Hilbert space is real, unless you think there are properties of reality which cannot be described by mathematics / physics;
    Hilbert space would be lacking the ‘something’ that makes things ‘real’ …
    Do you believe there is ‘something’ beyond physics?

  15. Craig Gidney Says:

    I agree it has similarities to the locality loophole in older Bell tests. However, at least for the n=12 case, it seems literally impossible to close this control system loophole. 382 bits is an impossibly tight space budget for a control system. At n=26, with a million bit budget, I believe it’s possible (but very difficult) to close this loophole. So I would consider the analogy much stronger if it was an n=26 demonstration.

    Anyways, for fun, I just wanted to play this control-system budgeting thing up a bit:

    You enter a store to buy a computer. Computers at this store are priced at $1 per bit or qubit of storage. Looking around, you spot a classic ENIAC, with 80 bytes of storage, costing $640. This is sufficient to solve your problem, so you begin walking towards it. A salesman intercepts you.

    “Oh no no no, don’t buy that one! It’s very wasteful! This one over here will meet your needs, and only uses 12 qubits of storage!”

    You look over, interested in saving $628.

    The salesman is pointing at a machine with a listed price of 100 million dollars.

    “Uh… why is the price so high?”

    “Oh, that’s just the incidentals! Mere implementation details! Only the 12 qubits are truly necessary! A crowning achievement in potential efficiency!”

    “…but it costs literally hundreds of thousands of times more money.”

    “Yes yes, but you need to consider the *future*! When your problems get bigger, you’ll need to invest millions of dollars into upgrading that old ENIAC! But for this machine it will only take a few dozen more dollars! Think of the savings!”

    “…a million dollar upgrade is still a hundred times less expensive than the upfront cost of this machine”

    “*Eventually* it will be a big savings! That’s an asymptotic guarantee!”

    “…do you or do you not currently have a version of this machine that costs less than $640?”

    “Uh…”

    You buy the ENIAC.

  16. Once More Says:

    Suomynona #3:

    Your comment is like people defending DOGE saying “listen, I understand everything that they recently said turned out to be a lie, and everything they recently did was just bad, but shouldn’t you give them credit because the idea of stopping waste, fraud, and abuse is good?”

    Sabine used to point out valid issues with academia. Now she just lies and pretends that mainstream academics and institutions are no more reliable than Weinstein and other grifters. The morally correct thing to do in this situation is to call out her behavior for what it is, just as Scott has done, instead of pretending that she is the unique person who has identified issues in academia and therefore should be above any sort of critical judgement.

  17. Craig Gidney Says:

    I suppose I should clarify that I’m not worried about the information being smuggled through the control system. I’m worried that it’s impossible to build a quantum computer *without* a control system, and so I think of the control system as part of the budget.

    A classical analogy would be that I want to count the storage of the program instructions in RAM, in addition to the memory allocated by the program. This “pay for the instructions” model is distinct from the usual model used in complexity theory. For example, IIRC, there are results showing that 5 bits of storage are sufficient to evaluate quite complex functions by using enormously long sequences of instructions, and clearly if you had to store the instructions then it wouldn’t meet the 5-bit budget anymore. But in practice we do in fact store the instructions, and so accounting for that is a better approximation of the cost model you would interact with when buying a computer to solve a problem.

  18. fred Says:

    The irony is that a bit is really an abstraction that maps a concept inside a human brain onto a certain preparation of atoms inside a system we call a computer. A classical laptop that performs a computation isn’t any more remarkable (on its own) than any other cluster of atoms, i.e. the laptop just does what the universe tells it to do and you really can’t point to something extra we call “software”.
    In that sense a qubit is more real than a bit, because entanglement is an actual unique physical property, even though it can only exist if we don’t try to observe it.

    Putting it another way, a classical computer is a system whose macro state (which we call software) is duplicated and robust across as many branches of the many worlds as possible, while a QC is a system that evolves in a “subtree” of the many worlds that is as narrow and isolated as possible.

  19. OMG Says:

    Scott #7

    LIGO and JWST are magnificent achievements.

    It is always easy to criticize the work of others and revel in some implied superiority, but very, very difficult to develop something better that has value to others. Pure criticism is quite often the secure refuge of the weak and less talented.

  20. Scott Says:

    Craig Gidney #15, #17: Good, so if you try to drum up interest in doing an n=26 version of this experiment, I’ll strongly support you!

    The real difficulty, of course, is that fully exploring the Hilbert space will require ~2^26 gates, and that might not be possible without fault-tolerance, in which case you’ll be using a helluva lot more than 26 physical qubits.

    With the early Bell experiments, there was the question, “could a secret classical signal, traveling at the speed of light, in principle get from the Alice detector to the Bob detector in time to spoof the results?” (Yes.) But then there was the very different question, “do you actually believe that anything like that could be happening?” (Hell no.)

    I think about the “environment loophole” for our experiment in the same way. Are there enough degrees of freedom in the control system to carry a description of the original quantum circuit C? Yes. Is it actually plausible that the control system “remembers” the circuit and that that’s what’s causing the results? Unless I’m missing something, it seems no more plausible than homeopathy, and indeed it’s a very close analogue of homeopathy.

  21. Scott Says:

    wb #14:

      But this would mean that you have to believe that Hilbert space is real, unless you think there are properties of reality which cannot be described by mathematics / physics;
      Hilbert space would be lacking the ‘something’ that makes things ‘real’ …
      Do you believe there is ‘something’ beyond physics?

    I’d need to think about those metaphysical questions (with or without the aid of psychoactive substances), but the point I’m trying to make is that there’s a way to understand our experiment without assuming anything about their answers. Namely, if we imagined there was secretly an underlying classical description of the world, then the number of classical bits that would need to be carried forward in time from the beginning of the experiment to its end in order to account for the observed results, is like an order of magnitude larger than the number of qubits that can be seen to be present.

  22. Scott Says:

    Once More #16:

      Your comment is like people defending DOGE saying “listen, I understand everything that they recently said turned out to be a lie, and everything they recently did was just bad, but shouldn’t you give them credit because the idea of stopping waste, fraud, and abuse is good?”

    That’s actually a superb analogy! Yes, the idea of fixing academic science’s groupthink problems and injecting heterodox ideas is a good one. But no, evidently neither Sabine nor Eric Weinstein has a superior alternative to offer. The restaurant needs some new food, and turd sandwiches are new, but the restaurant does not need turd sandwiches.

  23. uncle_jo Says:

    Suomynona #3, Scott #7

    Trump’s defunding campaign has made people (hello, Sean) who used to promote alternative approaches circle the wagons and declare everything is a-ok. It’s difficult out there atm

  24. Daniel B Says:

    Scott #20: “fully exploring the Hilbert space will require ~2^26 gates” – Do you need to sample the Haar measure exactly, or would, say, an approximate 2-design suffice? An approximate 2-design on 26 qubits requires only a few hundred or thousand gates (details depend on the geometry and the acceptable approximation error).

  25. Scott Says:

    Daniel B #24: No, you don’t need to sample the Haar measure exactly, but if you want to show that K classical bits are needed, then you do need at least 2^K different quantum states—and just for reasons of counting, that requires at least ~K quantum gates in your circuit. So in particular, if you want an exponential separation between bits and qubits then you need exponentially many gates, and an approximate 2-design can’t suffice.
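    To spell out the counting (a sketch, assuming a fixed finite gate set of \(g\) gate types, each acting on at most 2 of the \(n\) qubits; the constants are only illustrative):

    \[ \#\{\text{circuits with } G \text{ gates}\} \;\le\; (g\,n^2)^G, \]

    \[ (g\,n^2)^G \ge 2^K \;\;\Longrightarrow\;\; G \;\ge\; \frac{K}{\log_2(g\,n^2)} \;=\; \Omega\!\left(\frac{K}{\log n}\right), \]

    i.e. ~K gates, up to a logarithmic factor.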

  26. Sniffnoy Says:

    Scott #22:

    The restaurant needs some new food, and turd sandwiches are new, but the restaurant does not need turd sandwiches.

    The usual term for this sort of reasoning, btw, is the “politician’s syllogism”.

  27. Alessandro Strumia Says:

    What is the argument against super-determinism?

  28. Carl Lumma Says:

    Didn’t Yamakawa and Zhandry prove some kind of quantum supremacy?

  29. Scott Says:

    Carl Lumma #28: Yamakawa and Zhandry showed (in a breakthrough) that it’s possible to get quantum advantage for an NP search problem relative to a random oracle. Crucially, however, no one has actually demonstrated their protocol, and doing so seems to require a full fault-tolerant QC, capable, for example, of also running Shor’s factoring algorithm.

  30. Scott Says:

    Alessandro Strumia #27: For me, the decisive argument against superdeterminism is that it throws away a basic precondition for science itself — namely, that we get to design experiments without the initial conditions of the universe conspiratorially rigging our own choices to systematically mislead us — and that superdeterminism does this, not out of desperate necessity, but only for the pathetic “reward” of accounting for Bell inequality violations without needing to admit that quantum mechanics is just true.

    But the clincher is that, once you’ve postulated this cosmic conspiracy in the initial conditions, you could equally well “explain” superluminal signaling or pretty much any other miraculous phenomenon, way beyond what QM predicts. Yet the vast conspiracy of initial conditions that infects our brains does so only to make it look like quantum entanglement works the way QM always said it worked, and never to do anything more impressive or exciting. Why? Doesn’t it seem arbitrary and like overkill? At this point, we’re in the territory of God planting the fossils in the ground to confound the paleontologists (but never doing anything more useful, like smiting bad guys).

    So far, I have yet to meet a single superdeterminist who even understands what I just said, let alone having a response to it.

  31. Roger Schlafly Says:

    You are right about superdeterminism violating a basic precondition for science, but many-worlds theory has the same problem. You cannot do any experiments because all outcomes occur. QM just appears to be true because a conspiracy of universe splittings keeps putting us in worlds where QM appears to be true. And it is all for the pathetic goal of denying that probabilities make sense. Again, just like God planting fossils.

  32. andrea Says:

    highly recommend timothy nguyen’s blogpost “Physics Grifters” for an account of how eric weinstein attacked an establishment physicist who produced reasonable criticism of his theory

    unrelated note to scott: this is my first comment on your blog, which i feel compelled to post in a show of support. i’ve been reading it since 2015, when i was a high school student. now i am a graduate student working in quantum experiment. in the beginning i understood 0% of your quantum blogging. now with a bit of effort i can understand most of them. it is good to see you chugging onwards.

  33. Martin Mertens Says:

    Hi Scott. Allow me to, in the words of Principal Skinner, furrow my brow in a vain attempt to understand the situation. I’ve always heard about quantum supremacy coming in the form of fast algorithms. But I suppose it makes perfect sense to focus on space as well. So if this result were used to design a quantum supremacy experiment, then the point would be to use a small amount of quantum memory to accomplish a task that would require a large amount of classical memory, although if you do have enough classical memory then the task doesn’t take all that long. Is that right?

  34. William Kretschmer Says:

    Craig Gidney #10, #15, #17: By my calculations, 38 (noiseless) qubits could get you over 10^9 bits, and 49 could get you over 10^12 bits. I’m not certain whether the calculator I’m using is numerically stable enough to give completely correct answers in those regimes, but these figures seem reasonable and are consistent with my expectations. As Scott #20 mentions, these would also require something like ~10^9 or ~10^12 gates.

    Where we draw the line between the quantum system and classical environment seems like it will always be a little bit arbitrary. But if one seriously hopes to close this loophole in a future experiment, better might be to demonstrate the quantum advantage using communication between two devices separated in space, rather than storage on a single device separated in time. That ought to be both experimentally easier than scaling up by so many orders of magnitude, and more convincing than trying to define a boundary between storage and control. As long as one can argue that the physical communication channel supports at most n qubits, the complexity of the hardware and software performing quantum operations at either end is irrelevant to the claim of advantage.

  35. Danylo Yakymenko Says:

    Great work, congratulations!

    If I understand correctly, the crucial theoretical part is that Alice cannot find a short deterministic classical encoding of quantum states which could be decoded by Bob to make realistic measurement predictions. And thus the best way for Alice is to simply send the quantum state instead of a classical encoding. It looks natural, however, it is great to have it proven and experimentally verified to the smallest detail.

    And thank you for sharing the code of your work! We might find a use of it in our own experiments.

  36. Carl Lumma Says:

    Thanks Scott #29. I have a question about “relative to a random oracle”. I thought an oracle is a tool that can solve a specific problem in O(1) time, which I might use to solve a problem I actually care about faster. But when I read about random oracles, it seems they’re not tools I can query at all. Instead they say, ‘Now you have to solve the problem you care about AND also please reverse this hash function.’ Why are oracles helpful tools unless they’re random, in which case they burden me with extra work? Or am I just all wet here?

  37. Alessandro Strumia Says:

    Interesting, but you seem to have a philosophical prejudice against determinism. While I have a philosophical prejudice against randomness. I think QM must be the statistical mechanics limit of some deeper theory, where randomness comes from a specific physical mechanism. But nobody has a candidate theory of quantum randomness, because it’s amplitudes rather than probabilities. Super-determinism is just the possibility that the weirdest feature of quantum mechanics noticed by Bell is produced by a seemingly crazy amount of determinism. As long as the deeper theory reduces to QM, even if it plays this dirty trick, the issue should have no bearing on whether science is possible.

    Related to the second part of your argument, maybe super-determinism is too cheap a theoretical ingredient to select the special theory. Still, I don’t know of any such deeper theory, because other features of QM are weird enough to block the attempts. Seeking a deterministic theory of something that is both a wave and a particle is enough to get headaches.

    It would be good if your field eventually finds that exponential quantum gains fail at some point, experimentally hitting a new fundamental limit.

  38. fred Says:

    To me, superdeterminism sounds like starting with the observation that computers can’t generate true randomness (they are purely deterministic systems), only pseudo-randomness, which can show up as subtle repeating patterns when you display tons of random numbers on a screen. But then it flips things around and claims that those wavy patterns are the primary object, to be used to explain the output of every single program ever written that relied on the pseudo-random number generator, and that this supersedes and can explain the thousands of books that have been written about statistics and probabilities.

  39. Danylo Yakymenko Says:

    By the way, have you thought about a modification of the task where Alice is allowed to send multiple copies of the same quantum state? It adds a constant factor to the number of required qubits; however, since Bob can use multiple measurements of the same state, his output z could be improved (I think one can modify the XEB definition to include multi-copy state measurements). This probably won’t increase the theoretical separation between classical and quantum memory requirements, but it could help cope with the noise in quantum hardware, I guess.

  40. Adam Treat Says:

    Scott #12,

    “I regret that further comments on this will be left in moderation, for fear of derailing the science conversation.”

    Good. But can you also leave ‘superdeterminism’ and ‘all things Sabine’ in moderation, as I think these already make up the majority of comments on what seems to be an important post about an important new paper!

  41. Mark Spinelli Says:

    Carl Lumma #36:

    There’s a lot to unpack in your questions. Because you initially asked about the Yamakawa–Zhandry algorithm, we can focus on that. In particular, the oracle in the YZ algorithm is used to pick a random subset of codes; the YZ result shows that a quantum computer can, in polynomial time, find a subset of codes such that, by feeding the coordinates of the codes into the random oracle, the first bit of each output from the oracle is 1; the separation says that no classical algorithm can do this in polynomial time.

    Random oracles are great because we remove all information content about how the oracle is constructed and we can prove such unconditional separation; but we live in the real world, and we have to instantiate the oracle somehow. A very common approach – indeed, the one used in YZ’s paper – is to instantiate the oracle with something like SHA2.

    But once we instantiate the random oracle with SHA256, we are no longer unconditional; rather, we must now rely on the belief that it’s classically challenging to invert SHA256. Unconditionally proving that would require at least proving that P does not equal NP – a very hard problem indeed. Even when an experimental demonstration of YZ’s algorithm is run, there could still be a nagging question that maybe SHA256 is easy to classically invert.

    Kretschmer et al.’s work doesn’t have this question looming over it; it’s unconditional ab initio. But given the market capitalization of Bitcoin I think assuming that it’s hard to invert SHA256 is a pretty safe bet!
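    Concretely, “instantiating the oracle with SHA2” just means replacing oracle queries by hash evaluations, something like the toy snippet below (the real YZ problem is a structured search over codewords, not this brute-force loop):

        import hashlib

        def oracle_first_bit(x: bytes) -> int:
            # Stand-in for a "random oracle": hash the query and keep only the
            # most significant bit of the first byte of the digest.
            return hashlib.sha256(x).digest()[0] >> 7

        # Classically, all we can do is evaluate the hash and search; the point of
        # Yamakawa-Zhandry is that their structured version of this kind of search
        # admits a polynomial-time quantum algorithm relative to a truly random oracle.
        hits = [i for i in range(16) if oracle_first_bit(i.to_bytes(4, "big")) == 1]
        print(hits)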

  42. Danylo Yakymenko Says:

    I’d like to clarify my previous, somewhat fuzzy question. Assume that the task is the same: Alice and Bob receive x and y, and Bob has to produce z such that \( |\langle z | U_y | \phi_x \rangle| \) is maximal on average. In the quantum solution, Alice simply sends her state to Bob, who performs the measurement after applying U_y and outputs z. The question is: wouldn’t it be more practical on noisy hardware to send multiple copies of Alice’s state, so that Bob could use some kind of tomography to find a better output z? The quantum memory requirement increases in this case, but so does the output quality, which may require a higher number of bits in the classical solution.
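    For concreteness, here is a minimal noiseless simulation of the single-copy task as stated above, with Haar-random states and unitaries as stand-ins for whatever ensembles the paper actually uses, and with the score defined as \( |\langle z | U_y | \phi_x \rangle|^2 \) for the reported z:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4                        # toy size; the experiment used n = 12
        d = 2 ** n

        def random_state(d):
            # Haar-random pure state, standing in for Alice's |phi_x>
            v = rng.normal(size=d) + 1j * rng.normal(size=d)
            return v / np.linalg.norm(v)

        def random_unitary(d):
            # Haar-random unitary via QR decomposition, standing in for Bob's U_y
            z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
            q, r = np.linalg.qr(z)
            return q * (np.diag(r) / np.abs(np.diag(r)))

        scores = []
        for _ in range(500):
            phi_x = random_state(d)             # Alice prepares |phi_x> and sends n qubits
            U_y = random_unitary(d)             # Bob's challenge unitary
            probs = np.abs(U_y @ phi_x) ** 2    # Born-rule outcome distribution
            probs /= probs.sum()
            z = rng.choice(d, p=probs)          # Bob measures and reports z
            scores.append(probs[z])             # |<z| U_y |phi_x>|^2 for the reported z

        print(np.mean(scores))                  # about 2/(d+1), versus 1/d for blind guessing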

  43. William Kretschmer Says:

    Danylo Yakymenko #39: I’ve thought about something along these lines. The problem is that the classical upper bound will remain ~2^n, because with a full classical description of the state, you can simulate many measurements on any number of copies. So, you’d end up using more qubits without pushing the classical lower bound any further.

  44. Del Says:

    Alessandro #37

    > you seem to have a philosophical prejudice against determinism

    I can’t speak for Scott, obviously, but for me and all my physicist friends who don’t like this, it’s not a “prejudice against determinism” at all. In fact many would be totally okay with (and a few even actively like) the philosophical “normal determinism” of having the exact future determined by the initial conditions (or any other conditions at any other time).

    The “prejudice” against *super*determinism is that the latter (unlike the former) postulates a massive conspiracy in which tons of different and independent things correlate with each other in such a way as to explain whatever you want.
    To give an example of how absurd this sounds to me (please: no offence intended, I’m just trying to explain), let me make up a superdeterministic theory of Newtonian gravity. Say that I don’t like the fact that gravity is attractive, and that in my worldview it should be repulsive. I could postulate that gravity is normally repulsive, but becomes attractive in some very peculiar and rare circumstances (say, if the masses of the bodies involved stand in certain strange ratios). And then I can fine-tune the initial conditions of the universe in such a way that, “by accident,” all the observed bodies end up having these otherwise rare mass ratios, so that the ratios become common and we observe, and mistakenly believe, that gravity is attractive. I hope you agree with me that this would be absurd.

    To *me*, the superdeterministic theory of QM feels philosophically equally absurd. Not impossible, but definitely metaphysical.
    I am *not* biased against determinism (if anything I am biased pro-determinism, and I see it very strongly in action in, e.g., societal events), but I really have a hard time understanding people who seriously consider superdeterminism. And I say that not in a negative sense, but with genuine curiosity. I would be very interested in hearing you say more about it. If I had to state the only thing I (mistakenly?) read between the lines of your comment, it would be: “QM is very weird, I can’t believe it. Superdeterminism is weird too, but it feels less weird to me, so I choose to believe the latter instead of the former.” Am I misrepresenting your position, Alessandro?

  45. Scott Says:

    Martin Mertens #33: Yes, we’re bounding the number of bits or qubits. But this isn’t space complexity per se (another interesting topic), but rather communication complexity, because the bits/qubits that we care about aren’t “scratch space” next to an input, but rather the sole information about the input that gets communicated from the past to the future.

  46. OhMyGoodness Says:

    Adam Treat #40

    Yes, and that reminds me, to my shame, that I forgot to write: Congratulations, Dr. Aaronson, to you and your collaborators on this new result.

  47. Scott Says:

    Carl Lumma #36: You’ve correctly noticed that oracles are used for multiple purposes in theoretical computer science! They can encode the solutions to a hard problem to make it easy (as when we’re giving a hardness reduction), but they can also encode the input to a hard problem that we’re trying to solve (as when we prove an oracle separation).

  48. Scott Says:

    Alessandro Strumia #37: No, I think this has very little to do with any philosophical prejudices about the ancient question of determinism versus indeterminism. As Del #44 points out, superdeterminism can be rejected on the much simpler ground of being a sort of abstract version of a tinfoil-hat conspiracy theory. It’s open to the same objection: namely, if the conspirators are so all-powerful, why did they only do this one thing you’ve invoked them to explain, rather than 10,000 other things?

    Yes, of course it would be great if the effort to build scalable quantum computers led instead to the overthrow of QM itself: that would mean that we’d precipitated the greatest revolution in physics since the 1920s! But have you thought about what you’ll do if that doesn’t happen, and scalable QCs work just like the theory always said they would? Will you reevaluate what you call your “philosophical prejudice” that there must be a hidden determinism underneath QM, or will you keep inventing more excuses? 😀

  49. Scott Says:

    Adam Treat #40: I wasn’t thinking about this when I put up the post, but the two topics here actually are related to each other, as they both involve boxing the believers in a hidden determinism underneath QM into a tighter and tighter corner! 🙂

  50. AF Says:

    Roger Schlafly #31: The critique of superdeterminism does not actually work against MWI. This is because the chances of appearing in each universe are determined by the Born rule, so you actually can use basic probability to determine which universe you are in. Yes, “all outcomes occur,” but the probabilities of observing each outcome can be predicted before the experiment, and the experiments keep matching the theory’s predictions.

    You can say that superdeterminism and MWI both don’t make any new predictions above those made by the raw math of QM. The difference is that superdeterminism requires adding a massive number of epicycles to bring back a classical physics only universe, while MWI literally is the raw math of QM, without tacking on extra stuff like collapse or classical particles guided by pilot waves. The “universes splitting” is part of the math of QM, and Copenhagen/de Broglie-Bohm add on extra stuff to delete the other branches of the wavefunction.

  51. Del Says:

    OhMyGoodness #46 and Scott #6: that reminds *me* that I’ve forgotten too! The paper is a bit too much for me to digest in full (hopefully just for now), but it sounds like a really impressive result, worthy of publication in a prestigious journal, which I speculate is happening behind the scenes.

    Congrats to Scott and the whole team!

  52. Scott Says:

    AF #50: Yes, that’s perfectly put. MWI does leave it mysterious where the Born probabilities come from, but that’s an issue for any interpretation, not just MWI. It certainly doesn’t engineer any cosmic conspiracy theory the way superdeterminism does.

  53. Carl Lumma Says:

    Thanks Mark #41 and Scott #47. Wanted to review the last breakthrough I’d heard of in this space. Going to tackle the new paper now.

    Is it correct to say that YZ’s separation assumes P != NP, whereas the separation in the present paper does not?

  54. Scott Says:

    Carl Lumma #53: The YZ separation is unconditional, but only relative to a random oracle. Once you instantiate the oracle by a real hash function, yes, you need to assume P≠NP and even stronger statements to get a separation. Whereas “information supremacy” requires no oracle and no hardness assumption either.

  55. Roger Schlafly Says:

    #50,52: MWI does not predict probabilities for different universes. Nobody has made that work. At best you can say that if there is a Born rule, you might see data consistent with the rule in your universe, but you have no way to say that your universe is any more likely than any other.

    To subscribe to MWI, you pretty much have to reject the whole probability concept. To say that something has a probability means that it might not happen, and discovering that it did not happen is pretty much the same as collapsing the wave function. MWI rejects that, and says everything happens.

    There is no way to experimentally test MWI, because it does not rule anything out. David Deutsch says you could test it by building a quantum computer, but not everyone agrees.

  56. Sniffnoy Says:

    Sabine Hossenfelder really seems to have gone downhill since she switched to YouTube. I’m not an astronomer or a physicist, but it seems to me that there was a lot of interesting stuff on her old blog. Since she switched to YouTube, though, it seems like the fraction of “let’s talk about some random garbage, and then at the end I’ll say it’s garbage” videos (which means, why would I bother watching? I’m not interested in random garbage!), and also the overall sensationalism, has gone up quite a bit.

    I always thought the most interesting things Hossenfelder had to say back on her old blog were about dark matter vs modified gravity. Which unfortunately tends to get glossed as “dark matter vs MOND” — including sometimes by Hossenfelder herself these days in her more sensationalistic era. I think MOND is a pretty bad hypothesis, and it seems like the evidence is against it too.

    But if we ignore MOND and consider more general theories of modified gravity, then I feel like Hossenfelder raised some good points about how these aren’t as far-out as people think. I always thought this post about the Bullet Cluster was really interesting, mostly for the point at the end. That is, people say that the Bullet Cluster is a smoking gun for dark matter, because the gravity is delocalized from the visible matter… but any serious theory of modified gravity will involve introducing a new field, and could therefore potentially exhibit such delocalization! Modified gravity and dark matter hypotheses aren’t as different as people make them out to be; both involve introducing a new field. The nature of the field is quite different (which yes, obviously makes a big difference), but with sufficient abstraction they’re actually fairly similar, and this point tends not to be appreciated.

    I never hear anything about tests of these non-MOND modified-gravity hypotheses, though again I’m not an astronomer or physicist, so that might be why. Also, some of them might just be really hard to test? Idk, it just feels like MOND sucks up all the attention as far as modified gravity goes, even though it’s just… such a bad idea. I have heard that any serious theory these days includes at least some amount of dark matter, that it’s really just unavoidable, so perhaps by Occam’s Razor we should expect that to be all that’s going on, but I still want to see these other theories tested!

    Man, the situation in cosmology and particle physics really is something though, isn’t it? In both cases you have a standard model which is very good, and yet still has visible problems; but whenever anyone tries to propose a specific alteration to the model to fix the problem, the tests all come up negative! When we test the standard model on its own we can see problems, when we test it against alternatives the alternatives all come up worse. Unsatisfying, isn’t it?

  57. AF Says:

    Roger Schlafly #55: You don’t have to reject probability to accept MWI. You just need to reframe it from “what is the chance of this happening” to “what is the chance I will be in a universe/branch where it happens”.

    “To say that something has a probability means that it might not happen” – this does not apply specifically because of the reframing.

    The idea for why the reframing works is because you only know about one branch once decoherence happens, so “what is the chance of this happening” and “what is the chance I will be in a universe/branch where it happens” both boil down to “what is the chance that I observe this happening”. This means that you actually can test MWI, since MWI still makes predictions about what you end up observing.

    You are right that MWI does not rule anything out, but other interpretations like collapse also don’t rule anything out, since the predictions are probabilities on all of the possible outcomes, rather than a single deterministic answer as in classical physics.

    Also, (and this is also in response to Scott #52), I think it actually is possible to get the Born rule from MWI, specifically by counting universe branches. I found out about the idea in a PBS Spacetime video [1], which also summarized the argument in a file linked in the description [2], which links at the end to papers by Carroll and Sebens [3] and by Zurek [4] (although I did not read the papers). Sean Carroll also wrote about his paper on his blog [5].

    I am not sure if the branch-counting argument fully removes the mystery of where Born probabilities come from, but I found it convincing.

    [1] https://www.youtube.com/watch?v=BU8Lg_R2DL0
    [2] https://drive.google.com/file/d/1qsYNMb6OK3ZbJADE0jDXWsjp6OtijPBS/view
    [3] https://arxiv.org/abs/1405.7577
    [4] https://arxiv.org/abs/quant-ph/0405161
    [5] https://www.preposterousuniverse.com/blog/2014/07/24/why-probability-in-quantum-mechanics-is-given-by-the-wave-function-squared/

  58. Ian Says:

    Was chatting a bit with Chat GPT about the paper…

    Landed here 🙂

    ………..
    So…are we finally “past the point” of a clear quantum advantage?

    Yes—within its intended scope. This is the cleanest, most defensible unconditional, experimentally realized quantum advantage I’ve seen: the device used fewer qubits than any classical protocol could possibly match in bits of cross-slice memory, with the bound tied directly to the achieved XEB and no complexity assumptions. That’s a real milestone: you can’t later “algorithmically improve” your way out of it on the classical side.

  59. Dacyn Says:

    Can the quantum computation you perform be efficiently verified classically? Asymptotically, I mean; I understand that the specific computation you performed may require only 382 classical bits to perform, let alone check.

  60. Scott Says:

    Dacyn #59: Verification in our protocol takes time exponential in the number of qubits n. In our experiment, n=12 so that wasn’t an issue at all. More generally, though, classical verification time should never be the limiting factor in our experiment — the more immediate problem is that the number of quantum gates needed is also exponential in n.

  61. wb Says:

    @Scott

    Before you take the acid to think about my metaphysical question posted above,
    here is one more comment, since ppl mentioned mwi.

    A quantum computer will never calculate only the correct answer to a question (e.g. factoring a large number like 15) if you believe in mwi; the many worlds will be filled with all possible wrong answers. The amplitude of the wave function will peak above the correct answer, but the wave function is not directly visible to anybody.

    Ultimately, every computer (including humans) is a quantum computer, so if you believe that there are always ‘correct answers’ to mathematical questions, does it mean there has to be a reality (as in mathematical Platonism) not captured by quantum physics?

  62. Scott Says:

    wb #61: Yes, I absolutely do think that arithmetical questions have “objectively right answers,” independent of any physical computing device. If, for example, a computer outputs “2+2=5,” there’s clearly something that allows us to say, “well then so much the worse for that computer! Maybe it had a bug, maybe the programmer intended for it to output wrong answers, maybe its outputs are in a different language, maybe it was hit by a cosmic ray. But in any case, 2+2=4.”

    Indeed, this seems so obvious to me that I have trouble even understanding the contrary view.

    Does this make me a “mathematical Platonist”? If you say it does, fine, whatever. Note however that I’m only a “Platonist” in this sense about arithmetical questions, not necessarily the Continuum Hypothesis and other questions of transfinite theory, as for example Gödel was.

    As far as I can see, none of this has much to do with MWI, except insofar as if we run a quantum computer that succeeds with probability less than 1, then there will necessarily be branches where the computer outputs the wrong answer, and a (moderate) Platonist like me has no problem whatsoever saying “yeah, that answer was wrong, try running the computer again.” Just like we’d say about a classical probabilistic algorithm, and just like we’d say if MWI was false.

  63. wb Says:

    @Scott

    >> if we run a quantum computer that succeeds with probability less than 1
    the probability is always less than 1

    e.g. using Shor’s algorithm the wavefunction is peaked on the correct answer, but it will not be exactly zero elsewhere. So you get all possible answers with mwi and you only get 1 answer (most of the time the correct one) with Copenhagen.

  64. Scott Says:

    wb #63: No, there are quantum algorithms, like Deutsch-Jozsa or Bernstein-Vazirani or Grover’s with known number of marked items, that in principle do succeed with probability 1. They won’t in practice, of course, but only because of noise and other implementation issues.

  65. Bruce Smith Says:

    wb#63:

    > you get all possible answers with mwi and you only get 1 answer (most of the time the correct one) with Copenhagen.

    That is true, from a certain point of view in each case; but it is comparing apples to oranges, because those points of view are not actually analogous.

    To better compare the situations:

    In the MWI meta-point-of-view, there are two actual points of view to distinguish. From “outside the world” (call it “God’s point of view” if you like), you get all possible answers. From the point of view of any conscious being (or anything else capable of classical measurement), that being (or other measuring device) gets only one answer. I have read serious claims that there are ways to formalize and then conclude the important addendum, “(most of the time the correct one)”.

    In the non-MWI (traditional common sense) meta-point-of-view, but assuming some source of true randomness, there still might be two actual points of view to distinguish, if you “believe in probabilities” and/or “believe in time”. There is the situation before the event, in which “all answers are possible, but the correct one is most likely”. There is the situation after the event, in which “you only get 1 answer (most of the time the correct one)”.

    Even from “outside the world”, you might or might not also believe “exactly one of those possible outcomes is what really happened”, or “is what will really happen”. From inside it but after the event, you either believe “the likely thing happened this time” or “some unlikely thing happened this time”, but either way “that thing is what actually happened”.

    To also formalize and conclude the addendum “(most of the time the correct one [happens])”, if you try to stay “inside the world”, is harder than it appears — if you use a frequentist interpretation of probability you will only end up concluding “most of the time, if I do a lot of similar experiments, the statistics of answers will be approximately what the individual probabilities predict”. You have only pushed out the need to treat probability as fundamental from “one experiment” to “a bunch of similar experiments”, not eliminated it!

    Anyway, in your original comparison, “you get all possible answers with mwi” uses the “outside the world” point of view, whereas “you only get 1 answer with Copenhagen” uses the “inside the world” point of view. That is the source of the difference, not “MWI or not”.

    If you stick with “outside the world” these MWI or non-MWI views are essentially isomorphic, just using different terminology. There are things which one calls “real” and one calls “not real”, but only unobservable such things, so you can just say it’s the word “real” which needs to be interpreted differently in these cases, not that reality itself is different.

    If you stick with “inside the world”, they are also essentially isomorphic. Even formalizing or concluding “you get the right answer most of the time” is (I think) problematic in basically the same way in each case.

    So I recommend just understanding all these points of view, and how they relate, and how to use them, as well as you can, rather than worrying about “which one is real”, which is certainly unobservable, and perhaps unknowable or even undefinable.

  66. Gil Kalai Says:

    Impressive paper. Here are some (perhaps naive) questions/thoughts to quickly get a better understanding of some conceptual and technical issues.

    First a question:

    1) Can you achieve (or have you perhaps already achieved) quantum informational supremacy when Alice (and Bob) are limited to fragments of quantum computation that admit efficient classical simulation?

    The next two points are my naive way to understand the notion of quantum information supremacy.

    2) Suppose that we consider an “ordinary” quantum supremacy experiment. Let’s regard the part of the quantum computer that prepares the entangled states as Alice and the part that performs the measurement as Bob. Alice prepared for Bob T copies of the same quantum state based on a quantum circuit C (unknown to Bob), and Bob measured these states to obtain a length-T sample from C of high XEB value. This task was achieved based on communication of nT qubits, which can be much smaller than the number of classical bits required to describe the circuit C. Alas, in this case Alice can simply send Bob nT classical bits – namely, a sample based on C.

    3) To make things harder for Alice we can modify the task so that Bob tries to achieve a high XEB sample but not based on C but based on some additional quantum computing (unknown to Alice) performed on the entangled state created by C. In this case, providing a sample based on C would not do the job. Alice needs either to send the classical description of C or the quantum states that C produces. (Of course, this needs to be proved.)

    Are points 2) and 3) related to what is actually done? Are there other crucial ingredients that can be mentioned?

  67. wb Says:

    @Bruce #65

    thank you for the response and I think the distinction of “outside” and “inside” view is interesting. But I suspect the main difference between Copenhagen and mwi is basically your distinction of “inside” and “outside”.
    Sometimes mwi proponents will say/write something like “you will find yourself in one of the many branches”, switching from the “outside” view of the whole wavefunction to the “inside” view of what I experience … but the ‘you’ remains unspecified and the process of ‘finding myself in a branch’ remains as unclear as an observation in Copenhagen, because it is really the same thing, switching from the “outside” view to “inside” but with different words.

  68. gentzen Says:

    William Kretschmer #34:

    But if one seriously hopes to close this loophole in a future experiment, better might be to demonstrate the quantum advantage using communication between two devices separated in space, rather than storage on a single device separated in time. That ought to be both experimentally easier than scaling up by so many orders of magnitude, and more convincing than trying to define a boundary between storage and control.

    I really like the idea to use two devices separated in space for “convincing demonstrations”:

    What I have in mind is an experiment with a source of entangled particles (probably spin-entangled photons), two quantum computers which are able to perform quantum computations using such “quantum inputs”, and some mechanism to “post-select” those computations on the two quantum computers which were actually entangled (by some sort of coincidence measurements).

    I think it would be challenging but possible to do this. I am less sure whether the “more powerful version” with quantum memory is within reach in the near future:

    … even if I could pull-off the feature (…) to have two quantum computers which can both read quantum input …, I would still not be able to simulate … I would also need some quantum memory in both computers to collect enough entangled qubits to be able to simulate …

    But I have already learned that Peter Woit would dismiss those experimental demonstrations as publicity stunts, at least in the context where I described those two versions.

  69. Bruce Smith Says:

    wb#67:

    You’re welcome.

    (I realized after I posted that, I should have said “you only get 1 answer with Copenhagen” uses the “inside the world / after the event” point of view, as a slight clarification. I also should have said that I prefer the MWI interpretation because it’s simpler and more general, which I think even holds in the purely classical version, for example if we are concerned with large classical systems in which internal communication delays matter — such as a human brain or computer — which might “make unified system-wide decisions” in spite of “distributed internal randomness”.)

    To understand a statement like “you will find yourself in one of the many branches” — I agree the speaker is switching from outside to inside view, but I see nothing wrong with that, as long as they don’t let it mislead or confuse them. There is no inherent reason the “you” would need to have been unspecified in that statement, nor that anything else about its meaning would have to be unclear (assuming they provided enough context).

    For a toy model in which I hope it’s all clear, consider a robot capable of flipping a coin, observing the coin’s state (heads or tails), and recording that observed state (in a growing record of all observations so far). Suppose this robot goes through a periodic cycle of those three actions in that order, for 10 cycles (30 actions in all). Then we model this world as (at the end) having split into 2^10 = 1024 branches, each identified by one sequence of coin states. The robot knows which branch it’s in, because it recorded the complete sequence of states. If we know that record, then we also know which branch it’s in.
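    (Here is a minimal sketch of this toy model in Python, just to make the branch-counting concrete; the particular record string below is of course hypothetical.)

        from itertools import product

        CYCLES = 10  # flip, observe, record -- repeated for 10 cycles

        # Each branch of the toy world is labelled by one possible record of coin states.
        branches = [''.join(flips) for flips in product('HT', repeat=CYCLES)]
        assert len(branches) == 2 ** CYCLES  # 2^10 = 1024 branches

        # The robot's complete record picks out exactly one branch, and anyone who
        # knows that record also knows which branch it is.
        record = 'HTTHHHTHTH'  # a hypothetical record of the 10 observed coin states
        assert branches.count(record) == 1
        print(len(branches), "branches; the record", record, "identifies exactly one of them")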

    If we widen this toy world so that we are in it along with the robot, all the same stuff is true, except then at the end we know which branch “we and the robot” are in. (In general, it is our entire world which is now said to be “in one branch”. The fact that this is hard to interpret in a large world due to the finite speed of signals is one reason to prefer the MWI and its outside view (in which branches are only a matter of interpretation, not fundamental), whenever not doing so could lead to confusion. But this is not an issue in the toy model, by assumption.)

    It can be useful to talk about it like I did, *or* to talk about the entire “game tree” with all the branches, including the ones that didn’t actually happen, depending on your purposes. You said:

    > … the process of ‘finding myself in a branch’ remains as unclear as an observation in Copenhagen, because it is really the same thing, switching from the “outside” view to “inside” but with different words.

    But once you see that it’s just the same thing with different words, that ought to make it *more* clear.

  70. David Mestel Says:

    Scott #25: Does the paper have a proof of that “no”, i.e. a lower bound for communication complexity in terms of the min-entropy of Alice’s distribution or something?

    Also, do we know whether one can do something similar for the CHSH game? I.e. Alice is given her challenges and then later Bob is given his. Purely classically, this “obviously” requires \(\Theta(n)\) bits of communication/memory, but can they do it with significantly fewer qubits (and of course no entanglement on the side)? My guess would be no but is there an easy way to prove it?
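    (For reference, and only to fix notation, here are the standard CHSH conventions, not anything from the paper: in one round Alice receives \(x\in\{0,1\}\) and outputs \(a\), Bob receives \(y\in\{0,1\}\) and outputs \(b\), and they win iff

    \[
    a \oplus b = x \wedge y ;
    \]

    in the n-fold parallel version I have in mind, Alice first gets \(x\in\{0,1\}^n\), Bob later gets \(y\in\{0,1\}^n\), and copy \(i\) is won iff \(a_i \oplus b_i = x_i \wedge y_i\), with a single one-way message from Alice to Bob and no shared entanglement.)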

  71. Scott Says:

    David Mestel #70: Yes, the paper has a proof. Read it!

    So you’re asking about the quantum versus classical communication complexity of playing n parallel copies of the CHSH game, assuming no entanglement between Alice and Bob? I’d never considered that question before. Yeah, my guess would be that both complexities are just Ω(n), but I don’t know offhand how to prove it. Does anyone else have an idea?

  72. anon Says:

    Congrats on the paper. (I am not qualified enough to understand how significant it is.)

    Re the second part, I don’t think they win people over by pointing out holes in the mainstream; I think we in academia lose people by being arrogant, snobbish, and too unempathetic towards the general public. We expect them to accept our authority, which they don’t, and that is the root of the problem.

  73. fred Says:

    Bruce Smith

    “To understand a statement like “you will find yourself in one of the many branches” — I agree the speaker is switching from outside to inside view, but I see nothing wrong with that”

    Right, and this isn’t even a matter of MWI – after all, Alice also has to accept that she finds herself at the position in spacetime that we have labeled “Alice”, rather than at the position in spacetime that we have labeled “Bob”. And no one claims that fact somehow invalidates the notion of spacetime, i.e. you always find yourself at a particular arbitrary point in spacetime, among the infinity of other possibilities. But add MWI branches as yet another dimension/dof, and suddenly many thinkers have a conceptual problem with the idea of finding themselves in a particular branch.

  74. David Mestel Says:

    Scott #71: I did! The closest I could find was Theorem 3/Corollary 1, which seems to relax the assumption on the distribution of Bob’s measurements, but not on the distribution of Alice’s states. Sorry, clearly I’m missing something…

  75. Armin Says:

    #69 Bruce Smith

    What does it mean under the MWI for one branch to have a probability of, say, 1/sqrt2 and the other of 1-1/sqrt2?

  76. Edgar Graham Daylight Says:

    Bruce Smith #69:

    When you speak of the “entire game tree,” you may—consciously or not—be invoking causal reasoning within your mathematical narrative. To the best of my understanding:

    + A strict Platonist regards the entire structure (e.g., the many worlds) as timelessly “out there,” devoid of causal implications. Mathematics, in this view, has a static ontology. Humans are out of the loop: the math need not be invented by us, only discovered. The “robot flipping coins in time” is merely a linguistic convenience; the coin tosses are already embedded across the branches of the tree, predetermined and eternal. No causality. No step-by-step reasoning reminiscent of Aristotelian schools of thought.

    + Yet, by occasionally shifting perspectives—from the external, timeless vantage to the internal, experiential one (where, e.g., a human observer witnesses the robot tossing coins, and lives in one world but not in another)—you seem to be entertaining causal reasoning. This suggests you’re not committed to a strictly (neo-)Platonist stance—which is perfectly fine. But you’re not fully embracing a strictly neo-Aristotelian (causal) framework either. Instead, it seems you’re seeking a synthesis: a way to harmonize _both_ perspectives without privileging one over the other. That’s too good to be true in my developing understanding of the history of science, but I hope you are right.

    Specifically, toggling between the ‘outside’ and ‘inside’ perspectives is not always isomorphic. Consider the case of a nondeterministic Turing machine:

    (1) From the non-causal (‘outside’) view, one _could_ reason across the entire computational landscape, treating the weights on instructions as degrees of truth—akin to a fuzzy Turing machine (cf. J. Wiedermann, 2004).

    (2) From the causal (‘inside’) view, one can reason about individual runs, interpreting the weights as probabilities—akin to a probabilistic Turing machine.

    If you want to call (2) an ‘outside’ view too, then that’s fine with me. But my point remains: are the non-causal in (1) and the causal in (2) not, at least from the standpoint of recursion theory, fundamentally distinct computational beasts? There is a fuzzy TM that “computes” a number theoretic function that no ordinary TM “computes”.

    Did Scott have to take some acid when encountering Wiedermann’s work on hypercomputation? I suspect he might have—but feel free to correct me. Either way, this discussion is so intellectually stimulating that I can’t resist “tossing” in my five cents.

    It would be ideal if multiple perspectives could indeed be interwoven freely—without compromising the physical implications of the mathematical results. That way, we could sidestep prolonged debates about Platonism and related philosophical baggage. (But is this not a dream, reminiscent of logical positivism, some 100 years ago?) Alternatively, we could just dismiss fuzzy sets (multivalued logic in particular) from the outset of this entire discussion.

    Isn’t it the case that the very same mathematics held different meanings for Eddington and Einstein—shaped by their distinct philosophical perspectives?

  77. Alessandro Strumia Says:

    Del #42 and Scott #48,

    I understand your point of view, because it was mine. In 2014 I stopped reading the ‘t Hooft cellular automaton paper at the page where he needs super-determinism, exactly because it looks like a conspiracy theory.

    Years later I became politically incorrect, so I could read the rest. If ‘t Hooft had a theory where *simple* deterministic rules have a conspiracy-like outcome that reproduces QM, I would accept that Alice and Bob are just two multi-particle bound states, while I would not accept a theory with complicated conspiracy-like laws or a complicated initial state.

    The point is shifting the judgement from the outcome to the unknown theory. I don’t think that ‘t Hooft has such a theory. But a theory that goes beyond postulating dice should exist.

  78. William Kretschmer Says:

    David Mestel #74: you get the bound by combining Theorem 3/Corollary 1 with the matrix bounds proved in Appendix E.4. For Cliffords, those bounds are established in Lemmas 9 and 11. The bounds for other ensembles are summarized in Table I.

    To make things clearer in the next revision of the paper, we’ll add some pointers and another Theorem heading in Appendix G.

  79. Sych Says:

    Dear @Scott
    A question about MWI bothers me now. Do I understand right that under MWI there exists some “universe” or branching trajectory where quantum computers, for reasons unknown to the researchers, will accidentally always produce wrong calculation results?

  80. fred Says:

    Armin #75

    ”What does it mean under the MWI for one branch to have a probability of, say, 1/sqrt2 and the other of 1-1/sqrt2?”

    It means the same thing as the answers to the following questions:

    Given that the Netherlands has a population of 10 million, and there are 1.5 billion Chinese, what is the probability of finding oneself born and living life as a Dutch person rather than a Chinese person?
    Or what are the odds of finding yourself living in the 21st century, when there are 8 billion humans, versus being alive in 30 BC, when there were only 200 million people?
    What are the odds of waking up tomorrow in Armin’s shoes, given that the rest of the galaxy hosts, say, 200 trillion conscious organisms?
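    For concreteness, taking the population figures above at face value, the first of these ratios works out to

    \[
    \frac{10^{7}}{10^{7} + 1.5\times 10^{9}} \;\approx\; 0.66\% ,
    \]

    and that kind of ratio is all that the “probability of finding oneself” amounts to in these questions.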

  81. Scott Says:

    Sych #79: Yes, absolutely. But it’s equally true that in thermodynamics, the gas molecules have some nonzero probability of all congregating into a corner of the box, and in biology, there’s some nonzero probability that we all evolve to start hopping on one foot for no reason. The key point is that, in each case, the theory itself tells you why the probability in question should be exponentially small.
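    (As a rough back-of-the-envelope illustration of “exponentially small,” using standard statistical-mechanics arithmetic rather than anything from the paper: for N independent gas molecules, the chance of finding all of them in, say, the left half of the box at a given instant is

    \[
    2^{-N} \;=\; 10^{-N\log_{10}2} \;\approx\; 10^{-0.3\,N},
    \]

    so for N on the order of 10^23 it’s roughly 10^(-3×10^22): utterly negligible, yet strictly nonzero.)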

  82. Sych Says:

    Thanks Scott!
    Yes, I understand the thing about probability. But in classical thermodynamics we have only one world, and improbable things are just improbable things. Whereas as far as I understand MWI, every branch is real and has observers who experience it as reality.
    It is just that I feel psychological resistance when I think that there is a “reality” where we know quantum physics, but by chance it doesn’t work in experiments as we expect. I feel very sorry for the scientists in that world who try to build a quantum computer and nothing works as they expect, no matter how hard they try.
    Then I start to think that I could be in a branch where some weird accidental thing happens… Yikes. Not an argument against MWI, of course.

  83. fred Says:

    The problem when we start talking about what is theoretically possible but very unlikely in the MWI universe is that we have to assume that things are still bound by the laws of physics, i.e. you have to get there through some path that is compatible with the Schrödinger equation, etc.
    I don’t think that virtual particles from the zero-point field (whatever it’s called) allow for literally anything to happen at any moment. Maybe one limitation is that the Schrödinger equation is based on continuous complex values extending from −∞ to +∞, but in reality things may be truly discrete at some level (Planck scale, etc.), which would create cutoffs in the probabilities – the tunneling effect is one thing, but no one thinks the particle has an actual chance to end up instantaneously in another galaxy, even though the theory says the probability is non-zero.

  84. fred Says:

    If underlying reality were truly based on a continuum, I don’t quite get how the world would be able to “settle” on the actual hierarchy of scales we’re part of, with macro-objects like galaxies vs buildings and cats vs electrons, etc.
    A world based on the recursive infinity of the real numbers would look like the Mandelbrot set and other fractals.

  85. Scott Says:

    Alessandro Strumia #77: If there were actually a simple, elegant theory that had many other points in its favor, and as a byproduct, it implied a superdeterministic conspiracy theory, that would be something! But I don’t believe that ‘t Hooft or Sabine or anyone else actually has such a theory, that applies to any situation I would care about. I think that the urge to replace QM with something “secretly classical” came first, and that the superdeterminism was tacked on later, as an ad hoc, almost comically inelegant way to explain away Bell inequality violations.

  86. fred Says:

    It always seemed to me that the good old Central Limit Theorem ought to be enough to counteract any claim of conspiracy due to determinism.

    In Bell’s own words:

    Suppose that the instruments are set at the whim, not of experimental physicists, but of mechanical random number generators. Indeed it seems less impractical to envisage experiments of this kind, with space-like separation between the outputs of two such devices, than to hope to realize such a situation with human operators. Could the outputs of such mechanical devices reasonably be regarded as sufficiently free for the purpose at hand? I think so.
    Consider the extreme case of a ‘random’ generator which is in fact perfectly deterministic in nature — and, for simplicity, perfectly isolated. In such a device the complete final state perfectly determines the complete initial state — nothing is forgotten. And yet for many purposes, such a device is precisely a “forgetting machine”. A particular output is the result of combining so many factors, of such a lengthy and complicated dynamical chain, that it is quite extraordinarily sensitive to minute variations of many initial conditions. It is the familiar paradox of classical statistical mechanics that such exquisite sensitivity to initial conditions is practically equivalent to complete forgetfulness of them. To illustrate the point, suppose that the choice between two possible outputs, corresponding to a and a′, depended on the oddness or evenness of the digit in the millionth decimal place of some input variable. Then fixing a or a′ indeed fixes something about the input — i.e., whether the millionth digit is odd or even. But this peculiar piece of information is unlikely to be the vital piece for any distinctively different purpose, i.e., it is otherwise rather useless. With a physical shuffling machine, we are unable to perform the analysis to the point of saying just what peculiar input is remembered in the output. But we can quite reasonably assume it is not relevant for other purposes. In this sense the output of such a device is indeed a sufficiently free variable for the purpose at hand.

    Of course it might be that these reasonable ideas about physical randomizers are just wrong — for the purpose at hand. A theory may appear in which such conspiracies inevitably occur, and these conspiracies may then seem more digestible than the non-localities of other theories. When that theory is announced I will not refuse to listen, either on methodological or other grounds. But I will not myself try to make such a theory.

  87. Armin Says:

    Fred #80

    Are you saying that the MWI needs an uncountable number of branches just to interpret this binary outcome?

  88. Bruce Smith Says:

    fred #73 –

    > Alice also has to accept that she finds herself at the position in spacetime that we have labeled “Alice”

    Excellent additional point!

    Armin #75 –

    > What does it mean under the MWI for one branch to have a probability of, say, 1/sqrt2 and the other of 1-1/sqrt2?

    Just replace “branch” with “measurement outcome” and forget about the MWI, and the new sentence means the same as the old one.

    > Edgar Graham Daylight #76

    My motive in replying to wb was to take an opportunity to (I hope) reduce confusion regarding the MWI. Your new points may be interesting and/or related, but they go beyond the scope of what I personally want to talk about now! (Partly since I am also busy, and I have no special knowledge about them.) So I will limit my response to correcting possible misimpressions about my own comments. (This is in no way an attempt to suggest to anyone else that they should limit their responses.)

    > Specifically, toggling between the ‘outside’ and ‘inside’ perspectives is not always isomorphic.

    I never said it was! What I said was isomorphic was: the outside perspectives of MWI and non-MWI; and separately, the inside perspectives of MWI and of non-MWI. Rereading my post, though, it appears I never clearly defined “the outside and inside perspectives of non-MWI”, since I mostly used other terms when discussing non-MWI. Sorry about that! It also sounds like you may have misunderstood what I meant by “the inside point of view” (or if not, I misunderstood what you said about it). What I meant by “inside”, in both MWI and non-MWI, is “what it seems like, to an agent-like entity inside the system”.

  89. OMG Says:

    Scott #65

    I agree, and I don’t understand the intellectual pressure that drives arguments in this direction. From a social-utility standpoint, why would one prefer a theory that postulates our choices do not matter in creating the future, because the present and the future were fully determined at the moment of creation? And from a desire to determine the objective truth, what measurement has been made that nullifies the Copenhagen interpretation?

    In lieu of evidence to the contrary, the Copenhagen interpretation is intellectually satisfying to me, in that until a quantum measurement has been made, the outcome can only be anticipated by a probability distribution. In a similar manner, future time has no existence until it is actualized as the present.

    Even starting with the statement about God and dice, I can’t understand the motivation attributed to a God that would prefer a fully determined universe versus a universe that operates in accordance with the Copenhagen interpretation.

  90. OhMyGoodness Says:

    I see Sabine’s model as the “We’re all bozos on this bus” interpretation of quantum mechanics. We are just along for the ride, to look at the scenery.

    Oops: determinism was responsible for my use of the acronym for my screen name in the previous post. I am innocent.

  91. wb Says:

    @Armin #87

    I don’t want to speak for Fred, but consider the simple experiment of a Geiger counter measuring radioactivity in its environment.
    The clicks can happen at any time, so according to MWI, at every point in time t the world splits into two worlds, one with a click and one without (and after a click the worlds split again for the next click, etc.).
    Of course they also split at Alpha Centauri and everywhere else in the universe …

    We don’t know if time / spacetime is discrete, so we don’t know if MWI results in a continuum of worlds or just a really, really large number …
    Btw, notice that the ‘weight’ of each world is near zero, because there are so many …

  92. wb Says:

    My 2c about the ‘interpretation problem’ is that (for now) one can be pragmatic about it:
    Copenhagen works very well in 99% of cases, including quantum computing.
    If one wants a detailed explanation of quantum measurements, Bohm is a good choice,
    and if one does quantum cosmology and/or writes a pop-sci book, MWI is the way to go.

  93. Armin Says:

    #88 Bruce Smith
    >Just replace “branch” with “measurement outcome” and forget about the MWI, and the new sentence means the same as the old one.

    So if we are to forget about the MWI just in the cases it has difficulty interpreting, why not forget about it altogether?

    #91 wb

    >The clicks can happen at any time, so according to mwi at every point in time t the world splits into two worlds, one with a click and one without (and after a click the worlds split again for the next click etc.)

    If one of these worlds has a probability of 1% and the other of 99%, what does that mean under the MWI? And what does it mean under the probability distribution in my original question?

  94. gentzen Says:

    OMG #89: You meant “Bruce Smith #65”, not “Scott #65”, correct?

    I see Sabine’s model as the -We’re all bozos on this bus-interpretation of quantum mechanics. We are along for the ride and to look at the scenery.

    This makes me wonder whether you saw how I recently described my reaction to Sabine Hossenfelder’s book “Existential Physics”. (I won’t quote it here, because it is written in German, and I have no idea whether for example “It is indescribably fatalistic and passive” as a translation of “Es ist unbeschreiblich fatalistisch und passiv” captures the mood and emotion of the German phrase.)

    Let me quote instead from my later elaborations, which depend less on mood and emotion:

    One of my reactions to Sabine was that she seemed to trust the perceptions of her senses more than her inner perceptions of what she has done, is currently doing, and wants to do.

    Later it occurred to me that Heisenberg actually has a similar asymmetry, relying on his own observation and neglecting knowledge of his own actions:

    Only what he has observed is certain to be real. Describing his own actions as co-creators of reality never occurs to him. Or even stranger: If, for example, he were to use a polarization filter to ensure that certain polarizations occur less frequently than others, he instead considers it a kind of observation.
    This is like regarding my knowledge of my actions (past, present, and future) as an observation rather than as the fact of the reality of those actions.

  95. Scott Says:

    Gil Kalai #66: Thanks for your comments.

    Regarding 1): if you use stabilizer states and stabilizer measurements, the possible one-way communication complexity advantage that you can hope to achieve quantumly is at most quadratic. Whether you can achieve that is an interesting question to which I don’t know the answer. Does anyone else?

    With non-stabilizer states and stabilizer measurements, you can achieve an exponential quantum advantage — as in the Hidden Matching Problem (see my paper with Buhrman and Kretschmer), and as in our new setup as well.
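    (For readers who haven’t seen it, here is the Hidden Matching Problem, stated roughly and from memory, so check the Bar-Yossef–Jayram–Kerenidis paper for the exact parameters: Alice gets \(x\in\{0,1\}^n\), Bob gets a perfect matching \(M\) on \(\{1,\dots,n\}\), and Bob must output a triple \((i,j,b)\) with \((i,j)\in M\) and \(b = x_i \oplus x_j\). Quantumly, Alice can send the \(O(\log n)\)-qubit state

    \[
    |\psi_x\rangle \;=\; \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (-1)^{x_i} |i\rangle ,
    \]

    which Bob then measures in a basis adapted to \(M\); classically, one-way protocols need \(\Omega(\sqrt{n})\) bits.)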

    Regarding 2) and 3): Yes, that’s what we’re doing, insofar as I understood what you wrote.

  96. Ian Says:

    Scott #95:

    Does this paper bear on your question?

    https://arxiv.org/abs/2506.19369

    Gottesman-Knill Limit on One-way Communication Complexity: Tracing the Quantum Advantage down to Magic

    Snehasish Roy Chowdhury, Sahil Gopalkrishna Naik, Ananya Chakraborty, Ram Krishna Patra, Subhendu B. Ghosh, Pratik Ghosal, Manik Banik, Ananda G. Maity

    A recent influential result by Frenkel and Weiner establishes that in presence of shared randomness (SR), any input-output correlation, with a classical input provided to one party and a classical output produced by a distant party, achievable with a d-dimensional quantum system can always be reproduced by a d-dimensional classical system. In contrast, quantum systems are known to offer advantages in communication complexity tasks, which consider an additional input variable to the second party. Here, we show that, in presence of SR, any one-way communication complexity protocol implemented using a prime-dimensional quantum system can always be simulated exactly by communicating a classical system of the same dimension, whenever quantum protocols are restricted to stabilizer state preparations and stabilizer measurements. In direct analogy with the Gottesman-Knill theorem in quantum computation, which attributes quantum advantage to non-stabilizer (or magic) resources, our result identifies the same resources as essential for realizing quantum advantage in one-way communication complexity. We further present explicit tasks where `minimal magic’ suffices to offer a provable quantum advantage, underscoring the efficient use of such resources in communication complexity.

    Seems like you can’t even get quadratic advantage…

  97. OhMyGoodness Says:

    gentzen #94

    I meant to write Scott #85, sorry.

    I translated your text from your first link, and yes, we have very similar views, as you wrote there and as continued in your elaboration. It may be at least partially because, as wb alludes to in #92, the requirements for clicks and book sales result in adopting odd positions and making claims about scientific facts that are not true. If your well-being comes to depend on clicks and sales, then that likely has an impact on your views. She is an entertainer/scientist now.

    I wasn’t aware of Heisenberg’s views but it must be the case that the greatest minds of that era hadn’t yet worked through all the implications of quantum mechanics. I respect their historical greatness fully but they didn’t have the benefit of decades of confirming experiments and debate that we enjoy today.

    I agree that people should be free to believe as they like, but in a scientific setting, stating that beliefs are facts when they are not facts is not acceptable.
    I agree with you that our actions shape the future. That is why 400 million years of evolution resulted in a species that can plan actions based on imperfect and non-deterministic expectations of what the results of those actions will be.

  98. Scott Says:

    Ian #96: Ah, extremely interesting, thanks for finding that! While that paper is clearly relevant, I’m not clear whether it answers the question, since it only discusses stabilizer groups of prime dimension, and never deigns to mention whether any of it carries over to the most common case of dimension 2^n. And FWIW, GPT5-Thinking wasn’t sure either! Anyone else?

  99. gentzen Says:

    OhMyGoodness #97: Thanks. I don’t think that “Heisenberg hadn’t yet worked through all the implications of quantum mechanics” is the reason for his strong emphasis on observation. My guess is rather that Bohr’s pragmatic orthodox interpretation with equal emphasis on preparation and measurement is hard to make intuitive and precise. Heisenberg’s subjective Copenhagen interpretation on the other hand is clear and easy to understand (in a certain way). It has its drawbacks, but Heisenberg preferred to be understandable over Bohr’s attempts to never say anything wrong.

    I don’t know either how to integrate both action and observation into “truth conditions” for an interpretation. If we look at the electric field for example, the classical way to give it physical meaning is via the force it would exert on a test charge. I don’t see how I could use “action” instead to provide similar physical meaning. A possible “action” would be to let a certain current flow along some predefined curve, think of an electromagnet for example. This gives a magnetic field (instead of an electric field), but I still fail to make it precise in which sense this would give physical meaning to the magnetic field.

    One idea I have how observation could make “room” for other “truth conditions” of reality is to take inspiration from how our eyes communicate with the rest of the brain: Early on, there is a prediction for what the eyes should see, and they only communicate the difference between that prediction and the actually seen pattern to the rest of the brain.
    For the example with the electric field and the test charge, I could make predictions for what I would expect to measure with the test charge based on what I know of my actions in the past and also what I know of past observations, and then only update those predictions based on test charge measurements that I really did, and ignore all those potential test charge measurements that I did not actually perform.

  100. William Kretschmer Says:

    Scott #98: It looks to me like this paper crucially uses the property that in prime dimension d, the d+1 stabilizer measurement bases are mutually unbiased. This means that if you measure a stabilizer state of prime dimension d in a stabilizer basis, the outcome is either (1) deterministic, when you measure in the same basis as the state, or (2) uniformly random over the d possible outcomes. So in effect, you only need to communicate the value of that state in that one basis, while ensuring that sampling in any other basis is uniformly random. The main contribution of the paper is finding a clever way to do so with only log(d) bits of communication and shared randomness. The mutually unbiased property fails in composite dimensions (even d=4), so the communication strategy does not seem to generalize.
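    (As a quick numerical sanity check of that mutual-unbiasedness property, here is a small Python sketch; it uses the standard generalized-Pauli construction in prime dimension d, where the eigenbases of Z, X, XZ, …, XZ^{d-1} give the d+1 stabilizer bases. This is only an illustration, not anything from the paper under discussion.)

        import numpy as np

        def check_mub(d):
            """Verify that the eigenbases of Z, X, XZ, ..., XZ^{d-1} in prime
            dimension d are pairwise mutually unbiased: |<a|b>|^2 = 1/d."""
            w = np.exp(2j * np.pi / d)
            Z = np.diag(w ** np.arange(d))        # clock matrix
            X = np.roll(np.eye(d), 1, axis=0)     # shift matrix: X|j> = |j+1 mod d>
            ops = [Z] + [X @ np.linalg.matrix_power(Z, k) for k in range(d)]

            # Each operator has d distinct eigenvalues, so np.linalg.eig returns
            # (numerically) orthonormal eigenvectors; the columns are the basis vectors.
            bases = [np.linalg.eig(op)[1] for op in ops]

            for i in range(len(bases)):
                for j in range(i + 1, len(bases)):
                    overlaps = np.abs(bases[i].conj().T @ bases[j]) ** 2
                    assert np.allclose(overlaps, 1.0 / d, atol=1e-8), (i, j)
            print("d =", d, ":", len(bases), "bases, all pairwise mutually unbiased")

        for d in (2, 3, 5):   # prime dimensions; the construction breaks down for d = 4
            check_mub(d)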

  101. Scott Says:

    William Kretschmer #100: Ah, thanks! So, the possibility of a quadratic separation between R1 and Q1 based on n-qubit stabilizer states and measurements remains open as far as you know?

    Two things that might affect the answer:

    (1) Do we allow sampling and relational problems, or only decision problems?

    (2) Do we allow non-stabilizer classical computations to generate the preparation and/or measurement stabilizer circuits given the inputs?

  102. Bruce Smith Says:

    Armin #93: in your reply to my #88, it seems you understood my phrasing in the opposite way from what I intended.

    By “forget about the MWI”, I did not mean “because this is a difficult case for it” (I don’t think it is).

    I meant, “because your question has the same answer, whether or not you believe in the MWI”.

    That is, if someone says “I predict measurement A will have result A1 with probability p1”, what they mean does not depend at all on whether they believe in the MWI or not.

    And if they do believe in it, then if they use the phrase “in the branch A1” (or more pedantically, “in the branch labelled A=A1”), they mean the same thing as if they had used the phrase “in the case where A had result A1”.

    If what is bothering you is the situation where p1 is an irrational number, then I don’t understand why that would bother you, but if it does, that is just an issue with “the meaning of an irrational probability” — still nothing to do with the MWI.

    What the MWI is “for”, historically, is resolving a confusion about whether we need to come up with a physical law governing “how/when/why the wavefunction collapses when we make a measurement”. It says, “that does not even make sense, because a wavefunction collapse is not a physical event, but just a change in point of view (akin to a change of coordinates, or to which subspace of a configuration space we decide to care about)”. Therefore we can make progress on it by working out rules for correctly changing our point of view (in more and more complex situations, but with no need to cover all possible situations), rather than trying to make up new physical laws (which must apply in any possible situation from the start, to be correct). The cases we’re discussing here are all “simple easy cases where the MWI should, and does, say the same thing as if we weren’t using it, in a trivial way”. In fact, they are all cases where nothing unique to quantum mechanics is involved (they can be described classically with nothing missing).

  103. Bruce Smith Says:

    minor correction to my #102 – “subspace of a configuration space” does not make sense. I mixed up the classical version, “subset of a configuration space” (leaving out some technicalities for also tracking probabilities), with the quantum version, “subspace of a Hilbert space”.

  104. Scott Says:

    Following up on my comment #101:

    As a simple observation, if we allow sampling problems—and an arbitrary stabilizer circuit associated to every possible input of Alice or Bob, and (edited to add) no shared randomness between Alice and Bob—then we can indeed get a quadratic separation between randomized and quantum one-way communication complexities.

    Let Alice’s input be a description of an n-qubit stabilizer state |ψ⟩, and let Bob’s be a description of an n-qubit stabilizer circuit C. Let Bob’s task be to sample from the distribution D_{C|ψ⟩} corresponding to C|ψ⟩. Clearly Bob can do this if Alice sends |ψ⟩ itself over a quantum channel. But the ability to do it for all possible C implies the ability to learn |ψ⟩. Since there are exp(n²) distinct n-qubit stabilizer states, this requires Ω(n²) classical bits of communication.
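    (For the counting step: the number of distinct pure n-qubit stabilizer states has a standard closed form, which I’ll quote just to make the n² explicit,

    \[
    |\mathcal{S}_n| \;=\; 2^{n}\prod_{k=1}^{n}\bigl(2^{k}+1\bigr),
    \qquad
    \log_2 |\mathcal{S}_n| \;=\; \frac{n^2}{2} + O(n),
    \]

    so pinning down |ψ⟩ among all of them already costs Ω(n²) classical bits.)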

    I conjecture that such a separation should also be possible for relational problems. As a candidate for this, modify the above task to outputting any n-bit string in the support of D_{C|ψ⟩}. While this is far from obvious, it seems plausible to me that, given any function f mapping an arbitrary stabilizer circuit C to some element in the support of D_{C|ψ⟩}, from f we can determine Ω(n²) bits about |ψ⟩.

  105. Ajit R. Jadhav Says:

    gentzen # 99:

    […] I don’t see how I could use “action” instead to provide similar physical meaning. […]

    Sure, one could. The answer is “well known,” though deeply buried under all those categories and definitions of the formalism. That’s why, although the matter is “simple” enough (at least conceptually), it does become a bit complicated to explain.

    As to the “simplicity”: I got this matter cleared up for myself only while teaching a course which involved the calculus of variations. [The course didn’t require it, but one beautiful aspect of having to face an entire class of students is that you can no longer procrastinate learning the subject for yourself (at least a couple of paces farther down), even if only to save your face! [at least in front of them!]]

    One reason the same question had occurred to me, and stayed with me for too long: I relied, for too long, on pop-sci accounts (even the respectable ones, like the books by Prof. Morris Kline, and by Prof. Kolmogorov et al.)

    Anyway, thanks for sharing your thoughts here. … Reading through them not only refreshed the matters, but also gave me an idea of at least one section in a paper, if not an entire paper. [That’s why, I must stop here, but feel free to get in touch with me by email. I could always share the new idea and discuss the matter further via emails with you. Or, with any one … well, at least with any one commenting here, on this thread! (I could even collaborate!)] Else… Else, you have to wait [what else?]

    Best,
    –Ajit

  106. Armin Says:

    #102 Bruce Smith

    “What the MWI is “for”, historically, is resolving a confusion about whether we need to come up with a physical law governing “how/when/why the wavefunction collapses when we make a measurement”. It says, “that does not even make sense, because a wavefunction collapse is not a physical event, but just a change in point of view…”.

    How does the MWI help resolve any confusion when the reply to my original question is “forget about MWI”… “because your question has the same answer, whether or not you believe in the MWI”?

  107. OhMyGoodness Says:

    gentzen #99

    I did some reading that further convinced me that you know far more than I do about Heisenberg, and about his differences with Bohr, and so I defer to your judgement.

    My background is engineering, so I am biased toward the pragmatic. I don’t understand some of your statements. Yes, I can map an electric field and know that the strengths I record can be used in calculations to construct safe real objects that are impacted by the field strengths I measure. So yes, the electric field is real insofar as it impacts other real classical objects. It is not visible, but many real classical things are not visible. My actions were to acquire a measurement device, to acquire appropriate instrumentation, to develop an appropriate measurement program, and then to observe and record the output of the device. I am sure it is obvious to you, but if it is not too tedious, will you please explain further the question as to what is real or not.

    In general, I accept that evolution resulted in a brain sufficient to survive in a risk-laden world. This includes some remembrance of the past to be used to help survival in the present and to help develop reasonable plans for the future that enhance the probability of survival. We have mental abstractions of real classical objects but are able to measure them as appropriate to determine unabstracted real characteristics.

    Evolution did not equip us to abstract quantum objects in the sense of a visual-like representation, and so those must be abstracted with mathematical formalism. The quantum observations we make are real in a similar manner to classical measurements, and the measurements have associated probability distributions beforehand, but the observation changes the nature of the object from quantum to classical.

    Sorry for all this obvious text but maybe you can see my thinking and explain to me what I am missing with respect to quantum mechanics and action vs observation. Thanks.

  108. jonas Says:

    Because of my name, I find your phrasing of the objection in comment #30 unfortunate: God may have very good reasons for not smiting the bad guys.

  109. Scott Says:

    jonas #108: But given fine control over the initial conditions of the universe, surely God could do something more interesting than cause correlations to occur in measurements of entangled particles in exactly the places where quantum mechanics predicted those same correlations much more elegantly? 😀

  110. John Duffield Says:

    The comments were an interesting read, Scott. I am surprised that so many people seem to believe in the Many Worlds Interpretation. I remember a David Deutsch video where he tried to justify it using the double-slit experiment. I think there’s a simple explanation for that which involves the optical Fourier transform. Art Hobson gave a similar explanation in his 2013 paper “There are no particles, there are only fields”. However, it would seem that people just want to believe in MWI, along with other mystical things, like time travel, the parallel antiverse, and 72 virgins in paradise. They want to believe in entanglement too. As for your paper, ouch, it’s 42 pages long. It was bad enough wading through Bell’s 5-page 1964 paper “On the Einstein Podolsky Rosen Paradox” to spot the trick. Then if I find a chink in your armour, it will be difficult to persuade the believers that spookiness is not required. So I’ll see how I get on. Meanwhile, please do look into optical computing using displacement current. It will be worth your while. See where Feynman said how absurd it gets. Only it isn’t.

  111. Scott Says:

    John Duffield #110: Do you expect that QCs running Shor’s algorithm will be able to factor 2000-digit integers? If and when that happens, will it just be another “trick,” or will it force you to accept the physical reality of an exponentially large Hilbert space (whether people talk about it using MWI or non-MWI language)?

    Also, the main part of our paper is only 5 pages! As is the convention nowadays, there are then dozens of pages of “supplementary material” with all the calculations and experimental details.

  112. fred Says:

    John Duffield
    “However it would seem that people just want to believe in MWI. Along with other mystical things, like time travel, the parallel antiverse, and 72 virgins in paradise. “

    Scott
    “Do you expect that QCs running Shor’s algorithm will be able to factor 2000-digit integers? If and when that happens, will it just be another “trick,” or will it force you to accept the physical reality of an exponentially large Hilbert space”

    Actually, John will be able to discover by himself the validity of MWI a few decades from now, when he’s the oldest man alive on record, having somehow beaten (against all odds) an endless chain of terminal diseases and accidents.
    Remember folks, if MWI is true, in all likelihood you’ll live at least as long as the oldest person alive in the branch where you’re currently reading this message!

  113. jonas Says:

    Scott: yes, I’m satisfied that the argument works against superdeterminism, or against fossils planted into a six-thousand-year-old Earth; I’m just nitpicking one little phrase in how you explained the argument.

  114. OhMyGoodness Says:

    A short addition to my post #107-

    Once I measure electric field intensities and map the field, then of course I have a vision-like mental image of the field.

    Actually when I made the statement about the great minds of that era in response to statements concerning Heisenberg I was thinking primarily about Einstein and what would be the result if he had the benefit of all the quantum experimental results that have been obtained since his death.

  115. John Duffield Says:

    Scott: no. I do not expect QCs running Shor’s algorithm will be able to factor 2000-digit integers. I have a computer science degree. I know how computers work. I have seen how they have advanced over the decades. My phone has more computing power than NASA had when they put a man on the moon. But in the last 54 years, quantum computers have delivered nothing. I’m sorry Scott, but I now know a great deal about fundamental physics. Enough to side with Einstein. I will leave it at that.

     
    Apart from this: good luck with your future endeavours. On certain things, we have your back. But please do carve out some time for optical computing. Note this in the Wikipedia article on displacement current: “Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field.” It’s more fundamental than conduction current. It’s why the electron has a magnetic moment.

  116. Scott Says:

    John Duffield #115: Ok whatever, I just wanted your prediction for what’s going to happen. Other than that, my only interest is that you don’t wiggle out later by saying that the paper reporting quantum factoring is too long or whatever — as you’ve done in this existing case where, with considerable effort, we did actually get something to work, and it worked exactly the way quantum mechanics said it would work, a way you couldn’t even begin to understand if you don’t accept that quantum mechanics is true and classical physics is false.

  117. Bruce Smith Says:

    Armin #106:

    > How does the MWI help resolve any confusion when the reply to my original question is “forget about MWI”… “because your question has the same answer, whether or not you believe in the MWI”?

    As I already stated, your question was a very simple kind, where the MWI trivially reduces to explaining the same simple thing in isomorphic terms as without the MWI. Much like, if you applied special relativity to explain the forces and accelerations involved in the motion of cars or baseballs, it would simply reduce to making the same predictions as Newtonian mechanics. For special relativity to matter, you should apply it to things moving very fast. For MWI to matter, you should apply it to trying to understand quantum systems in which one part effectively performs measurements on another part, but using a complex multi-part process over an effectively large region of space and time, for example to an attempt to model quantum mechanically (in an abstract sense) a system containing several people (or other agent-like things).

    You might try asking google these sorts of questions, instead of me — its generative AI is getting pretty good at answering questions whose answers are part of common knowledge that has been written up well many times. It is also less biased than I am, describing the MWI as merely one of several approaches to the “quantum measurement problem”, rather than the clearly simplest and most general one.

    Fred #112:

    I want to state for the record that it is possible to both believe in the MWI, and not believe in such a strong form of the anthropic principle as you seem to have expressed there (perhaps jokingly??). Even if you think the MWI means that “things that could have happened, but didn’t, are in some sense real”, it doesn’t seem wise (to me) to ignore our experience that probability seems to be a real thing in our world, and our intuitive sense that our future welfare depends on taking future probabilities seriously. To put it another way, the MWI is not a good reason to consider a situation in which your probability of survival is very low, to be an acceptable one (assuming you want to survive, and have any choice about that probability). I admit that this (like all other fundamental beliefs) is a matter of faith, not anything provable.

    Besides, “in all likelihood” is clearly wrong, the most you could sensibly claim is “in a very small likelihood, which is still real”.

  118. Max Madera Says:

    @Bruce Smith #112
    I do not understand what is gained by using MWI. One certainly needs both an observer and a collapse of the wave function to make any predictions with the correct statistics for oneself (that is, the “I” that identifies himself out of the many branches of the universe wave function). I have never seen any MWI practitioner predict the outcome of any experiment with the God’s-point-of-view wave function, if that has any meaning. All the MWI believers who put videos on the internet start with a very specific branch of the universe, corresponding to what they observe presently, that is, identifying themselves as the observer who has collapsed the universe wave function from their point of view. From that moment on, it is as if the rest of the wave function can be safely omitted FAPP, exactly (and just as subjectively) as an observer would do in standard QM.

  119. flergalwit Says:

    John Duffield #110, for me at least it’s exactly the reverse of what you say. I lean towards MWI not because I want to believe in it, but because in my judgement it’s the most conservative interpretation of QM. Frankly it’s somewhat boring, being a return to naïve realism (a kind of Copernicanism on steroids), and I think things would be far more interesting if MWI turns out to be wrong. I’d therefore love to see a sound argument against MWI, but imo those attempts I’ve seen fall short.

    Scott, first of all congratulations on what looks like an important result. I’d like to comment more on it if that’s still welcome, but I’m still absorbing the paper (and how it fits in with what was previously known).

    I hesitate to comment on the superdeterminism angle, but I feel compelled to. I’m very disconcerted that you are repeating the line about superdeterminism positing a conspiracy theory of initial conditions yet again, despite the multiple explanations on this blog (and elsewhere), over the months and years, that this is a misrepresentation of what superdeterminist proponents (at least the ones under discussion) are advocating.

    In one sense I don’t know why I’m so bothered by this, because I’m far from being a supporter of superdeterminism. Perhaps it’s because I’d actually like a proper refutation of superdeterminism, and one that relies on strawman misrepresentations isn’t of any help to anyone in the long run, least of all opponents of superdeterminism.

    And if we’re really going to go down the route of making creationist comparisons, frankly the “initial conditions conspiracy theory” misrepresentation smacks more than a little of the creationist “critique” of evolution as “an explosion billions of years ago, after which the pieces randomly came together to form the Earth and all the structure within it, including life”.

    Of course that analogy only goes so far. Evolution is correct (or at least as correct as a scientific theory can be), whereas superdeterminism is almost certainly wrong (being weakly motivated and overly speculative). At any rate, it’s certainly not established science, unlike evolution. But the misrepresentation point is the same.

    When Peter Woit was blatantly misrepresenting your views on a certain other controversial topic that’s come up recently, I wrote a polite comment to his blog pointing this out. In response Peter deleted my post, publicly misrepresented its contents, and told me to “f*** off”. (I believe the same has happened to other commenters.) This was of course hugely disappointing. And yet I can’t say I’m shocked by it. Despite Peter’s many valid points and criticisms over the years, I’d never have said that accurate representation of his opponents’ views, with all their nuances, was one of his strengths.

    With you it’s different – I have a great respect for your integrity, which is why it is so baffling and disconcerting to see you repeat this line when superdeterminist proponents have disavowed it and have even written papers refuting it.

    At the very least I hope in future you can say something like “superdeterminism would have to *imply* a cosmic conspiracy in the initial conditions in order for it to work”. (And then explain why this is catastrophic for its merit as a scientific theory.)

    I still think the part in quotes is wrong – Adam Treat and I explained why it’s wrong in the comments on https://scottaaronson.blog/?p=8705, and I believe others have done likewise – but it’s very different to saying that superdeterminism is *postulating* this conspiracy theory. Proponents can be wrong about the *implications* of their theory after all, and it’s something there can be legitimate scientific disagreements about. But saying superdeterminism is *proposing* an initial conditions conspiracy explanation (as if this description is something its proponents would actually embrace), before knocking this idea down, is simply a strawman.

    I hope this comment is taken in the spirit intended, from someone with a great deal of respect for your scientific work and public outreach.

  120. Armin Says:

    #116 Scott

    I hope I don’t come across as pedantic, but it seems to me that in the spirit of scientific inquiry it would be better to rephrase the last sentence as something to the effect that quantum mechanics is a whole lot closer to truth than classical physics. Otherwise we are liable to the charge that we treat it like a religion.

  121. wb Says:

    @jonas

    If G** knows that the evildoers are evil but does nothing about it, would it not make Him complicit?

  122. gentzen Says:

    OhMyGoodness #107: Thanks for your interest.

    I did some reading that further convinced me that you know far more than I do about Heisenberg, and about his differences with Bohr, and so I defer to your judgement.

    Yeah, I used to defend Heisenberg and blame Bohr. One reason is simply that I wanted to understand the positions and opinions of the founders, and was unhappy with Bohr’s impenetrable prose. So when people online attacked Heisenberg for one of his mistakes, and explained why they preferred Bohr, I always pointed out how ridiculous it is to try to never be wrong. (One resource I liked early on was The Information Philosopher (on Werner Heisenberg), because it provides easy access to many original accounts. The Stanford Encyclopedia of Philosophy is also a great resource.) Besides Bohr, I always wondered how much Henry P. Stapp is to blame for all the confusion about QM and the consciousness causes collapse misconceptions.

    To provide evidence for my claims about Heisenberg’s focus on observation (and his neglect of preparation, contextuality and “action”), let me quote from Heisenberg’s unpublished reaction “Ist eine deterministische Ergänzung der Quantenmechanik möglich?” to the EPR paper, which Pauli had requested Heisenberg to write (“Da durch die Publikation eine gewisse Gefahr einer Verwirrung der öffentlichen Meinung – namentlich in Amerika – besteht, so wäre es vielleicht angezeigt, eine Erwiderung darauf ans Physical Review zu schicken, wozu ich Dir gerne zureden möchte.” – roughly: “Since the publication poses a certain danger of confusing public opinion – especially in America – it would perhaps be advisable to send a reply to the Physical Review, which I would gladly encourage you to do.” Heisenberg had asked Bohr on 28 August 1935, and before that v. Laue, for their opinion, which apparently was negative, because Heisenberg didn’t even try to publish it. My source is “WOLFGANG PAULI, Wissenschaftlicher Briefwechsel mit Bohr, Einstein, Heisenberg u.a. Band II: 1930-1939”, around page “[414] Heisenberg an Pauli 409”):

    § 1. Quantum mechanics characterizes a physical system by a wave function in a configuration space whose dimensionality is determined by the number of degrees of freedom of the system in question. The square of the absolute value of the wave function at a particular point in this space indicates the probability that the intuitive physical quantities designated by the coordinates of the space assume the specific values corresponding to that point when the system is observed for these values. The formalism of quantum mechanics is thus based on the assumption that a physical system can be characterized by classically intuitive determinants, and that, as in classical theory, it can have an objective meaning, independent of the observation process, to speak of the actual value of a particular physical quantity, e.g., the “position of the electron.”

    and 8 pages further down

    Since knowledge of the quantum mechanical wave function can be obtained from a suitable observation of the system and is sufficient to determine those observations of the system for later times whose outcome cannot be predicted, one can speak of an “observation context” characterized by knowledge of the wave function. The example just discussed shows that the same intuitive event can belong to different observation contexts—in contrast to classical physics, where there is only a single observation context. The experimental experience laid down in quantum mechanics has further shown that the observation of a system generally transitions discontinuously from one observation context to another. The causal process can only be traced within a specific observation context; in the discontinuous transition from one observation context to another (and “complementary” in the Bohr sense), only statistical predictions are possible. The possibility of various complementary observation contexts, unknown to classical theory, is thus responsible for the occurrence of statistical laws.

    Here are Heisenberg’s original German paragraphs:

    § 1. Die Quantenmechanik charakterisiert ein physikalisches System durch eine Wellenfunktion in einem Konfigurationsraum, dessen Dimensionszahl durch die Anzahl der Freiheitsgrade des betreffenden Systems bestimmt ist. Das Quadrat des Absolutbetrages der Wellenfunktion an einem bestimmten Punkt dieses Raumes gibt die Wahrscheinlichkeit dafür an, daß die durch die Koordinaten des Raumes bezeichneten anschaulichen physikalischen Größen die dem Punkt entsprechenden bestimmten Werte annehmen, wenn das System auf diese Werte hin beobachtet wird. Dem Formalismus der Quantenmechanik liegt also die Annahme zu Grunde, daß ein physikalisches System durch klassisch-anschauliche Bestimmungsstücke charakterisiert werden kann und daß es, wie in der klassischen Theorie, einen vom Beobachtungsvorgang unabhängigen objektiven Sinn haben kann, von dem tatsächlichen Wert einer bestimmten physikalischen Größe, z. B. vom “Ort des Elektrons” zu sprechen.

    Da die Kenntnis der quantenmechanischen Wellenfunktion aus einer dazu geeigneten Beobachtung des Systems gewonnen werden kann und dazu ausreicht, für spätere Zeiten diejenigen Beobachtungen am System festzulegen, deren Ergebnis nicht vorhergesagt werden kann, so kann man von einem durch die Kenntnis der Wellenfunktion charakteristischen “Beobachtungszusammenhang” sprechen. Das eben besprochene Beispiel zeigt, daß das gleiche anschauliche Geschehen zu verschiedenen Beobachtungszusammenhängen gehören kann – im Gegensatz zur klassischen Physik, in der es nur einen einzigen Beobachtungszusammenhang gibt. Die in der Quantenmechanik niedergelegte experimentelle Erfahrung hat weiterhin gezeigt, daß die Beobachtung eines Systems im allgemeinen unstetig von einem Beobachtungszusammenhang zu einem anderen überführt. Der kausale Ablauf kann nur innerhalb eines bestimmten Beobachtungszusammenhangs verfolgt werden, beim unstetigen Übergang von einem Beobachtungszusammenhang zu einem anderen (und dazu im Bohrschen Sinn “komplementären”) sind nur statistische Vorhersagen möglich. Die der klassischen Theorie unbekannte Möglichkeit verschiedener komplementären Beobachtungszusammenhänge ist also für das Auftreten statistischer Gesetze verantwortlich.

    The German original is even more focused on observation; for example, “classically intuitive” was originally “klassisch-anschauliche”. So maybe subtle details of the German language also played a role in blinding Heisenberg to the roles of preparation and action.

    OhMyGoodness, let me try to answer your question more specifically:

    My actions were to acquire a measurement device, to acquire appropriate instrumentation, to develop an appropriate measurement program, and then to observe and record the output of the device. I am sure it is obvious to you, but if it is not tedious, will you please explain further the question as to real or not.

    For physical experiments, “measurement devices” alone are not enough. One also needs a laboratory which lets one sufficiently isolate a physical system, so that predictions can be made and verified with the measurement devices. This “sufficiently isolate” is a reality established by our actions, which is complementary to the reality provided by the pointers or digital records of the measurement devices.

    The action to “acquire” devices, on the other hand, is on a kind of meta-level, which is of limited help for discussions of how to provide physical meanings and connect them to the formalism.

  123. Lieuwe Vinkhuijzen Says:

    Hi Scott,

    Thanks for the post. I would like to better understand the setup of the experiment and to understand if some alternative experiments I thought of while reading the paper make any sense. I have numbered them 1,2,3. Any thoughts are very much appreciated.

    I initially wondered why Bob is instructed to execute a circuit U, since performing that unitary does not change the probability distribution of the resulting state, given that Alice’s state is Haar random. Then I realized that, of course, if we forget to instruct Bob to execute a unitary, then Alice can just send Bob 12 bits of advice, namely the highest-probability computational-basis outcome of her completely known state; and then Bob outputs Alice’s advice verbatim to the referee. But since Alice doesn’t know what Bob’s unitary is, she must send a very good classical description of her quantum state, hence the lower bound.

    I nevertheless wonder if we can obtain a one-way communication lower bound in a different setting where only Alice gets an input, and Bob gets no input.

    Imagine a confused quantum skeptic who objects to the experimental setup thus: the reason that Alice needs to send so many bits is only because of the way the experiment is set up, namely, it forces Alice to anticipate Bob’s many possible unitaries; for otherwise, if Alice had known Bob’s unitary, then the task would be easy for Alice, since the skeptic postulates that real-world quantum states all admit simple descriptions.

    (This may not be the optimal articulation; perhaps this quantum skeptic’s viewpoint can be further steelmanned)

    Here’s my idea for a communication protocol where only Alice gets an input: Alice sends Bob an $n$-qubit Haar-random state, and then Bob uses the Haar state as input for an $n^2$-bit randomness extractor. Let me flesh this out. Bob’s quantum circuit takes as input $n$ qubits and $n^2-n$ ancilla qubits initialized to $|0\rangle$. Then he applies a unitary (which we will call a “randomness extractor”) to all $n^2$ qubits; lastly he measures all $n^2$ qubits in the computational basis; the measurement results are his output to the referee. The unitary that implements the randomness extractor is known beforehand to Alice, Bob, and the referee.

    Then the point is that, even though both Alice and Bob know the unitary beforehand, Bob is required to output so many random bits that it’s infeasible for Alice to properly advise Bob without telling him verbatim what he should output (which requires sending $n^2$ bits of classical information) and without describing her quantum state (which requires sending $2^{O(n)}$ bits).

    I give an example randomness extractor just to better convey the idea without claiming that it is the best possible randomness extractor. Bob starts with a layer of Hadamards; then he applies a controlled-Z gate to half of all pairs of qubits, effectively forming a pseudo-random graph; then he applies another layer of Hadamards; lastly he applies a computational basis measurement to each qubit. In the final quantum state, the input state has become very entangled with the ancilla qubits. My hope is that these measurements yield a probability distribution on $n^2$ bits which cannot be succinctly described even if you knew the input state and the unitary. (No doubt there are better constructions. For example, the layer of controlled-Z gates could be replaced with a random Clifford circuit, or the graph could be a low-degree expander; whatever is most convenient to obtain a lower bound.)
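    In case it helps make this concrete, here is a rough numpy sketch of the kind of circuit I have in mind, at a toy size. The qubit ordering, the seed, and the choice of a random half of the CZ pairs are purely illustrative assumptions on my part, not a claim about the best extractor:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)

        def haar_state(n):
            # Haar-random n-qubit state vector (normalized complex Gaussian)
            v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
            return v / np.linalg.norm(v)

        def hadamard_all(psi, m):
            # apply a Hadamard gate to each of the m qubits
            H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
            psi = psi.reshape([2] * m)
            for q in range(m):
                psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [q])), 0, q)
            return psi.reshape(-1)

        def cz_on_pairs(psi, m, pairs):
            # CZ = sign flip on basis states where both qubits of a pair are 1
            idx = np.arange(2**m)
            phase = np.ones(2**m)
            for a, b in pairs:
                both = ((idx >> (m - 1 - a)) & 1) & ((idx >> (m - 1 - b)) & 1)
                phase *= (-1.0) ** both
            return psi * phase

        n = 3                      # Alice's qubits (toy size)
        m = n * n                  # total qubits after adding n^2 - n ancillas
        anc = np.zeros(2**(m - n)); anc[0] = 1.0
        psi = np.kron(haar_state(n), anc)

        # "extractor": H layer, CZ on a random half of all pairs, H layer
        pairs = [p for p in combinations(range(m), 2) if rng.random() < 0.5]
        psi = hadamard_all(psi, m)
        psi = cz_on_pairs(psi, m, pairs)
        psi = hadamard_all(psi, m)

        probs = np.abs(psi)**2
        outcome = rng.choice(2**m, p=probs / probs.sum())   # Bob's m-bit output
        print(format(int(outcome), f"0{m}b"))

    At the real sizes this brute-force simulation is of course hopeless; it’s only meant to pin down the circuit I’m describing.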

    Then I conjecture that the classical communication complexity would be on the order of $n^2$, since that’s how many bits of advice Alice would have to send to Bob if she wanted to tell him verbatim what he should output, and since it’s difficult to see how she can do any better. This would show that, if the universe were secretly classical, then it would have to store at least $n^2$ bits to describe an $n$-qubit quantum state, which would be a separation. If I understand correctly, showing that the universe must be using much more than $n$ classical bits to describe an $n$-qubit quantum state was also the philosophical impetus to perform your and Quantinuum’s experiment. You can enlarge the separation to $\mathrm{poly}(n)$ vs $n$ instead of $n^2$ vs $n$ by making Bob’s circuit act on (and measure) more ancillary qubits. I acknowledge that the hardware requirements, and the classical-vs-quantum separation, of my protocol are worse than yours: the only “advantage” is that Bob receives no input.

    Question 1. Is there any theoretical-CS or philosophical merit to such a one-way communication protocol, where only Alice gets an input? And if you managed to make it work in the lab, would it demonstrate anything beyond what your and Quantinuum’s experiment demonstrated?

    I also wonder to what extent we can further remove the experiment’s reliance on Haar random states. The paper already gives some thought to this, being content to approximate the Haar random state using a brickwork circuit. A quantum skeptic might object that Haar states were not actually seen in the lab… Could they still object that even deeply entangled 12-qubit brickwork states were not actually seen in the lab, given the results of the experiments? In any case it is interesting to obtain exponential classical communication lower bounds for simpler families of quantum states. For example, so-called subset states are described in the paper “QMA with subset state witnesses” by Grilo et al:

    $$|\phi_S\rangle = \frac{1}{\sqrt{|S|}} \sum_{x\in S}|x\rangle\quad\quad\text{with }S\subseteq \{0,1\}^n$$

    This is an interesting set of quantum states because they are sufficient for Merlin: namely, we have QMA = SQMA, where SQMA (“Subset-QMA”) is like QMA except that Merlin is only allowed to give subset states to Arthur.

    Conjecture 2. The one-way classical communication complexity of DXHOG, but with a random Subset State instead of a Haar-random state, is exponential in the number of qubits.

    A quantum skeptic may also object that, even acknowledging that approximate Haar random states require exponential communication, it doesn’t matter, because such states do not occur in polynomial-time quantum algorithms, since only a polynomial number of gates is available to prepare the states. I suppose a response would be to show that the amount of classical one-way communication required by this protocol for an $n$-qubit state scales with the number of gates used to prepare it. Is that the only response?

    This made me wonder, unrelated to your newest paper, whether such a Haar random state is a useful resource for computation. In classical computing, a source of random bits seems to be useful, e.g., for primality testing and for polynomial identity testing. Even if P = BPP, it’s conceivable that access to randomness may bring down an algorithm’s runtime from $n^k$ to $n^{k-1}$. My analogous question is therefore:

    Question 3. Is there any quantum algorithm that benefits from receiving a Haar random state as ancillary input?

    Sorry for such a long comment! I hope you are well.

  124. fred Says:

    Max Madera

    ”I do not understand what is gained by using MWI […] that is, identifying themselves as the observer that has collapsed the universe wave function from their point of view”

    As I wrote already, it’s only a matter of time for anyone to eventually find themselves (i.e. a branchial copy of themselves swearing that they’re the one and only Max Madera) in circumstances where the probability of their existence and survival is vanishingly small; in other words, their age will be way out in the far tail of the lifespan distribution, based on pure luck.
    And the same applies as a species: the more our view of the universe expands, the more we are puzzled by the sheer luck required for humanity to find itself the only intelligent species in the universe… but this is not surprising if MWI is true and every unlikely occurrence of an intelligent civilization finds itself in its own branch, isolated from all the others.

  125. fred Says:

    The mix of MWI + consciousness (not coincidental?) is a guarantee that every conscious being/species eventually finds itself (as a surviving conscious copy) in circumstances that seem asymptotically unlikely. Which is both a curse and a blessing, and not a matter of choice.

  126. fred Says:

    Bruce

    “I admit that this (like all other fundamental beliefs) is a matter of faith, not anything provable.”

    That’s the thing: the wave function of the universe evolves the way it’s supposed to evolve, not contingent on what you believe or on your apparent choices. Because of consciousness, it’s only a matter of time before an ever-decreasing subset of branches hosts copies of you (all claiming and feeling themselves to be the one and only you, just as strongly as the one reading my post), and it will become painfully clear to them that tail values matter just as much as expectations.
    Again, it’s not about faith; it’s an inevitable and painful result of the mix of MWI and consciousness.
    We already get a taste of this every day, by sheer virtue of still being alive and thinking “boy, am I happy and lucky not to have been part of the ~20,000 humans who died yesterday”… many copies of you actually did die yesterday, and this too is an inevitable result of what you call probabilities.

  127. fred Says:

    There’s always the possibility that consciousness is a limited resource (like magical pixie dust), that we’re only maximally conscious in the branches of the MWI that have the biggest “weights” (in terms of where you could find yourself, individually), and that somehow the lights become very dim in the most unlikely branches (in terms of the probability of finding ourselves there), so that we turn into numb automatons in those places.
    But this would imply that I’m the only (or most) conscious entity in all “my” own branches, and everyone else there is an NPC. That would be disturbing too.

  128. Scott Says:

    flergalwit #119:

      I hesitate to comment on the superdeterminism angle, but I feel compelled to. I’m very disconcerted that you are repeating the line about superdeterminism positing a conspiracy theory of initial conditions yet again, despite the multiple explanations on this blog (and elsewhere), over the months and years, that this is a misrepresentation of what superdeterminist proponents (at least the ones under discussion) are advocating.

    Sabine and other superdeterminism proponents can produce unlimited quantities of verbiage about how they don’t choose to describe superdeterminism as a cosmic conspiracy theory in the initial conditions. The point is that, from my perspective, it still is, because there’s literally no other way to get the result they want (violation of the CHSH inequality without faster-than-light communication and without accepting that the world is quantum). From my perspective, all they do is change the rhetoric and emphasis—kicking up sand, throwing in p-adic numbers for some reason, etc. etc.—without any change to the on-the-ground reality.

    Incidentally, this is one thing I appreciate about ‘t Hooft, who started this whole business. Despite the utter insanity of the superdeterminist position, ‘t Hooft at least has the awareness and intellectual honesty to say, in effect, “oh yeah, of course it’s a giant conspiracy theory in the initial conditions; what’s wrong with that?”

  129. Scott Says:

    Armin #120:

      I hope I don’t come across as pedantic, but it seems to me that in the spirit of scientific inquiry it would be better to rephrase the last sentence as something to the effect that quantum mechanics is a whole lot closer to truth than classical physics. Otherwise we are liable to the charge that we treat it like a religion.

    Yes, sure: quantum mechanics is “true” in the same provisional sense that the heliocentric theory, general relativity, the germ theory, or any other historic scientific advance has been “true”—that is, dramatically closer to the truth than what preceded it.

    But there are two things I’d add to this:

    First, I’d say that QM has a better shot at finality than nearly anything else in the history of physics. That’s because QM, at its core, is not about the details of particles or fields or even spacetime; instead it’s a change to the rules of probability themselves. And when you start tinkering with the rules of probability, there really aren’t many self-consistent options. What’s incredible is that there’s any self-consistent option, other than classical probability theory.

    Second, if QM were eventually superseded by something else, I see no reason to believe that the something else would restore our original classical picture of the world, rather than taking us even further from it.

  130. OhMyGoodness Says:

    gentzen #122

    Thank you so much for a very nice answer, but it will likely take a few days for my response. I have some things I have to take care of prior to answering, and I need to give your answer a very close reading.

    Two things on the initial read-through raised immediate questions. The measurement of electrical fields in the classical world does not require a laboratory for isolation. In fact, the point of the usual measurement is to determine the in situ field at some specific location. I believe we are each speaking about different situations. I am speaking about common classical electrical fields and the usual measurement of those fields pursuant to some isolation or construction design activity, while you are speaking of conducting experiments under very specific conditions, with presumably small, difficult-to-measure, and easily disturbed parameters.

    I understand that, when conducting quantum experiments, at a minimum the quanta to be observed must be isolated to ensure no prior decoherence, but I still need to understand how actions prior to measurement could possibly result in decoherence. At this time, prior to further reading, it seems to me that if that were the case, you would never see wavelike properties in the two-slit experiment, because your prior actions would have decohered the photons at some earlier time.

    I will look through your references, think it over carefully (and against my current views), and then respond. Thank you for your very nice explanation.

  131. fred Says:

    Scott

    “Second, if QM were eventually superseded by something else, I see no reason to believe that the something else would restore our original classical picture of the world, rather than taking us even further from it.”

    Let’s face it, until general relativity and QM are unified, or, more fundamentally, we understand clearly the nature of space and time, which both sound like reasonable “quests”, there must be a few extra surprises in store for us.

  132. fred Says:

    OhMyGoodness

    “The measurement of electrical fields in the classical world does not require a laboratory for isolation. In fact, the point of the usual measurement is to determine the in situ field at some specific location”

    That’s in contradiction with the countless afternoons I spent in college, painstakingly collecting EM data points and estimating their associated measurement errors inside a Faraday cage! 😀

  133. William Kretschmer Says:

    Lieuwe Vinkhuijzen #123: Appending n^2 – n ancilla and then performing a general n^2-bit projective measurement is equivalent to applying a POVM on the original n-qubit state with 2^{n^2} possible outcomes. You’re essentially asking: by increasing the number of outcomes from 2^n to 2^{n^2}, can we get a quantum advantage without randomizing the measurement?

    Alas, it is known that every n-qubit POVM can be simulated via a combination of classical randomness and projective measurements on the n qubits plus an n-qubit ancilla register (https://arxiv.org/pdf/1609.06139). So, with shared classical randomness, Alice can simply send Bob the 2n-bit outcome of the randomly chosen projective measurement, which allows Bob to simulate the original POVM. Hence, you’ll get at most an n-qubit vs. 2n-bit quantum advantage.

    The situation is even worse if the task is something like DXHOG, where you only need to generate strings of large average XEB rather than sample from the correct distribution to small total variation distance. Indeed, the classical communication complexity of matching the quantum experiment becomes O(1). The rough idea is that for each of the possible n^2-bit outputs, there is some constant probability over the Haar measure that the measurement probability is larger than 2/(2^{n^2}). So Alice and Bob can simply assemble a list of O(1) randomly chosen n^2-bit strings before the protocol begins, and then when Alice gets her state, she sends Bob whichever string has the largest output probability.
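    To see where that constant comes from: for a Haar-random state, each fixed output probability is (very nearly) exponentially distributed with mean 1/N, where here N = 2^{n^2}, so the chance that it exceeds 2/N is roughly e^{-2} ≈ 0.14. Here is a quick Monte Carlo sanity check of that number; the sizes and the seed are my own toy choices, not anything from the paper:

        import numpy as np
        rng = np.random.default_rng(1)
        n, trials = 8, 5000
        N = 2**n
        hits = 0
        for _ in range(trials):
            v = rng.normal(size=N) + 1j * rng.normal(size=N)
            p = np.abs(v)**2; p /= p.sum()   # output distribution of a Haar-random state
            hits += p[0] > 2 / N             # any fixed string works, by symmetry
        print(hits / trials, np.exp(-2))     # both come out around 0.135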

      I also wonder to what extent we can further remove the experiment’s reliance on Haar random states. The paper already gives some thought to this, being content to approximate the Haar random state using a brickwork circuit. A quantum skeptic might object that Haar states were not actually seen in the lab… Could they still object that even deeply entangled 12-qubit brickwork states were not actually seen in the lab, given the results of the experiments?

    It’s crucial to emphasize that we did not simply run random high-depth 1D brickwork circuits and claim without proof that the resulting output distribution was “close enough” to Haar-random. Rather, we first sampled a Haar-random state and then variationally trained a circuit to prepare something with high fidelity. Based on the experimental outcomes, one can infer that we managed to prepare Haar-random states with average fidelity 0.427(13).

      Conjecture 2. The one-way classical communication complexity of DXHOG, but with a random Subset State instead of a Haar-random state, is exponential in the number of qubits.

    That should be provable, because random subset states have similar anticoncentration properties to the Haar measure. I’d expect the constants to be much worse, however.

      A quantum skeptic may also object that, even acknowledging that approximate Haar random states require exponential communication, it doesn’t matter because such states do not occur in polynomial-time quantum algorithms because only a polynomial number of gates is available to prepare the states. I suppose a response would be to show that the amount of classical one-way communication required by this protocol for an $n$-qubit state, scales with the number of gates used to prepare it. Is that the only response?

    Yes, it’s clear that scaling up to larger demonstrations of quantum information supremacy would require the ability to apply many more quantum gates, and thus higher gate fidelities! For that reason, we expect that any significantly larger separations will necessitate better hardware. Ultimately “polynomial” and “exponential” are asymptotics, so it’s not meaningful to say whether 516 2-qubit gates was poly(12) or exp(12).

      Is there any quantum algorithm that benefits from receiving a Haar random state as ancillary input?

    No, unless you require the output of the algorithm to have some particular correlation with the chosen state. The density matrix of a single Haar-random state is maximally mixed. Even if you give the algorithm many copies of a Haar-random state, the resulting density matrix is proportional to the projector onto the symmetric subspace, which is efficiently preparable.

  134. OhMyGoodness Says:

    fred #132

    I take it you weren’t on the Power EE curriculum. 🙂 Difficult to move a power substation into a Faraday cage. 😉

    Ya know, the quantum eraser experiments are astounding; they suggest that which-path information is the crucial factor, and that when that information is erased, wavelike photons return. It suggests to me we are missing some key ingredient in physics related to information.

  135. flergalwit Says:

    Fred #124, the quantum immortality argument used to bother me a lot, but in the end I convinced myself the inference had to be wrong even if MWI is correct. I still think it’s more subtle than a lot of people give it credit for though.

    But in brief:

    (1) If you ever find yourself apparently in an asymptotically unlikely branch of the wavefunction, it is actually far more likely that you are a Boltzmann brain, flickering into consciousness for a split second – because the latter has a far higher measure than the former, and is consistent with the same observations.

    (2) Furthermore, if you ever find you are orders of magnitude (tending to infinity) older than the next-oldest person, then what’s much more likely than either being a Boltzmann brain or being in an asymptotically unlikely branch of the wavefunction is that you’re actually still, unknowingly, in one of the main branches of the wavefunction. Perhaps the year is 1990, say, and you’re asleep, dreaming that you are a billion years old. Or perhaps, instead of dreaming, your senses and memory are broken for some other reason – and thus all bets are off.

    In short, I think there are no circumstances in which (even after conditioning on your existence and your apparent environment/memories etc.) you’d rationally deduce that you are in fact in an asymptotically unlikely branch of the wavefunction – even if (when!?) that is actually the case.

    That said I did wonder how the quantum immortality concept must have appeared from the point of view of someone like Stephen Hawking. He was someone who defied all expectations with how long he was expected to live. So if I had been him, I might have started to wonder as the years went on… despite the earlier logic. (Of course from our point of view, this explanation for Hawking’s long life makes no sense, as we have no reason to condition on his existence. Not to mention, he did in fact die in the end.)

  136. fred Says:

    flergalwit

    finding yourself in a branch where you survived all your loved ones at >100 years old isn’t that unlikely in an absolute sense (we do observe centenarians in our common branch), so there’s no need to invoke immortality or Boltzmann brains to get there. It’s just that at some point you’ll get tired of hearing how “lucky” you’ve been :p

    Similarly, the appearance of life, and then intelligence, on Earth seems very fleeting and unique (where are all the traces of alien civilizations?), and yet here we are.

  137. fred Says:

    In other words, if you truly buy into MWI, you must buy that there are branches of the universe’s wave function where your life has been snuffed out. The only method we have to estimate your alive/dead proportion is lifespan statistics matching your particular “profile” as closely as possible. Those are obtained from the main trunk typical of an average human life, not from the far tails of the distribution corresponding to near-immortality and whatnot.

  138. fred Says:

    There’s also the situation where you would volunteer to measure a qubit 100 times in a row and get disintegrated if it ever comes up “down” rather than “up”. The “all up” result is just as likely as any other sequence, so you’re guaranteed that one copy of you will survive in one of the 2^100 branches induced by measuring the qubit 100 times. Again, no need for quantum immortality or Boltzmann brains here. At the end of the experiment, you’ve really just pruned out a whole lot of “useless” copies of yourself; it’s a very disturbing but inescapable consequence of MWI.

  139. flergalwit Says:

    Scott #128,

    Sabine and other superdeterminism proponents can produce unlimited quantities of verbiage about how they don’t choose to describe superdeterminism as a cosmic conspiracy theory in the initial conditions. The point is that, from my perspective, it still is, because there’s literally no other way to get the result they want (violation of the CHSH inequality without faster-than-light communication and without accepting that the world is quantum).

    I think in that case it would be in order to say that the cosmic conspiracy stuff is your inference from their theory (“from [your] perspective”, to use your expression) rather than something they are actually advocating.

    Note that the (unstated) inference you are making – which comes down to the implication that any rejection of the Statistical Independence assumption in Bell’s theorem implies conspiratorial initial condition fine-tuning – is actually doing the heavy lifting in your argument against superdeterminism.

    Thus there is no merit in railing against conspiratorial physics theories and (rightly) pointing out their scientific uselessness, if your argument is already broken, with the said inference being the weak link.

    Let me ask a question if I may. Imagine we discover possible evidence that some part of reality – perhaps a parallel universe of some kind – operates with a reversed arrow of time to our own.

    I know I know – it’s extremely difficult to make a coherent physics theory with multiple arrows of time. But I don’t think the concept is philosophically incoherent. Imagine the two parallel universes weakly interact with each other so that we can tell (perhaps through some subtle statistical effect, similar in spirit to Bell correlations) that the other parallel universe has an opposite arrow of time to ours, and yet the interaction is weak enough that the two arrows of time don’t break each other (to continue the analogy, like the fact Bell correlations don’t allow you to communicate FTL).

    Would you say that such a theory requires a conspiratorial fine tuning of the (high entropy) initial conditions in the parallel universe, to make its future state low entropy?

    Or would you say that, clearly, the theory shouldn’t be formulated as a pure initial-value Cauchy problem. Rather, the sensible way of formulating the theory is to posit hybrid boundary conditions – the initial state in one parallel universe, and the final state in the other (both low entropy). And that yes, if we are forced to describe it as an initial value problem, then it indeed requires conspiratorially fine-tuned initial conditions, but this is just as inappropriate as trying to formulate a theory in a universe like our own (with the usual single arrow of time) as a “final value” Cauchy problem, with the far future conditions conspiratorially chosen so that the past state has low entropy.

    If you hold the latter point of view, I don’t understand why you think differently in the superdeterminism case. If the former, then this is at least consistent with your (not SI) -> conspiracy initial conditions inference. But I don’t understand why you’d hold this view, unless you’ve accepted a principle like “all physics theories have to be viewed as initial value problems”. In fact there may actually be excellent scientific reasons for believing such a proposition, but rejecting it is clearly still orders of magnitude more reasonable than believing in conspiratorially fine-tuned initial conditions, so to steelman the superdeterminists’ position would require us to say they reject a proposition like this one, rather than attributing to them beliefs in conspiracy theories that they’ve disavowed.

  140. gentzen Says:

    Ajit R. Jadhav #105:

    gentzen # 99:

    […] I don’t see how I could use “action” instead to provide similar physical meaning. […]

    Sure, one could. The answer is “well known,” though deeply buried under all those categories and definitions of the formalism. That’s why, although the matter is “simple” enough (at least conceptually), it does become a bit complicated to explain.

    In case you are talking about the physical meaning of the electric or magnetic field, it would be nice if you could try to describe the answer here. There seem to be more participants interested in insight for that case.

    If you referred to QM/QFT exclusively, then I fully understand why you don’t even want to try to describe it. Everybody will misunderstand you anyway, no matter how well you describe it.

  141. Scott Says:

    flergalwit #139: Imagine that someone completely rejected Darwinian natural selection. So then, every time you asked them how complex adaptations arose in life on earth, they gave a whole incomprehensible patter about p-adic numbers, the upshot of which seemed to be that life just kind of arose because of a tendency built into the universe from the beginning.

    “So then, we’re basically back to creationism?” you ask. “Intelligent design?”

    “NO!” they bellow. “I said nothing about any Creator! Stop putting words into my mouth!”

    This is how I feel about Sabine and Tim Palmer’s stance on superdeterminism. It’s insanity without even the saving grace of clarity.

    If Alice and Bob have the freedom to randomize the detector settings that’s normally assumed in every scientific experiment from Galileo down to the present, then the CHSH inequality is empirically violated. This doesn’t even assume quantum mechanics; it’s just an empirical fact. The only way to escape that conclusion, if you don’t like it, is to deny them that freedom, which means a cosmic conspiracy going back to the beginning of time. Complicated verbiage can obscure this and make the matter seem unsettled to people who don’t understand what’s being talked about, but it has no effect on those of us who do understand. (This is why, as far as I know, superdeterminism has a grand total of zero defenders among experts in, e.g., quantum information and computation.)
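    (For anyone who wants to see the numbers: the quantum prediction that the experiments confirm is that, with the singlet correlations E(a,b) = -cos(a-b) and the textbook angle choices, the CHSH combination reaches 2√2, comfortably past the classical bound of 2. A few illustrative lines of Python, nothing more:)

        import numpy as np
        # singlet-state correlation for detector angles a and b
        E = lambda a, b: -np.cos(a - b)
        a, a2 = 0.0, np.pi / 2             # Alice's two settings
        b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings
        S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
        print(abs(S), 2 * np.sqrt(2))      # ~2.828, versus the classical bound of 2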

    As for your hypothetical about a universe with conflicting arrows of time in different regions, the simple answer is that the burden is on whoever proposes that, to make it make sense and to resolve all the obvious paradoxes that arise. If they can’t do it, or they just generate incomprehensible verbiage like a cornered squid shooting ink, we shelve the proposal for now.

  142. Roger Schlafly Says:

    Flergalwit #119 #139 relates superdeterminism to MWI and Boltzmann brains, as Scott related it to God planting fossils and p-adic evolution. I would add the simulation hypothesis, nihilism, solipsism, implanted false memories, retrocausality, and the rest of the existential crisis iceberg. What they all have in common is that they deny that the scientific method tells us anything about reality. Everything we see and do is part of an elaborate fake-out.

    None of these theories can be disproved, as they can explain away anything.

  143. flergalwit Says:

    Scott #141,

    If Alice and Bob have the freedom to randomize the detector settings that’s normally assumed in every scientific experiment from Galileo down to the present, then the CHSH inequality is empirically violated. This doesn’t even assume quantum mechanics; it’s just an empirical fact. The only way to escape that conclusion, if you don’t like it, is to deny them that freedom, which means a cosmic conspiracy going back to the beginning of time.

    I still don’t understand why there couldn’t in principle be a backward causal influence. Alice and Bob are free to choose the settings, and this choice (in part) causes the initial conditions to be what they are, through some backwards-in-time evolution rule.

    Would you say that any theory involving retrocausality is necessarily conspiratorial? I don’t know how to get to that conclusion, or the one about superdeterminism necessarily being conspiratorial, unless one is wedded to the idea that physics theories have to be viewed as initial value problems, and that no other possibility exists even in principle.

    Roger Schlafly #142, first of all I didn’t relate superdeterminism to MWI or Boltzmann brains. They are two separate threads of the discussion.

    I would agree that superdeterminism of the fine tuning initial conditions variety is scientifically and epistemologically useless. But my very point is that this would never be the correct way of formulating the theory – any more than formulating standard classical or quantum mechanics using a fine tuned “far future” boundary condition that conspiratorially happens to give rise to a low entropy past would be. Or formulating evolution as “a sequence of random coincidences that happens to give all the structure we see around us including life” (which is how creationists often strawman the theory). If you formulate a theory badly, you can always make it as ridiculous and useless as you like. (https://home.deds.nl/~dvdm/dirk/Physics/Gems/Parable.html comes to mind, for readers of sci.physics back in the day.)

    Again my interest is not actually in defending superdeterminism per se, but in steelmanning the theory (a term I learnt from this blog!) so that it can be refuted in its strongest form, not its weakest.

  144. Max Madera Says:

    fred #124,
    This is the same kind of “proof” that leads many survivors to believe in the existence of god (the god that chooses them to survive, that is). And I thought that in comment #112 you were being sarcastic, clearly showing how much belief in MWI shares with any other spiritual belief!

    Now, more seriously, without a proper measure of probability, we cannot tell whether surviving that long means that MWI exists or, to the contrary, that you are still in the dump-tail of the branches, given that there may be so many other copies of you that have superhero powers. For who has evolved god’s wave function from the very beginning? And what recipe allows you to say “this is another me in another branch, separated when we were born,” and “this is another reincarnation of me born one thousand years ago,” and “this has the same soul as me, even if it is an alien from a distant galaxy,” or even “this is me now”? Although you have dismissed Armin’s question in #75, do we know how many degenerate (FAPP) copies of us are out there before eventually something macroscopic on our radar (many things may be very different a few meters away) distinguishes my conscious me from the others?

  145. wb Says:

    @fred

    I do not recommend dangerous / suicidal activities to test the “quantum immortality” of MWI;
    e.g. jumping from a plane without a parachute: there is a non-zero (but very small) probability that you will tunnel through the Earth and emerge unharmed on the other side; MWI indeed suggests that there is a conscious experience associated with this exercise.

    However, the wavelength associated with a free falling macroscopic object is below the Planck length and therefore we cannot assume that conventional quantum mechanics still holds.
    In other words, gravity will probably kill you in more than one way …

  146. Adam Treat Says:

    All this continued verbiage against Sabine and superdeterminism, but not one of you actually bothers to read her paper, which shows that a conspiracy of initial conditions is not required. If you’ve read the paper and can demonstrate why it is wrong, I’m happy to entertain that, but until then it sounds like a bunch of ad hominem and hyperbole. It isn’t convincing, and it goes against the ethos of the Rationalist movement.

    Until I see an actual response involving the paper and the claims therein I think it is superdeterminism detractors who are generating the verbiage.

  147. AlexT Says:

    Concerning superdeterminism: if you want to believe that we are living within a simulation running on a classical computer, the conspiracy perfectly fits the bill. Use a stream of high-quality pseudorandom numbers to fool the simulated experimenters into believing that the world is quantum. And yes, optionally add some geology and dinosaur fossils if you want to cut down simulation time by 6 orders of magnitude…

    And about the MWI, beware of false dichotomies in suicidal experiments. The choice is not just between being alive and unharmed or being dead; there is also the third option of ending up paraplegic or comatose and being unable to repeat the experiment.

    Moreover, quantum experiments have long evolved from mere collapse to partial measurements, weak measurements, non-demolition measurements, etc. So what is this supposed to be in MWI: Partial world splits, failed world splits, twiddling but not splitting worlds?? Not good at all.

    And unless you are a fan of solipsism, each observer happily splitting the world according to their own perspective? Also splitting the other observers? The consistency issues seem endless here.

  148. Adam Treat Says:

    flergalwit #143,

    “Again my interest is not actually in defending superdeterminism per se, but in steelmanning the theory (a term I learnt from this blog!) so that it can be refuted in its strongest form, not its weakest.”

    I’m right there with you. I have no idea if superdeterminism is a good approach, but I do know that nearly every response on this thread attacks it for the absurdity of an initial-condition conspiracy, yet all of its current proponents (Sabine, Tim Palmer, ’t Hooft) deny that any conspiracy is necessary, and two of them have written papers with math showing toy models without such a conspiracy. None of the detractors has actually bothered to wrestle with these papers.

  149. Roger Schlafly Says:

    Adam Treat #146 #148: There is no point in reading those papers, any more than reading papers on toy models for God planting fossils, or watching The Matrix movie. If the Bell test experiments are invalid because of an input-independence assumption, then most or all of our scientific knowledge is also invalid. No experiment can tell us anything. You might as well believe that you are a figment of someone’s imagination.

    If you really want to steelman superdeterminism, then find a paper that addresses the point in Scott #128 #141.

  150. fred Says:

    Max Madera,
    actually I’ve never said I believe in MWI, I merely listed its implications.

    wb
    no need to hope for a tunnelling effect; all that’s necessary for my thought experiment is measuring a qubit and a dose of courage!

  151. Adam Treat Says:

    Roger Schlafly,

    So for you there is no point in reading those papers, because nothing they say can possibly change your mind, and also you apparently have no idea which papers are even being discussed. LOL. Okay, no point in discussing any more with you, as you’ve left the bounds of rational argument and your mind is set.

    “If you really want to steelman superdeterminism, then find a paper that addresses the point in Scott #128 #141.”

    What’s the point? You’ve already dismissed the papers, which you haven’t read, as incapable of changing your mind. You’re determined to remain in your ignorance. So be it.

  152. fred Says:

    I’ve tried reading a few superdeterministic papers claiming to show some “toy” examples on how some form of conspiracy could materialize itself naturally, but it was too abstract for me.

    I thought the idea of retrocausation could be easier to grasp.

    But as Scott puts it, it’s hard to see how one could create “superorder” (i.e. some well-formed theory like QM) out of the vague idea that everything is connected causally (which is true but has no structure).

  153. fred Says:

    Fundamentally, determinism (cause and effect) and randomness are orthogonal.
    If the world is purely deterministic, then the apparent randomness of QM is an illusion (pseudo random), and we should be able to find evidence of this, statistically, rather than trying to generate actual randomness from order, which is impossible by definition.

  154. John Duffield Says:

    Scott #116: All points noted. I’ll get started.

    flergalwit #119: I note what you say about the MWI. I don’t want to spam Scott’s blog, so all I will say for now is this: forget all the myth and mysticism and magic and moonshine. Everything is much simpler than you think. And everything, is classical.

  155. Scott Says:

    Everyone: I’m closing down this thread, because my mistake of mentioning the WSJ article allowed the discussion to wander off topic. The thing about classical alternatives to quantum mechanics, as I’ve learned over 25 years, is that there’s no limit to the amount of your time you can lose to arguing against them. There will always be another paper to read, another comment to answer, and so on. But I’ve already spent 10x more time on these alternatives than (in retrospect) I’d judge to be productive or healthy for me. Certain ideas led nowhere in the past century, and I’d wager most of them won’t lead anywhere in the next century either. So, I think it’s now time for me to close this section, and use my time another way. Thanks all.

  156. Scott Says:

    Update: By popular request, I’ve reopened the thread, but only for comments about our paper—no more about superdeterminism, MWI, Sabine Hossenfelder, etc. Thanks; this was my mistake.

  157. Anonymous Says:

    Suppose Alice transmits the entirety of her input to Bob. This should be equivalent to Alice sending many copies of her state to Bob. In this case, how high will their linear cross entropy be?

    I think it’s obvious that the best protocol would have Bob return the most likely measurement outcome, rather than the one that happens to occur when he runs the experiment once. How different are the linear cross entropies between these two scenarios?

  158. William Kretschmer Says:

    Anonymous #157: You observe correctly that if Alice sends Bob an entire description of the state, the best protocol has Bob return the most likely measurement outcome. Compared to the noiseless n-qubit quantum protocol, which achieves F_XEB ≈ 1, your protocol does significantly better, achieving F_XEB = H_{2^n} – 1 ≈ n * ln(2), where H_N is the Nth harmonic number. We prove something even stronger in Corollary 4 of the paper, although the simpler case that you describe is a reasonably straightforward consequence of Lemma 14.
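    If it helps to see where H_{2^n} comes from: the output probabilities of a Haar-random n-qubit state form a uniform Dirichlet vector over N = 2^n outcomes, whose expected maximum is H_N / N, and we're scoring an output x as 2^n p(x) − 1. A quick toy Monte Carlo along those lines (my own sanity check with arbitrary small parameters, not the argument in the paper):

        import numpy as np
        rng = np.random.default_rng(2)
        n, trials = 10, 2000
        N = 2**n
        H_N = np.sum(1.0 / np.arange(1, N + 1))   # Nth harmonic number
        xeb = []
        for _ in range(trials):
            v = rng.normal(size=N) + 1j * rng.normal(size=N)
            p = np.abs(v)**2; p /= p.sum()        # Haar-random output distribution
            xeb.append(N * p.max() - 1)           # score when Bob returns the heaviest string
        print(np.mean(xeb), H_N - 1, n * np.log(2))   # first two agree; third is the large-n approximation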

  159. Anonymous Says:

    William Kretschmer #158

    Thanks. This is very interesting. Do you have upper/lower bounds for F_XEB achievable using varying numbers of qubits?

    Even when Alice is constrained to send n qubits, is it obvious that her best quantum strategy is to send her quantum state? And maybe this answer depends on whether Bob will use Haar or Clifford unitaries.

  160. William Kretschmer Says:

    Anonymous #159: It is not obvious what the optimal quantum strategies are, either with n qubits or more than n qubits! You can start to nontrivially beat the naive n-qubit quantum protocol with roughly n * 2^{n/2} qubits. The protocol is: Alice sends many copies of the state, then Bob measures all of them in his basis and outputs the empirical mode. With ~2^{n/2} copies of the state, you’ll see birthday collisions, which are more likely to occur at heavy outputs. One can lower bound the score achieved by this protocol using properties of the posterior of the Dirichlet distribution. O(2^{n/2}) copies should suffice to achieve F_XEB ≈ 2.
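    Here is a toy illustration of that birthday effect, with my own small parameters and seed; at this size you won’t see the asymptotic ≈ 2, just a score noticeably above the single-copy value of ≈ 1:

        import numpy as np
        rng = np.random.default_rng(3)
        n, trials = 10, 500
        N, copies = 2**n, 2**(n // 2)      # ~2^{n/2} copies of the state
        scores = []
        for _ in range(trials):
            v = rng.normal(size=N) + 1j * rng.normal(size=N)
            p = np.abs(v)**2; p /= p.sum() # Bob's measurement distribution
            samples = rng.choice(N, size=copies, p=p)
            mode = np.bincount(samples, minlength=N).argmax()   # empirical mode
            scores.append(N * p[mode] - 1)
        print(np.mean(scores))             # noticeably above 1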
