My podcast with Brian Greene
Yes, he’s the guy from The Elegant Universe book and TV series. Our conversation is 1 hour 40 minutes; as usual I strongly recommend listening at 2x speed. The topics, chosen by Brian, include quantum computing (algorithms, hardware, error-correction … the works), my childhood, the interpretation of quantum mechanics, the current state of AI, the future of sentient life in the cosmos, and mathematical Platonism. I’m happy with how it turned out; in particular, my verbal infelicities seem to have been at a minimum this time. I recommend skipping the YouTube comments if you want to stay sane, but do share your questions and reactions in the comments here. Thanks to Brian and his team for doing this. Enjoy!
Update (Oct. 28): If that’s not enough Scott Aaronson video content for you, please enjoy another quantum computing podcast interview, this one with Ayush Prakash and shorter (clocking in at 45 minutes). Ayush pitched this podcast to me as an opportunity to explain quantum computing to Gen Z. Thus, I considered peppering my explanations of interference and entanglement with such phrases as ‘fo-shizzle’ and ‘da bomb,’ but I desisted after reflecting that whatever youth slang I knew was probably already outdated whenever I’d picked it up, back in the twentieth century.
Comment #1 October 19th, 2024 at 3:30 am
Hey Scott, at 46:23 you say that a 99.99% two-qubit gate fidelity (0.01% error rate) is needed for fault tolerant quantum computation. Where are you getting that number from? I’ve heard you use it before, but it’s off by an order of magnitude.
The threshold two-qubit gate error rate for the surface code is between 0.5% and 2%, depending on the exact details of the noise model and the decoder. This threshold has been known for over a decade (see fig. 4 of Austin Fowler’s 2012 surface code review paper https://arxiv.org/abs/1208.0928 ). You can verify the threshold for yourself in Stim’s getting-started notebook ( https://github.com/quantumlib/Stim/blob/main/doc/getting_started.ipynb ). The threshold has by now even been confirmed experimentally by the recent Google 3v5v7 experiment, which had a median two-qubit gate error rate of 0.3% (see fig. 1b of https://arxiv.org/abs/2408.13687 ).
Now, keep in mind you wouldn’t want to operate *at* the threshold of ~99% fidelity. That’s where the costs are infinite. 99% isn’t sufficient, given current knowledge. But 99.9% *is* sufficient. Whereas 99.99% is overkill.
Comment #2 October 19th, 2024 at 8:35 am
Craig Gidney #1: Ok, my basis for that was talking to multiple experimentalists who told me that, in practice, you’d want to be around 99.99% in order to scale (e.g.) the surface code at reasonable overhead—much lower than that, and even if you were past the threshold in theory, you’d need millions of physical qubits to do even the simplest things. But maybe my information is badly outdated, and 99.9% now suffices for fault-tolerance with reasonable overhead in practice? In which case, would you say that “all that remains” is the engineering task of integrating large numbers of chips of a quality that we already have?
Comment #3 October 19th, 2024 at 10:39 am
A Bohmian plank in the ocean of amplitudes. But this plank is enough to hold on to, to decide which branch the “fire of consciousness” goes down. So, in the end, is it only the arbitrariness of the hidden variables that you don’t find appealing?
Comment #4 October 19th, 2024 at 10:49 am
Scott #2:
Let’s define a term to help with this discussion. Define T_{S,p} to be the “teraquop footprint” of a fault tolerant system S subjected to a noise strength p. The teraquop footprint is the physical-qubits-per-logical-qubit required to achieve a logical gate error rate of 1 in a trillion. The teraquop footprint is a finer grained measure than the threshold, because it gives a notion of quantity in addition to quality. Note that the threshold of a system S is just the largest p such that T_{S,p} < ∞. Also note that Shor's algorithm uses fewer than a trillion gates to factor 2048 bit numbers (even when including idles as gates), so the teraquop footprint describes overheads sufficient for breaking RSA.
Here are some citations showing a 99.9% fidelity is sufficient for teraquop footprints on the order of a thousand (as opposed to on the order of a million):
– Under uniform depolarizing circuit noise a huge variety of surface code circuits have teraquop footprints around 1000 assuming a 99.9% fidelity CX gate (figure E.2 of https://arxiv.org/abs/2302.02192). Using more asymmetric hardware-inspired noise, and a CZ gate instead of a CX gate, this increases to ~1500 (figure E.1).
– Planar yoked surface codes have a teraquop footprint of ~800 assuming a 99.9% fidelity CZ gate (figure 1 and figure 19 of https://arxiv.org/abs/2312.04522 ).
– Planar honeycomb codes have a teraquop footprint of ~900 assuming a 99.9% fidelity two-qubit parity measurement (right panel of figure 9 of "Benchmarking the Planar Honeycomb Code" https://arxiv.org/abs/2202.11845v3 ).
– Non-planar semi-hyperbolic honeycomb codes have teraquop footprints 10x smaller than planar honeycomb codes, at least for memory (figure 10 of https://arxiv.org/abs/2308.03750 ).
– Non-planar bicycle codes are claimed to be 10x denser than unyoked surface codes, at least for memory, though I caution that in this result they extrapolate quite far compared to what's simulated ( figure 2b of https://arxiv.org/abs/2308.07915 ).
I would confidently say T_{*,1e-3} ≈ 800 if you are restricted to a square grid connectivity, and would less-confidently say T_{*,1e-3} ≈ 200 if you have non-planar connectivity. If the experimentalists you talked to said T_{*,1e-3} = millions, then they are off by three orders of magnitude. That said, I think it's a perfectly defensible position to react to "800 physical qubits per logical qubit" with "that sounds very expensive, so let's try to improve our gates from p=1e-3 to p=1e-4 before scaling quantity". If the experimentalists you talked to were saying that, then I sympathize. I also don't want teraquop footprints of 800. My mindset is that we'll do 800 if we have to, but we'll try to improve those numbers along the way (as has been happening).
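To make the arithmetic behind these footprints concrete, here is a minimal Python sketch. It assumes the standard surface-code error-suppression ansatz ε_L(d) ≈ A·(p/p_th)^((d+1)/2), with illustrative constants A = 0.1 and p_th = 1e-2 (assumptions for the sketch, not fits to any of the papers cited above), plus the usual ~2d² physical qubits per distance-d surface-code patch:

    import math

    # Back-of-the-envelope teraquop footprint for the surface code.
    # ASSUMED ansatz: eps_L(d) ~ A * (p / p_th)^((d+1)/2), with
    # illustrative constants A = 0.1 and p_th = 1e-2 (not fitted values).
    def teraquop_footprint(p, p_th=1e-2, A=0.1, target=1e-12):
        """Smallest odd distance d with eps_L(d) <= target, plus the
        resulting physical-qubits-per-logical-qubit (~2*d^2)."""
        halves = math.log(target / A) / math.log(p / p_th)  # solves for (d+1)/2
        d = max(3, 2 * math.ceil(halves - 1e-9) - 1)        # round up to odd d
        return d, 2 * d * d

    for p in (1e-3, 1e-4):
        d, footprint = teraquop_footprint(p)
        print(f"p = {p:.0e}: d = {d}, footprint ≈ {footprint}")

With those assumed constants, p = 1e-3 lands at distance 21 and a footprint of ~900, in the same ballpark as the figures above, while p = 1e-4 drops to distance 11 and a footprint of ~250.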
Comment #5 October 19th, 2024 at 11:17 am
Scott #2: Oh I forgot to answer your question.
You asked whether it’s now “just” a scaling-quantity task, as opposed to an improving-quality task. I would say: in theory yes, but in practice definitely not.
If you abstract away the engineering details and just look at the qubit fidelities then yes simply repeating those abstract qubits with those fidelities would be sufficient for fault tolerant quantum computation. But if you tried to “just repeat the existing qubits” in practice you’d slam into walls. For example, wires take up space. As you put more and more qubits into a fridge, you’ll run out of space for the wires. So you’ll need to switch to thinner wires, or to multiplexed control, or to using multiple fridges. These all impact how well the qubits work, forcing changes to their design, preventing a simple copy-paste from working in practice.
Comment #6 October 19th, 2024 at 11:36 am
Craig Gidney #4: Ok thanks! If others close to the subject confirm what you’re now telling me about the teraquop footprint of ~1000 at 99.9% fidelity, I will update my message in talks and podcasts to say we’re already at the practical fault-tolerance threshold, rather than likely to reach it within the next few years. In that case, though, the next question is likely to be: then where (other than PsiQuantum, maybe) are the current efforts to engineer million-physical-qubit systems? Is the real issue just that everyone knows perfectly well that the 2-qubit fidelity will fall back below 99.9%, say to 99% or lower, once you need complicated interconnects among multiple chips or traps?
Comment #7 October 19th, 2024 at 11:39 am
red75prime #3:
So, in the end, is it only the arbitrariness of the hidden variables that you don’t find appealing?
I object to the pretense of knowing the rule that determines where the fire of consciousness goes (and of knowing that it goes down exactly one branch). Likewise, with MWI, I object to the pretense of knowing that it goes down all the branches simultaneously.
Comment #8 October 19th, 2024 at 11:41 am
Craig Gidney #5: Thanks! Comments crossed, but you may have already answered my question.
Comment #9 October 19th, 2024 at 12:51 pm
Scott #7
Since my consciousness has to tell your consciousness which branch I am going down, lest the fire be divided, they are Bohmian particles capable of nonlocal communication during measurement, unless consciousnesses split in a lonely sense (one conscious mind per world) or a crowded (Everettian) one. If someone presumes that nonlocal projective operators are not physical and that they are not surrounded by P-zombies, they have committed themselves to the MWI view of consciousness.
So either there are many of each of us, or many hidden nonlocal variables.
Comment #10 October 19th, 2024 at 1:07 pm
Concerned #9:
So either there are many of each of us, or many hidden nonlocal variables.
Right, or as I’d prefer to phrase it: if the Schrödinger equation is universally valid, then either every branch with “intelligent observers” gets the light of consciousness, or else there must be something that determines which branches get it and which don’t.
Comment #11 October 19th, 2024 at 1:14 pm
Can you summarize the discussion between you and Craig Gidney? At 99.9% 2-qubit gate fidelity, physical:logical qubit ratio is 1000:1. As you increase the number of 2-qubit gates, the 2-qubit gate fidelity will drop due to engineering reasons and the required physical:logical qubit ratio will increase?
Comment #12 October 19th, 2024 at 3:14 pm
Scott #10
I agree with you, but a superluminal network connecting all consciousnesses sounds so unscientific that I can’t begrudge committed Everettians their perceived confidence!
I will just leave you with an interpretation that seems okay to me today. All interpretations are in an equivalence class over physics, and each one has a different intolerable feature. Something like this happens with gauges in EM (some problems become easy while others become harder) and coordinate systems for black holes. So in the end I believe in the group, but admit its representations are necessary to actually touch it.
Comment #13 October 20th, 2024 at 2:15 pm
It’s always seemed to me that there’s another potential objection to the many-worlds interpretation, although we can debate how strong it is.
The key mechanism behind the MWI is the continual decoherence of microscopic superpositions, as they get “amplified” into macroscopic superpositions whose branches decohere and lose the ability to affect each other via local interactions. But this picture only makes sense if we assume that the initial state of the universe was one with very low entanglement entropy across macroscopic distances, so that there are many initial superpositions that are “contained” to microscopic scales and can therefore be measured. In a generic (and therefore highly entangled) state in the Hilbert space of the universe, the MWI picture of decoherence doesn’t really make sense IMO, because there are no “isolated” microscopic superpositions to measure.
The counterargument to this objection – and it’s a strong one! – is that the same objection applies to classical cosmology as well. The various related concepts of Loschmidt’s paradox, Boltzmann brains, the entropic arrow of time, etc. are all ultimately related to the question of why the universe appears to have started in such a low-entropy state (which was presumably necessary for us to have memories and conscious experiences, do science, etc.). So this objection is certainly not unique to the MWI interpretation.
But I think it is worth emphasizing that, strictly speaking, the plain old Schrödinger equation isn’t actually enough to get the full MWI to just pop out, as the MWI’s proponents often claim it is. You also need to postulate an initial state of very low entanglement entropy (and also spatially local interactions in the fundamental Hamiltonian) in order to explain why decoherence occurs. Depending on your perspective, this may or may not be a “big additional ask”.
Comment #14 October 20th, 2024 at 3:29 pm
Ted #13: I don’t see how that’s an objection to MWI at all. Even if you rejected MWI, and went for Copenhagen or Bohm or any other alternative, you’d still need an Arrow of Time, which means an initial state of the universe with anomalously low entropy. So how can that issue be laid at MWI’s feet even slightly?
Comment #15 October 20th, 2024 at 4:10 pm
Scott #14: I take your point. It seems to me that there’s a somewhat qualitative distinction between (a) the basically classical problem of the arrow of time, which we can pose without even invoking quantum mechanics at all, and (b) the more specific version within the MWI picture of explaining why the universe appears to have begun in approximately a product state, with very little entanglement entropy specifically.
Maybe it isn’t fair to frame that as a point against the MWI. I guess what I was trying to say was that, IMO, the frequent claim “The MWI just falls directly out of the Schrödinger equation with no additional assumptions” is not exactly correct. The more accurate statement would be that “The MWI comes from combining the Schrödinger equation with the additional postulate of a minimally entangled initial state of the universe.” So arguably it isn’t quite as minimal a theory as people say.
But I agree that one could counter-argue that that assumption of low initial entropy is implicitly contained within every other interpretation as well, because it’s arguably implicit in the very concept of a “measurement”. So it may be unfair to single out the MWI specifically.
Comment #16 October 20th, 2024 at 4:30 pm
I love Brian Greene as theorist and public intellectual. He is a gifted thinker and communicator. I respect and welcome his perspective about the (non)reality of mathematical truths. However, I think the weight of reason is against antirealism in general. As always, I could be wrong, and I am open for correction. Here are my reasons. What am I missing here?
1. Does Prof. Greene think that modus ponens, or modus tollens, or the law of identity, or the law of noncontradiction, will still be true after the last sentient being has died? These principles are not material in anything like the usual sense, and do not have a physical structure in the way a field or a particle or a point of spacetime might be material or have a physical structure. But surely, we would not want to say they cease to be true when the last thinker dies? That does not seem plausible.
2. I think the worry is the non-materiality, or the exotic nature, of abstracta. However, it seems hard to me not to admit something like truth values concerning non-material entities. If I am wrong about this, someone help me here, please? For example: take the idea of logical possibilities. Even if a universe had never existed, with no particles, surely it would have been *logically possible* that a universe might have existed. Or it is logically possible that the universe could have had different basic elements or update rules. But what ontological status are we to assign to the entities referred to by our concepts that we call ‘logical possibilities’? To deny them would be to say that only the actual world could ever have existed, or else that logical possibility has nothing to do with metaphysical ‘real’ possibility. In that case you would get the extremely implausible implication that although a universe filled with particles that were round squares is logically impossible, perhaps this subjective system of human logic does not map onto what is really allowable, and such universes could exist. I find that implausible in the extreme.
3. It seems to me much more likely that the range of possible realities is constrained by logical and mathematical truths, and not the reverse. I dug this quote out of a paper I read a while back, but I think it is useful: “…mathematical facts can enter into grounding relations with concrete non-mathematical facts. Here is one widely cited example of non-causal mathematical explanation: ‘The fact that twenty-three cannot be divided evenly by three explains why it is that Mother fails every time she tries to distribute exactly twenty-three strawberries evenly among her three children without cutting any (strawberries!)’ (Lange (2013), 488). The mother’s inability to divide the strawberries evenly among her children is plausibly grounded in the fact that twenty-three is not divisible by three.”
I would add that there can be no possible reality, no aliens in other universes, that will be able to divide 23 basic elements evenly. This is probably because the truth of at least some mathematical statements does *not* depend contingently on the structure of physical reality, or the subjective acceptance of arbitrary rules. The grounding relations go the other way. The possibilities for what physical reality could be depend upon the rules and possibilities of logic, of which mathematics is associated.
4. If Prof. Greene is correct, I need some explanation for his attitudes towards the character of physical discovery. We are evolved creatures, and we have intuitive notions of time, space, distance, motion, etc. that are adequate for survival in medium-sized environments. Crucially, however, outside this domain we have no a priori reason to trust our intuitions about space and time. We have seen that at small and big scales, matter and space do not behave the same as our intuitions would suggest. But no one thinks about modus ponens and logic and math this way. No one thinks that they are useful and applicable only at medium-sized scales, and that they require empirical support at small scales. Nobody gets the Nobel Prize for finding that modus ponens works at all scales and sizes. That is because it is an a priori truth that is necessary and objective in character. Everyone knows it. That is why it is not worth testing, and cannot be supported by empirical evidence, because it already has the best support possible: direct rational intuition.
Now, this does raise the question of how humans can have such knowledge at all. After all, if our system of beliefs had no extra element of justification, then we could find no grounds for treating our logical intuitions differently than our physical intuitions: in which case, we would be as eager to know whether modus ponens worked at small scales as we are to know whether Newtonian mechanics does. I do not know the answer here, but I am as certain that evolution is true as I am certain that modus ponens holds at all scales (even before I check to see). So I just accept that there is this non-physical property of ‘justification’ that is grounded in physical circumstances, such as when a species gains the cognitive complexity to reason reliably about abstracta.
5. I remember seeing an exchange between Steven Weinberg and Alex Rosenberg. Weinberg asked Rosenberg what are physical laws, and their material status? It is important to note that there are 3 basic views of physical laws: Humean, governing, and causal powers and liabilities. The last two options are credible because of the weaknesses of the Humean view. But they would invoke nonmaterial entities, or nonmaterial properties (or metaphysical laws grounding these).
Anyway, that is all I have for now. Thanks for the post, you did great Scott!
Comment #17 October 21st, 2024 at 5:52 am
What do you think about recent research that AI progress is logarithmic or even asymptotic? For instance, see https://www.youtube.com/watch?v=dDUC-LqVrPU.
Comment #18 October 21st, 2024 at 6:44 am
Rainer #17: Sorry, I don’t accept evidence in the form of YouTube videos, and I also don’t know what “progress being logarithmic” means: the inverse loss? What’s the scale here? In any case, in this conversation I explicitly noted the near-certainty that the current AI paradigm will run out of juice at some point, but added that there’s no principle to tell us whether that will happen before or after we get artifacts that are “smarter than Ed Witten” (I thought Brian would appreciate that).
Comment #19 October 21st, 2024 at 7:49 am
Thank you for being honest (or is it truthful? trustworthy? English is not my mother tongue).
Being intelligent and a good communicator is already an uncommon combination, but I can’t think of a lot of people that add honesty to these two qualities, and you and Brian Greene are among them.
I have nothing to comment in general, and am shy, but I think it may be important sometimes to say how grateful I am for the work you’re doing.
Take care Scott.
Comment #20 October 21st, 2024 at 8:12 am
Toby Ord has a new paper on the limits of growth. Constraints on reality are related to the Busy Beaver problem.
https://arxiv.org/abs/2410.10928
Comment #21 October 21st, 2024 at 10:24 am
#18
The video is a summary of the paper https://arxiv.org/pdf/2404.04125
BTW, I also accepted your statements in the YouTube video, not as strong evidence, but as something worth thinking about 🙂
Comment #22 October 21st, 2024 at 12:22 pm
Demis Hassabis just predicted it will take a decade to achieve top human-level reasoning. Given that this is an empirical-only field, everyone’s error bars are very high; it’s just that Nobel Laureates get a lot of press. So do star CEOs like Dario Amodei of Anthropic, who created waves when he predicted AGI in around 2 years. Regulators, users who want to use AI for serious use cases like medical diagnosis and advances, and scientists and mathematicians who are eager to get new insights will all have to grapple with these high error bars. Until then it will be used as a souped-up search engine, helping kids to beat homework blues, artists to get creative ideas, coders to get the drudge work done, etc.
Comment #23 October 21st, 2024 at 2:41 pm
Scott, regarding the interpretations of quantum mechanics, you dismiss the Copenhagen interpretation as “shut up and calculate”, but surely there’s more to it: “things” don’t exist in the traditional sense; they are probabilistic variables which assume values whenever they interact with their environment (i.e., are observed) and whose probability distributions get updated in a Bayesian manner. One could imagine a (quantum?) computer running such a simulation, no?
Comment #24 October 21st, 2024 at 2:54 pm
Say I have a 4-qubit register that has the basis vectors (in equal superposition)
R1 : 0100, 0101, 1100, 1101
and then I have a second 4-qubit register that has the basis vectors (in equal superposition)
R2 : 1100, 1101, 1110, 1111
(the basis states 1100 and 1101 are ‘common’)
Is there a generic way to combine the two registers into one that would have the union of the basis vectors of the other two, but still in equal superposition?
R3 : 0100, 0101, 1100, 1101, 1110, 1111
Comment #25 October 21st, 2024 at 3:57 pm
maglo #23: Yes, that’s why I like to describe Copenhagen as “shut up and calculate but minus the shut up part.” 😀
It declines to give straight answers to questions like, “what counts as ‘the environment,’ and what counts as ‘the system’? What kind of object is the state of the universe? Can observers like ourselves be placed in coherent superposition, or not?”
But it spends a tremendous number of words to try to change your conception of science to the point where you stop wanting answers to these questions.
Comment #26 October 21st, 2024 at 4:00 pm
fred #24: Yes, you can take a linear combination of any two quantum pure states (including the ones you specified) to get a new pure state. But that’s a formal operation, not a physical one, and it requires a normalization that depends on the inner product between the two states (in your example, the inner product should be 1/2).
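As a small numpy sanity check of that inner product and normalization (nothing below is an operation you could perform on the physical qubits, which is exactly the “formal, not physical” point):

    import numpy as np

    def equal_superposition(bitstrings, n=4):
        """State vector with equal amplitude on the given n-bit basis states."""
        v = np.zeros(2 ** n)
        for s in bitstrings:
            v[int(s, 2)] = 1.0
        return v / np.linalg.norm(v)

    v = equal_superposition(["0100", "0101", "1100", "1101"])  # fred's R1
    w = equal_superposition(["1100", "1101", "1110", "1111"])  # fred's R2

    print(np.dot(v, w))      # 0.5 -- the inner product in question

    u = v + w
    u /= np.linalg.norm(u)   # ||v + w|| = sqrt(2 + 2*0.5) = sqrt(3)
    for i in np.flatnonzero(u):
        print(f"{i:04b}: {u[i]:.3f}")

Note that the shared strings 1100 and 1101 come out with twice the amplitude of the others, so the normalized sum is not the equal-superposition R3 that fred asked for.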
Comment #27 October 21st, 2024 at 4:09 pm
Scott #26
not sure I get the distinction between “formal” and “physical”.
but can this be automated, or does it have to be carefully crafted/tuned based on the specific pure states being combined (like building combining gates specific to the states of the various registers)?
Comment #28 October 21st, 2024 at 4:14 pm
fred #27:
Formal = if you tell me |v> and |w>, I can write down the linear combination a|v>+b|w>
Physical = if you gave me |v> and |w> as physical states and told me a and b, then not knowing anything else I could prepare a|v>+b|w> (not true in general)
Like I said before, there’s one parameter (the overall normalization) that needs to be tuned based on knowledge of the inner product ⟨v|w⟩.
Comment #29 October 21st, 2024 at 4:28 pm
Regarding consciousness and AI.
I keep bringing this up in this blog, but I think an aspect of consciousness that’s overlooked is that, as humans, we talk about it constantly.
To talk about our subjective experience means that our vocal cords are driven by neurons which somehow have to be influenced by the fact that we’re conscious. Which is why there are 2600 years of Eastern philosophy texts detailing (not perfectly, since subjective experience can’t be captured by concepts) how one should go about observing one’s own mind, in a dual or non-dual way, etc.
I.e. there is a coupling between consciousness and the physical state of the brain. Such coupling has to be physical itself (in the sense it could be measured, deconstructed, etc).
Now, if we were able to precisely model the evolution of a physical human brain, using the known laws of physics, at some arbitrary level of precision, then one of two things follows:
1) either the simulation diverges, because somehow our laws of physics are missing the “consciousness” part. But here the mysterious part is that, since consciousness couples to the state of the brain, we should be able to observe that coupling as deviations from our basic physical laws. And in this case it could be that the physical contribution of consciousness sits at a very fundamental level, but maybe its effect gets concentrated in a human brain, either by some additive property or some sort of resonance mechanism (in the same way that gravity is so weak that we don’t include it in the two-slit experiment, and it only matters for large macroscopic objects).
2) or the simulated brain and the physical brain stay in sync, and therefore, somehow, our laws of physics already include the consciousness ingredient, and the simulation itself is conscious. But since our laws are bottom-up, the mystery is: where in the simulation does the consciousness bit get instantiated? There’s no such thing as true emergence in this picture of physics.
Comment #30 October 21st, 2024 at 4:37 pm
Scott #28
but this inner product ⟨v|w⟩ can’t be evaluated in a generic automatic way, because it would require destroying the state of the registers, right?
The only way to do this would be to replicate the entire system as many times as new registers are being combined.
Comment #31 October 21st, 2024 at 5:00 pm
fred #30: That’s why I said, from the beginning, that linear combination is a formal operation in QM.
Comment #32 October 21st, 2024 at 7:17 pm
This is just a follow-up to my comment #16 above. I am no expert in mathematics, the philosophy of mathematics, or the metaphysics of abstract objects. I am trying to learn from anyone who has the knowledge to help me. My beliefs are revisable and updatable.
Yesterday I bought the book, “Do Numbers Exist? A Debate about Abstract Objects” in an attempt to become a bit more informed. So far I find Peter van Inwagen’s arguments very attractive. However, if anyone would like to counter, or offer a bit of pushback, I would encourage that and welcome that, and be open to hearing and trying to rationally update my view. Van Inwagen, if I interpret him correctly, argues that facts about abstracta are reducible to, or grounded in, facts about logical possibilities.

Consider a candidate class of abstract objects: properties. To be very simple: the property of ‘being cubical’. Does this property exist, as an abstract entity, even if it is uninstantiated in a given physical universe? For example, imagine the universe was so made that there was at no time (past, present, or future) any object with the property of ‘being cubical’. No one could make a true statement of any object in that universe, ‘x is cubical’.

Still, even in that physically cube-less universe, there would still be a true, objective fact that it is a *logical possibility* that something could be cubical (or, equivalently, that there is another possible universe where a cube could exist). Even if the laws of physics (in our imaginary universe) themselves prohibited the construction of cubes (and such laws would have to be ad hoc to do this, but it is imaginable), the fact of the logical possibility of a cubical object would still exist, even in that materially cube-less world. Logical possibilities are true in all worlds, and they do not depend on physical law, and seem to involve properties of geometry. Properties exist as logical possibilities, and logical possibilities have a necessary and mind-independent character.
I think this is relevant to Prof. Greene’s remarks. Facts about logical possibilities do not seem to change, even in the remote future. They are necessary in character.
Another analogy would be this: suppose God exists alone, and decides to create nothing. Yet, he knows if he wanted to, he could have created a cubical object. The property ‘is cubical’, is a logical possibility that God knows about and can actualize if he wants to, even if it is never actualized.
What about numbers? Van Inwagen says that if you grant there are possibilities that correspond to abstracta that we would call ‘properties’ or ‘propositions’, then of course there will be something real and objective to play the role of numbers. Maybe numbers are sui generis entities, or maybe they are reducible to facts about logical possibilities, and then reasoning further about them leads to logical conclusions. He says, “that if there are properties, then of course there are numbers—in the sense that there are, well, any number of ways to “identify” numbers with certain propositions or certain properties (none of them the one “right” way). For example, the natural number 2 might be identified with the proposition there are at least two things (that is, that there is a thing and another thing) and the number 3 with the proposition that there are at least three things—and so on. But, of course, it might as well be identified with any of vastly many other propositions or with any of vastly many properties.” Later a summary of one of the sections states, “There exist (in the only sense of ‘exist’ there is) propositions, properties, and relations. However ‘abstract object’ is to be defined, every abstract object is a proposition, a property, or a relation. (And, therefore, any mathematical object—a complex number, a vector space, an Abelian group—belongs to one of these three categories.) All abstract objects are necessarily existent, and are, moreover, essentially without causal powers”
Does anyone here have any differing or opposing opinions? Does anyone think that logical possibilities are a matter not of objective fact, but is a contingent reality? Just curious. I do not know if I will have much time to reply, but I will have time to read.
Comment #33 October 21st, 2024 at 10:44 pm
Scott #25
Trying to relativize one’s statements (philosophers call that “bracketing”) over interpretations produces phrases that sound a lot like Copenhagen, and that’s how I understand it. If you wanted to say what a wavefunction was but didn’t want to commit to MWI or pilot waves, “a representation of what’s knowable to us” is a tricky way to leave ambiguous the implication of the existence of a fact in itself. I think it serves as a template for a really good interpretation – if it could only be rephrased so that it referred to reality the way physics does, rather than to our subjective perceptions and first-person viewpoints.
Comment #34 October 22nd, 2024 at 1:14 am
Scott #25
Regarding Copenhagen:
“What counts as ‘the environment,’ and what counts as ‘the system’?”
Anything can count as the “system”, provided it is an isolated system. In Schrödinger’s box, there’s no cat until the box is opened.
“Can observers like ourselves be placed in coherent superposition, or not?”:
Of course, if we are to be placed in a box such as Schrödinger’s. But if that happens, we cease to exist. We are no different from the cat.
Regarding the universe, the fact that we exist suggests that there’s something akin to an external observer realizing this particular branch of the universe. But isolated systems inside it are undetermined/are not realized.
I understand this interpretation raises questions – an “external observer”? What happens to parts of the universe which are not in contact with our own (don’t exist?). I am not saying this is the correct interpretation, only that it maybe deserves a bit more credit 🙂 (btw I’m no expert and I’m clearly outside my area of expertise)
Comment #35 October 22nd, 2024 at 1:44 am
J. Hance & S. Hossenfelder claim that
but that seems to be wrong. At least I don’t see how to get a continuity equation in momentum space, when the potential is no longer assumed to be quadratic.
I initially wondered whether you had something similar in mind as Hance & Hossenfelder when you complained about the non-uniqueness of the “Bohm rule”. On relistening, I came to the conclusion that you are more worried about discrete systems. I don’t think that you can write down a “morally valid Bohm rule” for discrete systems, because there is no “morally valid” continuity equation in that case.
But the “Bohm rule” is “morally” non-unique in a very real sense, which is closely related to how discrete degrees of freedom are handled today in Bohmian mechanics, namely by simply not introducing any hidden variables for them. The related non-uniqueness is that you can do this for any inconvenient degree of freedom, for example for photons. But when you start down that road, you can also do that for individual electrons in superconductors, and only keep a hidden position variable for electron pairs.
This is fine, but seems to confirm Wolfgang Pauli’s criticism, that Bohmian mechanics is not really different from Copenhagen. Just like with Copenhagen, it becomes subjective and arbitrary which hidden variables (which act “morally” like hidden pre-measurements) are there.
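For reference, the non-relativistic “Bohm rule” at issue here is the velocity field built from the probability current (textbook non-relativistic forms):

ρ = |ψ|², j = (ħ/m)·Im(ψ*∇ψ), ∂ρ/∂t + ∇·j = 0, v_Bohm = j/ρ.

The “moral” non-uniqueness is visible directly: for any vector field a, the modified current j′ = j + ∇×a satisfies the same continuity equation, so it defines a different but equally equivariant guidance law, compatible with the same |ψ|² statistics.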
Comment #36 October 22nd, 2024 at 4:49 am
I have a hard time trying to understand how the Born rule can be arrived at when physicalism is combined with MWI. Many-minds interpretation looks like a brute force attempt.
Am I right that, physically, the many-minds interpretation can be seen as the Bohm interpretation with multiple non-interacting sets of particle configurations, whose initial states are arranged in such a way that there are no empty branches, and which are fine-tuned to split according to the Born rule? Or is no fine-tuning required? And your many minds are attached to a subset of those configurations.
Are there ways to arrive at the Born rule in a more… natural fashion?
When the future contains multiple equally valid continuations of you (with different amplitudes), then there’s no probability space. The outcome is singular: there are different versions of you observing different things. And if we try to create a probability space, we get many-minds.
Comment #37 October 22nd, 2024 at 3:44 pm
As someone who really likes studying classical algorithms (e.g. watching Erik Demaine’s MIT classes for fun), I find it very difficult to wrap my head around the limitations of qubits… no-cloning of state, the impossibility of accumulating arbitrary superpositions of basis states, … these are just a gigantic source of frustration, haha.
As Brian Greene noted, most of the quantum algorithms people keep bringing up were discovered 30 years ago… and I always wondered why someone like Scott hasn’t come up with half a dozen original algorithms yet.
On the other hand, IBM came up with Qiskit, so I guess there’s a growing demand for tools to explore algorithms beyond simple educational toys.
But I’ve always wondered how the effort to develop the hardware compares to the effort to come up with algorithms to use it.
Comment #38 October 22nd, 2024 at 4:48 pm
fred #37: I have invented or co-invented at least a half dozen quantum algorithms, including for finding a local minimum, for spatial search, and for star-free languages (all various applications of Grover), and the quantum protocol for detecting the bias of a coin with 1 qubit. It’s just that they’re ones you probably haven’t heard of. 🙂
For algorithms of greater practical significance, I think the main issue is exactly what I said in the podcast: not enough problems that are plausible candidates for superpolynomial speedups.
Comment #39 October 22nd, 2024 at 5:04 pm
fred #37,
Scott is the expert, but John Preskill said in a podcast with Sean Carroll that the application to classical problems as in Shor’s algorithm was a shock. In the long run, it is more likely that the use case may be to simulate quantum systems, not solve classical problems.
Comment #40 October 22nd, 2024 at 5:20 pm
Scott #38
The nonspecialist world has spent so long in a haze of doubt that I suspect the full practical implications of a fast solution to the discrete logarithm problem won’t be understood for many years hence.
Comment #41 October 23rd, 2024 at 1:59 am
MWI assumes the superposition of macroscopic objects, which would require the superposition of different space-time geometries, but we do not have a working quantum gravity theory for that.
Usually MWI proponents simply discuss the non-relativistic Schrödinger equation, which uses a time parameter; but this (implicitly) assumes the existence of classical clocks with classical observers.
I believe that there are good reasons to think that quantum gravity significantly affects the interpretation: the equivalent of the Schrödinger equation is the Wheeler-DeWitt equation, but it looks like Hψ = 0: there is no time evolution (and therefore no splitting of worlds).
I would also like to comment on the question if cats or humans can be in a superposition.
It would require complete isolation from the environment, which would quickly lead to dead cats and/or humans. Btw one can in principle shield an experiment from all interactions – except gravity.
Comment #42 October 23rd, 2024 at 6:50 am
wb #41: Yes, that’s exactly what Penrose thinks for example, that gravity will swoop in and solve the interpretation problem by making sufficiently large superpositions impossible.
Even then, though, it’s possible that a fault-tolerant quantum computer would evade the limit, treating gravitational decoherence as just yet another source of error to be corrected. An AGI running on a quantum computer would also evade your observation about it being impossible in practice to completely isolate a human being without killing them (this, incidentally, is related to why Deutsch thought of quantum computing in the first place — he thought that a superposed AGI might force everyone to accept the truth of many-worlds).
Anyway, since 1997, progress in AdS/CFT has led to a completely opposite perspective on gravity, that QM can actually handle superpositions of spacetime geometries just fine, because gravitational theories are dual to ordinary QFTs. (Well, at least in AdS universes with supersymmetry … but conceivably in our universe too!) Not coincidentally, Penrose doesn’t like AdS/CFT at all. 🙂
Comment #43 October 23rd, 2024 at 7:50 am
Can’t we “shield” an experiment from gravity by putting it in “free fall”?
You just have to put your QC in orbit! (assuming the tidal effect is tiny)
Comment #44 October 23rd, 2024 at 9:40 am
Scott #42
The Penrose idea has the advantage that it can be falsified, perhaps in our lifetime, and perhaps gravitation gets us to a 1-world interpretation.
But this was not really my point:
I think MWI (as it is usually discussed) is inconsistent, because it relies on a classical time / background.
Btw the AdS/CFT correspondence considers QFT defined on a classical background, which is the boundary of the quantized space-time, and I don’t think it changes the argument.
Good old Copenhagen, as explained e.g. by Heisenberg (who is quite easy to read), avoids these issues, simply acknowledging the existence of classical observers, who cannot be decoupled in any meaningful sense from the rest of the universe; therefore we (have to) use Born probabilities etc.
Of course it is just a (very good and very useful) approximation and does not tell us anything about the true nature of the world; but MWI gives us an inconsistent and misleading picture …
Comment #45 October 23rd, 2024 at 10:12 am
wb #44: In order for gravity to invalidate MWI, it’s not enough for it to change the situation. Rather, it has to change the situation so drastically as to overturn the prediction of branches of the wavefunction constantly splitting off, all continuing to exist but out of causal contact with each other. This is exactly what Penrose speculates that gravity does. But I’d say that the ball is firmly in the court of anyone who thinks gravity does do this, to articulate how.
Quantum mechanics, as you know, is notoriously hard to change even slightly while keeping its mathematical consistency. Which is a huge part of the appeal of AdS/CFT: that it doesn’t ask us to change quantum mechanics (but merely almost everything we thought we knew about space 🙂 ).
Comment #46 October 23rd, 2024 at 11:08 am
Re: Scott #25
Regarding the separation between system and environment, etc, in the responses to Steven Weinberg’s article in the NY review of books, David Mermin gave this subjective, human-centric interpretation from the QBist viewpoint. I also understand this to mean that the observer is not in a superposition with the measured system since for QBists the wave function is not physically real.
Comment #47 October 23rd, 2024 at 3:57 pm
Fred #43
I believe it was H. D. Zeh, in his book about the arrow of time, who made the argument that the gravitational interaction with atoms on Alpha Centauri disturbs the wave function of a human being on Earth (or any other macroscopic being) enough to make it impossible to define it.
Comment #48 October 23rd, 2024 at 11:28 pm
What’s your honest opinion about the current Nobel prize in physics and chemistry?
Comment #49 October 24th, 2024 at 12:38 am
I’ve never really understood the exact relation between the Bohmian interpretation and relativistic quantum field theory. Scott, which of these three statements would you say best matches your understanding?
1. There is no known way to combine any version of the Bohmian interpretation with relativity.
2. There is a natural way to combine the Bohmian interpretation with relativity, but it makes experimental predictions that differ from those of relativistic QFT. (I assume that this option is wrong but am including it for completeness.)
3. There is a straightforward extension of the Bohmian interpretation to relativistic QFT, and it matches the experimental predictions of standard relativistic QFT (e.g. superluminal correlations but no superluminal communication or causal influence), but it contains explicitly nonlocal terms in the relevant equations that seem conceptually incompatible with the “essence” of relativity.
If the answer is #3, then I would argue that the Bohmian interpretation is fully compatible with relativity. If the Bohmian formulation of relativistic quantum physics “looks” non-relativistic but turns out to be experimentally equivalent to a manifestly Lorentz-invariant theory, then I would argue that the former formulation is indeed “morally” Lorentz-invariant. This is like how one can formulate classical electromagnetism in a Hamiltonian formulation in which the Hamiltonian itself is defined with respect to a preferred Lorentz frame (and so is not Lorentz-invariant), but the entire theory turns out to be (non-obviously) Lorentz-covariant.
Comment #50 October 24th, 2024 at 4:19 am
wb #41
> I would also like to comment on the question if cats or humans can be in a superposition.
> It would require complete isolation from the environment, which would quickly lead to dead cats and/or humans.
A spacesuit would help. With respect to pressure forces and insulation of a body at 100bar/310K, 0.1mbar/4K is not significantly different from 0mbar/0K.
> Btw one can in principle shield an experiment from all interactions – except gravity.
In principle, an experiment can’t be shielded fully from anything, but when the strength of the interactions is made very low, quantization converts them into very low probabilities that can be “seen around” through repeated experiments. Having said that, storing quantum information in supercurrents and magnetic fields is about as close as you could get to shielding the information from gravity.
Comment #51 October 24th, 2024 at 4:44 am
Scott #42
> Anyway, since 1997, progress in AdS/CFT has led to a completely opposite perspective on gravity, that QM can actually handle superpositions of spacetime geometries just fine, because gravitational theories are dual to ordinary QFTs.
The gravitational duals aren’t anything new from AdS/CFT; they’re the regular ones from string theory (as far as I know, which I don’t really. 🙂 ) The CFT side is the only one we know is real.
Comment #52 October 24th, 2024 at 6:49 am
Mark #48:
What’s your honest opinion about the current Nobel prize in physics and chemistry?
I think deep learning and AlphaFold were both clearly epochal advances deserving of some Nobel Prize, and that the set of Nobel Prize categories we have is a weird accident of history.
Equally clearly, deep learning is not centrally physics, Hinton is not a physicist (though Hopfield is), and Hassabis is not a chemist (though Jumper is). On the other hand, if the physicists and chemists on the Nobel committee say that shouldn’t preclude them from winning their respective prizes, then who am I — a mere computer scientist — to argue with them?
One of my colleagues quipped that “CS is finally a real science, with Nobel Prizes and all.” Better yet, CS is so badass that it now wins Nobel Prizes despite not even having its own Nobel Prize! Though I’m guessing the physicists who chose to award Hinton saw it the opposite way: that once something like neural networks changes the world to this degree, it should be imperialistically claimed to have been “physics” all along. 😀
I wonder if this year was the harbinger of a future, not that far away, where the Nobel Prize for X will most often be for the creation of an AI related to X — and maybe even will eventually go to the AI itself. If so, I daresay we have bigger things to worry about than who wins the Nobel Prize. 🙂
Comment #53 October 24th, 2024 at 6:58 am
Ted #49: The correct answer is #3, though with the crucial caveat that the nonlocal hidden-variable theory compatible with a relativistic QFT won’t be “Bohmian mechanics,” but rather some new theory constructed for the purpose, probably no longer a deterministic one. And there will be infinitely many choices for such a theory (even leaving aside the choice of reference frame), with no empirical way to decide among them.
Comment #54 October 24th, 2024 at 11:19 am
I am still very confused by the difference between a “formal” and a “physical” operation in quantum mechanics.
Can you please define precisely what a “formal” operation is, versus a “physical” one? Is a formal one a linear transformation, and a physical one classical?
Comment #55 October 24th, 2024 at 12:05 pm
Student #54: Physical = an actual thing you could do to the state, like a unitary transformation or a measurement. Formal = a thing you can do to descriptions of the state written on a piece of paper. In this case, linearly combining two states actually is linear in those states (although not in their tensor product), but in any case it’s not unitary.
Comment #56 October 24th, 2024 at 12:06 pm
Ted #49 Scott #53
>the nonlocal hidden-variable theory compatible with a relativistic QFT won’t be “Bohmian mechanics,”
That could be a little too pessimistic. The Bohm rule originates from the idea of probability current, an intermediate step between fully classical and many-body quantum currents. To see why it works, first observe that non-interacting many-body systems have physics that is fully captured by the one-particle wavefunction, except for their overall number. If you put that into the normalization (using the rigged-Hilbert-space equivalence-class idea, so that you do not have to normalize) you get an effective wave function. This is seen in the Ginzburg-Landau model for superconducting currents (the wavefunction represents the number density of centers of mass of electron pairs). The local contribution to the expected value of the momentum operator is a vector field that aligns well with our classical idea of electrical current.
There’s a paper by somebody named Marx that works out probability current in the relativistic case. Maybe instead of Bohmians we’ll call them Marxists.
Comment #57 October 24th, 2024 at 5:52 pm
Student #54
“Formal” is when the observer and the wave function haven’t met yet and are about to be introduced for the first time, the future is still open and full of linear possibilities.
“Physical” is when the observer and the wave function have been on enough dates for things to get seriously entangled and either something really dirty is about to finally happen or the whole thing may collapse.
Comment #58 October 27th, 2024 at 3:10 pm
Hello Mr. Aaronson, as a classical programmer interested in starting in quantum computing, I’ve stumbled on a paper with a quite strange claim about computational complexity in part of its algorithm.
The paper talks about optimisation of a 5G network using a quantum genetic algorithm. The strange claim appears in section 2, where it says there is a quantum algorithm called “quantum existence testing (QET)” that is used to search for extreme values in an unsorted database. Am I reading it correctly that they claim QET can search in polylog time complexity, more precisely O(log(sqrt(N))^3), or am I reading it wrong?
I’ve used the search box on the blog to see if there was any mention of the “quantum extreme value searching algorithm” (QEVSA) or of “quantum existence testing” (QET) before, but haven’t found anything.
Best regards.
Best regards.
Comment #59 October 27th, 2024 at 5:00 pm
Isaac Gonzalez #58: That belongs to an enormous equivalence class of papers that I don’t bother reading, on the ground that if the thing advertised in the abstract had any validity (in this case, basically breaking everything we thought we knew about the limits of quantum algorithms), it would have implications vastly beyond the narrow area being talked about (in this case, 5G networks).
I’d love to teach people how not to put too much stock in any one paper that they dug up—a large fraction of papers are wrong, misleading, or simply don’t mean anything like what a casual reader thinks they mean.
Comment #60 October 31st, 2024 at 9:12 am
Message from a Brit on the US election and populism:
Hopefully our sanity will be able to make it through next week…
Comment #61 October 31st, 2024 at 1:24 pm
Scott has been busy lately. Jim has a good audience in the finance world. There are hucksters when money is involved, and bigger stakes attract bigger thieves as Ed Thorp once said. In his case, the warnings (about Madoff) were ignored for 16 years.
https://newsletter.osv.llc/p/quantumania
Comment #62 October 31st, 2024 at 2:12 pm
For Jewish Americans who are still on the fence (assuming there are any), Sam Harris vs Ben Shapiro on who to vote for:
Comment #63 November 1st, 2024 at 11:53 am
Scott #59: Well, it’s not just 5G networks that are getting those claims, but other optimisations too. In reality the claims derive from 2 authors, Sara El Gaily and Sándor Imre; the latter seems to be the one who “invented” the QET and QEVSA algorithms back in 2005 and 2007.
Since I have no free access to these last 2 papers, but they claim the QET algorithm is based on quantum counting (basically, it indicates whether the item is in the search space by returning a count of 0 or not), I’ve decided to run a simulated experiment on my PC, adapting the quantum counting example in Qiskit. I’ve found 2 interesting facts:
1) When the item is not in the database, the measurement is always 0, independently of the number of database qubits, the number of counting qubits for precision, or the number of shots. Literally you could get a 0 with a single counting qubit for any database size N, as long as you have no noise. But this is just the good half of the story…
2) It’s when there *is* an item in the database that the counting qubits for precision become necessary, because otherwise you get a lot of “false 0s”. How many? Well, the error in the Qiskit example can’t be made constant if the number of counting qubits (t) over the search qubits (n) is less than n/2, and that means 2^(n/2) Grover iterations, which translates to O(sqrt(N)) scaling with the database size. I have no idea how they got that log^3 wrapping on the complexity they claim.
The most advantage you could get, I think, is subtracting some counting qubits to gain a constant-factor speedup (in fact, in their papers they claim t = n/2 – 1 counting qubits are needed): for every counting qubit subtracted you’ll get a speedup of 2 (I think), but doing this I’ve noticed the error scales by 4, so you need more shots to avoid a “false 0”.
So to sum up, I don’t think there is an exponential speedup in the QET algorithm. At best, as the authors propose, you could get a factor of 1/2 instead of Grover’s pi/4 on O(sqrt(N)) complexity, a bit of improvement for checking whether an item is in a database, but nothing revolutionary I’m afraid.
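For anyone who wants to reproduce the baseline being compared against, here is a minimal plain-numpy statevector Grover sketch (no Qiskit needed; the marked index and register sizes below are arbitrary choices for illustration), showing the ~(π/4)·sqrt(N) iteration count:

    import numpy as np

    def grover_success(n_qubits, marked, iterations):
        """Probability of measuring `marked` after the given number of
        Grover iterations, simulated directly on the statevector."""
        N = 2 ** n_qubits
        psi = np.full(N, 1 / np.sqrt(N))   # uniform initial superposition
        for _ in range(iterations):
            psi[marked] *= -1              # oracle: phase-flip the marked item
            psi = 2 * psi.mean() - psi     # diffusion: inversion about the mean
        return psi[marked] ** 2

    for n in (6, 8, 10):
        k = int(np.pi / 4 * np.sqrt(2 ** n))   # ~ (pi/4) * sqrt(N) iterations
        print(f"n = {n}: {k} iterations, success = {grover_success(n, 0, k):.4f}")

Each doubling of N roughly multiplies the required iterations by sqrt(2), which is the O(sqrt(N)) wall that the claimed log^3 complexity would have to get around.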
Comment #64 January 2nd, 2025 at 8:15 pm
Hi Dr. Aaronson,
Thank you for your fascinating post on parallelism. It got me thinking about a potential connection between two key quantum experiments: the double-slit experiment and the phenomenon of negative time observed in photon interactions (e.g., photons passing through ultracold rubidium clouds).
In the double-slit experiment, we see photons exhibit wave-particle duality and an interference pattern when unobserved. However, recent studies on negative time correlations suggest that photons might influence atomic states before they arrive, raising questions about retrocausality and temporal superpositions.
Could these two phenomena be connected? Specifically:
• Might the interference pattern in the double-slit experiment arise from photons interacting with their own future states across a temporal superposition?
• Could the act of observation collapse not only spatial but also temporal superpositions, enforcing a classical forward-time trajectory?
I realize this is speculative, but it feels like a natural extension of quantum mechanics’ strange relationship with time. I’d love to hear your thoughts on whether these ideas resonate with current research or if this connection has been explored.
Thank you for your time and for fostering such an insightful community here on your blog!