On tardigrades, superdeterminism, and the struggle for sanity
(Hopefully no one has taken that title yet!)
I waste a large fraction of my existence just reading about what’s happening in the world, or discussion and analysis thereof, in an unending scroll of paralysis and depression. On the first anniversary of the January 6 attack, I read the recent revelations about just how close the seditionists actually came to overturning the election outcome (e.g., by pressuring just one Republican state legislature to “decertify” its electors, after which the others would likely follow in a domino effect), and how hard it now is to see a path by which democracy in the United States will survive beyond 2024. Or I read about Joe Manchin, who’s already entered the annals of history as the man who could’ve halted the slide to the abyss and decided not to. Of course, I also read about the wokeists, who correctly see the swing of civilization getting pushed terrifyingly far out of equilibrium to the right, so their solution is to push the swing terrifyingly far out of equilibrium to the left, and then they act shocked when their own action, having added all this potential energy to the swing, causes it to swing back even further to the right, as swings tend to do. (And also there’s a global pandemic killing millions, and the correct response to it—to authorize and distribute new vaccines as quickly as the virus mutates—is completely outside the Overton Window between Obey the Experts and Disobey the Experts, advocated by no one but a few nerds. When I first wrote this post, I forgot all about the global pandemic.) And I see all this and I am powerless to stop it.
In such a dark time, it’s easy to forget that I’m a theoretical computer scientist, mainly focused on quantum computing. It’s easy to forget that people come to this blog because they want to read about quantum computing. It’s like, who gives a crap about that anymore? What doth it profit a man, if he gaineth a few thousand fault-tolerant qubits with which to calculateth chemical reaction rates or discrete logarithms, and he loseth civilization?
Nevertheless, in the rest of this post I’m going to share some quantum-related debunking updates—not because that’s what’s at the top of my mind, but in an attempt to find my way back to sanity. Picture that: quantum mechanics (and specifically, the refutation of outlandish claims related to quantum mechanics) as the part of one’s life that’s comforting, normal, and sane.
There’s been lots of online debate about the claim to have entangled a tardigrade (i.e., water bear) with a superconducting qubit; see also this paper by Vlatko Vedral, this from CNET, this from Ben Brubaker on Twitter. So, do we now have Schrödinger’s Tardigrade: a living, “macroscopic” organism maintained coherently in a quantum superposition of two states? How could such a thing be possible with the technology of the early 21st century? Hasn’t it been a huge challenge to demonstrate even Schrödinger’s Virus or Schrödinger’s Bacterium? So then how did this experiment leapfrog (or leaptardigrade) over those vastly easier goals?
Short answer: it didn’t. The experimenters couldn’t directly measure the degree of freedom in the tardigrade that’s claimed to be entangled with the qubit. But it’s consistent with everything they report that whatever entanglement is there, it’s between the superconducting qubit and a microscopic part of the tardigrade. It’s also consistent with everything they report that there’s no entanglement at all between the qubit and any part of the tardigrade, just boring classical correlation. (Or rather that, if there’s “entanglement,” then it’s the Everett kind, involving not merely the qubit and the tardigrade but the whole environment—the same as we’d get by just measuring the qubit!) Further work would be needed to distinguish these possibilities. In any case, it’s of course cool that they were able to cool a tardigrade to near absolute zero and then revive it afterwards.
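To make the distinction concrete (notation mine, not the paper’s): genuine entanglement between the qubit Q and some tardigrade degree of freedom T would mean a joint state like
\[ |\psi\rangle_{QT} = \tfrac{1}{\sqrt{2}}\left(|0\rangle_Q |t_0\rangle_T + |1\rangle_Q |t_1\rangle_T\right), \]
whereas boring classical correlation would mean the mixture
\[ \rho_{QT} = \tfrac{1}{2}\left(|0\rangle\langle 0|_Q \otimes |t_0\rangle\langle t_0|_T + |1\rangle\langle 1|_Q \otimes |t_1\rangle\langle t_1|_T\right). \]
The two produce identical statistics as long as each system is measured only in its “preferred” basis; telling them apart requires coherent measurements in complementary bases on both systems, which is exactly what couldn’t be done on the tardigrade side.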
I thank the authors of the tardigrade paper, who clarified a few of these points in correspondence with me. Obviously the comments section is open for whatever I’ve misunderstood.
People also asked me to respond to Sabine Hossenfelder’s recent video about superdeterminism, a theory that holds that quantum entanglement doesn’t actually exist, but the universe’s initial conditions were fine-tuned to stop us from choosing to measure qubits in ways that would make its nonexistence apparent: even when we think we’re applying the right measurements, we’re not, because the initial conditions messed with our brains or our computers’ random number generators. (See, I tried to be as non-prejudicial as possible in that summary, and it still came out sounding like a parody. Sorry!)
Sabine sets up the usual dichotomy that people argue against superdeterminism only because they’re attached to a belief in free will. She rejects Bell’s statistical independence assumption, which she sees as a mere dogma rather than a prerequisite for doing science. Toward the end of the video, Sabine mentions the objection that, without statistical independence, a demon could destroy any randomized controlled trial, by tampering with the random number generator that decides who’s in the control group and who isn’t. But she then reassures the viewer that it’s no problem: superdeterministic conspiracies will only appear when quantum mechanics would’ve predicted a Bell inequality violation or the like. Crucially, she never explains the mechanism by which superdeterminism, once allowed into the universe (including into macroscopic devices like computers and random number generators), will stay confined to reproducing the specific predictions that quantum mechanics already told us were true, rather than enabling ESP or telepathy or other mischief. This is stipulated, never explained or derived.
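For concreteness, the assumption at issue can be written in one line. A local hidden-variable model, with hidden state λ and detector settings x, y, predicts
\[ P(a,b \mid x,y) = \int d\lambda \, \rho(\lambda) \, P(a \mid x, \lambda) \, P(b \mid y, \lambda), \]
and statistical independence is the further requirement that
\[ \rho(\lambda \mid x, y) = \rho(\lambda), \]
i.e., the hidden variables don’t care which settings the experimenters will choose. Superdeterminism keeps the local factorization but drops this condition, letting ρ(λ|x,y) depend on the settings in whatever way is needed to reproduce the quantum predictions.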
To say I’m not a fan of superdeterminism would be a super-understatement. And yet, nothing I’ve written previously on this blog—about superdeterminism’s gobsmacking lack of explanatory power, or about how trivial it would be to cook up a superdeterministic “mechanism” for, e.g., faster-than-light signaling—none of it seems to have made a dent. It’s all come across as obvious to the majority of physicists and computer scientists who think as I do, and it’s all fallen on deaf ears to superdeterminism’s fans.
So in desperation, let me now try another tack: going meta. It strikes me that no one who saw quantum mechanics as a profound clue about the nature of reality could ever, in a trillion years, think that superdeterminism looked like a promising route forward given our current knowledge. The only way you could think that, it seems to me, is if you saw quantum mechanics as an anti-clue: a red herring, actively misleading us about how the world really is. To be a superdeterminist is to say:
OK, fine, there’s the Bell experiment, which looks like Nature screaming the reality of ‘genuine indeterminism, as predicted by QM,’ louder than you might’ve thought it even logically possible for that to be screamed. But don’t listen to Nature, listen to us! If you just drop what you thought were foundational assumptions of science, we can explain this away! Not explain it, of course, but explain it away. What more could you ask from us?
Here’s my challenge to the superdeterminists: when, in 400 years from Galileo to the present, has such a gambit ever worked? Maxwell’s equations were a clue to special relativity. The Hamiltonian and Lagrangian formulations of classical mechanics were clues to quantum mechanics. When has a great theory in physics ever been grudgingly accommodated by its successor theory in a horrifyingly ad-hoc way, rather than gloriously explained and derived?
Update: Oh right, and the QIP’2022 list of accepted talks is out! And I was on the program committee! And they’re still planning to hold QIP in person, in March at Caltech, fancy that! Actually, I have no idea—but if they do move to virtual, I’m awaiting an announcement just like everyone else.
Comment #1 January 10th, 2022 at 8:48 pm
Shades of Descartes’ Evil Genius!!!
Comment #2 January 10th, 2022 at 8:59 pm
I wasn’t a big fan of Bub & Bub’s “Totally Random: Why Nobody Understands Quantum Mechanics”, but I did find this line about superdeterminism delightful:
“We’re really just unwitting pawns playing out a sinister predetermined plan laid out by the toaster.”
(Wish I could find a picture of the relevant page online.)
https://www.goodreads.com/work/quotes/58411030-totally-random-why-nobody-understands-quantum-mechanics-a-serious-comi
Comment #3 January 10th, 2022 at 9:39 pm
Having listened to some of the video:
On the one hand, Sabine ought to stop beating that poor red herring to death.
On the other hand, it’s kind of the fault of the physicists who referred to the “the qubit does not know how many points Michael Jordan scored on Nov 7, 1990” assumption (or the “the qubit is not omniscient” assumption) as the “free will assumption.” Because free will is a bogus idea, and we shouldn’t use the same words to refer to things that are obviously true.
Comment #4 January 10th, 2022 at 9:43 pm
Rand #3: I completely agree with you that “free will assumption,” “free will loophole,” and “Free Will Theorem” were some of the most misleading terminology in the history of science! Having said that, I still wish Sabine had cleared these terminological mines in her video, just dispensed immediately with the “free will” part and talked about whether statistical independence is a presupposition of science, rather than all but walking into the mines!
Comment #5 January 10th, 2022 at 10:20 pm
I don’t understand how anyone can think superdeterminism is a good theory. A theorist focused on elegance and beauty would not like it: it’s an overcomplicated and inelegant theory. A practically minded theorist would not like it either. “Tell me, what equations can you use to predict superdeterministic effects?” he or she might ask, but would be met with silence. “Wait a sec, does it even predict any new effects that don’t show up in standard quantum theory?” Experimentalists would absolutely loathe it, of course. At least an interpretation like Many-Worlds could be falsified by observing the spontaneous collapse of a superposition. Superdeterminism allows pretty much anything to happen. Plus, it implies that experimentalists don’t have free will, though it allows theorists to have free will, as long as they don’t actually perform any experiments. Definitely not fair!
Comment #6 January 10th, 2022 at 10:37 pm
Is there in principle any experiment that can distinguish between standard quantum mechanics (say, the ‘collapse of the wave function’ version) and superdeterminism? Or are they just two different sets of words to explain the same physical and mathematical phenomena? (I don’t see how you can get superdeterminism to predict anything; it seems to rationalize anything that could conceivably happen. But Hossenfelder describes it as just another description of the same quantum mechanics, so I assume whatever they do, they somehow arrive at the same math?) (But she also says something I don’t understand about being able to test it if you can get the deterministic hidden-variable system out of the chaotic regime by using very small systems at very low temperature, which sounds like they don’t necessarily arrive at the same math — or else do arrive at the same math, but only by assuming chaotic dynamics??)
Comment #7 January 10th, 2022 at 10:53 pm
Ken Miller #6: The Nobel laureate Gerard ‘t Hooft is the most famous advocate of superdeterminism. On the basis of superdeterminism, ‘t Hooft has predicted that quantum computers will never outperform similarly-sized classical computers, if the classical computers could perform operations at the Planck scale.
The trouble is, I don’t understand why ‘t Hooft predicts this! I.e., supposing a quantum computer works exactly like QM says it should, why couldn’t ‘t Hooft “explain” that in exactly the same way he “explains” Bell inequality violation: namely, it was all part of the universe’s superdeterministic conspiracy going back to the Big Bang?
Meanwhile, in her video, Sabine argues that the way to test superdeterminism experimentally is to search for non-random patterns in quantum measurement outcomes. I’m not sure whether she appreciates the amount of violence that such patterns, if they existed, would do to the whole structure of QM (e.g., they’d generically lead to superluminal signaling, unless one somehow carefully engineered them not to).
In any case, this strikes me as yet another example of something that could happen under superdeterminism, but only because pretty much anything could happen under superdeterminism … including zero deviations from the predictions of standard QM! 🙂
Comment #8 January 10th, 2022 at 11:11 pm
My take on the tardigrade entanglement paper was largely the same: the real kernel of interesting science was the biology of tardigrades (I have to imagine dilution refrigerators capable of hitting ~10 mK aren’t standard biology-lab fare). Based on the details of the experiment, it is hard for me to see what is so special about the role of the tardigrade here. Why couldn’t I just as well stick my finger onto the chip, measure the coupling, and claim that my finger (and by extension myself) was entangled with a transmon qubit? Admittedly, those skin cells likely wouldn’t survive the process very long…
Re: superdeterminism. It is interesting to me how this idea remains so tempting to so many people, including generally excellent scientific minds. There is a certain religiosity to this line of thinking, in its inherent unfalsifiability. That said, I walked away from the video thinking that what Sabine was defining as superdeterminism was subtly different from how I’d define it, and closer to what I would call a nonlocal hidden-variable theory (or at the very least isomorphic to one), not all that different from, say, Bohmian mechanics. In any case, one thing we know about nonlocal hidden-variable theories is that there is an embarrassment of riches in this setting: picking from among an infinite number of theories/rules for the dynamics of the nonlocal hidden variables, all of which are entirely indistinguishable from the predictions of quantum mechanics, requires adopting some principled choice. Any other takes on this?
Comment #9 January 10th, 2022 at 11:39 pm
Corey #8: In a nonlocal hidden-variable theory like Bohmian mechanics, there’s no need to talk about the measurement settings being predetermined in extremely specific ways since the beginning of the universe, or violating statistical independence, or anything like that. But Sabine did talk about that … which only makes sense to me if, unlike in Bohmian mechanics, she wants to dispense with the wavefunction of the universe and go “full superdeterminist.” What did she say that made you think otherwise?
Comment #10 January 10th, 2022 at 11:48 pm
“superdeterminism, a theory that holds that quantum entanglement doesn’t actually exist”
This is bluntly wrong. I never said anything like that and I have no idea where you got it from. Superdeterministic theories can full well contain entangled states.
“but the universe’s initial conditions were fine-tuned to stop us from choosing to measure qubits in ways that would make its nonexistence apparent: even when we think we’re applying the right measurements, we’re not, because the initial conditions messed with our brains or our computers’ random number generators. (See, I tried to be as non-prejudicial as possible in that summary, and it still came out sounding like a parody. Sorry!)”
Indeed you are delivering an excellent parody of all the nonsense that people have said about superdeterminism. Seriously, Scott, you aren’t doing yourself any favor there. I have given dozens of talks (some of which you can find on YouTube) in which I explain why this is a trivial mistake in interpreting a correlation. Experimenters can choose whatever settings they like. It’s just that the evolution of the prepared state depends on the setting.
Look, I understand you are short on time, so am I, and I don’t expect you to read my papers and watch my lectures. But maybe if you don’t you shouldn’t comment on them.
Comment #11 January 11th, 2022 at 12:37 am
A little weird to discuss a popularisation video rather than her actual published papers on the topic, but anyway. Sabine seems to simply say that yes, you can make whatever experiment you want. But since all parts of the experiment must be causally connected, and the universe is deterministic and reversible, there’s no difference between what you get (or choose) at the end of the experiment and what you prepared before it — they are just time-translations of each other. Doesn’t sound like your “initial conditions messed with our brains” at all.
As for the quest to find non-random patterns in quantum measurement outcomes, she actually says that she has specific experiments in mind, which, according to her, were simply never done. Given she’s a particle physicist (while I’m not), I tend to trust her on that point.
Comment #12 January 11th, 2022 at 12:54 am
Sabine, this is just semantics. If you had “entangled states” as I understand the term, then superdeterminism would be false, because the outcomes of measurements on those entangled states would follow the Born rule. What exists in a superdeterministic theory is a configuration of local hidden variables that simulates an entangled state—something that, on its face, would contradict Bell’s Theorem, and which doesn’t do so here only because we also get to violate statistical independence.
(Or, to spell it out in plain English, because the universe is postulated to have fine-tuned initial conditions that prevent us from ever choosing measurement bases that would cause us to see the deviation from QM. Or is your version of this different from ‘t Hooft’s? Because he’s been clear as day about this!)
If, as you say, “experimenters can choose whatever settings they like,” and it’s just that the evolution of the prepared state depends on the measurement setting, then there are two cases:
(1) The prepared state depends on the measurement setting, but only locally, so that Bob’s output probabilities are always independent of Alice’s measurement choices and vice versa. In this case, we’re right back in the situation that Bell’s Theorem standardly covers. Meaning that you can’t have local hidden variables, free choice for the experimenters, and the predictions of QM, full stop. “Pick any two.”
(2) The prepared state depends on the measurement settings in a global way, so that Bob’s output probabilities can depend on Alice’s measurement choices or vice versa. In this case, we clearly have nonlocality even “worse” than the standard nonlocality of QM—worse because it will yield superluminal signalling, unless it’s fine-tuned not to do so—so what was the point of postulating superdeterminism in the first place?
I’d be curious about your response to my main point, which is that taking superdeterminism seriously in the first place requires treating QM as a vast “anti-clue,” a red herring that systematically leads us astray about the nature of the world, a thing to be grudgingly accommodated in an antithetical framework rather than derived or explained. Or as Greg Kuperberg put it on Facebook, superdeterminism is “an attempt to accept the technical, mathematical truth, and yet stay in philosophical denial”—like epicycles except a million times more so, because now the epicycles actually affect the positioning of our telescopes, preventing us from making the observations that would confirm the geocentric theory.
Comment #13 January 11th, 2022 at 1:15 am
Dmitri Urbanowicz #11:
– I commented on the video because that’s what readers asked me to comment on, and because that’s the thing that’s actually reaching non-experts and giving them a completely mistaken impression that superdeterminism is viable. It’s not. It’s been Sabine’s and Gerard ‘t Hooft’s hobbyhorse for many years. The only thing to be said in its favor is that if Sabine and Gerard ‘t Hooft declared that 5+5=8 … well, they’re obviously smart and sensible on many other topics, so maybe the rest of the math and physics world really has been missing something of immense importance. But then every time you look into it again, it still looks like a foot-stomping claim that 5+5=8…
– As I explained in #12, if you’re going to have nonlocal dependencies, then you might as well just accept QM (or a deterministic nonlocal hidden-variable theory, like Bohmian mechanics) and be done with it! The entire point of superdeterminism was supposed to be that you don’t want nonlocality! If it’s not going to get rid of that, then what is the point?
– People have actually done many searches for deviations from the Born rule (including, but not limited to, pseudorandom patterns in measurement outcomes). Needless to say (because otherwise you would’ve heard), they’ve never found anything.
– Beyond that, though, you might not appreciate that even a tiny deviation from the Born rule would generically lead to superluminal signalling, unless that was ruled out by yet more restrictions on allowed experiments—see, e.g., my paper Is Quantum Mechanics An Island In Theoryspace? (although the point is a standard one that predates my paper by a long time). I’ll sketch the argument at the end of this comment.
– Finally, as I said, even if superdeterminism were true, we could still get no empirical deviation of any kind from standard QM! In other words, the empirical situation is kind of like that for supersymmetry or string theory—ironically, given how acerbic Sabine has been about those topics. If any deviation from the Born rule is ever found, QM is overthrown. But if zero deviations continue to be found, superdeterminism can still never be falsified. At least string theory could probably be confirmed or falsified if we could build particle accelerators capable of reaching the Planck energy!
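Here’s the promised sketch, using the |amplitude|^p variants considered in that paper: suppose outcome probabilities scaled like the p-th power of the amplitudes for some p ≠ 2. Writing a shared state in Alice’s measurement basis, Bob’s marginal would be
\[ P(b) = \frac{\sum_a |\psi_{ab}|^p}{\sum_{a',b'} |\psi_{a'b'}|^p}. \]
Changing Alice’s measurement basis acts as a unitary on the index a, which preserves \(\sum_a |\psi_{ab}|^2\) for every b, but for p ≠ 2 generically changes \(\sum_a |\psi_{ab}|^p\). So Bob’s local statistics would shift with Alice’s faraway choice of basis: superluminal signalling.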
Comment #14 January 11th, 2022 at 1:48 am
Scott, I don’t know if you’re aware of this, but Sabine doesn’t believe in fine-tuning, so that particular argument won’t matter. (By which I mean she doesn’t believe that there is a principled way to distinguish fine-tuned from non-fine-tuned theories, not that she believes the world isn’t fine-tuned.)
The vague impression I got from reading her and Tim Palmer’s paper a while back is that they’re proponents of something a bit different from ‘t Hooft, though I still can’t quite understand how it’s supposed to work. (I’ve also seen Tim Palmer give a talk on his specific version with a p-adic phase space.) The impression I’ve gotten (and I very much hope she corrects me here, because I’m certainly misunderstanding something) is that she would distinguish correlations between the prepared state and the measurement settings from correlations between the measured state and the measurement settings. That is, she doesn’t think they need to be correlated with each other until the time of measurement. I don’t get how this is supposed to make any sense with causality (she’s previously described a notion of causality as continuity of correlation across space-time, which this certainly looks like it would violate), but again, I’m almost certainly misunderstanding some part of how this is supposed to work.
Regarding the appeal of superdeterminism, one aspect is pretty understandable: it leaves the door open to a theory that actually predicts the results of quantum measurements, letting people do something they couldn’t do before. That would be pretty cool/useful if true. It’s still a theory of an effect whose scale can always, conveniently, be pushed just out of experimental range, though, which you’d think Sabine would be against given her opinions on SUSY…but eh.
Comment #15 January 11th, 2022 at 2:06 am
Scott #12:
> The prepared state depends on the measurement setting, but only locally, so that Bob’s output probabilities are always independent of Alice’s measurement choices and vice versa
But shouldn’t we consider the whole experiment? It has a single entry point in space-time (Alice and Bob must be instructed) and a single exit point (their measurements must be reported back and collected).
Comment #16 January 11th, 2022 at 2:33 am
I don’t understand why super-determinism should be bad. It’s just the old determinism of classical physics. Determinism does not mean going back to classical physics; it’s an attempt to go one step further: quantum mechanics plays dice, so a legitimate research question is to ask what its dice are. Determinism just means that we can ignore the Bell no-go theorem against a local hidden dynamics that produces the apparent randomness. The real problem is that nobody (as far as I know) has found a valid mechanism that produces what quantum mechanics postulates.
Comment #17 January 11th, 2022 at 3:56 am
Great post.
I still would like to have a precise description of an experiment proving super-determinism…
Comment #18 January 11th, 2022 at 4:04 am
Scott, what superdeterminists don’t seem to realise is that they are in fact advocating a new form of Intelligent Design. The claim as I understand it is that initial conditions in the early universe uniquely determine everything that happens, there is no randomness at all, even though the fluctuations on the last scattering surface as measured by the Planck satellite look like modulated random Gaussian fluctuations. In fact they somehow contain a coded version of everything written in this blog, because they uniquely determined what is written here. There is no physical chain proposed as to how this data gets into our brains, but don’t worry about that, it’s not fundamental physics, is it? The key issue, however, is this: this blog contains a set of (more or less) logical arguments. How did such arguments, which are NOT random precisely because they contain logical claims, get coded into whatever kind of initial data the superdeterminists imagine is behind everything we see? Who or what wrote them into the initial data? The theory in fact hides some demiurge or God who performs this extraordinary feat of pre-arrangement, writing into that data also Shakespeare’s sonnets, the battle of Waterloo, Trump’s tantrums, and so on.
Comment #19 January 11th, 2022 at 5:44 am
IME, the Bohmians, the Everettians, and now the superdeterminists have always seemed at first unaware of, and then unable or unwilling to absorb, current knowledge.
Comment #20 January 11th, 2022 at 6:26 am
I’d like to give superdeterminism a fair chance, but it’s difficult to find any serious engagement with it. Also, its proponents never seem to deal with much of the criticism directed at it. For example, here is a talk by Indrajit Sen at PIRSA. This conclusion struck me as important for superdeterministic theories:
“Developing an intuitive criticism by Bell, we show that superdeterministic models are conspiratorial in a mathematically well-defined sense in two separate ways. In the first approach, we use the concept of quantum nonequilibrium to show that superdeterministic models require finetuning so that the measurement statistics do not depend on the details of how the measurement settings are chosen. In the second approach, we show (without using quantum non-equilibrium) that an arbitrarily large amount of superdeterministic correlation is needed for such models to be consistent. Along the way, we discuss an apparent paradox involving nonlocal signalling in a local superdeterministic model.”
https://pdf.pirsa.org/files/20120023.pdf
Comment #21 January 11th, 2022 at 7:56 am
Paul Hayes #19: What’s the new knowledge that the Bohmians and the Everettians were at first unaware of and then unwilling or unable to absorb? (The Bohmians famously have a hard time absorbing quantum field theory, but that’s been the case since the very beginning!)
Comment #22 January 11th, 2022 at 8:07 am
George F R Ellis #18: You just made me realize that perhaps my real objection to superdeterminism is as follows. If you’re going to put Intelligent Design into the initial conditions of the universe—and try to talk around it however you like, that’s what it is—then why the hell would you waste it on such boring goals as evading Bell’s Theorem or making QM deterministic?? Why not give humans libertarian free will or give human history a divine purpose, which you can now do for no extra cost in the complexity of specifying physics?
It’s like in the Harry Potter world, where the witches and wizards have discovered that the fundamental laws of the universe do care about human goals and intentions after all … and they use that titanic discovery, in large part, to create vastly inferior versions of what the Muggles are able to do with their cell phones.
Comment #23 January 11th, 2022 at 8:18 am
One could take the Tegmark approach to the initial conditions and say “all possible configurations exist”.
Comment #24 January 11th, 2022 at 8:25 am
I haven’t studied QM in great depth, and I am not all that clear on what is meant by ‘superdeterminism’ both in Sabine’s video and elsewhere, but might this be a case of talking past one another due to differing interests? Maybe the nondeterminists are more focused on the ‘user interface’ of QM (that, yes, does look nondeterministic), and the determinists are more interested in QM’s ‘inner workings’ (that, in contrast, actually look deterministic)?
(My background: mainly Yudkowsky, a shut-up-and-calculate course, and reading on Physics SE and such.)
Comment #25 January 11th, 2022 at 8:46 am
Scott #12:
This is basically the correct answer from Sabine’s work, if you remove all the dismissiveness and claims of fine-tuning. Superdeterminism is an umbrella term for a whole class of theories, some of which are implausible in the sense you describe, and some of which are actually very plausible.
I think the time spent watching her video would have been better spent reading her toy superdeterministic model to get a feel for what such a theory might actually look like, and why your objections don’t seem well-motivated:
A Toy Model for Local and Deterministic Wave-function Collapse, https://arxiv.org/abs/2010.01327v5
If you want more rigorous versions of the arguments she presented in the video, see her paper:
Rethinking Superdeterminism, https://www.frontiersin.org/articles/10.3389/fphy.2020.00139/full
Comment #26 January 11th, 2022 at 8:48 am
George F R Ellis #18: Doesn’t that objection apply equally well to plain old determinism? E.g. in Bohmian mechanics this blog was determined by the initial hidden variables of the universe. That doesn’t particularly bother me in the plain old determinism case (partly because I’m a compatibilist about free will) so I don’t see why it should bother me in the super-deterministic case either. What does bother me about the super-deterministic case is that if somebody uses the blog to choose the settings for a Bell experiment then the blog had to be determined in a very specific way in order to fake the inequality violation.
Comment #27 January 11th, 2022 at 8:50 am
This latest video by Hossenfelder made me actually angry. She quotes arguments by Bell, Gisin, Zeilinger, Shimony, Horne, Clauser, and Maudlin about why superdeterminism is a terrible idea, and dismisses them with “As you can see, we have no shortage of men who have strong opinions about things they know very little about.” The lack of self-awareness is astounding. Has she considered the possibility that she is the one having strong opinions about something she knows very little about? The people she is insulting have worked most of their lives on the foundations of quantum mechanics, and are well-deservedly famous for their contributions. Even if they are wrong, they most definitely know a lot about it.
Comment #28 January 11th, 2022 at 9:01 am
Greg Guy #20: The reason is that pretty much nobody in the quantum foundations community takes superdeterminism seriously, so we won’t waste our time writing a paper debunking it. I did waste my time reading Hossenfelder and Palmer’s paper and writing a blog post criticizing it. What for? Hossenfelder simply ignored it. To be fair to Palmer, he did respond.
Comment #29 January 11th, 2022 at 9:47 am
Scott,
The best argument for superdeterminism is that it is the only way one could explain entanglement in a local way. In a nutshell, EPR proved that QM is either non-local or incomplete, while Bell proved that the only possible way to complete QM and keep it local is superdeterminism.
Given the fact that locality is a fundamental physical principle of all modern physics (standard model and GR) any non-local explanation has a very, very low prior probability. Superdeterminism on the other hand does not contradict any known physics. It is therefore perfectly reasonable to accept superderminism.
Superdeterminism is also not ad hoc, at least not necessarily so. We know that the source and detectors interact electromagnetically (they consist of charged particles, electrons and nuclei), and we know that the emission of entangled particles is an electromagnetic phenomenon, so why is it so surprising that the observed correlations are caused in this way? There is no need to invent new physical effects or fine-tune anything. The source and detectors get correlated for the same reason stars in a galaxy are correlated: long-range interactions between them.
A short mention of the medical-test argument proposed by Tim Maudlin and others. In this case we oppose superdeterminism to the mundane explanation that the medicine works. The prior probability that a medicine works as expected is quite high. On the other hand, the hypothesis that the EM interaction between the patients and the doctors (or whatever is used to select them) is going to have a statistically significant effect (a single virus killed is not enough to cure the disease) is very low. So, it is reasonable to dismiss superdeterminism in this case. The crucial difference between entanglement and medical tests is that in the case of entanglement the outcome depends on the state of a single particle; it’s not a macroscopic/statistical effect.
Comment #30 January 11th, 2022 at 9:49 am
Mateus #28: Thanks for the link to your earlier post, which I hadn’t seen! I enjoyed reading it as well as the comments on it. One commenter of yours, in particular, managed to state things more succinctly than any of us:
Superdeterminism solves QM problems in the same way as guillotine solves headache
Comment #31 January 11th, 2022 at 9:50 am
phi,
“I don’t understand how anyone can think superdeterminism is a good theory.”
EPR proved that QM is either non-local or incomplete, while Bell proved that the only possible way to complete QM and keep it local is superdeterminism.
Given the fact that locality is a fundamental physical principle of all modern physics (standard model and GR) any non-local explanation has a very, very low prior probability. Superdeterminism on the other hand does not contradict any known physics. It is therefore perfectly reasonable to accept superderminism.
Comment #32 January 11th, 2022 at 9:59 am
Scott,
“The Nobel laureate Gerard ‘t Hooft is the most famous advocate of superdeterminism. On the basis of superdeterminism, ‘t Hooft has predicted that quantum computers will never outperform similarly-sized classical computers, if the classical computers could perform operations at the Planck scale.”
‘t Hooft’s model is a discrete one. The universe consists of Planck-sized cubes that can have a certain number of states. This imposes a limitation on the computer’s power. It has nothing to do with superdeterminism per se; a continuous superdeterministic model would not have such a limitation.
Comment #33 January 11th, 2022 at 10:04 am
Matthijs #24:
Might this be a case of talking past one another due to differing interests? Maybe the nondeterminists are more focused on the ‘user interface’ of QM (that, yes, does look nondeterministic), and the determinists are more interested in QM’s ‘inner workings’ (that, in contrast, actually look deterministic)?
No, I’m interested in the inner workings too. The right question to ask is what you gain by imagining the inner workings to be secretly deterministic, and whether the gain is worth the cost. Even with Bohmian mechanics, I don’t think it’s worth the cost, although reasonable people could disagree. With superdeterminism, by contrast, the trade is close to the worst in the history of science—again, like using a guillotine to cure a headache, or magic to place a phone call.
From reading blog comment sections like this, you could easily get the mistaken impression that this is a “debate” between two opposed “camps.” This impression, however, is entirely due to a failure of Statistical Independence: the probability that someone shows up to argue about superdeterminism, is not independent of the probability that they’re a superdeterminist. If you polled (say) theoretical physicists or quantum information or quantum foundations researchers, there would be at least a hundred — not a figure of speech, literally 100 — saying what I and Mateus are saying for every one who followed Sabine and Gerard ‘t Hooft’s line.
Comment #34 January 11th, 2022 at 10:06 am
Focusing only on statistical independence: isn’t it true that statistical independence is an illusion, since everything in the universe affects and is affected by everything else (in terms of quantum fields or plain classical field equations)?
Meaning that, whatever choice Alice and Bob seem to pick independently, a Maxwell-type demon could trace Alice’s and Bob’s evolution back to a common set of causes that determines what they will choose?
I’m not saying that this then invalidate Bell or QM, but I’d like to understand how I’m wrong about this:
In the Copenhagen interpretation, “pure” quantum randomness is a thing, and maybe it’s what gives you statistical independence (severing whatever causal links at some point existed between Alice’s state and Bob’s state), but, with Everett, there’s only the wave function of the universe and its deterministic evolution, so, in that case, how can statistical independence ever arise?
Comment #35 January 11th, 2022 at 10:12 am
I guess the problem is that “statistical independence” was never clearly defined, in the sense that we don’t know whether it’s a statement about evolution at the micro level or at the macro level: with enough noise, any two chaotic systems always appear statistically independent, like Alice and Bob each tossing a coin.
Similarly, even though the evolution of a gas in a box could be considered entirely deterministic (if you wait long enough, it will go into a cycle), at our macro level of knowledge we come up with the notion of entropy.
Comment #36 January 11th, 2022 at 10:12 am
In a causal graph like
A→C←B
we assume A and B are independent. However, conditional on C, they are dependent.
The classical example is that even if sports talent and academic talent are independent in the general population, they will be anticorrelated at an elite school.
So whether causal independence implies statistical independence depends on what we condition on, and it’s not always clear to me what we condition on in a given QM derivation.
With that in mind: do you know of any source that explains the statistical independence assumption in Bell’s theorem in detail? The only source I’ve read that discusses it in detail is Jaynes, who treats it as a silly mistake as if nobody’s ever thought of this before. It’s hard to believe it hasn’t been addressed, unless it’s obvious to everyone but me what conditional distribution we calculate with a given QM calculation, in which case I would like to read some explicit discussion of that somewhere.
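To make the collider point concrete, here’s a quick simulation (hypothetical numbers, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two talents, independent in the general population
sports = rng.normal(size=n)
academics = rng.normal(size=n)

# Admission to an elite school depends on both: the collider C
admitted = (sports + academics) > 2.0

print(np.corrcoef(sports, academics)[0, 1])  # approx 0: independent overall
print(np.corrcoef(sports[admitted], academics[admitted])[0, 1])  # clearly negative within the school

So the question stands: what, if anything, plays the role of C when we derive the Bell inequality?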
Comment #37 January 11th, 2022 at 10:12 am
George F R Ellis,
“what superdeterminists don’t seem to realise is they are in fact advocating a new form of Intelligent Design.”
No, superdeterminism is the only way to preserve locality, which is a fundamental principle of modern physics.
“The claim as I understand it is that initial conditions in the early universe determine uniquely everything that happens, there is no randomness at all, even though the fluctuations on the last scattering surface as measured by the Planck satellite look like modulated random gaussian fluctuations.”
This is just good old determinism, nothing strange about it. I guess that any complex, chaotic system would appear to be random, so I don’t think the Planck measurements conflict with determinism.
Superdeterminism is the claim that the source and detectors in a Bell test have correlated states due to past interactions. It’s the same explanation we use to explain any other observed correlations, like the motion of planets in the solar system or synchronized clocks. If you see a pair of synchronized clocks you don’t assume that the state at the Big-Bang was finely tuned, you assume they interacted some time in the past, directly, or with a master clock. The same explanation is needed for a Bell test as well.
Comment #38 January 11th, 2022 at 10:15 am
Scott #21: (“What’s the new knowledge that the Bohmians and the Everettians were at first unaware of and then unwilling or unable to absorb?”)
Current rather than new; mostly [G/A/]QPT related. E.g. “You want to “generalize” probability, whatever that means?” –Tim Maudlin in a reply to me in an unproductive and unpleasant discussion with him and other Bohmians on Sabine Hossenfelder’s blog. E.g. Sean Carroll in an ironically titled article. This time we have Sabine herself asserting that “interpreting the collapse as an information update really only makes sense in a hidden variables theory”.
Comment #39 January 11th, 2022 at 10:28 am
Scott #30: I’m glad you liked it. That comment made me laugh out loud, but I’m afraid I don’t agree with it: a guillotine does solve headaches, whereas superdeterminism doesn’t solve any QM problems. It replaces a dynamical explanation of the correlations with simply postulating that they are what they are. Now, if somebody actually managed to develop a superdeterministic theory that can dynamically reproduce the results of quantum mechanics (which will never happen), then I would agree with the guillotine analogy.
Andrei #29: It’s definitely not the only way. Many-Worlds is a much more satisfactory way to preserve locality. Also, I don’t think it is methodologically sound to make these object-level choices about what we want the world to be. It is what it is, and if we don’t like it, then that is our problem. The correct way to proceed is to choose sound epistemological principles and follow them wherever they take us. Which ones? Well, those that have served us well in the past: reductionism, universalism, Occam’s razor, the Copernican principle, and, perhaps more importantly, actually taking physics seriously instead of running away screaming, covering our ears.
Comment #40 January 11th, 2022 at 10:49 am
Sandro #25:
Yes, but that is one of the reasons why her video disappointed me in various ways. At least I don’t see why this should be Scott’s fault. I had similar feelings to those expressed by Mateus Araújo in #27 after watching her video.
But Scott’s dismissive attitude reinforces my conclusion: “I am not especially keen on discussing superdeterminism anyway, since I learned already that it is very easy to get myself into an uncomfortable position.” The way those discussions on superdeterminism seem to go in general makes both sides look bad.
But discussions on how to understand QM were historically just as bad. Can you guess which famous physicist said the following (around 1986, the question is from Paul Davies)?
Comment #41 January 11th, 2022 at 10:50 am
Response to your “anti-clue” challenge: I heard that Newtonian mechanics was initially criticized for postulating “action at a distance,” which seemed unphysical. However, non-local forces were confirmed to be a real part of physics time and again, with gravity, magnetism, and electricity. Then field theories came along and explained these forces in terms of local field laws.
Agreed that superdeterminism is nonetheless highly implausible.
Comment #42 January 11th, 2022 at 10:52 am
Hi Scott-
I’m curious, did the tardigrade paper authors say anything to you that would be useful for a reader trying to make sense of their claims?
As far as I can tell from the paper, if one were to imagine slowly moving the tardigrade away from the qubit it sat on, the most reasonable expectation would be that the frequency of the qubit would smoothly shift, and it would at all times be in a pure state (or a maximally entangled state with the other qubit), with that state merely changing from a tardigrade-dressed state to one that is not so. This would make the role of the tardigrade essentially the same as when one turns on or off an external macroscopic electric or magnetic field to shift a qubit frequency, which does not generate any measurable or controllable entanglement and certainly isn’t a new or remarkable thing to do.
OTOH, if my expectation is wrong, and they slowly moved the tardigrade away from the qubits and then found them to be in a mixed state, that would be another story. So, my semi-joking proposal is that we need to redo this experiment but with a little cantilever that the tardigrade sits on or something like that.
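(To spell out why I expect a pure product state, here’s a toy model of my own, not the paper’s: suppose the qubit–tardigrade coupling is purely dispersive, \(H = g\,\sigma_z \otimes \hat{A}\), and the relevant tardigrade mode starts in an eigenstate \(\hat{A}|t\rangle = a|t\rangle\). Then
\[ e^{-iHt}\left(\alpha|0\rangle + \beta|1\rangle\right)\otimes|t\rangle = \left(\alpha e^{-igat}|0\rangle + \beta e^{+igat}|1\rangle\right)\otimes|t\rangle, \]
a product state at all times: the tardigrade shifts the qubit’s phase/frequency but never becomes entangled with it.)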
Comment #43 January 11th, 2022 at 11:17 am
I really, really feel you on the “reading the news is hell” front. One option, though, is to stop. After it became clear that there would be no serious consequences for any of the instigators of Jan 6th, I stopped reading the news. It was getting really, really bad for my mental health and I needed to stop. I get that it is incredibly hard to look away, especially because it seems like every once in a while you can actually have some small impact.
But for me, I realized that my actual life was pretty great. I get to do math, teach, read, cook, swim, play board games, spend time with people I like and love. But because I was force-feeding myself a constant diet of fear, anger, and outrage, I walked around failing to appreciate the life I was actually living each day. Unplugging from the news has been incredibly good for me. It might help you reconnect with your love of quantum computing, and see it as not somehow pointless because the world is falling apart. Knowing that the world is falling apart doesn’t do much to prepare you for the moment it falls, yet it makes you spend every day from now till then worried and watching the growing cracks.
Whatever you do, good luck. And thank you for your thoughts on water bears and superdeterminism.
Comment #44 January 11th, 2022 at 11:49 am
Matthew Gray #43: Thanks so much, you just helped make my day!
Comment #45 January 11th, 2022 at 11:54 am
Scott #12: “Meaning that you can’t have local hidden variables, free choice for the experimenters, and the predictions of QM, full stop. ‘Pick any two.’”
SH has picked two it appears: “Superdeterminism does of course not perfectly reproduce quantum mechanics, but only recovers it in the limit where one assumes the hidden variables are really random.”
Also, there is a real problem here, and thinking about a real problem may lead to some insight even if the original idea doesn’t prove correct. Unlike, for example, completely random speculation about fine-tuning and multiverses, which isn’t even known to mean anything.
Comment #46 January 11th, 2022 at 11:56 am
Sandro #25: OK, I promised Sabine by email that I’d read her paper, and update this post if it changed my thinking about any of this. But you still haven’t answered my central question: namely, if
(a) you started out with the goal of explaining Bell’s inequality in a local way, but then
(b) you end up needing global tuning of the variables that affect Alice and Bob’s measurement outcomes (as you’ve now conceded),
then what was the point of postulating superdeterminism in the first place, rather than simply accepting that QM works as stated?
Comment #47 January 11th, 2022 at 12:02 pm
Andrei #31:
Given the fact that locality is a fundamental physical principle of all modern physics (standard model and GR) any non-local explanation has a very, very low prior probability. Superdeterminism on the other hand does not contradict any known physics. It is therefore perfectly reasonable to accept superderminism.
To say it one more time, the issue with superdeterminism is that you end up postulating a nonlocality a billion times worse than the nonlocality that you were trying to explain.
It’s a billion times worse because with QM, we understand precisely why the nonlocal correlations don’t allow superluminal signalling and hence obey special relativity; we can even prove that as a theorem.
With superdeterminism, by contrast, once the nonlocality is postulated we could use it for anything we liked, including superluminal signalling. We arbitrarily stipulate that it’s only used to reproduce the predictions of QM.
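To spell out the theorem I mean: in standard QM, Bob’s marginal distribution is
\[ P(b \mid x, y) = \sum_a P(a, b \mid x, y) = \mathrm{Tr}\left[\left(I \otimes F^y_b\right)\rho_{AB}\right], \]
which doesn’t depend on Alice’s setting x at all, simply because her measurement operators sum to the identity, \(\sum_a E^x_a = I\). That one line is why QM’s nonlocal correlations can never be used to signal. Superdeterminism has no analogous structural guarantee: non-signalling has to be put in by hand.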
Comment #48 January 11th, 2022 at 12:07 pm
The idea of locality is weird and messes people up.
It’s hard to understand the mechanics of a universe that lacks space and time from the perspective of a construct that evolved to operate in space-time. The universe you are in IS deterministic but most of the interesting action is outside of your interference band.
Instead of asking why entanglement happens, humans should be asking why entanglement increases with decreases in space-time divergence, what the confounding factors are and whether or not we can spot/create entanglement with other alignments that are space-time divergent.
To my understanding, entanglement means that the two particles have had their math optimized along a property set that you are not able to observe. It isn’t a setting (hidden variable) so much as a process (hidden interference) that continues interacting with the other waves around it. The catch being that you can’t observe the things influencing the shared state, so it’s “random”!…
My 2¢ is that the word “random” means you’re looking at a chaotic process that you cannot understand, or whose starting variables you are unable to determine.
Comment #49 January 11th, 2022 at 12:08 pm
Superdeterminism can work in quantum-assisted experiments where (1) quantum assistants make the selection of future settings, (2) those assistants impose the settings on the entanglement, which could lead to different types of results, and (3) the quantum assistants report the settings imposed. If a set of experimental designs that lead to identical patterns is used to test Bell-type inequalities using the results reported by the assistant quanta, then you can assume that the hidden variables of the entanglement and the assistants are correlated BEFORE each repetition of the experiment. Thus, by restricting quantum superdeterminism to quantum-assisted experiments, one can maintain the independence of the “classical world” from the quantum world, to set up experiments, and the results from those experiments could be attributed to hidden variables that act as initial conditions and determine the future of the system once measurements are done.
Let’s keep in mind that even quantum superdeterminism can be interpreted in many ways.
Comment #50 January 11th, 2022 at 12:09 pm
gentzen #40:
Not sure “fault” is applicable here. Sabine’s videos are not rigorous, scholarly work, but informative popular-science entertainment, and they’re very good at that, and at stripping away a lot of the ivory-tower mysticism surrounding science, scientists, and institutions. I think that’s a good thing, even if her brusque manner grates on some people.
I don’t get the reaction, sorry. None of those figures really explored superdeterminism at all. Literally. Her soft ridicule of their claims seems perfectly justified, frankly. What kind of scientist reaches such definitive conclusions before even seriously engaging with a subject?
I think everyone dismissing superdeterminism is at least as wrong as all of the people who believed for decades that QM could not be reproduced by any hidden variable theory, despite the availability of Bohmian mechanics since the early 50s.
I think the radically different ontology required by plausible superdeterministic theories will yield progress on some previously challenging questions in the coming decades.
Comment #51 January 11th, 2022 at 12:09 pm
Forgive my sub-Sakurai understanding of QM, but it seems like all the technical points being made are already too generous, because: if the initial conditions of the universe conspire to undermine QM experiments and make them appear random when they aren’t, then isn’t the entire concept of knowledge/epistemology out the window? Math, logic, physics, all of it gone, because knowledge and experiment can’t be trusted anymore; everything we’ve ever seen may just be a conspiratorial illusion that could fail right now. To be a superdeterminist is to reject the concept of predictive power, isn’t it? Every equation may be wrong, just formulated from the universe feeding us junk data that has only been consistent due to chance.
Comment #52 January 11th, 2022 at 12:11 pm
fred #34: Sure, perfect statistical independence might be an illusion. But if we didn’t have effective or good enough statistical independence, experimental science would be impossible. For example, as Sabine herself says, how could we ever do vaccine trials, if we couldn’t randomly assign people to control and intervention groups—if, every time we tried that, the universe somehow interfered with our random-number generator in order to bias the experiment’s results in a preferred direction? Yet this is precisely what the superdeterminists say happens when we do a Bell/CHSH experiment.
And, to say it one more time, Sabine gets out of this only by arbitrarily stipulating that statistical independence is valid when we use our random-number generators for vaccine trials, but becomes invalid when we repurpose the same random-number generators for quantum-mechanics experiments. She doesn’t even hint at a theory by which you could derive that statistical independence holds in the one case and fails in the other. How did the random numbers even know what they were going to be used for? Did they, or the universe, look into the future or into the experimenters’ brains to check?
Comment #53 January 11th, 2022 at 12:14 pm
If the purpose of superdeterminism is to allow deterministic hidden variable theories to explain quantum probabilities, how do its proponents deal with Hardy’s excessive baggage theorem? Also, I believe statistical independence is used in the PBR theorem. So do superdeterminists support an epistemic view of quantum probabilities?
I am also unclear as to what initial conditions everyone is referring to. Are we talking about boundary conditions of the Standard Model + GR at some initial slice of time? Has anyone demonstrated that such initial conditions as required to produce quantum correlations actually exist?
Comment #54 January 11th, 2022 at 12:20 pm
Scott #47
Ok, but if it’s all so obvious, then why did Bell even set statistical independence as a prerequisite, in such a way that, when asked “what if SI doesn’t hold?”, he would immediately cave with “well, in that case, sure, my point doesn’t hold!”?
Whereas, when asked the same, a specialist in vaccine trials could just say “sure, but we just need practical statistical independence, like base everything off a few coin tosses, and not worry about it”…
Comment #55 January 11th, 2022 at 12:21 pm
raginrayguns #36: In the Bell/CHSH experiment, you can take the A and B to which we’re applying statistical independence to be just the outputs of Alice’s and Bob’s spatially separated random-number generators. In which case, as I said before, presumably Nature doesn’t even know what the random numbers will be used for, at the time it has to spit them out! Of course, if Alice and Bob then perform some experiment and we condition on the experiment’s outcome, that could introduce correlations between A and B—but in the Bell/CHSH experiment, we don’t condition; local hidden variables are ruled out even after we average over all four possible settings for A and B, and all possible measurement outcomes! So then, what is this “C” that you’re worried about?
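For reference, with settings A_0, A_1 for Alice and B_0, B_1 for Bob, the CHSH quantity is
\[ S = E(A_0, B_0) + E(A_0, B_1) + E(A_1, B_0) - E(A_1, B_1). \]
Local hidden variables plus statistical independence force \(|S| \le 2\); QM predicts (and experiment confirms) \(2\sqrt{2} \approx 2.83\); and once statistical independence is dropped, any value up to the algebraic maximum of 4 becomes “explainable,” which is exactly the problem.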
Comment #56 January 11th, 2022 at 12:27 pm
“Concede” is maybe an oversimplification. I think her toy model falls under the basic outline that you described if you squint, but secondhand summaries probably won’t be satisfactory, so I intended to donate any Scott-eyeball-time to her toy model.
Comment #57 January 11th, 2022 at 12:36 pm
Andrei #32:
‘t Hooft’s model is a discrete one. The universe consists of Planck-sized cubes that can have a certain number of states. This imposes a limitation on the computer’s power. It has nothing to do with superdeterminism per se. A continuous superdeterministic model would not have such a limitation.
Even with a discrete model, ‘t Hooft could easily say if he wanted that quantum computers will work exactly like standard QM predicts, since the initial state of the universe was tuned to produce just that outcome—exactly like it was tuned to produce the standard QM outcome in the Bell/CHSH experiment. Why is the one any worse than the other?
This is my whole point: superdeterminists have effectively infinite freedom to pick and choose where they want to agree with QM’s predictions and where they want to depart from them. So then, as should surprise no one, they choose to agree on the existing experiments (like Bell/CHSH), and disagree only on experiments that haven’t been done yet (like Shor’s algorithm)! But they never suggest even a shadow of a hint of a theory from which one could derive where their results will differ from standard QM’s, and where they’ll agree.
Comment #58 January 11th, 2022 at 12:51 pm
Andrei #37
“Superdeterminism is the claim that the source and detectors in a Bell test have correlated states due to past interactions.” No, not just past interactions, but also the initial data which determined the outcome of those interactions. Without initial data, there are no outcomes. The issue you and the others are dodging is what determined that initial data. Follow it back to the start, please. There is some explaining to do.
Comment #59 January 11th, 2022 at 1:10 pm
Andrei #29: (“The best argument for superdeterminism is that it is the only way one could explain entanglement in a local way. In a nutshell, EPR proved that QM is either non-local or incomplete, while Bell proved that the only possible way to complete QM and keep it local is superdeterminism.”)
This sort of misconception neatly illustrates what’s so frustrating about the Bohmians, superdeterminists etc. One could at least respect these attempts to find an alternative theory or interpretation if they were motivated by an informed metaphysical preference for determinism (or classicality), or because of genuine problems with interpreting QM, but simple – and intransigent – ignorance seems to be the motivation.
Comment #60 January 11th, 2022 at 1:22 pm
Itai Bar-Natan #41:
Response to your “anti-clue” challenge: I heard that Newtonian mechanics was initially criticized for postulating “action at a distance”, which seemed unphysical. However, non-local forces were confirmed to be a real part of physics time and again with gravity, magnetism, and electricity. Then field theories came along which explained these forces in terms of local field laws.
It’s not just that Newtonian mechanics was “criticized” for action at a distance—it’s that Newton himself was dissatisfied with it, and looked forward to it being eliminated by some future theory! As, of course, it finally was, by GR … which gloriously explained and derived where the Newtonian approximation was and wasn’t valid. The need to reproduce the Newtonian limit was, of course, a crucial clue in Einstein’s path to GR.
Now let’s contrast this to the situation with superdeterminism. One doesn’t like quantum nonlocality, so one postulates something even more wildly nonlocal. One doesn’t like quantum randomness, so one postulates something that can have any finely-tuned nonrandom patterns one wants. One then arbitrarily cripples one’s proposal so that, on existing experiments, these vast new powers are never used for anything besides just reproducing the standard predictions of QM (why those in particular?). One holds out the hope that future experiments (which ones? who knows??) will reveal some difference from QM. One doesn’t calculate or derive anything; instead one just uses the proposal’s infinite freedom to decide separately about the results of each experiment. QM has functioned here as an anti-clue, tricking us into thinking that Nature was nonlocal only in an extremely specific way (a way that doesn’t allow superluminal signalling for well-understood reasons, etc. etc.), when it’s actually nonlocal however the superdeterministic correlations want it to be. QM’s previous success remains totally unexplained.
See the difference? 🙂
Comment #61 January 11th, 2022 at 1:44 pm
Partisan #42:
I’m curious, did the tardigrade paper authors say anything to you that would be useful for a reader trying to make sense of their claims?
The most useful thing they said was that, as far as they knew, only a microscopic piece of the tardigrade might be entangled with the qubit. I was surprised that they didn’t seem more concerned about that. It’s like, tardigrades are made of atoms, so isn’t it obvious that you can take a few of those atoms and entangle them with anything you want?
Comment #62 January 11th, 2022 at 1:51 pm
Steven Evans #45:
Also, there is a real problem here, and thinking about a real problem may lead to some insight even if the original idea doesn’t prove correct.
What is the “real problem”? Is it nonlocality? If so, as I’ve tried to explain over and over, superdeterminism makes the problem a billion times worse!
Superdeterminism is “nonlocal” because the constraints you impose on the initial state will not be local ones. If they were, they couldn’t explain the Bell/CHSH experiments, at least e.g. in the version where Alice and Bob use the light from distant quasars as their randomness source.
It’s a billion times worse because, with QM, at least we understand exactly why the nonlocality doesn’t imply superluminal signalling in violation of special relativity—we can prove it as a theorem! We even understand exactly why we can violate the Bell/CHSH inequality by a certain numerical amount and by no more.
With superdeterminism, by contrast, we don’t understand any of that. Once the initial conditions and the laws are “entangled” with each other, we could easily have superluminal signalling if we wanted it. We stipulate that we don’t have superluminal signalling, we stipulate that our global consistency conditions will be used to reproduce the Bell/CHSH statistics and no more than that, only because we peeked in the back of the book to learn that that’s the right answer. We don’t derive any of this stuff.
In summary, not only have we ended up with radically more nonlocality than we bargained for, we’ve also ended up with zero ability to explain anything we see. Just like certain critics have said about string theory, 🙂 we now have effectively infinite freedom … in this case, the freedom to fit whatever combination of violations and non-violations of QM we want.
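To see concretely what “we can prove it as a theorem” means, here is a small numpy check (plain textbook QM, nothing to do with any superdeterministic model): Alice’s outcome statistics on a singlet are identical no matter which basis Bob measures in, which is the no-signalling theorem in miniature.

```python
# No-signalling check for the singlet state: Alice's marginal probability
# is independent of Bob's measurement setting.
import numpy as np

def projectors(theta):
    """Projectors for a spin measurement along angle theta in the x-z plane."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    p = np.outer(v, v)
    return p, np.eye(2) - p

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)
rho = np.outer(singlet, singlet)

alice_plus, _ = projectors(0.7)     # Alice's (arbitrary, fixed) setting
for bob_theta in (0.0, 1.1, 2.9):   # Bob tries several different settings
    # Alice's marginal = sum over Bob's two outcomes of the joint probability
    marginal = sum(np.trace(rho @ np.kron(alice_plus, pb)).real
                   for pb in projectors(bob_theta))
    print(f"Bob at {bob_theta:.1f}: P(Alice sees +) = {marginal:.6f}")  # always 0.5
```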
Comment #63 January 11th, 2022 at 1:59 pm
Scott #57: Hossenfelder and Palmer are remarkably original in that they chose to disagree on existing experiments.
Comment #64 January 11th, 2022 at 2:02 pm
AZ #51:
it seems like all the technical points being made are already too generous, because: if the initial conditions of the universe conspire to undermine QM experiments … then isn’t the entire concept of knowledge/epistemology out the window?
Our superdeterminist friends angrily deny it, but I would say so, yes! 😀
It would be one thing if they said: “Look, we know this looks world-historically bad, like intellectual tennis with no net, like we could explain any experimental result we wanted by saying that the initial conditions of the universe conspired to produce it. But let us show you that it’s not as bad as it looks, that there are rules constraining what we can do, that there is a net!”
The trouble is, those rules never arrive. What arrives instead are words … and more words, and more words, never bottoming out in anything with any explanatory meat.
So, that’s how I came to the conclusion that it is as bad as it looks.
Comment #65 January 11th, 2022 at 2:10 pm
fred #54:
Ok, but if it’s all so obvious, then what was Bell’s motivation to even set statistical independence as a prerequisite, in a way that, when asked “what if SI doesn’t hold?”, he would then immediately cave with “well, in that case, sure, my point doesn’t hold!”?
His motivation was that he was proving a theorem—moreover, about the famously counterintuitive subject of quantum mechanics. And when proving theorems, one tries to make every assumption explicit, no matter how intuitively obvious.
This has an extremely ironic byproduct: often, people will question (or affect to question) an obvious assumption just because it was explicitly stated, even though they wouldn’t question a far-from-obvious assumption that wasn’t explicitly stated!
The people who do vaccine trials don’t mention the statistical independence assumption, for the same reason why they don’t mention the assumption that a fickle God didn’t tamper with their data. It’s left to us, here in the comments sections of nerd blogs, to notice and argue about such assumptions. 😀
Comment #66 January 11th, 2022 at 2:32 pm
Are you sure that talking about superdeterminism rather than politics is better for your (and your readers’) mental health, Scott?
Comment #67 January 11th, 2022 at 2:36 pm
Vladimir #66: ROFL still yes, probably
Comment #68 January 11th, 2022 at 2:40 pm
Sandro #25
There seems to be a miscommunication going on here. In your and Sabine’s “toy model” paper, you use the term “superdeterminism” to refer to any deterministic model where the statistical independence assumption is violated. The model you give can be expressed roughly as “the future measurements affect the present dynamics and cause the state to collapse to the correct basis”. However, in the terminology I am used to, that category of model is called “retrocausal”, not “superdeterministic”. The latter is used for models like ‘t Hooft’s, where the idea is something like “the initial conditions of the Universe are such that Alice and Bob will never measure in the same basis unless their respective hidden variables also lead to matching outcomes”.
In fact I don’t think models with retrocausality would usually be called “deterministic” at all. I understand “determinism” to mean that the state at any one time fixes the state for all of history, whereas retrocausality implies that information about more than one time is needed.
Scott also seems to understand the word “superdeterminism” like I do, which explains why his criticisms are not aimed at the class of models you are looking at. I can’t speak with confidence about which usage is “correct” or “accepted”, but there certainly is a need to clear up the ambiguity.
Comment #69 January 11th, 2022 at 2:49 pm
What if we make the discussion of “superdeterminism” more concrete…
Bell’s theorem pertains to a kind of EPR experiment in which the angles of polarization filters are chosen “at random” while the particles are in flight. The source of the “randomness” can be anything: the next letter in Moby Dick, the flares of quasars on opposite sides of the universe.
If you aspire to have a local deterministic explanation for EPR experiments in which *that’s* the source of the randomness, then you seem to be saying that your local hidden variables know about words on a nearby page (in the case of Moby Dick) or about the future behavior of quasars based on initial conditions from billions of years ago.
I just don’t see how a local deterministic theory can explain quantum randomness that is modulated by pseudorandom sources like that. Surely they wouldn’t claim that the hidden variables literally had Moby Dick or future quasar behavior encoded in them. They must claim that it’s some physical property more directly associated with the experiment and the particles, that is affected by the transduction of the pseudorandom information into polarization settings. But I don’t remember ever seeing the problem discussed from that perspective.
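As a concrete picture of that transduction step, here is a toy sketch (purely illustrative; the text snippet and the hashing scheme are stand-ins I made up): any pseudorandom source can be boiled down to polarizer-setting bits, and the hidden variables would somehow have to correlate with the output of this whole pipeline.

```python
# Toy transduction of a pseudorandom text source into measurement settings.
import hashlib

text = "Call me Ishmael. Some years ago, never mind how long precisely..."

def setting_bits(source, n):
    """Derive n polarizer-setting bits by hashing successive words."""
    words = source.split()
    bits = []
    for i in range(n):
        h = hashlib.sha256(words[i % len(words)].encode() + bytes([i % 256]))
        bits.append(h.digest()[0] & 1)  # low bit of the first hash byte
    return bits

print(setting_bits(text, 16))  # bits that would drive the polarizer settings
```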
Comment #70 January 11th, 2022 at 2:59 pm
Sabine’s video reminded me of ZX graphs. You can convert any quantum circuit into a ZX graph. This is surprising because quantum circuits care a lot about time ordering, but a ZX graph has no notion of time. If you work with ZX graphs for a while, this starts to become intuitive. You start thinking of measurement as equivalent to initialization, of outputs as equivalent to inputs.
This sounds odd, but it is incredibly useful. Certain circuit transformations that normally look complex, such as replacing a Toffoli gate with a teleportation through a magic state with attached CZ corrections, become simple. And you suddenly have a language capable of *easily* describing why spacetime rotations in topological diagrams of surface code computations preserve the computation being performed (despite creating quantum circuits that can look radically different).
(The reason this all works, incidentally, is because these diagrams only specify processes *up to Pauli corrections*. Otherwise the things they describe would trivially violate the no communication theorem. The forbidden things are always masked off by Pauli corrections dependent on not-yet-performed or not-yet-communicated measurements.)
Anyways, that’s the thing-I-know-is-useful that Sabine’s description of superdeterminism reminds me of. But I don’t see how this technique would work as a fundamental description of how reality works. Because in order to apply ZX graph transformations that turn an output into an input, you *need to know the graph*. But in any theory where the universe has some specific state at a given time, and later states are obtained by evolving the initial state, the graph is effectively being built up dynamically. At first you might think that if the evolution is deterministic then the future graph is “already available” at the present, but if you treat the graph as a “future input” that can affect measurements happening *now* then you have created a dependency cycle. The graph affects the measurement but the measurement can affect the structure of the graph (e.g. via a computer program deciding to do different experiments depending on an earlier measurement result). I don’t know how to create a future-input system which always has a unique solution, and no paradoxes, without restricting reality so strongly that it can’t contain things like Turing-complete computers.
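For readers who want to experiment with this themselves, here is a small sketch using the pyzx library (the exact calls are from memory and should be checked against the pyzx docs): build a circuit, convert it to a ZX graph, simplify it with rewrites that freely ignore time ordering, and confirm the overall linear map is unchanged.

```python
# Sketch with the pyzx library (treat API details as provisional).
import pyzx as zx

circuit = zx.Circuit(2)
circuit.add_gate("CNOT", 0, 1)
circuit.add_gate("T", 1)
circuit.add_gate("HAD", 1)
circuit.add_gate("CNOT", 1, 0)

g = circuit.to_graph()
print("vertices before:", g.num_vertices())
zx.simplify.full_reduce(g)  # rewrites that do not respect time ordering
print("vertices after:", g.num_vertices())

# The simplified graph should implement the same map (up to a scalar)
print("same linear map:", zx.compare_tensors(circuit, g))
```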
Comment #71 January 11th, 2022 at 3:03 pm
Scott,
“the issue with superdeterminism is that you end up postulating a nonlocality a billion times worse than the nonlocality that you were trying to explain.”
You do not postulate anything like that. If you see two synchronized clocks you don’t need any nonlocality, only some interaction in the past. You don’t need to posit anything about the Big Bang either. The clocks interact electromagnetically and get synchronized, that is all. Likewise, our source and detectors interact electromagnetically. This interaction leads to a sort of “synchronization” of their internal (microscopic) states, which manifests itself in the form of the observed correlations. At least, this is my hypothesis. Sabine and ‘t Hooft have different ideas, but none of them have anything to do with any fine-tuning of the initial conditions.
“It’s a billion times worse because with QM, we understand precisely why the nonlocal correlations don’t allow superluminal signalling and hence obey special relativity”
QFT works with special relativity only because its interpretation (superdeterministic or non-local) is not clearly presented. In other words, the theory is underdefined. Once you commit to a non-local interpretation (as you seem to prefer) you contradict SR regardless of the non-signalling theorem. SR does not distinguish between “useful” signals and “useless” ones. Even if your signal consists of a random sequence (like the outcomes of an EPR experiment), your ability to send it instantly to a distant location violates SR. You then need to implement an absolute reference frame (to be able to distinguish the cause and the effect); basically you need to rewrite all of physics. Bohmians are trying that but it doesn’t seem successful. Implementing such an absolute frame in GR isn’t going to be easy either. So, there is a lot to lose by going non-local.
“And, to say it one more time, Sabine gets out of this only by arbitrarily stipulating that statistical independence is valid when use our random-number generators for a vaccine trials, but becomes invalid when we repurpose the same random-number generators for quantum mechanics experiments. She doesn’t even hint at a theory by which you could derive that statistical independence holds in the one case and fails in the other.”
Consider the case of two macroscopic, neutral objects, say two billiard balls. Their internal charges (electrons and nuclei) are continuously interacting; this is what charged particles do. This interaction has two properties:
1. It correlates the internal, microscopic states of the balls. (This is because their states must be a solution to the (N+M)-body EM problem, where N and M are the numbers of charged particles in the two balls; such a solution will be different from the non-interacting case, where the states would be solutions of the N-body and M-body problems respectively.)
2. It has no effect on the macroscopic/statistical properties of the balls. Their positions, momenta, angular momenta, temperature, density, etc. do not change, because those EM interactions cancel at the statistical level.
So, if an experiment is sensitive to a certain microscopic state (a particular arrangement of internal particles), it will show some correlation between the objects. Otherwise, the objects would appear to be independent.
In a Bell test the hidden variable depends on the microscopic state of the source (EM fields at the locus of the emission), so we would expect correlations. A medical test is not sensitive to the microscopic state of the patient. It only records statistical properties, like the concentration of a substance in blood, etc. Hence, we expect the independence assumption to hold.
Comment #72 January 11th, 2022 at 3:17 pm
Sandro #50:
Her videos in general are very good. It is this specific video where you should ask yourself why you yourself wrote: “I think the time spent watching her video would have been better spent …”! I just can’t see how this video does any favors for people trying to argue against dismissing superdeterminism. People like you, Tim Palmer, or even Jarek Duda. (Note that questioning statistical independence of initial conditions can also mean questioning whether initial conditions are always the right type of boundary conditions to have as a picture in your head.)
What she does is not “soft ridicule”! She cherry-picks quotes out of context. That could be fine, if all she wanted to do was to provide examples of the common confusion: “Keep in mind that superdeterminism just means statistical independence is violated which has nothing to do with free will.” But her remarks “As you can see, we have no shortage of men who have strong opinions about things they know very little about, but not like this is news” and “I just hope I’ll live long enough to see that all those men who said otherwise will be really embarrassed” turn her cherry-picked quotes into something more sinister!
Comment #73 January 11th, 2022 at 3:21 pm
George F R Ellis,
“No, not just past interactions, also the initial data which determined outcome of those interactions. Without initial data, there are no outcomes.”
If the objects interact by long-range forces/fields they get correlated regardless of the initial state. Consider the case of two orbiting stars. Their orbits will always be ellipses, because this is what the solution to the gravitational 2-body problem looks like. It’s not that you need to fine-tune the initial state to get ellipses. There is also no initial state that would generate square orbits. So, any long-range interaction eliminates a huge number of states that might appear physically possible, because the only allowed states are solutions to the N-body problem, where N is the number of objects in the system.
Comment #74 January 11th, 2022 at 3:32 pm
As luck would have it, my new Punk Bluegrass band is named “Superdeterministic Tardigrades”!
Comment #75 January 11th, 2022 at 3:46 pm
Sandro #25:
I would really like to understand what you are arguing for because history shows that quantum mechanics confuses the hell out of very smart people and one can never be certain one is not on the confused side on any given argument. I hope we can mutually engage here within the context of this attitude…
In your paper with Sabine you have the following paragraph:
> Here, the P indicates the location of preparation, and D1 and D2 are the two detectors in a typical Bell-type experiment. Local causality, in a nutshell, says that if all variables are “fully specified” in the backward lightcone of D1, then whatever happens at D2 provides no further information (for a discussion of this point, please see [19]). Quantum mechanics violates this condition because the setting of the second detector matters for the outcome at the first detector. The model we discussed here has the information about both detector settings at P. This means also that the information about the detector setting D2 is already in the backward lightcone of D1 (and vice versa). Hence, whatever happens at D2 provides no further information. Note that this is a local requirement because the place of preparation is necessarily in causal contact with both detectors.
Is this a fair characterization of your general ideas about super-determinism?
Because if so, it seems to me to encompass the Bell scenario in name only.
In your model, detectors D1 and D2 may well be co-located with one another and with P, up to “trivial” time evolution which just shuffles variables around (which would be remarkable, but is not what Bell meant by “super-determinism”).
The point of the Bell experiment is that the detectors can base their settings on explicitly causally decoupled information (barring true super-determinism): e.g. light recorded in telescopes co-located with the detectors, coming from opposite ends of the observable universe.
Comment #76 January 11th, 2022 at 3:48 pm
Andrei #71:
1. It’s possible to do a Bell experiment where, for Alice’s and Bob’s randomness, they use the light from distant quasars, which couldn’t possibly have had a chance to become correlated between the Big Bang and the present without violating locality (see here). When and if such an experiment is done, do you predict it will see a violation of QM? If not, then how the hell would superdeterminism explain its results, without the very nonlocality that it was its whole purpose to avoid? Would you say that the quasars (or rather their antecedents) must’ve been able to communicate with each other, and thereby become correlated, in some pre-inflationary epoch?
2. Here is my #1 challenge for you, and all other superdeterminists.
Explain, on the basis of superdeterminism, why the Bell/CHSH inequality can be violated by a specific quantitative amount and by no more than that amount, and also why superluminal signalling is impossible.
Let me now make a public commitment to spend more hours of my life engaging with superdeterminism when, and only when, there’s a credible claim to answer at least one of the two challenges above.
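For reference, here is a short brute-force sketch (generic, not tied to anyone’s model) of what challenge 2 is about: every deterministic local-hidden-variable strategy, and by convexity every mixture of them, gives a CHSH value of at most 2, while QM predicts exactly 2√2 (Tsirelson’s bound) and no more.

```python
# Brute force over all deterministic local strategies: Alice's outcome may
# depend only on her setting, Bob's only on his.
import itertools
import numpy as np

best = max(abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
           for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4))
print("max CHSH over local strategies:", best)       # 2
print("quantum (Tsirelson) bound:", 2 * np.sqrt(2))  # ~2.828
```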
Comment #77 January 11th, 2022 at 3:56 pm
I agree with Maline#68 that Sabine seems to be describing what I would call a retrocausal model, instead of a superdeterministic model.
Comment #78 January 11th, 2022 at 3:59 pm
Maline #68 and Craig Gidney #77: You’re probably right on terminology. In any case, my objections to retrocausality in this context are exactly the same as my objections to superdeterminism. Namely:
(1) you haven’t actually explained anything about why QM has the particular features it does (for example, Bell/CHSH violation by a specific amount but no superluminal signalling), and
(2) meantime, you’ve taken whatever it was that troubled people about QM, and made it a billion times worse!
Comment #79 January 11th, 2022 at 4:12 pm
I think the so-called “statistical independence” assumption of Bell is not an assumption—it’s an axiom, in the sense that something like this is a pre-condition to doing any kind of science. If you believe the universe makes it fundamentally impossible to run an experiment where you can choose whether to do A or B (e.g., whether something is the control or treatment) then you might as well not do any experiments at all.
Comment #80 January 11th, 2022 at 4:17 pm
Boaz Barak #79: Yup. Sabine’s superdeterminist collaborator Tim Palmer has been emailing me all day to try to explain why that simple logic is wrong, once you take into account properties of the p-adic numbers as well as “compact fractal attractors in cosmological state space.” But it’s not wrong. 😀
Comment #81 January 11th, 2022 at 4:38 pm
@Boaz #79: I disagree. Not because I have some specific way of getting around the problem, but because it reminds me of this Feynman quote ( https://libquotes.com/richard-feynman/quote/lba8f8e ):
> Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong.
Suppose you found yourself in a universe that mildly violated statistical independence. Would you *actually* be helpless to learn anything? Or would you just get on with it and do as best you could? An explicit example of such a universe is the world in “Worth a Candle” ( https://archiveofourown.org/works/11478249/chapters/25740126 ), where events are known to tend towards being narratively interesting. It’s disruptive, sure, but reasoning and planning are still possible.
Comment #82 January 11th, 2022 at 4:45 pm
Craig Gidney #81: Alright. My more nuanced position is that I might be willing to consider a superdeterministic proposal, if
(1) it came with an explanation for why, despite appearances, people can still do controlled scientific experiments, and
(2) the gain in explanatory power was enormous in return for what we were giving up.
Alas, I don’t know of any actual superdeterministic proposals that come anywhere close to meeting either of these bars.
Comment #83 January 11th, 2022 at 4:46 pm
Boaz Barak #79: Absolutely seriously, I get the sense that if someone were to take the conspiratorial claims of superdeterminism and encode them into appropriate first-order statements, they may be completely logically equivalent to the “Satan buried the dinosaur bones to trick us into believing evolution” or “God created 14 billion year old light in space in-transit to trick us into thinking the universe is older than 10k years old” arguments. This kind of conspiracy against experimental reliability calls into question… well, anything.
Comment #84 January 11th, 2022 at 5:22 pm
I never got the need for the name “superdeterminism”: either a world is deterministic or it’s not. Not being deterministic means there’s a source of “pure” randomness (in itself, “pure” randomness is actually hard to fathom, some kind of irreducible mystery, really).
If I’m not mistaken, Everett’s interpretation is entirely deterministic. All there is is the wave function of the universe, evolving deterministically. The only randomness is related to which branch we find ourselves on (this has to do with consciousness, not free will or the like).
So, I fail to understand what superdeterminism would bring that Everett doesn’t already cover.
But it’s worth pointing out that Sabine also rejects Everett. I recall a video where she was saying that the Copenhagen interpretation struggles with the measurement problem, i.e. the “random” collapse of the wave function (as a non-linear event), but that Everett really makes it worse: we now end up having to deal with one collapse in every possible branch of the multiverse. But I think that this statement may be a misinterpretation of how “decoherence” actually works (according to people like Sean Carroll).
Comment #85 January 11th, 2022 at 5:34 pm
> Of course, I also read about the wokeists, who correctly see the swing of civilization getting pushed terrifyingly far out of equilibrium to the right, so their solution is to push the swing terrifyingly far out of equilibrium to the left
Not that you’re going to approve this comment, but I really just want to drive home how FUCKING INSANE it is that you put “trying to overthrow the US government to install a fascist dictator” on the same level as “being racist is not OK”.
Comment #86 January 11th, 2022 at 5:34 pm
Boaz Barak #79: “something like this” is not the same as “exactly this”. Your “you might as well not do any experiments at all” might in the end turn just into the question of whether you call your observations “experiments” or something else. The dividing line is less obvious than you may expect. If all goes well, the James Webb telescope will make many observations, and most of those observations will be part of some experiment. And some will just be observations. In the end, it is often just a question of (sufficiently big) numbers and statistics. And practically occurring violations of statistical independence might in the end just push up the required number of observations, among other things by limiting which observations contribute to increasing the reliability of your experiment.
Comment #87 January 11th, 2022 at 5:54 pm
I also have a tough time imagining how the weakest past couplings between particles could persist inside a macroscopic system in the middle of classical noise, and then suddenly re-emerge as causes explaining macroscopic effects such as QM correlations.
But it’s also worth pointing out that physical observations appear to be “conspiracies” until they’re explained away by some “conservation law”.
Imagine you could go back in time and find a 19th-century physicist who knew nothing about wave functions. You show him the double-slit experiment, then ask him to come up with any method to uncover which path was taken without breaking the mysterious interference pattern… he would quickly claim there’s some sort of devilish conspiracy going on here.
The same goes for simple facts like perpetual motion machines: nature seems to be outsmarting us no matter what we try. Until thermodynamics was invented. And then Maxwell’s demon was invented and everyone was confused again… until it was shown through information theory that Maxwell’s demon is an impossibility (the energy needed to maintain its knowledge of a system has to be taken into account in the balance).
Comment #88 January 11th, 2022 at 5:57 pm
What pandemic? Vaccines have turned covid into a flu, these are not “dark times” when it comes to covid. Also, didn’t Manchin halt the slide into the abyss by blocking a huge amount of money being spent (i.e. wasted) by the US government; shouldn’t we celebrate him? If these are “dark times” for those reasons, you may be wearing sunglasses 🙂
Comment #89 January 11th, 2022 at 5:58 pm
To be honest, I don’t know that I understand why determinism is incompatible with controlled scientific experiments.
Maybe we can work through an analogy?
Suppose we’re trying out a new factoring algorithm.
We select two groups of participating integers. One group is assigned to the factoring algorithm, and the other to a placebo algorithm.
Both algorithms are deterministic. The placebo still processes the input integer into a list of smaller numbers, but it’s not trying to factor anything.
At the end we check to see whether the factoring algorithm is substantially more effective than the placebo.
Naturally, the group assignments matter. Maybe the factoring algorithm operates on precomputed values, or is only effective for a certain class of integers. And the placebo itself can still give a correct answer.
We want the group assignments to be independent from the two algorithms.
Suppose we pick the integers in each group, deterministically, from the local environment.
How likely is it that the integer assignments picked in this way coincide with either algorithm in a way that would skew the results?
From this perspective I don’t see why determinism is a problem.
I do see a problem if someone were to argue that the factoring algorithm only succeeded because the universe’s initial conditions conspired to make it so.
But that’s just not a convincing argument, because it ought to be highly improbable?
So why is determinism such a bad platform for science?
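Here is a sketch of the analogy above (hypothetical code, just to make it concrete), deliberately using one badly chosen assignment rule: picking groups by parity happens to correlate with compositeness (even numbers above 2 are never prime), which shows what it would look like for a deterministic, environment-derived assignment to “coincide” with the thing being tested.

```python
# Hypothetical "randomized trial" for a factoring algorithm, with a
# deterministic assignment rule drawn from the environment (here: parity).
from math import prod

def trial_division(n):
    """The 'treatment': actually tries to factor n."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return [d, n // d]
    return [n]  # no nontrivial factor found

def placebo(n):
    """The 'placebo': outputs smaller numbers without trying to factor."""
    return [int(c) for c in str(n)]

def succeeded(n, out):
    """Did we get a genuine nontrivial factorization of n?"""
    return len(out) > 1 and all(f > 1 for f in out) and prod(out) == n

population = range(10_001, 10_201)
treatment = [n for n in population if n % 2 == 0]  # parity-based assignment
control = [n for n in population if n % 2 == 1]

t = sum(succeeded(n, trial_division(n)) for n in treatment) / len(treatment)
c = sum(succeeded(n, placebo(n)) for n in control) / len(control)
print(f"factoring succeeded on {t:.0%}; placebo on {c:.0%}")
# Parity-as-assignment secretly helps the treatment group (all the even
# numbers here are composite): the kind of "coincidence" to worry about.
```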
Comment #90 January 11th, 2022 at 6:01 pm
Scott #80: Are you aware of Tim Palmer’s work on “stochastic parameterization”? Do you think that it would be a good idea to put all stochastic effects into the initial conditions in that context? Probably not, and then Sabine’s assertion that we often don’t have any good reasons for selecting some probability distribution on the initial conditions might feel less absurd. And without a probability distribution on the initial conditions, violation of statistical independence of initial conditions might also feel less offensive.
Scott #82: I fear instead of “(2) the gain in explanatory power was enormous in return for what we were giving up” we will rather have a loss, just like the anthropic principle feels like a loss to some scientists (ironically including Sabine, if I am not mistaken).
Comment #91 January 11th, 2022 at 6:04 pm
Superdeterminism doesn’t get to rule out the existence of universes just like our own that are one iota of information off from us that we’re simply unable to interact with due to a lack of interference between our universe’s energy states and theirs. Just because this universe looks like this does not mean that it was “intended” to look like this. Being inside of a deterministic machine does not mean you get to look at the machine’s randomizer.
Comment #92 January 11th, 2022 at 6:08 pm
Well, I entirely agree, Scott: indeed ‘super-determinism’ sounds like baloney to me too, because, as you say, it explains nothing (it doesn’t rule anything out).
My current (tentative) best interpretation of QM is a hybrid of hidden-variables/many-worlds: I think there are still many worlds, but fewer degrees of freedom than conventional QM says. I have now at least pinpointed the source of all the confusion: I think the confusion about QM is all due to a faulty ontology of the ‘wave function’.
If we assume that the wave function is physical, that either leads to a violation of relativity (like Bohm, with its implied super-luminal signalling) or leaves unexplained features (standard ‘many-worlds’, where exactly how to divide up ‘worlds’ and the observable reality isn’t explained). If, on the other hand, we assume that the wave function is non-physical, that seems to lead to no objective reality (like QBism, Copenhagen). What gives?
I finally came to the conclusion that indeed the wave function is subjective (it’s not physical), but Copenhagen does NOT follow. It just means that if you want an objective picture, you have to look *deeper* than the wave function. So I eventually came to the realization that there’s a deeper picture that looks like some kind of ‘computational geometry’, a novel kind of geometry that is more basic than the current physics understanding. So there are some hidden variables, but there are still ‘many-worlds’ as well (just less degrees of freedom than in standard QM).
So this is my best current guess: The objective reality is computational geometry, that’s the physical part. The wavefunction is subjective (merely a mental model for computing some aspects of the underlying computational geometry). The ‘many-worlds’ are only a scaffolding (partial reality), there are extra hidden-variables that explain how specific ‘worlds’ become actualized.
Comment #93 January 11th, 2022 at 6:35 pm
gentzen #90:
I fear instead of “(2) the gain in explanatory power was enormous in return for what we were giving up” we will rather have a loss
One of the wonderful things about science is that, if a theory is not only giving you no gains in explanatory power but is actually giving you a loss, you always have the option to reject the theory with extreme prejudice, as I’d recommend in the case of superdeterminism. 🙂
Comment #94 January 11th, 2022 at 6:51 pm
Matty Wacksen #88: I don’t expect you to agree, but to my mind, the single worst thing that Manchin (and for that matter Sinema) are doing is refusing to allow voting reform to pass by a simple majority. They’re thereby setting the stage for a 2024 election that will make the events of January 6, 2021 look like a mere Beer Hall Putsch.
Comment #95 January 11th, 2022 at 7:06 pm
not-a-real-name #85:
Not that you’re going to approve this comment, but I really just want to drive home how FUCKING INSANE it is that you put “trying to overthrow the US government to install a fascist dictator” on the same level as “being racist is not OK”.
Do you understand that, in the 21st century, most Americans would agree that racism belongs with child abuse among the worst evils of which humans are capable? And that while, yes, a nontrivial fraction of them are lying and do harbor racist sentiments, a solid majority are indeed not racist according to any standard that MLK (for example) would’ve understood? That they work with, date, marry people of other races, actually living the promise of “We Shall Overcome” on a routine basis?
Do you understand the terror that many in this solid majority nevertheless feel that they’ll be tarred as racists and have their lives ruined, if they disagree with the slightest detail of a constantly expanding political program, much of which has only the most tenuous connection to race? Do you understand that, if Trump returns to power in 2024 without needing state legislatures simply to throw away votes, then this terror, and the urge to give a middle finger to the people instilling it, will have been the main reason, just like it was in 2016?
These are rhetorical questions, of course. You don’t understand, nor does anyone else who thinks like you do, and that constitutes the left half of our national tragedy. Not as morally culpable as the right half, to be sure, but playing an equal causal role in bringing the tragedy about.
Comment #96 January 11th, 2022 at 7:14 pm
Scott #76:
Thanks for the link. Something that seems to be omitted every time the experiment gets brought up is how they deliberately chose quasars whose past lightcones don’t intersect. Seems like an important detail to miss! (And I literally only just learned of it.)
Scott #93:
Time to dunk on William Lane Craig, who prefers to opt for some objective rest frame that is impossible to discover (or construct…?)
Comment #97 January 11th, 2022 at 8:15 pm
The idea of an experiment is that we can substantiate or refute a hypothesis. For example, if the hypothesis is that A implies B, we can do a randomized controlled experiment, where we choose a setting x, choose at random whether or not to do A on it, and check whether the outcome O is B. However, if you think that both the choice of intervention and the outcome are deterministic functions of some hidden variables, then you cannot draw any conclusion.
I am not sure what the difference is between that and the Bell experiment.
Comment #98 January 11th, 2022 at 8:23 pm
Specifically it’s not really about randomness in the universe. If we only ran the Bell experiment where the choices for bases for Alice and Bob were based on the even and odd digits of pi respectively, then this would be good enough evidence to reject hidden variable theories.
Comment #99 January 11th, 2022 at 8:42 pm
#98:
I think the superdeterminists would say that the choice of pi rather than some other transcendental number, and the decision to use even/odd digits alternatingly, rather than five at a time, was itself part of the correlation with the measurement results.
Comment #100 January 11th, 2022 at 11:11 pm
Is there a more formal way to state this?
There is some ambiguity here that I find confusing. E.g. there’s the concept of relative randomness, where a string is random relative to one TM but not another.
The system under test encodes a finite amount of information; it won’t be able to distinguish pseudorandomness from true randomness over a sufficiently large range – in a sense it doesn’t know what kind of universe it’s in?
Isn’t there equally a hint of conspiracy in the idea that just because the universe is deterministic, it’s deterministic in a way that would render experiments meaningless?
Comment #101 January 11th, 2022 at 11:13 pm
Scott #62:
“What is the “real problem”? Is it nonlocality?”
The real problem is that wave collapse, or splitting into worlds at measurement, is unexplained if QM is claiming to be a fundamental theory. If a model containing non-locality is currently the best fit for the data, that’s fine for the model qua model, especially if we can make computers based on it.
I see what you’re saying (the second time round!) about the arbitrariness of no stat ind leading to a failed Bell test, while apparently vaccine trials are unaffected. And why no deviations observed from the Born Rule, etc.? But as a raw observation, the vaccines are seen to work, while in the Bell test stat ind is an explicitly stated assumption, so why not consider whether that assumption is justified? (Well, you lay out why not, but Super why not?)
Of course, the SDers need to find a route to empirical evidence (a paper by Sabine Hossenfelder and Tim Palmer apparently proposes a “partial” empirical test of an SD theory), and ultimately you cannot rule out that they will find evidence or some useful insight.
(Personally, I will accept the evidence – and clearly there is currently no definitive evidence for SD, String Theory, Many Worlds, inflation, the multiverse, a universal beginning, universal fine-tuning, dark matter.)
Comment #102 January 11th, 2022 at 11:56 pm
Andrei #73
“If the objects interact by long-range forces/fields they get correlated regardless of the initial state. […] So, any long-range interaction eliminates a huge number of states that might appear physically possible, because the only allowed states are solutions to the N-body problem, where N is the number of objects in the system.” So are you offering that as an explanation of how the words in your comment came into existence? Everything is superdetermined, but outcomes are independent of initial data because they are the only allowed states of the system? No other words would have been possible in your comment? – which is therefore not determined by initial conditions, so you don’t have to worry about what set up the initial conditions that superdeterministically lead to logical-looking statements?
Comment #103 January 12th, 2022 at 12:29 am
Scott,
“It’s possible to do a Bell experiment where, for Alice’s and Bob’s randomness, they use the light from distant quasars”
I think it was already done.
“…which couldn’t possibly have had a chance to become correlated between the Big Bang and the present without violating locality (see here)”
I disagree that the above is rigorously established. Inflation is a hypothesis that lacks solid experimental confirmation. We also do not have a theory of the Big Bang itself, so we do not know what type of correlations could exist back then.
“When and if such an experiment is done, do you predict it will see a violation of QM?”
The experiment was done, QM was not violated.
“If not, then how the hell would superdeterminism explain its results, without the very nonlocality that it was its whole purpose to avoid?”
If one can establish in a rigorous way that the quasars are not causally connected, I agree superdeterminism is dead. We are, however, very far from that point. SR is extremely well confirmed experimentally. It is unreasonable to reject SR and go with the inflation hypothesis.
“Would you say that the quasars (or rather their antecedents) must’ve been able to communicate with each other, and thereby become correlated, in some pre-inflationary epoch?”
See above. I prefer not to speculate more about that at this point.
“2. Here is my #1 challenge for you, and all other superdeterminists.
Explain, on the basis of superdeterminism, why the Bell/CHSH inequality can be violated by a specific quantitative amount and by no more than that amount”
I cannot perform the required calculations, so I cannot give you a quantitative answer. What I can say is that, given the EM interaction between source and detectors, their states cannot be independent, so Bell’s theorem does not apply.
‘t Hooft presented a complete model that may answer your question:
Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy
https://arxiv.org/abs/2103.04335
“and also why superluminal signalling is impossible.”
Because there is a maximum speed limit. In classical EM and stochastic electrodynamics (which is basically still classical EM), nothing can travel faster than light; SR is built in. In ‘t Hooft’s model there is also a speed limit, since each cell of the automaton changes its state only as a result of the states of the cells around it. So, the maximum speed would be 1 cell (a Planck length?) per update (a Planck time?).
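A toy illustration of that kind of speed limit (a generic cellular-automaton sketch, not ‘t Hooft’s actual construction): run two copies of an automaton differing in a single cell, and the region where they differ grows by at most one cell per side per time step.

```python
# Light-cone spreading in a 1D cellular automaton (rule 150: each cell is
# the XOR of itself and its two neighbors, with periodic boundaries).
import numpy as np

rng = np.random.default_rng(1)
n, steps = 41, 12
state = rng.integers(0, 2, n)
perturbed = state.copy()
perturbed[n // 2] ^= 1  # flip the middle cell

def step(s):
    return np.roll(s, 1) ^ s ^ np.roll(s, -1)

for t in range(steps):
    diff = np.flatnonzero(state != perturbed)
    print(f"t={t:2d}: differing cells span {diff.min()}..{diff.max()}")
    state, perturbed = step(state), step(perturbed)
```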
“Let me now make a public commitment to spend more hours of my life engaging with superdeterminism when, and only when, there’s a credible claim to answer at least one of the two challenges above.”
I hope my answers are satisfactory.
Comment #104 January 12th, 2022 at 12:38 am
Scott:
I don’t really see the argument that superdeterminism (in the “initial conditions” sense) is fundamentally unscientific. Science means that we notice regularities in our observations of the world, formulate “laws” that would produce such regularities, and test them using other predictions they give. A constraint on the initial conditions can be valid as such a “law of Nature”; Gauss’s Law in electrodynamics is an example. And if a model predicts we will not be able to take certain measurements, that prediction is itself testable.
A big part of your complaint seems to be that ‘t Hooft et al. have not actually explained anything. This is of course true, simply because they have yet to produce any actual model for how the superdeterminism would work! They have merely indicated the general direction in which they suspect a good model may eventually be found. No one is claiming that the concept of superdeterminism, on its own, constitutes an explanation for anything.
But I agree with you that the direction is a dead end; no viable superdeterministic model will ever turn up. Such a model would have to clearly express a constraint on the initial conditions that somehow ends up affecting every possible method that someone could use to choose their measurements – but any observable variable at all could potentially be such a method; there is nothing special that could pick out which initial conditions will end up affecting those choices. Therefore, there is just no way to formulate the constraint to give the results we want.
So it’s not that there is anything methodologically wrong with the direction, it’s just that the goal of building such a model is impossible.
Comment #105 January 12th, 2022 at 12:46 am
Paul Hayes,
“This sort of misconception…”
OK, please refute the below argument:
We have an EPR-Bohm setup: two spin-entangled particles are sent to two distant stations, A and B. The spin measurements are simultaneous (space-like separated) and performed along the same axis (say Z). Let’s assume that the result at A is +1/2. We have:
P1. The measurement at A does not change B (locality).
P2. After the A measurement, the state of B is -1/2 (QM prediction).
From P1 and P2 it follows:
P3: B was in a -1/2 spin state even before the A measurement. The spin of particle B on Z is predetermined.
Symmetrically, the spin of particle A on Z was predetermined as well.
The only assumptions here are 1. locality and 2. QM gives correct predictions. From these it logically follows that the measurement results are predetermined. In other words:
C1: locality can only be saved by introducing deterministic hidden variables (the spins before measurements, or some other property that uniquely determines the results).
Then we have Bell’s theorem which says:
C2: Local hidden variable theories are impossible with the exception of superdeterminism.
From C1 and C2 it follows that the superdeterminism is the only possible way to keep physics local.
Comment #106 January 12th, 2022 at 1:06 am
Boaz Barak,
“I think the so called “statistical independence” assumption of Bell is not an assumption – it’s an axiom in the sense that something like this is a pre-condition to doing any kind of science.”
P1. A pair of orbiting stars do not have independent physical states (they both have elliptical orbits, they orbit around the same point – their common center of mass, the orbits share the same plane). The orbital parameters of such star pairs are not statistically independent.
P2. Statistical independence “… is a pre-condition to doing any kind of science.”
P3. Binary star systems exist.
From P1, P2 and P3 it follows that science is impossible. See the problem?
“If you believe the universe makes it fundamentally impossible to run an experiment where you can choose whether to do A or B (e.g., whether something is the control or treatment) then you might as well not do any experiments at all.”
Superdeterminism does not imply that you cannot do A or B. You obviously can. What superdeterminism implies is that your “choices” are predetermined by the past state of your brain. Since your brain is a material object it interacts with all other objects (including the particle source). This interaction implies that there are correlations between the internal (microscopic) states of those objects in the past. Since the evolution is deterministic this translates into future correlations. Superdeterminism is the hypothesis that the past states that would evolve into states that contradict QM’s prediction are physically impossible. If we think about classical EM, such a state would not satisfy Maxwell’s equations for example.
It’s not that in superdeterminism you select some suitable, finely tuned, initial state. Any state is OK, as long as that state is physically possible.
Comment #107 January 12th, 2022 at 1:33 am
Mateus Araújo,
“It’s definitely not the only way. Many-Worlds is a much more satisfactory way to preserve locality.”
As far as I know there is no way to recover Born’s rule in MWI, so, at this time, MWI cannot be seen as a valid interpretation.
“Also, I don’t think it is methodologically sound to make these object-level choices about what we want the world to be. It is what it is, and if we don’t like it then it is our problem.”
Agreed. The world seems to be local, SR has a perfect track record. This implies superdeterminism.
“The correct way to proceed is to choose sound epistemological principles and follow them wherever they take us. Which ones? Well, those that have served us well in the past: reductionism, universalism, occam’s razor, the Copernican principle”
How is superdeterminism in conflict with those ideas?
“…and, perhaps more importantly, actually taking physics seriously instead of running away screaming covering our ears.”
I do not recognize myself in the above statement. I have no idea what you are implying here.
Comment #108 January 12th, 2022 at 1:47 am
Scott,
“Even with a discrete model, ‘t Hooft could easily say if he wanted that quantum computers will work exactly like standard QM predicts, since the initial state of the universe was tuned to produce just that outcome—exactly like it was tuned to produce the standard QM outcome in the Bell/CHSH experiment.”
You keep making this claim. Can you point me to a paper by ‘t Hooft where he makes use, either implicitly or explicitly, of finely tuned initial states?
“superdeterminists have effectively infinite freedom to pick and choose where they want to agree with QM’s predictions and where they want to depart from them.”
“So then, as should surprise no one, they choose to agree on the existing experiments (like Bell/CHSH), and disagree only on experiments that haven’t been done yet (like Shor’s algorithm)! But they never suggest even a shadow of a hint of a theory from which one could derive where their results will differ from standard QM’s, and where they’ll agree.”
OK, as stated before, ‘t Hooft’s take on quantum computing has nothing to do with superdeterminism. He is making a bet that QM would fail at some point due to the non-continuous structure of space and time. A superdeterministic theory based on a continuous spacetime (such as stochastic electrodynamics) would not imply any failure of quantum computers. You should not generalize a property of a specific model (cellular automaton) to the whole class of SD theories.
Comment #109 January 12th, 2022 at 2:21 am
Scott #94: No, I don’t agree about Manchin. The article you linked to doesn’t even *mention* voting reform. If it were so important to Democrats, they shouldn’t bundle it with their spending bill.
Though you seem to believe Republicans are evil and/or not committed to democracy. Which may be true, but this kind of “the other side is evil and hates us good people” is far closer to the Beer Hall Putsch and what followed than a couple of people walking into a government building with funny hats.
Comment #110 January 12th, 2022 at 4:18 am
I was very interested to read your comments about superdeterminism, for years I’ve been reading about it and I kept thinking I must be misunderstanding what they’re trying to say because it can’t possibly be as stupid as it seems to be. But now you have convinced me that I was wrong, superdeterminism really is as stupid as it seems to be.
Comment #111 January 12th, 2022 at 4:19 am
Steven Evans,
“the SDers need to find a route to empirical evidence…”
The point of superdeterminism is that the physical state of the particle source and the physical state of the detector are not independent variables. Excluding conspiracies (finely tuned initial states), the only way to justify such a lack of independence is to look at what long-range interactions happen between them. We have two candidates: gravity and electromagnetism. Gravity is (probably) not involved in the emission of the entangled particles, so we have the EM interaction.
Once you take into account this interaction, you need to see in what way the physical states are constrained, and only take into account the states that are physically possible. If such a calculation can be done, one could predict the results of a Bell test and compare this prediction to the experimental data. In this way one can confirm or falsify a superdeterministic model.
I don’t think one can actually measure those states, due to the uncertainty principle. Also, if you try measuring the atom at the time of emission you will probably ruin the entanglement.
Comment #112 January 12th, 2022 at 4:27 am
>> Joe Manchin
Scott, whatever you think about Joe Manchin, he did the planet a big favor by blocking “Build Back Better”. Why is that? In short: biomass.
The EU is very proud of its renewable energy projects, and in Germany up to 40% of electricity is produced from renewables. But what they usually don’t mention is that more than half of that is from biomass, and the majority of biomass is actually wood chips, i.e. wood pellets made from trees. In fact, trees are chopped down in North and South America (e.g. the Amazon) to produce those wood chips, which are then burned in former coal plants.
Obviously, burning wood puts as much CO2 into the atmosphere as burning coal, but in addition large numbers of trees are chopped down. The idea that this is renewable energy is completely idiotic; it would take 20–50 years to regrow those trees.
So far the US has not been doing the same, until Build Back Better proposed billions in subsidies for biomass. Watch the 2nd half of Planet of the Humans (on YouTube) to see who is behind biomass in the US …
But thanks to Joe Manchin this idiotic greenwashing scheme may not come to the US …
Comment #113 January 12th, 2022 at 4:49 am
As Andrei points out (#29, etc), one must give up on at least one of (i) locality, (ii) realism (= completeness), (iii) statistical independence (“free will”). Accepting superdeterminism is a way of keeping (i), (ii), but giving up (iii). As far as I can tell, other than references to Many Worlds, no one in this thread has disagreed with this dilemma (in any case, I don’t yet understand how MW really deals with this).
Superdeterminism (in the sense of maline #68) need not be any more nonlocal than Bertlmann’s socks. Scott claims (#62) that this isn’t the case because the constraints on the initial state won’t be local – why couldn’t they be, if one goes far enough back in time? The answer to this given in #76 only (!!) goes back as far as “the end of any early-universe inflation”. To Scott’s rhetorical question “Would you say that the quasars (or rather their antecedents) must’ve been able to communicate with each other, and thereby become correlated, in some pre-inflationary epoch?”, a superdeterminist would answer “Yes!”.
Of course, as has been repeatedly pointed out (especially by Scott #47, #62, #76), once one accepts superdeterminism, why does the universe exactly reproduce the predictions of QM, as opposed to all sorts of other cool things? I don't know _why_, but that's what empirical observations give, so that's what we're stuck with (unless one wants to give up on locality/realism by rejecting superdeterminism, as above).
Does that count as addressing Scott’s challenge in #76 (albeit, probably not _credibly_)? 🙂
As pointed out by AZ (#51) and Boaz (#79), accepting superdeterminism seems to undermine all of science. I agree, though I’m not sure that this issue is solved by allowing QM (or other) style indeterminism (interestingly, people like Alvin Plantinga use a variation on this argument in their defenses of religion).
Comment #114 January 12th, 2022 at 8:29 am
drivenleaf #113: This dilemma is in some sense correct and in some sense false, but was beside the point for the discussion, so I didn’t feel the need to comment. Since you ask, though, here’s the deal:
First of all, assumption (ii) is determinism, plain and simple. Not anything strange and nebulous like "realism" or "completeness", whatever that means. (i), (ii), and (iii) are simply the assumptions of the CHSH theorem; they are technical and precise. Secondly, one might think from this presentation that giving up determinism is enough to retain locality, but this is not true. If you retain (i) and (iii), it is still not possible to reproduce quantum mechanics.
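(For readers following along, the CHSH quantity is \( S = E(a,b) + E(a,b') + E(a',b) - E(a',b') \); assumptions (i)-(iii) imply \( |S| \le 2 \), while quantum mechanics attains \( |S| = 2\sqrt{2} \) for suitable measurements on a singlet pair.)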
For this reason, I think it is better to leave the CHSH theorem aside, and go instead with Bell's 1976 theorem (not the 1964 one, which is a mess). I've reproduced it here. It does give us the clean dilemma you wish for: either we give up on (i) local causality or (ii) statistical independence.
Except, of course, that we’re not taking Many-Worlds into account. All the proofs above implicitly assume there is a single world, so the dilemma simply does not apply to Many-Worlds. It’s easy to see that Many-Worlds must be local: the only nonlocal thing in quantum mechanics is the collapse of the wavefunction, and it is absent from Many-Worlds. It’s not straightforward to see how the Bell correlations are generated. For that I recommend this excellent paper by Brown and Timpson.
Comment #115 January 12th, 2022 at 8:34 am
John K Clark #110: Yes, just like certain political movements, superdeterminism seems to get whatever mileage it has from well-meaning outsiders’ incredulity—i.e., from their assumption that it can’t possibly be saying what it seems to be saying, because that would be too aggressively ridiculous. With this post, I hope to make a small contribution to the world’s sanity by assuring people that, in this case, looks are not deceiving.
Comment #116 January 12th, 2022 at 8:42 am
Andrei #107: That's a rather curious objection to Many-Worlds. I assume you think all other interpretations are also invalid, because they simply postulate Born's rule? There are several arguments for recovering Born's rule in Many-Worlds, some of which I find nonsensical, some of which I find satisfactory, but the situation is much worse in the other interpretations, which do not even try.
As for the epistemological principles, superdeterminism is an egregious violation of Occam's razor. It has to postulate by fiat all the correlations from all quantum experiments, which in quantum mechanics are derived from simple rules about vectors in Hilbert spaces. Also, it's the perfect example of running away screaming with your hands over your ears.
Comment #117 January 12th, 2022 at 9:34 am
Andrei #105: (“The only assumptions here are 1. locality and 2. QM gives correct predictions. From these it logically follows that the measurement results are predetermined.”).
It doesn't. You've assumed what you wanted to deduce (predetermination) in P2: "After the A measurement, the state of B is -1/2 (QM prediction)."
The QM prediction is in fact: "'After' the A measurement of spin in the Z direction, the outcome [not state] of the B measurement will be -1/2 if the B measurement is one of spin in the Z direction (and not some other direction)".
Comment #118 January 12th, 2022 at 10:54 am
And now I feel like I’m losing track of which explanation my own stance most closely correlates with.
My view of entanglement is that when particle ABC (the first letter being its only observable component) gets entangled with particle DEF, we end up with particles AGH and DGH due to space-time proximity. When you measure some randomized quantity of AGH, it is the same quantity as for DGH, aside from its location and other "above the fold" differences, because the "influencing" component of the two particles is a shared component.
I don't think it would be a major stretch to prove out this position if one were to play around for a bit and figure out how to deliberately mess with the state of GH.
Comment #119 January 12th, 2022 at 10:55 am
Andrei #108:
You keep making this claim. Can you point me to a paper by ‘t Hooft where he makes use, either implicitly or explicitly, of finely tuned initial states?
I mean, look at his ebook, especially the section “Superdeterminism and Conspiracy,” where ‘t Hooft makes plain (as he has in many other places) that he simply bites the bullet, i.e. postulates whatever fine-tuning is needed in the state of the universe to get the desired result. To the vast majority of scientists who think about this issue the way I do, the bottom line is simply: yes, it’s exactly as bad as it sounds, and no clever way is being proposed to get out of it.
OK, as stated before, ‘t Hooft’s take on quantum computing has nothing to do with superdeterminism. He is making a bet that QM would fail at some point due to the non-continuous structure of space and time.
No, this is wrong, because normal QM is as happy as a clam in a discrete spacetime built out of qubits—indeed it’s mathematically simpler than QM in continuous spacetime! It’s only the superdeterminism religion that makes ‘t Hooft believe that there’s any connection between the (possible) discreteness of spacetime at the Planck scale and a failure of QM.
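(To spell that out: a lattice of \( n \) qubits lives in the finite-dimensional Hilbert space \( (\mathbb{C}^2)^{\otimes n} \), with dynamics given by ordinary unitary matrices; nothing in the formalism of QM requires a continuum of space or time.)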
Comment #120 January 12th, 2022 at 10:59 am
A pair of orbiting stars does not have independent physical states (they both have elliptical orbits; they orbit around the same point, their common center of mass; and the orbits share the same plane). The orbital parameters of such star pairs are not statistically independent.
Here’s the problem, though. Given two binary stars, you can’t tell from a single observation the direction of the axis of their plane of revolution. In fact, your assumption would likely be that the orientation of that axis would be random.
Now, let’s assume that you made many observations and over time you noticed that binary stars where the primaries are less than 100 AU apart have a preferential orientation of their axes. And that such preference was completely different from those with a separation of over 100 AU. That would be very curious! It would require an explanation!
And the explanation most definitely could _not_ be “well, we know the parameters of the orbits of two binary stars are not statistically independent, so it’s not surprising that the orientations of the axes of revolution are not independent either.” The one correlation by no means implies the other.
The same goes for superdeterminism. There are almost infinitely many ways for detectors to be correlated, from high to low. That _all_ of those various correlations conspire to reproduce the quantum violation of Bell's inequality exactly in every case seems improbable, without a direct explanation of _how_ that could be the case.
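To make that last point concrete, here is a toy numpy sketch of my own (the distributions are invented; only the logic matters): two orbital parameters can be almost perfectly correlated while a third quantity remains statistically independent of both.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Within each binary, the two stars' orbital parameters are strongly
    # correlated (they share one orbit).
    a1 = rng.lognormal(3.0, 1.0, n)        # semi-major axis, star 1
    a2 = a1 * rng.uniform(0.9, 1.1, n)     # star 2's nearly matches

    # ...while the orientation of the orbital plane is isotropic, i.e.
    # independent of the separation.
    cos_tilt = rng.uniform(-1.0, 1.0, n)

    print(np.corrcoef(a1, a2)[0, 1])       # ~1: one correlation is present
    print(np.corrcoef(a1, cos_tilt)[0, 1]) # ~0: the other is simply absent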
Comment #121 January 12th, 2022 at 11:02 am
maline #104:
So it’s not that there is anything methodologically wrong with the direction, it’s just that the goal of building such a model is impossible.
This is actually an interesting question for the philosophy of science: is it "methodologically wrong" to pursue a research direction that is (you and I agree) impossible to make work? I suppose it depends on just how obvious the impossibility of getting anything worthwhile out of it is! For superdeterminism, I'd say that if it's not obvious here, then it's not obvious anywhere, and you might as well pursue a quantum theory of gravity based on chunky peanut butter, ennui, and the axiom 0=1, because who knows where it might lead? But here opinions could reasonably differ. 😀
Comment #122 January 12th, 2022 at 11:36 am
In my worthless opinion, both Dr. Aaronson and Dr. Hossenfelder are good and brilliant people who are being too inflammatory and insulting in what should be a technical and logical argument.
Having read Backreaction almost as long as Shtetl-Optimized, I know that Dr. Hossenfelder does not think superdeterminism is a good name for what she is proposing, and that she does not think what she is proposing is any kind of fine-tuning conspiracy. Beyond that, I don't have a clear idea of how she gets to the Bell experiment results with it.
Personally, I have long felt that a bit of randomness can actually make a universe (certainly the toy universes of video games) work better, as long as it does not have a major impact except in edge cases.
Side remarks on other issues brought up in the comments: an insurance company actuarial analysis was recently published with the finding that the death rate in the USA was up 40% over the last 12 months, whereas prior to the pandemic it had not varied by more than 10% since WWII (I write from fallible memory). This and other data tell me that covid is much worse than flu (as I also knew from personal experience).
I recently saw the documentary "Four Hours at the Capitol" (about January 6, 2021), compiled mostly from on-site videos, with some post-mortem interviews. It was not a few people in funny hats. Most of the congresspeople there were in fear for their lives, Republicans and Democrats alike. I will grant that most of the rioters seemed to have a sincere belief that "Trump was anointed by God to be President".
Comment #123 January 12th, 2022 at 11:38 am
Steven Evans #101:
But as a raw observation, the vaccines are seen to work, while in the Bell test stat ind is an explicitly stated assumption so why not consider whether that assumption is justified? (Well, you lay out why not, but Super why not?)
Because quantum mechanics is seen to work, just like vaccines are!
When we use vaccines or drugs because they passed randomized controlled trials, we’re trusting Statistical Independence—i.e., the assumption that it really is possible to sever causal chains and isolate variables, that gigantic conspiracies don’t poison everything we see—with our very lives. My position is simply that what we trust with our lives, we can also trust with our beliefs about the foundations of quantum mechanics.
That wave collapse or splitting into worlds at measurement is unexplained if QM is claiming to be a fundamental theory is a real problem.
As I keep trying to explain, as difficult as the interpretation of QM surely is, it’s a cakewalk compared to the interpretation of superdeterminism! And crucially, we’re stuck trying to interpret QM, since we can’t just throw QM into the garbage. This is not the case for superdeterminism.
Of course, the SDers need to find a route to empirical evidence (a paper of Sabine Hossenfelder and Tim Palmer apparently proposes a “partial” empirical test of an SD theory) and ultimately you cannot rule out that they will find evidence or some useful insight.
See above about my quantum gravity theory based on peanut butter, ennui, and 0=1, which you also can’t rule out will lead to some useful insight. 🙂
As for empirical tests, what Hossenfelder and Palmer have proposed are various phenomena (e.g., violations of the Born rule) that virtually no physicist seriously expects to show up, but that would indeed create a crisis for QM if they did. What they haven’t proposed is any experimental result or other development that would cause them to reject superdeterminism. And that’s not surprising, because superdeterminism is almost infinitely flexible, just like Hossenfelder claims string theory is.
Personally, I will accept the evidence – and clearly there is currently no definitive evidence for SD, String Theory, Many Worlds, inflation, the multiverse, a universal beginning, universal fine-tuning, dark matter
This is a pretty wild epistemological grab-bag! It includes everything from an interpretation (Many Worlds) that was specifically designed to yield no new experimental predictions, to an empirically observed phenomenon (dark matter) that's plainly real (have you seen the pictures of the Bullet Cluster?) and that doesn't even present any problem for the current framework of physics; the difficulty is just to figure out its specific nature.
Comment #124 January 12th, 2022 at 11:41 am
Scott #44 You are super welcome. I’m glad I could help make your day 🙂
Comment #125 January 12th, 2022 at 1:00 pm
Just in case: Sabine also made a video on the Quantum Eraser experiment, where she says everyone else is wrong; maybe it clarifies her views:
https://www.youtube.com/watch?v=RQv5CVELG3U
And, for confused amateurs like me, a preamble video from Sabine on EPR:
https://www.youtube.com/watch?v=XL9wWeEmQvo
…
Comment #126 January 12th, 2022 at 1:04 pm
Dear Scott, and Sabine #10:
I happened to rapidly browse through Sabine's blog post the day it came out. Also, although I don't learn best by watching videos, I watched this one video, for a change, the next day. (Actively reading through text is the method I usually follow. Active reading means, among other things, a lot of scribblings, margin notes, also notes in plain-text files, etc.)
…I found Sabine's post a bit intricate in a few places, and so kept it aside. It remained that way until now. …Yes, I am a *very* slow reader. But apart from that, it so happened that I have also been very short on time. … Today, I visited Scott's blog after a couple of weeks or so, and decided to make some time for this topic, after cursorily looking at this post and a few replies. … So, it was only today that I carefully went through Sabine's post and took my notes.
For the time being, I will jot down only a couple of points.
1. I think that the crux of Sabine’s argument is the passage from:
“Think about the hidden variables as labels for the possible paths…”
to
“…This is just what observations tell us. And that’s what superdeterminism is.”
I would have liked it if people were discussing the topic with reference to this passage.
As to me: If this indeed is what superdeterminism is, then I don’t have any fundamental problem with it, whether physics-wise or philosophically — i.e., if the idea is taken in a broad sense. I may have issues or disagreements in terms of details, implications, polemics, etc. (and I do seem to have a few). But at a basic level, I do *not* find the very idea of superdeterminism itself problematic.
As an aside, I may also point out that Sabine also says: “Keep in mind that superdeterminism just means statistical independence is violated which has nothing to do with free will.”
2. Sabine also says:
“So, in my eyes, all those experiments have been screaming us into the face for half a century that what a quantum particle does depends on the measurement setting, and that’s superdeterminism. The good thing about superdeterminism is that since it’s local it can easily be combined with general relativity, so it can help us find a theory of quantum gravity.”
Let me directly copy-paste what my (hurried) noting regarding this quote says:
A hasty conclusion. As superdeterminism has been explained here [I mean in the quoted passage], not all of its "implementations" need to be local. Superdeterminism, as a broad idea, should be compatible with non-locality as well.
3. I also have some other notes, but those are, perhaps, not as important. … Maybe I will write a blog post around them a bit later, provided I find enough free time at a stretch.
Best,
–Ajit
Comment #127 January 12th, 2022 at 2:00 pm
It seems to me that ‘superdeterminism’ is classical local determinism in a closed physical causal nexus (Einstein locality).
Bohm/Hiley's "Ontological" QT interpretation is one example of a non-local determinism, allowing causal influences in a non-physical level of nature underneath/permeating the Planck scale and the limits of quantum uncertainty. This seems to be an alternative to both ‘superdeterminism’ and ‘fundamental’ quantum randomness, neither of which allows ‘free will’.
Comment #128 January 12th, 2022 at 2:11 pm
Mateus #114: thanks for the reply. I’m not sure how the dilemma could have been a side point, since as far as I can tell, it’s the main motivation for accepting superdeterminism.
I do not consider realism to be the same as determinism. By realism I mean what EPR meant: a theory is realist[ic] "If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity."
Determinism is not an assumption of the CHSH inequality, so I agree wholeheartedly that rejecting determinism does not get one out of the dilemma I posed in #113 – in fact, determinism is the way to preserve locality and realism (in my sense).
Further, here’s Bell (in “Bertlmann’s socks and the nature of reality”): “It is important to note that to the limited degree that determinism plays a role in the EPR argument, it is not assumed but inferred… It is remarkably difficult to get this point across, that determinism is not a presupposition of the analysis. There is a widespread and erroneous conviction that for Einstein determinism was always the sacred principle.”
Tim Maudlin makes this point very forcefully in the introduction of “What Bell Did” – “In particular, he [Bell] insisted that neither determinism nor ‘hidden variables’ were presupposed in the derivation of his theorem”. I take the theorem to be that one must give up on one of (i), (ii), (iii).
I quote this primarily to show that (at least in Bell and Maudlin’s opinion), determinism is not an assumption snuck in through the back door. It falls out of the argument if one wants to preserve locality and realism.
Many thanks for your reference on Bell’s 1976 theorem. I gave it a quick skim, but I’ll read it more in depth when I get the chance.
Of course this discussion has ignored Many Worlds. I happen to still be confused about MW (it’s not at all obvious to me that it’s local). But sure, MW is one way out. Though it should be noted that MW is deterministic.
Comment #129 January 12th, 2022 at 3:25 pm
JimV #122:
In my worthless opinion, both Dr. Aaronson and Dr. Hossenfelder are good and brilliant people who are being too inflammatory and insulting in what should be a technical and logical argument.
So let me bend over backwards here: I like Sabine. I’ve found many of her blog posts and (more recently) YouTube videos to be well-made, informative, and entertaining. I often agree with her when she’s ranting against this or that nonsense idea, in which cases her acerbic tone is perfect for the job.
But Sabine does have a tendency to be … willfully contrarian, which is a well-known risk even for otherwise extremely smart and well-informed people. And with the superdeterminism thing, she truly is throwing stones from a glass house and doing her many viewers a disservice.
I see superdeterminism as having roughly the same scientific merit as creationism. Indeed, superdeterminism and creationism both say that the whole observed world is, in a certain sense, a lie and an illusion (while denying that they say this). They both consider this an acceptable price to pay in order to preserve some specific belief that most of us would say was undermined by the progress of science. For the creationist, God planted the fossils into the ground to confound the paleontologists. For the superdeterminist, the initial conditions, or the failure of Statistical Independence, or the global trajectory-selecting principle, or whatever the hell else you want to call it, planted the random numbers into our brains and our computers to confound the quantum physicists. Only one of these hypotheses is rooted in ancient scripture, but they both do exactly the same sort of violence to a rational understanding of the world.
This is why, while you found my comments here “inflammatory and insulting,” probably most of my colleagues would instead take me to task for having been almost pathologically charitable, measured, and willing to engage this stuff and give it oxygen. 🙂
Comment #130 January 12th, 2022 at 3:48 pm
drivenleaf #128: Realism is definitely not the same as determinism. In discussions of Bell's theorem, though, people say all the time that the assumption they are using is realism, when what they are actually using is just determinism.
Note that I said determinism is an assumption of the CHSH theorem, not inequality. I mean strictly the theorem proved by CHSH in 1969. Now there are several different proofs of the CHSH inequality (or rather Bell’s theorem) using different assumptions, so it’s important to specify which.
It seems to me that what you have in mind is a derivation following from the conjunction of EPR 1935 and Bell 1964. It's a valid argument to make: EPR shows quantum mechanics is nonlocal (violating their definition of locality), and argues that "completing" it, that is, making it deterministic, is the only way to restore locality. Bell first shows that one can indeed make quantum mechanics local with determinism in the specific scenario EPR were considering, but then shows that this is not possible in general, leading to the conclusion that quantum mechanics is just intrinsically nonlocal.
I don't like it, though, because it is an informal argument, not a crisp mathematical theorem like CHSH 1969 or Bell 1976. Bell himself abandoned it, and as far as I know nobody has done a satisfactory formalization. Why bother? We have Bell 1976 already.
I don't see how it can fail to be obvious that Many-Worlds is local. It is just unitary evolution! How could that be nonlocal? Special relativity is even a symmetry of quantum field theory! It doesn't get more local than that.
Comment #131 January 12th, 2022 at 4:09 pm
Scott #129
in #52 you conceded
“Sure, perfect statistical independence might be an illusion.”
So, failure of statistical independence is both too obvious and too vague to underpin everything superdeterminism is claiming, right?
“they both do exactly the same sort of violence to a rational understanding of the world.”
sure, sure… but please explain to me how “pure randomness” isn’t also doing violence to a rational understanding of the world.
How can a *specific* event (spin up or down, an electron being observed here instead of there, Alice choosing chocolate over vanilla) occur without any prior cause, i.e. be specific while depending on absolutely nothing else in the entire universe? Something has to do the choosing, whether it’s totally outside our universe (i.e. god?) or we just can’t see the causal link, or maybe Everett saved the day.
Comment #132 January 12th, 2022 at 4:21 pm
I think one of the more general problems that superdeterminism (whether right or wrong) is trying to point out is that QM treats the experimenter(s) and the measured system differently, when really everything in the picture could be considered on an equal footing (and a sound theory should still work when it is).
That's where Bell admitted his theorem would fail: since Alice and Bob are themselves quantum systems, it should be possible to analyze the entire system Alice + Bob + particles as one quantum system too, and in that case Alice and Bob's choices can't just be arbitrary: not in the sense that there's some conspiracy going on, but in the sense that the statistical analysis no longer applies; and then that extends to the entire universe.
This goes back to "Wigner's friend" and similar questions about the strangely arbitrary boundary of what we consider quantum systems.
Comment #133 January 12th, 2022 at 4:40 pm
fred #131: Not only is some randomness perfectly, 100% fine as part of a rationally understandable world, randomness isn't even the weird part of QM (the weird part is what you have to do to calculate the probabilities!). Not only that, but randomness, or a good-enough approximation thereof, is very often an essential tool for learning about the world (as for example in political polls and randomized controlled trials).
What’s incompatible with a rational understanding of the world, at least on its face, is the hypothesis that, every time you try to learn about the world, a cosmic conspiracy (in effect) already knows what you’re going to do and messes with your experiment, in order to prevent you from gaining reliable knowledge. For example, if every time you tried to conduct a poll, the initial conditions of the universe made it so that you only happened to call Bernie Sanders supporters, no matter how small a percentage of the electorate they were. Yet this, or its analogue for the Bell/CHSH experiment, is exactly what all superdeterministic theories say. The superdeterministic theories differ from one another mostly just in whether they come out and say this clearly or try to obfuscate it.
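In toy form (a sketch with made-up numbers, purely to make the contrast vivid):

    import random

    random.seed(0)
    # Toy electorate: 5% Sanders supporters (True), 95% others (False).
    electorate = [i < 50_000 for i in range(1_000_000)]

    # Statistically independent sampling: whom you call has nothing to do
    # with what they believe. The estimate lands near the truth.
    honest_sample = random.sample(electorate, 1_000)
    print(sum(honest_sample) / 1_000)          # ~0.05

    # "Superdeterministic" sampling: the initial conditions correlate your
    # choice of whom to call with their opinions, so you only ever reach
    # supporters. The estimate is hopelessly wrong, and no amount of care
    # in polling methodology can detect or fix that from the inside.
    conspiratorial_sample = [v for v in electorate if v][:1_000]
    print(sum(conspiratorial_sample) / 1_000)  # 1.0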
Comment #134 January 12th, 2022 at 4:44 pm
Regarding the anti-clue point, if MWI turns out to be the “right” way to think about QM, you could argue the earlier developments in QM were an anti-clue in some respects. They suggested some combination of indeterminism, complementarity, anti-realism, non-locality, none of which are supported by MWI.
Of course MWI might not be the right (or best) interpretation of QM, but it’s definitely considered a mainstream and plausible idea.
Regarding the statistical independence assumption, I agree with the criticisms if this assumption is simply dropped. The hope would be that you could replace this assumption with something *else*, perhaps a weaker or different version, that enables the theories to be scientifically viable while somehow evading the conclusion of Bell's theorem. It's not clear to me yet whether the proposals in question achieve this. As an approach, I think it's a long shot but probably worth trying.
Comment #135 January 12th, 2022 at 5:30 pm
Scott #133
Brains evolved to hold a probabilistic model of the world, not because the world is strictly random at bottom but because knowledge of the world increases the chance of survival, so brains get naturally selected for; yet an organism's knowledge of its environment is always finite and imperfect (or, equivalently: a system can't simulate itself from the inside, so any simulation is imperfect and has to be probabilistic).
This can appear a bit paradoxical, but brains would evolve even if the world were deterministic in a classical way. Even though the path of every organism in such a world is "pre-determined", so that there's no such thing as actual choice (the apparent decisions an individual makes based on his own brain are just as pre-determined as everything else), when taken across the entire species each individual can be seen as an independent experiment, and we get probabilities coming from some quasi-frequentist inference.
But all this has nothing to do with the sort of pure randomness the Copenhagen interpretation posits. Again, in this case we're talking about an effect without a cause. If you don't see that as fundamentally mysterious, I don't know what to say. Even Einstein was baffled by it (hence his "God doesn't play dice" quote… but, OK, who knows what he really meant by it).
Comment #136 January 12th, 2022 at 5:32 pm
flergalwit #134: While indeterminism is indeed a core part of quantum mechanics that Many-Worlds does away with in a completely unexpected way, I disagree about the rest. Complementarity was defended essentially by Bohr only. It was a feature of his interpretation of quantum mechanics, the rest of the scientific community ignored this nonsense. Anti-realism is a more pervasive disease, but it has always been strongly opposed and never formed part of the pragmatic core of the theory. Nonlocality is a more interesting case: while it formed part of the pragmatic core in the sense that people always used the nonlocal wavefunction collapse to do their calculations, the consensus has always been that it is not real. Nonlocality was embraced only by minority interpretations like Bohmian mechanics and collapse models. In high-energy physics not even that, since the advent of QFT nonlocality has in practice ceased to be part of the theory for them.
(I’d argue that Many-Worlds is the only way we know to actually accept indeterminism and make it fundamental, but that’s another story).
Comment #137 January 12th, 2022 at 5:41 pm
Scott, you talk about how awful the world is and then go back to discussing quantum mechanics. Do you ever feel like it might be time to shift your research to something that might address the current awfulness of the world somewhat sooner than quantum theory?
Comment #138 January 12th, 2022 at 5:48 pm
fred #135: You’re wrong in a way that reflects something extremely deep about the world.
Complex numbers were also a “human creation” in the 1500s … until they turned out to be central to QM and other parts of physics. Riemannian geometry was a “human creation” … until it turned out to be central to GR. And so on.
Is this mysterious? Embarrassing? Does it mean that we haven’t really understood anything about how the universe works, since we’re still talking about these arbitrary human creations?
No, it means that what we’d mistakenly called “human creations” are actually math. They’re abstract structures that any other intelligent civilization would also have eventually discovered, and that couldn’t have been other than how they are.
Probability is another such inevitable abstract structure. We know this, among other reasons, because the rules of probability can be derived from general axioms of rationality, which say nothing about nonnegative real numbers adding up to 1 or the like.
To me, then, it’s no more surprising that the laws of physics would use probability at some point, than that they’d use complex numbers or Riemannian geometry.
As for Einstein? It’s a shame that he died a decade before the publication of Bell’s Theorem—because in some sense, every undergrad in my class who learns Bell’s Theorem then understands the problem with local hidden variables—i.e., the need for “God to play dice” if you want locality—more clearly than Einstein ever did. At least Einstein, unlike Bohr, realized that there was something further to understand about the matter!
Comment #139 January 12th, 2022 at 6:06 pm
David Karger #137: Do you have any suggestions? What are you doing to address the current awfulness of the world?
If you must know, the plan for a while has been to
(1) write one or more popular books around computation, quantum mechanics, the nature of reality, and the practice of clear thought itself,
(2) parlay the historic, runaway success of those books into becoming a revered Carl-Sagan-like figure, and then
(3) use the public platform thereby gained to save the world, or at least do as much good for the world as Carl Sagan did.
It’s just taking a bit longer than I’d hoped … especially now that 90% of each day is given over to administrative chores, childcare, doomscrolling, and arguing on social media. 🙂
Comment #140 January 12th, 2022 at 6:40 pm
Scott #138
“Probability is another such inevitable abstract structure. We know this, among other reasons, because the rules of probability can be derived from general axioms of rationality, which say nothing about nonnegative real numbers adding up to 1 or the like.”
To clarify, I'm not talking about probability as an advanced human tool, or saying probabilities aren't "real"; I'm talking about the brains of even the simplest organisms being organs that store a probabilistic model of the world, and that's the "true" origin of probabilities.
For example, consider the behavior of fish: if something of a certain size that kinda looks like a shark moves a certain way in their visual field, they'll move away from it, because for thousands or millions of generations it's been the case that moving away in such a situation was the right thing to do to increase the chance of the fish surviving the next 30 seconds. But, for a given fish at a given moment, having this particular neural circuit trigger doesn't mean for sure that the source of the stimuli is actually a shark. That's what I mean by the brain being probabilistic.
Same for a human baby: when its mouth touches something that fits the characteristics of a lactating human female nipple, allowing for some margin of error, the baby will automatically suck on it, because doing so increases its chances of survival. But in any particular instance it could be a french fry rather than a nipple 😛 The point being that the baby brain doesn't need to use its entire resources to do a perfect match; there are plenty of other aspects of the world to model (including signals coming from its own body, for thousands of crucial auto-regulation tasks), like detecting the right set of feelings that should trigger crying.
Comment #141 January 12th, 2022 at 6:45 pm
It is not obvious to me that a chaotic (deterministic but unpredictable) system, influenced by detector settings, could not preserve some correlations. It is not obvious to me that it could, either, but Dr. Hossenfelder has repeatedly said that she envisions some such system, which would reproduce all the known results of QM, plus some new ones in very precise, very low-energy experiments, without any need for fine-tuning or conspiracies.
Given that, when the response to her position is an argument that a fine-tuning conspiracy does not make sense, it seems to me that she would regard that as a non sequitur. Perhaps the argument is meant to be that a fine-tuning conspiracy and/or superluminality is the only possible way to eliminate randomness in QM, but if so I wish that had been proven and submitted as a scientific paper, without references to insanity. (Perhaps it has been, somewhere.) I grant that Dr. SH's recent video made some pejorative allegations too. (Can't we all just get along?)
Comment #142 January 12th, 2022 at 6:54 pm
It's also worth pointing out that deep learning networks (probabilistic models by construction) are very successful at predicting rigid-body physics just by learning from examples fed from a video game engine with an advanced physics simulation, i.e. using classical physics.
So the success of probabilistic models isn't evidence that the world has to be probabilistic at a deep level. The two things are totally decoupled.
Such probabilistic models work because they're doing an efficient compression of the world's dynamics (without explicitly encoding physical "laws").
Comment #143 January 12th, 2022 at 7:06 pm
I don't think you represent Sabine's position correctly.
Superdeterminism is the name for a hypothetical class of theories. What she is proposing is the possibility that we can find a theory that gloriously explains and derives Schrödinger's equation.
That theory would have the following properties (https://doi.org/10.3389/fphy.2020.00139):
1. Deterministic
2. Local
3. Violates Statistical Independence
If one can find this kind of theory, it would have explanatory power.
Comment #144 January 12th, 2022 at 7:16 pm
It seems to me that ‘superdeterminism’ is classical local determinism in the model of a local, closed physical causal nexus (Einstein locality).
Bohm/Hiley's "Ontological" QT interpretation is one example of a non-local determinism, allowing causal influences in a non-physical level of nature underneath/permeating the Planck scale and the limits of quantum uncertainty. This direction, although quite forward-looking, seems to be a very constructive alternative to both ‘superdeterminism’ and ‘fundamental’ quantum randomness, neither of which seems to allow ‘free will’.
Comment #145 January 12th, 2022 at 7:21 pm
Nick Nolan #143: Right, but you can't find a theory with those properties that doesn't say (whether or not it admits to saying it) that the world is a giant conspiracy. And this is obvious. There is no viable research direction here; or rather, my peanut-butter theory of quantum gravity is equally viable. See my comments above (or Mateus Araújo's post) for more.
Comment #146 January 12th, 2022 at 7:27 pm
OK! Even if we believe in superdeterminism, it deals only with entanglement, just one part of QM and QFT. There are so many other successes of QM and QFT. What about them? I understand that such a great theoretical physicist as ‘t Hooft has been trying to make QM a deterministic classical theory for something like 20 years or more, without any success. So far, in his own words, he has found only "toy models". So my guess is that QM will stay a non-deterministic, probabilistic theory forever! What do you think?
Comment #147 January 12th, 2022 at 7:48 pm
A bit off topic, but how seriously are we supposed to take the word "cube" in Andrei's description of ‘t Hooft's discrete model?
“The universe consists of Planck-sized cubes that can have a certain number of states”
Wouldn't that violate all sorts of basic principles? If you had the right velocity, so as never to cross cube boundaries, you'd be in a universal state of rest. There would be cardinal directions orthogonal to the cube faces. So much for relativity and rotational symmetry.
Comment #148 January 12th, 2022 at 8:03 pm
A somewhat random and not totally related question:
I am always a bit confused by the talk about hidden variables. I came to understand Bell's inequality as a simple case of the pigeonhole principle: if three random variables \(X\), \(Y\), and \(Z\) take values \(\pm1\), then in every realization some pair of them must be equal, and therefore at least one of the probabilities \(p(X=Y)\), \(p(X=Z)\), and \(p(Y=Z)\) has to be at least one third (in particular, greater than one quarter). This seems to rule out much more than hidden variables: it implies you can't really talk about the spins along different axes as existing at the same time, whether you think of them as deterministic or random. Am I missing something?
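(For concreteness, a short brute-force sketch of the pigeonhole step; the 1/4 figure used for comparison is the standard textbook value for photon pairs measured with polarizers 120° apart.)

    from itertools import product

    # All 8 deterministic assignments of +/-1 to the three quantities.
    for X, Y, Z in product([+1, -1], repeat=3):
        # Pigeonhole: among three +/-1 values, some pair is always equal.
        assert (X == Y) + (X == Z) + (Y == Z) >= 1

    # Averaging over ANY probability distribution on these assignments gives
    #   p(X=Y) + p(X=Z) + p(Y=Z) >= 1,
    # so at least one of the three probabilities is >= 1/3. Quantum mechanics
    # can make all three equal to 1/4 simultaneously, which no assignment of
    # pre-existing values can reproduce.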
Comment #149 January 12th, 2022 at 8:31 pm
Yury Volvovskiy #148: Yes, that’s correct. Before the measurements are made, the actual reality of the situation is that you have a superposition state, not a classical probability distribution. Much of this discussion only occurs because, 96 years after Schrödinger, many people still don’t want to accept that. They continue to want to impose their preferences on Nature, rather than letting Nature impose its preferences on them.
Comment #150 January 12th, 2022 at 9:44 pm
Scott Says: Comment #123 January 12th, 2022 at 11:38 am
I take your points: SD is impossible-looking; it looks even more difficult to explain than QM phenomena, which at least have the advantage of actually having been observed. But I'm still not feeling the religious fervour, so I'll definitely crack open the quantum texts this time.
You admit that the wave collapse/splitting into worlds is a complete mystery, though, if QM is supposed to be fundamental.
Many Worlds suggests there are many worlds, but maybe there's only one. On dark matter, there is a confirmed discrepancy between some predictions of current theory and observations, but no new form of matter has necessarily been identified.
Comment #151 January 12th, 2022 at 9:48 pm
JimV #141: The natural, obvious way out—the one virtually all physicists take—is just to accept that QM provides a true picture of the world, rather than a false picture that we need to replace while grudgingly grafting its correct predictions onto whatever replaces it. QM does have the EPR/Bell kind of nonlocality, but that's vastly weaker than superluminal signaling, and so weird and subtle that no science-fiction writer would've had the imagination to invent it … to the point where, once one understands it, one easily believes that if our old conception of the world was too small to accommodate such things, or even to recognize them as logical possibilities, then that was just a problem with our old conception, not with the world.
The whole point of Bell’s Theorem—which was indeed published in a journal!—is that if you can’t stomach this mild sort of nonlocality, then your only alternative is indeed a crazy conspiracy that secretly prevents you from ever measuring the appropriate correlations at all, by tampering with your random-number generator or even your brain. The superdeterminists bite that bullet. They then spill unlimited numbers of words to make it sound like something other than an insane conspiracy theory, but in the end, it is an insane conspiracy theory, because Bell’s Theorem implies that it has to be.
I regret that I’m now at the end of what my powers of exposition and clarification can do. If anyone has read the whole thread to this point and still doesn’t get it, there’s nothing else I can say to make them get it. The debate could be about 2+2=5 and they’d still come away saying that both sides made some good points, who are they to judge, and why couldn’t the 2+2=4 side be more logical and civil?
For that reason, I’ve decided to close this thread tomorrow morning. Get in any last comments before then. Thanks everyone! 🙂
Comment #152 January 12th, 2022 at 9:57 pm
Andrei Says: Comment #111 January 12th, 2022 at 4:19 am
Thanks for the comment. So answer Scott's points. E.g., why will the output of a random number generator fail to be statistically independent when used in connection with a Bell test, but obey statistical independence in connection with a vaccine trial? How are the failures of statistical independence kept exactly where an SD theory needs them to be, given the observed QM phenomena, and why don't the failures pop up in drug trials?
Comment #153 January 12th, 2022 at 10:09 pm
There is an issue that I think arises in both the Bell case and quantum computing. If you consider quantum mechanics an "ugly hack" that you were dragged into kicking and screaming, then you'll keep trying to "bargain with nature" and invent all sorts of conspiracies to fix it. But that will not make scientific progress. The way to make progress is to let nature teach you not only what is true, but also what is beautiful.
I wrote about it here in the context of quantum computing
https://windowsontheory.org/2017/10/30/the-different-forms-of-quantum-computing-skepticism/
Comment #154 January 12th, 2022 at 10:23 pm
Scott #139: one of the main reasons I switched into HCI was that I felt my theory work wasn't doing enough to fix current problems (although I am still a big believer in its relevance over the long term). And in the past 7 years, I've further shifted my focus within HCI towards online discussion and social media, because I consider it one of the main current sources of awfulness in the world that needs to be fixed (plus, it's our (CS's) fault). Of course I measure my expected contribution in microimpacts, but it still seems preferable to doing nothing.
There’s plenty of room in this pool if you want to jump in!
Comment #155 January 12th, 2022 at 10:36 pm
Boaz Barak #153: Beautifully put!
Comment #156 January 12th, 2022 at 10:39 pm
David Karger #154: Thanks! But I feel like I’m already doing about as much as I want to do in the area of “online discussion and social media.” 😀
Comment #157 January 12th, 2022 at 10:47 pm
Scott #156: fair, but my original question #137 still stands.
Comment #158 January 12th, 2022 at 11:04 pm
One simple question: would two entangled particles continuing to share a partial common existence that does not respect space-time count as non-locality, and would that overcome both the need for conspiracy and the need for statistical independence?
Comment #159 January 12th, 2022 at 11:05 pm
You know, I actually thought the reasoning in the early part of Sabine's video was OK. She thinks that (a) the wave function is non-physical (merely a subjective model of our degree of information), and (b) wants something deeper than the wavefunction, leading to (c) some sort of hidden variable theory. I'm actually quite sympathetic with (a), (b) and (c); it's just that she then swerved off into a completely arbitrary fix that makes no sense (superdeterminism).
I still favour some version of many worlds, but I don't think we can rule out the possibility that there are *some* hidden variables as well. It might be possible to have a good theory that drastically cuts down the number of possible worlds. So there's still a chance that there's some unknown kind of underlying geometrical reality that is much more clear-cut than Everett, with hidden variables that explain (a) exactly how to divide up ‘worlds’, (b) the subjective experience of observers in the branches, and (c) how to cut down the number of ‘worlds’.
Comment #160 January 12th, 2022 at 11:59 pm
Oren #158: I don’t know what “continuing to share a partial common existence that does not respect space-time” means. But it sounds like you might be groping toward my own preferred solution, which is “QM is true, entanglement is real, the Bell inequality is violated, none of this yields faster-than-light signalling, and if your worldview can’t handle it then find a different worldview” 🙂
Comment #161 January 13th, 2022 at 12:12 am
When it comes to the interpretation of quantum mechanics or, more generally, to scientific debate, there invariably comes a point where one party starts giving lectures on what counts as "good science", "how we make progress", "what is falsifiable or not", or "how science ought to be done", without actually engaging the arguments seriously.
Sorry, but I consider posts like #153 to be in this category, i.e. intellectually lazy while claiming the intellectual high ground. As far as I can tell, Sabine never claims to consider QM an "ugly hack", nor does she seem to think there is no point to experiments. Indeed, as someone in this thread suggested, if the world had weak correlations, much of our science would hold up just as well. In fact, every single measurement we have ever done has been against a noisy background, yet we seem to be doing just fine. Despite being fully in Scott's (and the majority's) camp on superdeterminism, I just can't stand seeing intellectual grandstanding without content; this is not too different from shallow criticisms of string theory.
I applaud Scott for never doing that, even when something as outrageous as superdeterminism comes along. Scott seems to pick up the gauntlet and take issues head-on every single time (cf. the consciousness debate with Tononi, a good example), rather than dismissing ideas by invoking Karl Popper or some ill-defined notion of "beauty".
Among all the objections raised, the key issue with superdeterminism, at least to me, isn't whether it makes all experiments pointless or whether it considers QM an ugly hack (after all, like all scientific questions, it has a birthright to be put forward and given its best shot), but that, of all the fantastic consequences it could have enabled (superluminal signaling probably a benign one, if we apply some imagination), it chose to break something rather mundane about quantum mechanics, at enormous expense.
Comment #162 January 13th, 2022 at 12:29 am
Mateus Araújo,
“That’s a rather curious objection to Many-Worlds. I assume you think all other interpretations are also invalid, because they simply postulate Born’s rule?”
No, because other interpretations do not make the claim that the universe is described by a state evolving in agreement with Schrödinger's equation, so they are free to introduce other objects. In MWI it is inconsistent to do so, for the same reason it is inconsistent to postulate that 8 is a prime number: we already have a definition in math of what a prime number is, and we can check whether 8 is prime or not; adding this postulate leads to an inconsistency. Likewise, in MWI one should derive everything (including Born's rule) from the fundamental object that the theory postulates. As far as I can tell, no such successful derivation has been presented. The derivations based on decision theory are circular (I see no reason to grant the existence of rational agents in the first place), and I don't see what a "measure of existence" is supposed to mean.
But even if I accept MWI as valid, it's not clear that the theory is not superdeterministic after all. I don't see how the experimenter has a choice. He is as "free" as the one in ‘t Hooft's automaton.
“There are several arguments for recovering Born’s rule in Many-Worlds, some of which I find nonsensical, some I find satisfactory, but the situation is much worse in the other interpretations that do not even try.”
See above: in other interpretations it is not inconsistent to add Born's postulate, because they do not make the claim that the quantum state is all there is.
“As for the epistemological principles, superdeterminism is an egregious violation of Occam’s razor. It has to postulate by fiat all the correlations from all quantum experiments, that in quantum mechanics are derived from simple rules about vectors in Hilbert spaces.”
This is a false statement. ‘t Hooft’s cellular automaton does not “postulate by fiat” the correlations. ‘t Hooft shows that the quantum formalism can be recovered mathematically from his discrete model.
Comment #163 January 13th, 2022 at 12:41 am
Paul Hayes,
“You've assumed what you wanted to deduce (predetermination) in P2: "After the A measurement, the state of B is -1/2 (QM prediction)."
The QM prediction is in fact: "'After' the A measurement of spin in the Z direction, the outcome [not state] of the B measurement will be -1/2 if the B measurement is one of spin in the Z direction (and not some other direction)".”
What is the distinction between an electron prepared in -1/2 state on Z and an electron for which the outcome of a measurement of spin in the Z direction will be -1/2? Exactly none. This is what a -1/2 state means.
The electron at B after the A measurement is indistinguishable from an electron that has just passed through the corresponding channel of a Stern-Gerlach device. It is guaranteed to be found as -1/2 on Z (I specified that all measurements are only on Z). If the A measurement was not the cause of this behavior (the locality assumption), it follows that even before the A measurement "the outcome of a measurement of spin in the Z direction will be -1/2". In other words, the outcome at B was predetermined. There was never a chance to get +1/2 at B, not before the A measurement and certainly not after it.
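For concreteness, here is a minimal numpy sketch (illustrative only; not taken from any of the papers under discussion) of the uncontested prediction itself, the perfect Z-anticorrelation:

    import numpy as np

    # Singlet |psi> = (|ud> - |du>) / sqrt(2), with u/d = spin up/down on Z.
    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

    # Project onto "A's Z-measurement gives +1/2".
    P_A_up = np.kron(np.outer(up, up), np.eye(2))
    p = psi @ P_A_up @ psi                # probability of that outcome = 0.5
    post = (P_A_up @ psi) / np.sqrt(p)    # post-measurement state of the pair

    # B's reduced state afterwards is exactly |down><down|,
    # i.e. -1/2 on Z with certainty.
    rho_B = np.outer(post, post).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    print(p)       # 0.5
    print(rho_B)   # [[0. 0.], [0. 1.]]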
Comment #164 January 13th, 2022 at 1:03 am
Scott,
“I mean, look at his ebook, especially the section “Superdeterminism and Conspiracy,” where ‘t Hooft makes plain (as he has in many other places) that he simply bites the bullet, i.e. postulates whatever fine-tuning is needed in the state of the universe to get the desired result.”
I read that whole chapter and I could not find anything about any fine-tuning. He is just saying that in order to change the setting at A you also need to change the past (because in determinism the present state is uniquely determined by the past state). Changing the past forces you to change the state of the source in that past, and, obviously, the state of that source at the time of emission. Hence, the hidden variable and the detector's settings are not free/independent variables.
So, please give me the exact quote where that fine-tuning is involved!
“No, this is wrong, because normal QM is as happy as a clam in a discrete spacetime built out of qubits—indeed it’s mathematically simpler than QM in continuous spacetime!”
There is no such physical object as a qubit. You can have electrons with spin states in a certain direction, polarized photons, or energy levels in atoms. Those physical properties are discrete, but the electrons and atoms do not move in a discrete space-time. We do not have a discrete theory of QED.
Comment #165 January 13th, 2022 at 1:18 am
Scott #151 (“QM does have the EPR/Bell kind of nonlocality, but that’s vastly weaker than superluminal signaling, and so weird and subtle that no science-fiction writer would’ve had the imagination to invent it”).
OTOH it’s entirely plausible that some mathematician would’ve discovered it before it was found to be needed in physics.
Then – hopefully – no physicist would’ve mistaken it for some weird kind of nonlocality.
Comment #166 January 13th, 2022 at 1:25 am
Mateus Araújo #136: I’d contest one part of this, the claim that “Anti-realism is a more pervasive disease, but it has always been strongly opposed and never formed part of the pragmatic core of the theory.” Maybe I’m misunderstanding what you mean by “pragmatic core of the theory”, but that certainly sounds like you’re referring to “QM as practiced”, which includes a bunch of people who don’t think explicitly about interpretations very much, the “shut up and calculate” majority. And I think that community is, if not explicitly, then pragmatically anti-realist (at least the QFT part of it). Most bootstrap techniques are informed by the attitude that you should be able to get observables from consistency conditions without thinking about any specific “underlying dynamics”, that in some sense theories should be _defined_ as a set of observables with certain relationships. And AdS/CFT is in some sense explicitly anti-realist: the bulk and boundary generate the same algebra of observables, so they are viewed as physically identical, regardless of whether the “real number of dimensions” is four or five.
(Or is my mistake here that you don’t view QBism, for example, as anti-realist?)
Comment #167 January 13th, 2022 at 1:42 am
Nick Nolan #143 and Scott #145:
Changing the second point from "Local" to "Non-Local" (with "Non-Local" taken in the same sense in which \(\Psi\) in the Schrödinger equation evolves non-locally) *can* get you to a *deterministic* theory of quantum-mechanical phenomena, though.
Such a theory won't be "classical", to use a vague term.
More precisely, the ontology of such a theory won't be that of Newtonian mechanics, or of Lorentz-Maxwellian electrodynamics (because both of the latter are local theories).
Its ontology *also* won't be that of Fourier diffusion (because though the Fourier theory is *non-local*, the primary unknown there is real-valued, not complex-valued, and the configuration space for diffusion involving N sources/sinks is 3D, not 3N-D).
But unlike mainstream QM (which is non-local and defined over a 3N-D configuration space, but also has only probabilistic outcomes), such a theory would be deterministic, albeit with chaotic dynamics (which in practice would be indistinguishable from mathematically defined ideal randomness).
If I may add: Sabine in her post (at backreaction) has talked about nonlinear dynamics and chaotic regimes. But unlike her and Palmer’s proposal, what I am talking about here is a non-local theory.
Last calendar year, I built precisely such a theory, and also simulated it, though I didn't (and wouldn't) call my theory a "super"-deterministic theory. Calling it "deterministic" is enough, as far as I am concerned.
…Guess all this clarification also answers many (otherwise fine) points by many others in this thread.
As far as this post and this thread are concerned, I am done. For any further discussion, feel free to drop comments at my blog or write me an email. (But that's something no one here, or referred to here, would ever undertake, I can guarantee! After all, I am *not* a Western guy / an Indian settled in the West / or similar. Not even a JPBTI, for that matter (JEE-Passed BTech IITian).)
The comment completes.
Best,
–Ajit
Comment #168 January 13th, 2022 at 1:43 am
“I’ve decided to close this thread tomorrow morning.” On superdeterminism — good.
Perhaps talk of the "struggle for sanity", social media, and Carl-Sagan-like progress can continue in a future post.
Comment #169 January 13th, 2022 at 1:58 am
Scott P,
“Here’s the problem, though. Given two binary stars, you can’t tell from a single observation the direction of the axis of their plane of revolution.”
If by "observation" you mean a position measurement at a single instant, you are right. You need to compare two vectors (the directions of motion), and you cannot determine a vector at an instant. But an arbitrarily short measurement time would be enough in principle.
“In fact, your assumption would likely be that the orientation of that axis would be random.”
You cannot determine a vector from a single position measurement, so I see no justification for making any assumption regarding that vector. Why should I assume it to be randomly oriented?
“Now, let’s assume that you made many observations and over time you noticed that binary stars where the primaries are less than 100 AU apart have a preferential orientation of their axes. And that such preference was completely different from those with a separation of over 100 AU. That would be very curious! It would require an explanation!”
Indeed.
“And the explanation most definitely could _not_ be “well, we know the parameters of the orbits of two binary stars are not statistically independent, so it’s not surprising that the orientations of the axes of revolution are not independent either.” The one correlation by no means implies the other.”
First, I would try to model the system using GR and see if it correctly predicts the facts. In no way would I assume that there is some instant communication between the stars, so that when I observe one of them the other “jumps” to satisfy the observed correlation.
“The same goes for superdeterminism. There are almost infinite ways for detectors to be correlated, from high to low”
Indeed, and the only logically coherent position is to reserve judgement untill a calculation/computer simulation of the system becomes available. Not to postulate, out of the blue, that all states are equally likely.
“_all_ of those various correlations conspire to produce Bell’s inequality exactly in every case seems improbable without a direct explanation of _how_ that could be the case.”
…just like all binary stars having elliptical orbits seems improbable. Given the infinite number of possible orbits, how is it that the universe conspires to make only ellipses?
The simple answer is just that, in the absence of a calculation taking into account the interactions between the experimental parts, one should not assume anything. The problem with Bell’s theorem is that Bell simply asserted that all states have the same probability, with no justification whatsoever. The burden of proof is on those who cling to the validity of that theorem. They should provide us with the required calculation, proving that, when all EM interactions are correctly taken into account and all physically impossible states are removed from the count, they actually get equal probabilities. Otherwise, I am perfectly justified in dismissing the argument as unsound.
What seems to go unnoticed is that nobody assumes statistical independence where there is a significant interaction. So Bell’s independence assumption is in fact a non-interaction assumption. I guess his intuition was based on the fact that the systems are far away. But in this case the intuition is wrong. Orbiting stars will always follow ellipses regardless of the distance. Newton’s laws, just like Maxwell’s, are scale-independent. Increasing the distance does not give you independence.
Comment #170 January 13th, 2022 at 2:10 am
Andrei #162: (“What is the distinction between an electron prepared in -1/2 state on Z and an electron for which the outcome of a measurement of spin in the Z direction will be -1/2? Exactly none. This is what a -1/2 state means.”)
Now you’re talking about [the state of] a single electron, which isn’t relevant. In your last paragraph you’re begging the question again; focusing on the Z measurement(s). But the fact that the B measurement could or could’ve been other than a spin-Z measurement can’t be ignored. The logic of QP – and QM – just doesn’t forgive such imprecision (See e.g.).
Comment #171 January 13th, 2022 at 2:27 am
Scott,
“QM does have the EPR/Bell kind of nonlocality, but that’s vastly weaker than superluminal signaling”
It’s not weaker at all. EPR admits two logical explanations:
The measurement at A instantly perturbs B. In this case a string of measurement results at A, say 011100101, was instantly “teleported” to B. This is in gross violation of SR, and all modern physics should be discarded.
You see all sorts of excuses: that you cannot send a string that is useful to you, like a picture of a cat or something. This is nonsense. SR does not distinguish between sending 011100101 instantly and sending a picture of a cat instantly. They are both deadly to the theory.
The other logical possibility is superdeterminism. That’s all.
The violation of statistical independence is a non-issue. You never assume that systems that interact in a relevant way are independent. Orbiting stars are not independent. Synchronized clocks are not independent. Only non-interacting objects, like rigid balls in Newtonian mechanics, are independent (if they do not collide). This is because there are no long-range interactions in that theory. No system of interacting objects can be assumed to display statistical independence. So, in fact, the claim of superdeterminism is not that statistical independence has to be rejected in all situations, but that it does not apply to Bell tests for the same reason it does not apply to synchronized clocks or orbiting stars: because there are long-range interactions between the experimental parts.
The so-called classical prediction of a Bell test is NOT the prediction of classical electromagnetism (which has long-range interactions). It’s the prediction of rigid-body Newtonian mechanics.
Comment #172 January 13th, 2022 at 2:41 am
Steven Evans,
“Thanks for the comment. So answer Scott’s points. E.g. why will the output of a random number generator fail to be statistically independent when used in connection with a Bell test, but obey statistical independence in connection with a vaccine trial? How are the failures of statistical independence kept to exactly where an SD theory needs them to be based on observed QM phenomena and why don’t the failures pop up in drug trials?”
I’ve answered those issues in my post #29:
“A short mention about the medical test argument proposed by Tim Maudlin and others. In this case we oppose superdeterminism to the mundane explanation that the medicine works. The prior probability that a medicine works as expected is quite high. On the other hand, the probability that the EM interaction between the patients and the doctors, or whatever is used to select them, is going to have a statistically significant effect (a single virus killed is not enough to cure the disease) is very low. So it is reasonable to dismiss superdeterminism in this case. The crucial difference between entanglement and medical tests is that in the case of entanglement the outcome depends on the state of a single particle; it’s not a macroscopic/statistical effect.”
So, imagine two billiard balls 100 km apart. Macroscopically they look independent: you move one, the other stays in place, etc. But they do interact electromagnetically. Since they are neutral overall (same number of electrons and protons), those interactions cancel at the statistical level. They could only be observed by looking at the microscopic states (the positions/momenta of those charges and the electric/magnetic fields). You cannot do that directly (because of the uncertainty principle), but you can reveal them in experiments that are sensitive to those microscopic states. A Bell test is sensitive to the microscopic state of the source, because the polarisation of the photons tells you something about the state of the emitting atom at the time of emission. A medical test is not sensitive to the microscopic state; it only looks at statistical properties like the concentration of a certain substance in the blood. We are therefore justified in assuming independence for a medical test, but not for a Bell test.
Comment #173 January 13th, 2022 at 2:52 am
Boaz Barak,
“There is an issue that I think arises in both the Bell case and quantum computing. If you consider quantum mechanics as a “ugly hack” that you are dragged into kicking and screaming, then you’ll keep trying to “bargain with nature” and invent all sorts of conspiracies to fix it.”
This is not the issue. The issue is that EPR + Bell forces you to make a choice: nonlocality or superdeterminism. Not making that choice is not logically tenable since the options are logically incompatible. Both non-locality and superdeterminism are properties of QM. The idea is not to replace QM, but to make your position logically coherent by rejecting either nonlocality or superdeterminism.
The superdeterminist does not oppose letting nature tell you what is true. He makes a perfectly reasonable choice: embracing locality, which nature showed us to be true in the form of SR, and rejecting Bell’s assumption that the state of the detectors is independent of the state of the source, which has no other justification than Bell’s intuition.
Comment #174 January 13th, 2022 at 3:03 am
Mateus #130: When people say they are using realism, they (should) mean just the criterion I quoted from EPR. As you say, it does not have anything at all to do with determinism. And it is all that is required to get the dilemma posed in #113.
What EPR showed is that assuming (i) locality, and (iii) the independence of measurement settings from the quantum state being measured (though I don’t recall if they mentioned this explicitly), we get the conclusion that (C) quantum mechanics is incomplete – that is, there are quantities that can be predicted with certainty [on the basis of information obtained outside the past lightcone of the quantity] which do not correspond to anything in the quantum formalism.
What Bell showed is that any completion of quantum mechanics (i.e. a theory which obeys the realist condition and reproduces the predictions of QM) – whether deterministic or not – must necessarily be nonlocal. The only way out of this is by giving up on (iii), and superdeterminism is one way to do that. (Or by going the route of Many Worlds, which gives up on the implicit assumption that (iv) measurements have unique results). Scott (#141) grants the required nonlocality, and is thereby saved from superdeterminism. I’m not sure if he intended to give up on (my weak form of) realism in #149. His #151 makes me think he accepts the dilemma as posed in #113, and gives up on locality (with the important caveat that it does not allow superluminal signalling).
As you pointed out, giving up on determinism does not allow one to keep (i) locality, and (iii) statistical independence – this very fact would seem to indicate that realism is not equivalent to determinism, and that determinism does not enter into the argument (since if it did, the dilemma would not be well-formulated, though it’s one that Bell accepted).
As for why it’s not obvious that many worlds is nonlocal – I don’t understand how, within a single world, spacelike separated locations coordinate their results. Saying that the worlds “separate” in such a way as to give this result only works if the split into multiple worlds is itself nonlocal. I assume I’m just fundamentally misunderstanding MW, but I can’t figure out where.
Comment #175 January 13th, 2022 at 3:05 am
Paul Hayes,
Now you’re talking about [the state of] a single electron, which isn’t relevant.
Why? You measure the spin on Z of an electron at A. You get +1/2. This tells you that its entangled partner, the “single electron” at B, is in a state of -1/2 on Z. Who ever mentioned multiple electrons? Obviously, you can repeat the experiment as many times as you want, but each time you measure a single electron at each station, one at A and one at B. And each time you have to conclude that the spin of the partner at B was predetermined.
“In your last paragraph you’re begging the question again; focusing on the Z measurement(s).”
This is the experiment I use. The detectors are fixed on Z. What’s your problem? What does this have to do with “begging the question”?
“But the fact that the B measurement could or could’ve been other than a spin-Z measurement can’t be ignored.”
We don’t ignore that, but a true theory should be able to deal with any experiment. Fixing the detectors on Z is a perfectly valid experiment. And this experiment rules out local indeterminism. There is no such thing as local indeterminism; it’s dead and buried. The prediction of local indeterminism is just the prediction you get for two coin flips: 50% agreement. We do the experiment, we get 100%. Local indeterminism is falsified.
Bell then focuses on the only local option left on the table, deterministic hidden variables, where the detectors are free to move. This experiment proves that in any local theory the hidden variables must be correlated with the detectors’ settings, in other words superdeterminism.
Comment #176 January 13th, 2022 at 4:03 am
4gravitons #166: I do mean QM as practiced. You can do any calculation in QM you want and append to the end an assertion “none of this is real”. It doesn’t change anything. You can also append an assertion that “this is literally objective reality”. Doesn’t change anything either. Contrast this with the case of nonlocality: when you include a wavefunction collapse in your calculation, you are explicitly using nonlocality pragmatically.
I don’t know what you mean by bootstrap technique, but your AdS/CFT example is misguided: having two different descriptions of the same reality is not at all anti-realist. It is commonplace in physics, even in classical mechanics, which was never accused of anti-realism.
It’s funny that you mention QFT, though. In my limited interactions with high-energy physicists they were always naïve realists, talking without worry about quantum fields and even particles as real objects. Only in quantum foundations do I see people contorting themselves to say stuff like “the quantum state is a representation of our knowledge of future measurement results”.
(I do consider QBism anti-realist. Luckily that intellectual void never managed to suck in a majority of physicists.)
Comment #177 January 13th, 2022 at 4:03 am
Andrei #162:
“But even if I accept MWI as valid it’s not clear that the theory is not superdeterministic after all. I don’t see how the experimenter has a choice. He is as “free” as the one in ‘t Hooft’s automaton.”
This only shows that you don’t have the faintest clue what superdeterminism is. In a superdeterministic theory it is not possible to measure hidden variable \(\lambda_{01}\) when your setting is 00. You can only measure the hidden variable \(\lambda_{00}\). No such conspiracy is postulated in Many-Worlds, or in any regular deterministic theory. It is possible to measure a fixed quantum state in any setting at all. There is no mechanism making the quantum state change depending on the measurement setting.
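To make the distinction concrete, here is a toy sketch (an illustration of my own, not any published model; the function names are made up). In an ordinary deterministic model the hidden variable is dealt out with no knowledge of the settings, and the CHSH value obeys \(|S| \le 2\); in a superdeterministic one the source hands out whichever \(\lambda\) the settings call for, and anything up to \(S = 4\) goes:

```python
import random

def chsh(sample_lambda, trials=20_000):
    """Estimate S = E(0,0) + E(0,1) + E(1,0) - E(1,1) for a deterministic
    hidden-variable model, where lambda = (a0, a1, b0, b1) predetermines
    the +/-1 outcome for each local setting."""
    E = {}
    for (x, y) in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        total = 0
        for _ in range(trials):
            a0, a1, b0, b1 = sample_lambda(x, y)
            total += (a0, a1)[x] * (b0, b1)[y]
        E[(x, y)] = total / trials
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def ordinary_hv(x, y):
    # Ordinary determinism: lambda is drawn with no knowledge of the
    # settings (x, y); every such model satisfies |S| <= 2.
    return [random.choice([-1, 1]) for _ in range(4)]

def superdeterministic_hv(x, y):
    # Superdeterminism: the source "sees" the settings and supplies a
    # matching lambda; this toy reaches S = 4, past even the quantum 2*sqrt(2).
    a = random.choice([-1, 1])
    b = a if (x, y) != (1, 1) else -a
    lam = [a, a, a, a]   # slots 0,1: Alice's outcomes; slots 2,3: Bob's
    lam[2 + y] = b       # rig only the outcome that will actually be measured
    return lam

print(chsh(ordinary_hv))            # ~0 for this uniform choice; never above 2
print(chsh(superdeterministic_hv))  # ~4
```

The entire conspiracy lives in the dependence of sample_lambda on (x, y); remove it and you are back under Bell’s bound, no matter how the \(\lambda\)’s are distributed.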
It beggars belief that after all these years arguing about superdeterminism you still haven’t managed to understand this basic point. It does show that I might as well talk to a brick wall. Therefore, I won’t bother replying to you any longer.
Comment #178 January 13th, 2022 at 5:08 am
Andrei #171: ” it does not apply to Bell tests for the same reason it does not apply to synchronized clocks or orbiting stars”
We know what to expect from synchronized clocks or orbiting stars. But how do we know what to expect from quantum systems? Why do we have experiments confirming Bell’s theorem, if one of its assumptions is violated?
I still don’t understand it. Systems interacting? We are interacting right now, and it doesn’t help.
Comment #179 January 13th, 2022 at 6:34 am
The way I see things, the really unacceptable conclusion of quantum mechanics is that a quantum computer is possible. Coming from a computer science background, if anyone claimed to have an algorithm for discrete logarithms, or the hidden subgroup problem, or boson sampling (and especially for computing permanents, but they don’t claim that), I’d think he’s probably clueless.
The way I see it, the last thing I will ever concede in response to Bell’s experiment is that the universe does computationally hard calculations. I’ll give up locality (special relativity doesn’t even require absolute locality, just Lorentz invariance, which is not the same thing), I’ll even go to superdeterminism, or I’ll find out why entanglement of one qubit is possible but general entanglement isn’t.
And I’m all the more suspicious because the entire idea rests on a theory developed historically before computational complexity: the physicists behind quantum mechanics had no notion of it.
If I had to choose between a conspiracy of a universe that tricks the physicist into making predictable measurement configurations, and a conspiracy of a universe with an exponentially large state that can solve computationally hard problems, my bet is on the universe trolling the physicist. I can actually implement such a trolling universe quite easily, which is something I just can’t say about the computationally hard universe.
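Here is a minimal sketch of what I mean by “quite easily” (a toy only, assuming the lab picks its settings with a seeded PRNG):

```python
import random

experimenter = random.Random(42)  # the lab's "free choices" of settings
universe = random.Random(42)      # same seed: the universe foresees them all

lab_settings = [experimenter.randint(0, 1) for _ in range(10)]
foreseen = [universe.randint(0, 1) for _ in range(10)]
assert lab_settings == foreseen   # the universe can now rig each emitted pair

# A trolling universe just prepares, for each run, whatever local outcomes
# reproduce the quantum statistics for the settings it foresaw.
```

The exponentially hard universe, by contrast, I have no idea how to implement.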
Comment #180 January 13th, 2022 at 7:04 am
I think allowing Andrei’s comments to appear in this thread is a bit unkind towards superdeterminism.
Comment #181 January 13th, 2022 at 7:42 am
It seems to me that superdeterminism is almost like astrology (the influence of one’s origins painted in the stars, because electromagnetism or gravity “connects everything”), except that you do not have to make predictions: the SD Tarot can only be unveiled after the facts are known.
Comment #182 January 13th, 2022 at 8:15 am
Scott #145:
“You can’t find a theory with those properties (Sabine’s list) that doesn’t say that the world is a giant conspiracy” –
I agree if you’re referring to the “initial conditions” idea, but remember that Sabine is allowing retrocausality. In that case, I’m not at all sure that things can’t be made to work. Of course, you’d need to properly tackle the paradoxes of circular causality, but that’s already something that can come up in GR if we don’t rule it out by hand. I don’t know of any no-go theorem to rule out possibilities in this space.
In fact, if wavefunction collapse is real – and AFAICT that’s the only way to explain why we don’t see the incorrect (branch-counting) probabilities that MWI seems to predict – then the collapse cannot be local in the forward time direction, so Lorentz invariance can only be maintained by having the collapse be retrocausal. Penrose, for one, believes this is likely. I know he believes a lot of funny things, but I think he has a point.
Comment #183 January 13th, 2022 at 8:31 am
“What seems to go unnoticed is that nobody assumes statistical independence where there is a significant interaction. So Bell’s independence assumption is in fact a non-interaction assumption. I guess his intuition was based on the fact that the systems are far away. But in this case the intuition is wrong. Orbiting stars will always follow ellipses regardless of the distance. Newton’s laws, just like Maxwell’s, are scale-independent. Increasing the distance does not give you independence.”
Bell’s theorem gives you a choice between statistical independence and locality. Your ‘solution’ appears to be to discard both! The point of superdeterminism is to preserve locality. But if you are claiming that two objects, anywhere in the universe, are always interacting, then there’s no point. In fact, you seem to agree that the universe is non-local, which is what Bell concluded from his thought experiment.
Comment #184 January 13th, 2022 at 8:35 am
Scott #151:
Independent of whether you are right or wrong, it is a good idea always to try to be civil. On the other hand, requesting that the other side be more civil is not such a good idea. If the other side is somebody like Luboš Motl, who fails to meet your expectations for civil behavior, then one option could be to try to minimize interaction. But there are also other options.
Everybody decides for himself which behavior is still civil enough. Historically, Cantor decided that Kronecker’s behavior had crossed the line into the unacceptable. So he founded the German Mathematical Society, and defended the position that consistency is the important criterion for judging mathematical theories.
For Tarski, the rejection of his theorem about choice by the Comptes Rendus de l’Académie des Sciences crossed that line. (“Fréchet wrote that an implication between two well-known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest.”) Tarski subsequently defined the still-accepted notion of a model of a theory (in mathematical logic), and of truth in such a model.
And when Obama humiliated Trump at the 2011 White House Correspondents Dinner, he might have been right. But maybe he still crossed some invisible, undefinable line.
Comment #185 January 13th, 2022 at 8:39 am
Scott #151
“The debate could be about 2+2=5 and they’d still come away saying that both sides made some good points, who are they to judge, and why couldn’t the 2+2=4 side be more logical and civil?”
I see that for people who read this blog as a hobby, but if things are that obvious (and I’m not being sarcastic here), I can’t help but wonder why professional physicists like ’t Hooft (a Nobel prize winner) and someone apparently as careful as Sabine are so confused about it… I mean, from a psychological point of view, it’s interesting.
And, if Sabine is so wrong here, should I even bother with watching her videos? If this was about her politics or her views on dealing with covid and whatnot, I would give her a pass, but this is about basic QM and fundamental understanding of physics (but, as someone else noted, she makes it more than just about physics in this video, with strange comments on men being wrong, for no valid reason).
I’d rather listen to people who are more careful about the validity of their own conjectures, someone like Sean Carroll or even Penrose.
Comment #186 January 13th, 2022 at 9:25 am
Hi, everybody! (Long time reader, first time commenter here.)
Reading the arguments from the superdeterminist side in this thread, it strikes me there is one issue they are tip-toeing around and, unless I missed something, it is also not mentioned explicitly enough by their opponents. The superdeterminist’s argument for Bell inequality violation is that the assumption of “statistical independence” breaks down, and it breaks down because, after all, anything in the visible universe shares some common past with anything else. Fair enough: if we wanted just some arbitrary violation of Bell’s inequality, that could be all there is to it. Indeed, were we able to test the correlations entering the inequality with infinite precision, we actually should expect to find some discrepancy with statistical independence, at least at some zillionth decimal place, because everything in the universe is related to everything else if we look closely enough. But come on, knowing this is neither interesting nor sufficient to explain the real experiments used as Bell inequality tests.
So, I would like to ask Dr. Hossenfelder or other proponents of superdeterminism: can you quantify how large a violation of statistical independence you need to explain real-world experiments? Ideally, do you have an idea for a mechanism that could give you not just any violation of statistical independence (that is trivial, as long as the violation is allowed to be arbitrarily small) but exactly the right correlations needed?
As a follow-up, I think there is also an opportunity to try to make a testable prediction. As I understand the current state of Bell inequality testing (I did not follow it very closely), the violation of the inequality is safely established, but the accuracy of the measurements is not super-impressive: the results are compatible with standard quantum mechanics, but with a substantial error margin. If so, what would you expect when the precision of experiments increases in the future? Does the superdeterministic theory predict that the correlations between the entangled system and the measuring devices are exactly such that the standard quantum predictions will be confirmed with higher and higher accuracy? But if so, what is the mechanism ensuring there is not just any generic statistical dependence but exactly the right correlations? Or is it more like: we have an argument why statistical independence does not hold, so Bell’s inequality will not hold either, but by how much exactly it will be off may differ from experiment to experiment, with no reason why it should always agree with quantum theory? If the hypothesis were the latter, that would be a very bold prediction indeed!
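To fix the scale of what I am asking for (crude arithmetic only): any hidden-variable model with setting-independent \(\lambda\) obeys the CHSH bound \(|S| \le 2\), while experiments see values near the quantum prediction

\[ S_{\mathrm{QM}} = 2\sqrt{2} \approx 2.83, \]

so whatever settings–source correlation is invoked must shift the sum of the four measured correlators by about \(2\sqrt{2} - 2 \approx 0.83\), roughly \(0.2\) per correlator. That is an order-one effect, nowhere near a zillionth-decimal-place residue.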
Comment #187 January 13th, 2022 at 9:53 am
One last point: notice that none of the articles about drug or vaccine efficacy, or any other controlled experiments, ever discuss how they chose their randomness, whether they used quasars or the pseudorandom generator built into Excel. And with good reason: it’s certainly the case that Excel doesn’t give true randomness, but if we were willing to entertain crazy conspiracy theories that its faults are designed to make the experiment successful, then we would not be able to make any progress.
As the saying goes, we should keep an open mind, but not so open that our brains fall out.
Comment #188 January 13th, 2022 at 9:58 am
Boaz Barak #187:
One last point: notice that none of the articles about drug or vaccine efficacy, or any other controlled experiments, ever discuss how they chose their randomness, whether they used quasars or the pseudorandom generator built into Excel … As the saying goes, we should keep an open mind, but not so open that our brains fall out.
Haven’t you been reading the long defenses of superdeterminism on this thread? Haven’t you learned by now that common sense isn’t allowed in this subject? 😉
Comment #189 January 13th, 2022 at 10:02 am
Vladimir #180:
I think allowing Andrei’s comments to appear in this thread is a bit unkind towards superdeterminism.
If nothing else, one should give Andrei credit for being super-determined! 😀
Comment #190 January 13th, 2022 at 10:09 am
Doesn’t locality require that space-time actually exist as something other than an expression of state on a photon? Why should space-time be special (other than the fact that, as interrogators of this reality, we were handed a starting condition where space-time is relevant to our structures…)?
Comment #191 January 13th, 2022 at 10:20 am
Andrei #175: (“Why? You measure the spin on Z of an electron at A. You get +1/2. This tells you that its entangled partner, the “single electron” at B is in a state of -1/2 on Z. Who ever mentioned multiple electrons?”)
You did. You at least appeared to be talking about the quantum model of a pair of particles – electrons – prepared in one of the bipartite Bell states. Yet although I’ve pointed out that QM requires one to be somewhat precise in such discussions, you seem unwilling to accept that, and I’m not getting anywhere. Please just carefully (re-)read that article of Streater’s I linked.
Comment #192 January 13th, 2022 at 10:27 am
Off-topic, what do you think about this letter by Ehud Qimron, head of the Department of Microbiology and Immunology at Tel Aviv University: https://swprs.org/professor-ehud-qimron-ministry-of-health-its-time-to-admit-failure/
Comment #193 January 13th, 2022 at 10:29 am
Scott P.,
“The point of superdeterminism is to preserve locality. But if you are claiming that two objects, anywhere in the universe, are always interacting, then there’s no point.”
Maxwell’s theory tells us that two charges interact regardless of their positions in the universe. The interaction is local, it obeys SR, but it is there. Likewise, in GR any two masses interact. We can see the result of those interactions in the form of galaxies, clusters, and filaments. GR is still local (the speed of light is the limit).
“In fact, you seem to agree that the universe is non-local, which is what Bell concluded from his thought experiment.”
No, I don’t. Think about a pair of orbiting stars! The interaction is local, yet the motions are correlated.
Comment #194 January 13th, 2022 at 10:30 am
fred #185:
And, if Sabine is so wrong here, should I even bother with watching her videos? … I’d rather listen to people who are more careful about the validity of their own conjectures, someone like Sean Carroll or even Penrose.
For me, Sabine, Gerard ‘t Hooft, and Roger Penrose all fall into the category of “wilful contrarians,” which is well-known to be compatible with being arbitrarily smart. Listen to them, sure, but like you’d listen to a trial lawyer. When they confidently declare things that 99% of their colleagues disagree with, make sure to listen as well to someone from the other 99%.
This is different from (say) Sean Carroll, John Preskill, or the late Steven Weinberg, whom, in matters of science at least, I never once heard say anything unless there was an excellent reason for supposing it true. It’s also different from Lenny Susskind, who, like the wilful contrarians, will throw out crazy ideas, but in a playful way, just wanting to see where they lead.
I haven’t watched many of Sabine’s videos, but I enjoyed her recent video on fusion power; my impression is that when she’s not being wilfully contrarian she’s an effective science popularizer. She’s also been very good on debunking the “Physicists Do Standard Quantum Experiment, Get the Standard Result that QM Predicts, Claim it Undermines Reality and Revolutionizes Everything” genre of science writing. When she criticizes the string theorists, I think she’s worth listening to although far too dogmatic in her insistence that beauty can’t be a guide in physics (why did it so often work in the past?). In foundations of physics, to say that I find her instincts untrustworthy would be a titanic understatement.
Comment #195 January 13th, 2022 at 10:40 am
Superdeterminism always strikes me as a (probably unintentional) attempt to bring literal “fate” into physics.
Sorry for the low-effort post; I know too little about QM to make a relevant statement, but that is my impression of superdeterminism after reading ~60% of the posts here.
Comment #196 January 13th, 2022 at 10:41 am
red75prime,
“We know what to expect from synchronized clocks or orbiting stars. But how do we know what to expect from quantum systems?”
We do know. We know that they are made of electrons and nuclei, we know these are charged, and we know what charges do: they interact all the time, at any distance. Since those charges interact, it is mathematically incorrect to describe them as if they don’t. A system of N interacting objects has to be in a state that is a solution of the N-body problem. A system of N non-interacting objects is in a state that corresponds to N solutions of the 1-body problem. Two interacting masses move on ellipses (the solution to the 2-body problem). Two non-interacting masses move in straight lines (the solution to the 1-body problem).
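In symbols, since this point keeps getting lost: N interacting masses obey

\[ \ddot{\mathbf{r}}_i = \sum_{j \neq i} \frac{G m_j \, (\mathbf{r}_j - \mathbf{r}_i)}{|\mathbf{r}_j - \mathbf{r}_i|^3}, \]

whose bound two-body solutions are ellipses, while non-interacting masses obey \(\ddot{\mathbf{r}}_i = 0\), whose solutions are straight lines. Setting the right-hand side to zero because the bodies are “far apart” is an approximation, never an exact statement of independence.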
“Why do we have experiments confirming Bell’s theorem, if one of its assumptions is violated?”
The experiments do not confirm the theorem. They agree with QM (not surprising) and disagree with the classical non-interacting case (not surprising either). The interesting case, the classical interacting one, corresponds to what Bell calls superdeterminism.
Comment #197 January 13th, 2022 at 10:44 am
Jamie #192: Yeah, I saw that letter. It’s true that public-health experts didn’t exactly cover themselves in glory in this pandemic. But I found the letter to be hysterical, and wildly wrong in its anti-vaccine animus. In the current Omicron wave, it’s obvious that the death toll is enormously lower than it would’ve been had the world not successfully administered billions of vaccines. One of the most important things we could’ve done better, would’ve been to roll out the vaccines faster, along with near-immediate boosters for each variant as it arose.
COVID is a medical emergency. And, just like when a virus attacks an individual, often the immune response does more damage than the virus itself—but you still need an immune system, because of the damage viruses would have caused if not for it. So it is on the societal level.
Comment #198 January 13th, 2022 at 10:46 am
OK, one more hour and then I’ll close the thread!
Comment #199 January 13th, 2022 at 10:50 am
Paul Hayes,
I think you know the EPR-Bohm setup very well. There is nothing unclear about what I’ve said.
I’ve said:
“We have an EPR-Bohm setup, TWO spin-entangled particles are sent to 2 distant stations”
What is unclear there? How many electrons are involved? Two. What state are they in? Entangled. Then you complain that both measurements are on Z (why not?), etc.
Just be honest and admit that you can’t refute the argument instead of inventing such lame excuses.
Comment #200 January 13th, 2022 at 11:02 am
Scott #160:
‘ I don’t know what “continuing to share a partial common existence that does not respect space-time” means’
For some observed photon A, A has spin values on planes Aa, Ab, Ac, etc., for some n planes. Some of these spin states have functional influence on each other and some do not. Every other photon in this existence set is doing the same thing as A, and for photons A and B some of their spins influence each other based on a set of functions.
In a universe composed of some subset of photons that are interacting, this would produce sets of spin planes that result in one-directional influences (which would look like random noise on the influenced side), sets of spin planes that form a circular functional set (which would be observable as influencing each other), and sets of planes that are completely non-interacting and would likely form entirely separate universes.
In this model, space is just another spin plane, or set of spin planes, that heavily influences the magnitude of functional interactivity of other spins. Time is a parameter of the spin-conversion functions, and limits like the speed of light would be the expression of the functional complexity involved in changing one spin state based on another.
In this model, two photons sharing the exact same state are actually the same photon. If you bring two particles into entanglement, they would share some portion of their spin states (and any non-observed influencers on those states that are not space-bound) until such time as they get influenced enough by some other particles to fall out of sync.
Is any of what I said inconsistent with the way QM is understood to work today?
Is there something I should go read that would illuminate this question for me better than harassing practitioners in the field?
Comment #201 January 13th, 2022 at 11:06 am
Scott #197. I wonder what you consider the analogue of the virus on the societal level? I feel like it’s a matter of priorities. One can think of the response to this pandemic as a societal virus, and the hysterical letter as a part of the immune system hopelessly trying to fight it.
Comment #202 January 13th, 2022 at 11:12 am
Jamie #201: My intended society-wide analogue of the virus was … the virus! 🙂
Comment #203 January 13th, 2022 at 11:12 am
Maybe this has been addressed in the comments above and I overlooked it – please point me to the right number if so. But:
How is superdeterminism different from ordinary, Newtonian determinism? And in particular: what objections against superdeterminism (especially those revolving around our ability or inability to do scientific experiments) do not work equally well against ordinary determinism?
I feel that in an ordinarily deterministic universe people not only can but even must do scientific experiments, because the laws of nature and the initial conditions compel them to do so. How is this different in a superdeterministic universe? (And in our universe, for that matter, but let’s take one question at a time.)
Comment #204 January 13th, 2022 at 11:34 am
Mateus Araújo #176:
I agree that a lot of physicists are naively realist, even about things that I think we would both agree they shouldn’t be realist about (quantum fields are a pretty good example: the SMEFT people I know tend to get really annoyed that the average pheno person forgets that field redefinitions are a thing, and acts as if Lagrangian terms that are related under them are independent).
I also agree that, by itself, calculating observables isn’t “in practice anti-realist”, it’s just what everybody needs to do anyway. I do think that “in practice anti-realism” can exist, though. It looks roughly like what positivist philosophers did: insisting that there are certain things “of which one cannot speak” and using that insistence as a way to simplify problems. And I do think that in practice that usage is quite common, at least in HEP-TH. In particular, there are lots of people (including a Simons Collaboration) who do what can broadly be described as bootstrap methods, where the idea is to define a theory purely by means of its observables, intentionally avoiding any specification of what happens “in between”. In the context of scattering amplitudes, for example, this manifests in a goal of defining theories purely in terms of on-shell observable states. For the Conformal Bootstrap (recipient of the aforementioned Simons Collaboration), it means defining conformal field theories in terms of their operator dimensions and correlation functions, without reference to field content.
I do think AdS/CFT falls under this rubric as well. You paraphrase it as “two descriptions of the same reality”, can you clarify what that reality is? What common object are the two sides of AdS/CFT describing, if not the algebra of observables? I’ve seen philosophers insist that in AdS/CFT space-time should “really” have four dimensions or “really” have five, with the other as just an alternate description that isn’t “really” how the world is, but based on your phrasing I’m assuming that’s not your position?
Comment #205 January 13th, 2022 at 12:06 pm
Vincent #203:
How is superdeterminism different from ordinary, Newtonian determinism? And in particular: what objections against superdeterminism (especially those revolving around our ability or inability to do scientific experiments) do not work equally well against ordinary determinism?
With ordinary Newtonian determinism, there’s no global constraint affecting our brains or our random number generators: you just fix the initial conditions and then iterate forward. You still have effective probability because of your ignorance of microstates, so you can still build effective random number generators and use them to choose the control group in your vaccine trial. The fact that you “couldn’t have chosen otherwise” can be left to the philosophers, just like it is in our world.
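As a toy illustration (nothing physical, just the idea): a chaotic deterministic map whose microstate you don’t know already makes a perfectly serviceable effective random number generator.

```python
def logistic_bits(x0, n):
    """Deterministic logistic map in its chaotic regime; the output bits
    look random to anyone ignorant of the microstate x0."""
    x, bits = x0, []
    for _ in range(n):
        x = 3.9999 * x * (1 - x)  # fully deterministic update
        bits.append(1 if x > 0.5 else 0)
    return bits

# Assign ten patients to treatment (1) or control (0) from an "unknown" seed:
print(logistic_bits(x0=0.123456789, n=10))
```

Nothing about ordinary determinism stops this from working; it’s superdeterminism’s global constraints that would break it.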
The trouble with superdeterminism is not the determinism but the “super”: that is, the global constraints that prevent you from making effectively random choices and therefore from doing most scientific experiments. Once you’ve introduced that, you can use it to explain (or explain away) basically any experimental result, according to your convenience.
Comment #206 January 25th, 2022 at 1:01 am
[…] We can have those debates another day—God knows that, here on Shtetl-Optimized, we have and we will. Here I’m asking instead: imagine that, as fantastical as it sounds, QM were not […]