Announcements

Update (May 10): Extremely sorry to everyone who wanted to attend my SlateStarCodex talk on quantum necromancy, but wasn’t able to because of technical problems! My PowerPoint slides are here; a recording might be made available later. Thanks to everyone who attended and asked great questions. Even though there were many, many bugs to be worked out, I found giving my first talk in virtual reality a fascinating experience; thanks so much to Patrick V. for inviting me and setting it up.

(1) I’ll be giving an online talk at SlateStarCodex (actually, in a VR room where you can walk around with your avatar, mingle, and even try to get “front-row seating”), this coming Sunday at 10:30am US Pacific time = 12:30pm US Central time (i.e., me) = 1:30pm US Eastern time = … Here’s the abstract:

Schrödinger’s Cat and Quantum Necromancy

I’ll try, as best I can, to give a 10-minute overview of the century-old measurement problem of quantum mechanics.  I’ll then discuss a new result, by me and Yosi Atia, that might add a new wrinkle to the problem.  Very roughly, our result says that if you had the technological ability, as measured by (say) quantum circuit complexity, to prove that a cat was in a coherent superposition of the alive and dead states, then you’d necessarily also have the technological ability to bring a dead cat back to life.  Of course, this raises the question of in what sense such a cat was ever “dead” in the first place.

(2) Robin Kothari has a beautiful blog post about a new paper by me, him, Avishay Tal, and Shalev Ben-David, which uses Huang’s recent breakthrough proof of the Sensitivity Conjecture to show that D(f) = O(Q(f)^4) for all total Boolean functions f, where D(f) is the deterministic query complexity of f and Q(f) is the quantum query complexity—thereby resolving another longstanding open problem (the best known relationship since 1998 had been D(f) = O(Q(f)^6)). Check out his post!

(3) For all the people who’ve been emailing me, and leaving blog comments, about Stephen Wolfram’s new model of fundamental physics (his new new kind of science?)—Adam Becker now has an excellent article for Scientific American, entitled Physicists Criticize Stephen Wolfram’s “Theory of Everything.” The article quotes me, friend-of-the-blog Daniel Harlow, and several others. The only thing about Becker’s piece that I disagreed with was the amount of space he spent on process (e.g. Wolfram’s flouting of traditional peer review). Not only do I care less and less about such things, but I worry that harping on them feeds directly into Wolfram’s misunderstood-genius narrative. Why not use the space to explain how Wolfram makes a hash of quantum mechanics—e.g., never really articulating how he proposes to get unitarity, or the Born rule, or even a Hilbert space? Anyway, given the demand, I guess I’ll do a separate blog post about this when I have time. (Keep in mind that, with my kids home from school, I have approximately 2 working hours per day.)

(4) Oh yeah, I forgot! Joshua Zelinsky pointed me to a website by Martin Ugarte, which plausibly claims to construct a Turing machine with only 748 states whose behavior is independent of ZF set theory—beating the previous claimed record of 985 states due to Stefan O’Rear (see O’Rear’s GitHub page), which in turn beat the 8000 states of me and Adam Yedidia (see my 2016 blog post about this). I should caution that, to my knowledge, the new construction hasn’t been peer-reviewed, let alone proved correct in a machine-checkable way (well, the latter hasn’t yet been done for any of these constructions). For that matter, while an absolutely beautiful interface is provided, I couldn’t even find documentation for the new construction. Still, Turing machine and Busy Beaver aficionados will want to check it out!

135 Responses to “Announcements”

  1. John Figueroa Says:

    Very cool, looking forward to the talk! This vaguely reminds me of something you wrote about before—where putting a human brain in superposition of distinct cognitive states would imply an unbelievable amount of small-scale control over it, and in particular the person wouldn’t remember what it was like to be in a superposition afterwards, because the atoms in their brain would necessarily have to be rearranged.

    What do you mean by being able to “prove” that the cat was in superposition?

    Also: If I may defend Scientific American’s usage of “not peer reviewed” as a criticism of Wolfram—isn’t peer review relatively good at screening out stuff like this? I guess you don’t really need it since you can tell whether or not to take something seriously in physics with or without peer review. But I have to say, for us lay folk who at best have a surface-level understanding of all this stuff (and that is going to include most people in the news media), it’s a pretty useful signal to have.

    Like, I generally don’t spend mental energy on substantial empirical scientific claims that aren’t peer reviewed. (Not to defend the converse, obviously.) Am I wrong to do that?

  2. Gerard Says:

    Scott

    I’m trying to understand the concept of Query Complexity and right now it’s not making a lot of sense to me.

    It seems to me that classically, at least, the Query Complexity of a Boolean function on n bits would almost always be simply n. You typically need to know all n input bits to evaluate the function except in the special case of functions which ignore some of their input bits. On the other hand I don’t see why you would ever need to make more than n queries since once you’ve done one query nothing prevents you from just storing the result in memory.

    I must be missing something so I’m hoping you could provide some more clarification or a classical example where this isn’t a trivial measure.

  3. Scott Says:

    John Figueroa #1: Especially now, and especially in interdisciplinary fields like quantum information, formal peer review is a filter that still allows through an almost unbelievable amount of crap. And for anything big and in the public eye, there’s now an informal peer review (on blogs, etc.) with the advantage of being public. So, yes, formal peer review is a better-than-nothing Bayesian signal, more important for some types of work than for others. Ultimately, it’s telling you what 2 or 3 extremely busy experts selected by the editor thought about the work (hopefully they picked experts in the actually relevant subspecialties—though since the reviewers’ names aren’t public, you can only guess about that part). The point is that, rather than debating the metaquestion of what process was or wasn’t followed to find out what anonymous experts thought, if I’m going to comment at all, then I’d prefer just to tell you directly what I thought about the work! 🙂

    Regarding your first question: by “prove,” I really mean “do an interference experiment” (that is, do an experiment whose outcome is sensitive to both branches of the superposition). If that didn’t make sense, then come to the talk? 🙂

  4. Scott Says:

    Gerard #2: Everything you said is true! For an n-variable Boolean function, the query complexity will never exceed n, and for a randomly selected such function it will almost certainly just be n. The part you’re missing is just that there are many specific functions of great interest for which various measures of query complexity will be smaller: for example, √n or even log(n). Indeed, the most famous quantum algorithms, like Shor’s and Grover’s, are precisely based around functions whose quantum query complexities are asymptotically smaller than their classical ones.

  5. Peter Morgan Says:

    Wolfram has too little idea of it, but constructing a Hilbert space from a CM starting point is not so hard. One only needs to say “Koopman” and the Gelfand-Naimark-Segal construction.

  6. Craig Gidney Says:

    Scott #4: I think you typo’d the number of boolean variables n vs the number of possible inputs N. Either that or I’m missing something very fundamental about how many queries are needed to find the single satisfying input to a function that takes n bits as input.

  7. Scott Says:

    Craig #6: Given a Boolean function f(x)=f(x_1,…,x_n), you need at most n queries to the x_i’s to determine x and therefore f(x). f is not what we’re querying; it’s just the final result that we’re trying to compute. Does that clarify?

  8. Craig Gidney Says:

    Scott #7: Yes, that clarifies.

  9. Kevin Zhou Says:

    Thanks for saying something about Wolfram’s hype. There was a frustratingly long period where all recognized experts just refused to comment. I’m just a grad student, so in various corners online I was essentially alone in criticizing it.

  10. Rainer Says:

    at (3):
    Gerard ’t Hooft also believes that the universe is a cellular automaton.
    And he also believes in superdeterminism.
    I think both ideas are simply wrong.
    But Gerard ’t Hooft has won a Nobel prize, and so maybe I am wrong.

  11. David E Speyer Says:

    @Craig It might help to clarify that queries are allowed to be adaptive. The simplest example is the function with three inputs defined as f(0,y,z) = y and f(1,y,z) = z . This is not independent of any one of its bits, but you can always look at x first and then adaptively choose a second bit to check.
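
    (For concreteness, a minimal sketch of that adaptive strategy in Python; the oracle class and names below are purely illustrative, not from the original comment.)

      # Toy illustration: f(x, y, z) = y if x == 0 else z depends on all
      # three bits, yet an adaptive strategy never reads more than 2 of them.

      class CountingOracle:
          def __init__(self, bits):
              self.bits = bits      # the hidden input string
              self.queries = 0      # how many bits have been read

          def query(self, i):
              self.queries += 1
              return self.bits[i]

      def evaluate_f(oracle):
          x = oracle.query(0)                        # always read the first bit
          return oracle.query(1 if x == 0 else 2)    # then adaptively read one more

      for inp in [(0, 1, 0), (0, 0, 1), (1, 0, 1)]:
          o = CountingOracle(inp)
          print(inp, "->", evaluate_f(o), "after", o.queries, "queries")  # always 2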

  12. Scott Says:

    Rainer #10: No, you’re not wrong. 🙂

    (Well, if you stretch the definition of “cellular automaton” far enough, then just about anything, including our universe, could be one. But “superdeterminism” is a worthless idea, similar to the observation that you could eliminate covid by just shooting everyone dead.)

  13. Scott Says:

    David Speyer #11: Yes, thanks!

  14. Richard Gaylord Says:

    John Figueroa comment #1 – the highly flawed anonymous peer review process for theoretical physics publications essentially ended with the advent of arXiv.org, which does not review posted manuscripts. you might want to read

    http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review,

    there is also a myth regarding the use of peer review of book manuscripts. if you have the misfortune of dealing with a book publisher, you will discover that publishers use the review process not for quality control but to determine the potential profitability of a book. i have had colleagues who have had their textbook manuscripts rejected specifically because they were original and therefore unlikely to be adopted for use in courses.

    the review process in science (including obtaining the opinions of others in the academic promotion process) is not simply flawed; it is inherently corrupt. one is no more likely to filter out the garbage in a manuscript review process than one is to filter out the garbage in a democratic political election.

  15. Gerard Says:

    Scott #12

    > But “superdeterminism” is a worthless idea

    That seems like an extreme view that I’m surprised to see you advocating. If one’s goal is to know what is true then discounting theories because you find their implications unpalatable doesn’t seem like an optimal strategy.

    I don’t know much about superdeterminism and I don’t have any strong opinions about it one way or another, but it seems to me that the mere fact that the concept has been enunciated, and hence made available for analysis and criticism, makes it more than useless. This is especially true since the idea doesn’t seem obvious. While the notion that the entire future of the universe was encoded in its initial conditions was, I believe, a fairly popular view in the age of classical physics, that something similar could also occur in an apparently quantum world seems, a priori, quite surprising.

  16. Gerard Says:

    I meant to add that your analogy makes little sense to me:

    > similar to the observation that you could eliminate covid by just shooting everyone dead.

    The latter is a potential solution to a practical problem that one may reject because under a certain popular (but by no means universally accepted) set of utility assumptions the result of the solution would be worse than the problem itself. But the theory you’re comparing this to isn’t about solving a problem but about attempting to better understand the true nature of reality, which is what it is independently of your utility function.

  17. Scott Says:

    Gerard: No, you don’t understand what “superdeterminism” says. It doesn’t merely say that the whole history of the universe is encoded in its initial state, and that’s still true in QM. That would be sane, and it’s understandable that people who never looked into it would round the idea down to something sane, something that they could imagine physicists seriously arguing about. But ‘t Hooft is completely adamant that he means the insane version.

    The insane version is not merely that the initial state of the universe determines everything that happens—it’s that quantum phenomena, and specifically the Bell inequality violation, are explained by “conspiratorial correlations” in the initial conditions, which affect our brains, and the random number generators in our computers, and prevent us from ever deciding to do the experiments that would reveal that the world is actually classical.

    It might be difficult to get across to a layperson just how staggeringly terrible an idea this is (after all, hasn’t physics also revealed other weird things to be true?). For starters, though:

    (1) One could “explain” almost anything whatsoever using this deus ex machina: not just Bell inequality violations, but even superluminal communication, telepathy, precognition, etc., were those ever discovered to exist. Thus, the idea is utterly devoid of explanatory or predictive value.

    (2) ‘t Hooft is not even consistent in how he applies the absurd idea. Thus, he predicts that quantum computers will never outperform classical ones—not noticing that if they did, that (like anything else) could be “explained” by the deus ex machina just as easily as he “explains” Bell inequality violations.

    (3) Superdeterminism wasn’t even the original idea. In ‘t Hooft’s earliest papers in this direction, he simply says the world is a classical cellular automaton and doesn’t even notice the Bell inequality violation issue (!). Only after numerous critics pointed out the issue did ‘t Hooft tack on superdeterminism as a way of asserting that he still might be right.

    Just like curing covid by shooting everyone, “superdeterminism” is a cure that’s a billion times worse than the disease—and this becomes obvious the instant you look past words and names and credentials and discuss what the actual proposal actually is.

    I have no doubt that ‘t Hooft richly deserved his Nobel Prize for asymptotic freedom. But I’d like to imagine that, even if 50 Nobel laureates were insisting that squares are round and 2+2=5 (as thankfully they’re not), I’d continue the quest to tell it like it is.

  18. Anonymous Says:

    Scott 17

    What would a sane version look like? If the initial state encodes the whole history of the universe, how, other than ‘conspiratorial correlations’, can we reason about the Bell inequality violations?

  19. Gerard Says:

    Scott #17

    > The insane version is not merely that the initial state of the universe determines everything that happens—it’s that quantum phenomena, and specifically the Bell inequality violation, are explained by “conspiratorial correlations” in the initial conditions, which affect our brains, and the random number generators in our computers, and prevent us from ever deciding to do the experiments that would reveal that the world is actually classical.

    I still don’t see why any of that is necessarily “insane”. Are you so convinced that you have free will that you cannot even conceive of such a “conspiracy” being true?

    As for dismissing the theory because it can explain anything, the same could be said for the “simulation hypothesis”. If the universe is a simulation there’s no reason the programmer couldn’t have the universe obey the standard model everywhere except at 365 Mulberry St. from 3 to 3:05 PM tomorrow when everything switches to Harry Potter rules. Yet quite a few people seem to take that hypothesis seriously.

  20. Gerard Says:

    Scott:

    > But I’d like to imagine that, even if 50 Nobel laureates were insisting that squares are round and 2+2=5 (as thankfully they’re not), I’d continue the quest to tell it like it is.

    Those statements reduce to pure logic that can be verified by a computer. For me they have the highest epistemic status next to the assertion that subjective experience exists. The nature of the physical world, including time and space, cannot (at least currently) be reduced to such mechanically verifiable statements.

  21. Scott Says:

    Anonymous #18:

      how, other than ‘conspiratorial correlations’, can we reason about the Bell inequality violations?

    By just accepting that quantum mechanics is true and the state of the universe is a quantum state—precisely the simple, obvious step that the superdeterminists refuse!

  22. Scott Says:

    Gerard #19: No, you’re still not getting it. You can totally believe in determinism and in no free will (should you “choose to” … har, har), knock yourself out, without accepting the superdeterminist conspiracy theory (which additionally requires ridiculous fine-tuning of the universe’s initial conditions, with foreknowledge about human brains). The opposite of superdeterminism is not free will; the opposite of superdeterminism is just sanity.

    And yes, the simulation hypothesis is also empirically sterile, for precisely the reason you say. But at least that hypothesis doesn’t pretend to explain specific phenomena (like Bell inequality violation) that it obviously can’t. (Or if and when it does, I’ll ridicule it just the same.)

  23. Rainer Says:

    I have always wondered why so many brilliant physicists and philosophers struggle with a non-deterministic world.

    The orthodox interpretation of QM has opened the door a bit to get a glimpse at free will.
    But instead of being delighted about that and trying to open the door further, many physicists/philosophers try hard to close the door instantaneously: “No, no, please give me back my determinism!”
    And so strange theories get postulated, like superdeterminism or many-worlds theories.

    A pure deterministic world is in my view a very depressing world.
    But yes, whether or not one gets depressed by that is of course already determined at the Big Bang …

  24. Will Says:

    Gerard #19: It might be helpful to say that in this context “insane” does not (primarily) mean “obviously wrong”. Rather “insane” is used in the sense “incompatible with ordinary human life”.

    For instance, suppose I believe I am, at this moment, just seconds away from death. This belief is certainly not obviously wrong – it has been correct for some people at some times. But if I hold this belief for an extended period of time, and follow it consistently, I cannot live an ordinary human life because I will not plan more than a few seconds into the future.

    The issue with superdeterminism is that it undermines essentially all forms of scientific or otherwise plausible reasoning about the natural world, because it implies that the experiments we perform will be corrupted, not just by the usual natural phenomena or malicious forces that might corrupt an experiment, but by a malicious conspiracy that can only be described or explained as a force devoted to ensuring that this particular experiment fails, with no other consequences. Every time we observe something we could just say “the atoms in my brain conspired to direct my eyes to look at this thing at the same time that some incredibly unlikely event happened, there’s no reason to expect it to happen again.”

    The damage is much greater than from the simulation hypothesis, which one can assume will follow predictable rules except for the intervention of the simulators, who will follow predictable (though unknown to us) motivations. The simulation hypothesis only poses similar problems if combined, for example, with the belief that simulated lives have no moral value.

  25. Scott Says:

    Will #24: Thank you for saying what I was trying to say, but better! After like the 50th or 100th time that people brought up “superdeterminism” to me—every time, with the rhetorical trick of opposing it to free will, as if that (rather than the very possibility of doing science) were the anti-superdeterminists’ issue—I found it impossible to stay calm. Superdeterminism is a perfect example of what Paul Krugman calls a “zombie idea,” one that keeps shambling around no matter how many times it’s demolished.

  26. Scott Says:

    Rainer #23: OK, but one would hope that we could choose physical theories for other reasons—e.g. how well they actually explain the phenomena we experience—and then let our best theories teach us about the determinism or indeterminism of the universe as a byproduct, rather than prejudging the determinism question based on which answer we find less depressing!

  27. Gerard Says:

    Scott and Will

    > The issue with superdeterminism is that it undermines essentially all forms of scientific or otherwise plausible reasoning about the natural world,

    I don’t think that’s the case, though. If it were, superdeterminism should be easy to falsify, because we have ample empirical evidence to suggest that the scientific method is effective at explaining/predicting a wide range of phenomena.

    > Every time we observe something we could just say “the atoms in my brain conspired to direct my eyes to look at this thing at the same time that some incredibly unlikely event happened, there’s no reason to expect it to happen again.”

    You could do that, but if the observation is in fact a common phenomenon, then the probabilistic model you derive from that assumption would do a very poor job of predicting future observations, and you should reject it.

    How would your behavior change if you were convinced that superdeterminism were true? I suspect the answer is that it would not.

    Let’s consider another theory that I’m sure we all would agree to call “insane” – the theory that the Earth is flat. Now, while I certainly consider that view to be insane, I don’t necessarily consider it to be insane (or at least as insane) to postulate that the Earth exists only as a data structure in some simulation program to which adjectives like flat and round don’t even apply. So why is the first theory insane while the second is not? They both assert that the true nature of the Earth is radically different from what everyone believes it to be. The difference is that the first theory implicitly assumes that there is some experiment you could perform that would convince you that the Earth is in fact flat. The second theory makes no such claim because in it the “true nature” of the Earth is not observable from within the simulation; it can only be accessed via certain types of queries, none of which provide any information about the “true nature” of the object in question.

    So it seems to me that theories like superdeterminism, the simulation hypothesis or the many worlds interpretation of QM are not scientific in the sense that they don’t (at least at present) make falsifiable predictions about empirical observations (to further clarify I mean that they don’t make predictions which differ from those of accepted physics) but they are nonetheless not without interest because they present different possible pictures of how the world could “really be”.

    Now, I’m aware that there has been a tradition, particularly among American physicists following Feynman, of rejecting such theories out of hand, the “shut up and calculate” approach. But I didn’t think Scott subscribed to that view since he seems quite willing to discuss the many-worlds interpretation, for example.

    So it seems to me there must be some reason for singling out superdeterminism for rejection that has still not been clearly articulated in this discussion.

  28. Steven E Landsburg Says:

    Re #4: You meant to say independent of ZFC, not just ZF, no?

  29. Scott Says:

    Gerard #27: Let me try one more time—and then I’ll respectfully bow out, due to a combination of my cortisol levels and my need to prepare the SlateStarCodex talk for tomorrow.

    There are philosophical debates—for example, about the simulation argument, the Many-Worlds Interpretation, etc.—that people like to have even though it’s not clear that they’ll ever have empirical consequences for anything. And that’s totally fine, for those who enjoy philosophy.

    But then there are beliefs like the one that JFK was killed by an all-powerful worldwide conspiracy, which have the special property that no matter what evidence you raise against the hypothesis, the believer will confidently assert that the evidence was planted by the conspiracy itself—that’s how all-powerful the conspirators are! I hope you agree that these beliefs are not merely empirically sterile: rather, they’re empirically sterile in a uniquely pathological way.

    Now my position, and I think Will Sawin’s as well, is that superdeterminism is much more like the omniscient-JFK-conspiracy hypothesis than it is like the many-worlds hypothesis. Why? Because superdeterminism, just like the JFK conspiracy, requires an unbounded amount of rigging. In both cases, there are thousands of data points against the conspiracy theory: every Bell experiment, every fact about Oswald’s life. In both cases, we have to accept that the conspiracy’s tentacles reached in to corrupt every one of those data points—but, weirdly and inexplicably, only those data points, rather than all sorts of unrelated facts about the world. But as Will pointed out, if you’re allowed to posit a conspiracy to dismiss any amount of data that you don’t like, and only the data you don’t like, with no explanation as to why the conspiracy is so surgical in its effects—at that point you have to give up on science and common sense alike, which is as good a criterion for insanity as any that I know.

  30. James Gallagher Says:

    ‘t Hooft richly deserved his Nobel Prize for asymptotic freedom.

    Ahem, it was for the renormalization of Yang–Mills theories in the electroweak sector.

    Later, he did understand how it could be applied to strong interactions with asymptotic freedom, but didn’t publish his results before Gross, Wilczek and Politzer.

  31. Scott Says:

    Steven #28: By the Shoenfield absoluteness theorem, an arithmetical statement is independent of ZFC iff it’s independent of ZF.

  32. Sniffnoy Says:

    Scott #31:

    I was going to say “That’s not true, Shoenfield’s absoluteness theorem only applies to statements sufficiently low on the arithmetical hierarchy”, but apparently, no, it’s statements sufficiently low on the analytical hierarchy, so, huh, I guess that is true.

    But also I think it’s worth noting here what exactly this TM is claimed to do; i.e., it’s specifically supposed to halt iff ZF is inconsistent, not just generally be independent of ZF. Since ZF is consistent iff ZFC is, we have the equivalence of ZF and ZFC for our purposes here that way.

  33. Kevin Zhou Says:

    For what it’s worth, I completely agree with Scott on the problems of superdeterminism, and have had lots of agonizing experiences arguing about it with people who think it’s the same thing as ordinary determinism. Like, if it were, we wouldn’t have put “super” in front!

    This misconception seems to have been fueled by a spate of popular articles that (rather disingenuously) avoid discussing the unique problems of superdeterminism and frame it as just the opposite of randomness.

    It’s part of a larger problem where “intellectual” popular articles, long on big words but short on math, deliver a profoundly misleading picture. I’ve met plenty of well-meaning, intelligent people who have read these and come out convinced that Bohmian mechanics has been proven right and is only kept down by organized conspiracy, quantum computers definitely won’t work, spacetime must be made of Planck-scale pixels, MOND is “more scientific” than dark matter, virtual particles can fuel perpetual motion machines, physics stopped seeking truth when we tossed out the ether, and the theory of everything has already been found, though they’re not sure if Lisi, Weinstein, or Wolfram did so first.

    Among the few sources that don’t do this, I think this blog is the 2nd most popular in the world (Quanta magazine being 1st, Carroll’s blog 3rd), though of course all have infinitesimal reach compared to the bad sources.

  34. Metanoid Says:

    Is there an inherently natural place to draw the line between classical and quantum? The question has been nagging me lately in the context of the teleportation paradox, as it seems to be conventional wisdom of a sort that the no-cloning theorem can’t be invoked as a resolution unless cognition somehow necessarily involves quantum computation.

    However I sense a subtle equivocation there, in that even “classical” discrete-state automata like conventional computers can’t be realized by classical physics proper–since even a hand-cranked adding machine is made of solid parts like gears, rods, and springs that don’t exist in a world consisting only of classical fields and point particles.

    So which sense of classical is the truly relevant one? Classical in the sense that the time-evolution of the system is always in principle predictable and observable from the outside, or classical in the sense that the system could in some form be realized in a world where hbar=0? Granted it’s far from obvious how things like the NCT would apply even to the former case, but it still makes me uneasy given how “spooky” the issues underlying the teleporter paradox are to start with.

  35. David Griswold Says:

    I think that while it is certainly more to-the-point to criticize Stephen Wolfram’s various “breakthroughs” directly on their content, criticism of the process is also valid. His end-run around peer review isn’t in and of itself disqualifying, certainly, but the problem for me is that he seems to want to be able to present his ideas in a way that doesn’t really allow any feedback.

    What exactly is the mechanism for presenting him with measured criticism of his ideas? As far as I can tell it seems he wants to talk past the people who could criticize his ideas, and speak more to the public and popular science press, who can’t. Witness the Scientific American article, in which, when presented with the criticism that the idea that simple cellular automata can produce remarkable and highly unexpected complexity was widespread long before his various pronouncements, he replies in a content-free way: “I’m disappointed by the naivete of the questions that you’re communicating,” “I deserve better.”

    That’s not a substantive response. To say that Conway didn’t see that cellular automata could produce virtually boundless complexity is ludicrous; I and legions of others have played with Life for decades since the original Scientific American article that popularized it, with exactly that idea in mind. I have seen no evidence that Wolfram has any ability to absorb and respond to criticism; he just brushes it off and continues proclaiming his greatness.

  36. ppnl Says:

    My first thought on superdeterminism is that it is poorly motivated. If you hate the loss of determinism enough then almost any alternative will be preferred.

    My second thought is that superdeterminism is just a hidden variable theory with a pathological twist. If the theory is local and deterministic then local particles must contain the information needed to decide how to be measured. Well, any classical theory can do this. But that hidden information must be completely hidden even in principle. Worse, to recreate the Bell inequality violations, the particle needs to have local knowledge of a distant non-local measurement of its correlated twin. How can it have that information? Well, if you go back in time far enough then all the particles of all the measuring devices and observers were local to each other, even if you have to go back to the Big Bang. From there they can calculate that in the distant future they will all be part of a test of Bell’s inequality and so conspire to violate Bell’s inequality. But how much information do you have to preserve locally in order to violate Bell in all aspects of future interactions? And for God’s sake, why?!? How much do you have to hate the loss of determinism in order to make this seem reasonable?

    And finally, is there enough local data storage available to make this work? I believe the limitations on local information are the source of Gerard ’t Hooft’s claim that quantum computers will fail. But that failure will mark the failure of quantum correlations: Bell’s inequality will begin to be obeyed, contradicting the claims of QM.

    People who believe in superdeterminism might want to try to design a cellular automaton that displays superdeterminism. I think they will have difficulties.

  37. Anonymous Says:

    Don’t you find holding a talk in virtual reality a rather silly kind of skeuomorphism? Why go to the trouble of digitally reproducing all the annoying limitations of a physical meeting (limited space, not being able to hear from a distance) instead of doing the talk as a regular video stream?

    Or, since by your own admission, you’re not a very good speaker (“[here is the] transcript (which I read rather than having to listen to myself stutter)”), why not just release a paper?

  38. Bunsen Burner Says:

    Can a superdeterministic theory exist, though? I know it’s been asserted that it provides a loophole to Bell’s Theorem, but I’ve never seen a formal result for this. After all, it also has to be consistent with Kochen-Specker, PBR, and so on. Are superdeterminism proponents really asserting that all possible measurements in our universe will confirm QM even though QM is actually wrong? Intuitively I find it hard to believe that such a theory is possible. Think of all the ways you can create your apparatus; something in the universe’s initial condition will always make the apparatus give certain results and not others?? Surely that would make all random number generators suspect, and surely that could be tested?

    Maybe the assertion is just that humans are incapable of creating an apparatus that can disprove QM, or even conceive of one? This still seems to be quite problematic. The initial conditions of the universe are such that they force evolution to evolve minds that have cognitive limitations such that they can never even conceive of an experiment that violates QM. Very bizarre. Only human minds? Would the superdeterminist assert that any alien minds would also be limited in this way? What if we create real AI?

  39. David E Speyer Says:

    I’m trying to figure out why I can’t embed Shor’s problem into an unstructured problem. Here are two ideas:

    (1) Let Q be a quantum circuit to do period finding. I certainly can put non-periodic data into Q. Consider the problem of, given any string x, outputting the most likely output of Q on the string x. It seems to me that a quantum computer could solve this problem by running Q many times, but a classical computer that could do this could do period finding. Maybe the problem is that, with general input, Q will not have any high probability output, so you won’t get any repeated hits when you repeat the experiment? But maybe there is some other way of using Q?

    (2) Let x be a string of length N and consider the problem of finding the smallest divisor d of N such that x is d-periodic. Now there is always an answer — we can take d=N. Shor’s algorithm can do this (I think?) so your result says that there is also a classical algorithm for this. I find that surprising, but maybe I shouldn’t?

  40. Scott Says:

    Sniffnoy #32: Yes, thanks, that would’ve been a simpler argument! 🙂

  41. Scott Says:

    Metanoid #34: I don’t know the answers, but those are excellent questions. Indeed, I like them so much that 7 years ago they inspired me to write an 85-page essay (“The Ghost in the Quantum Turing Machine”). 🙂

  42. Scott Says:

    David Griswold #35: You raise a good point regarding peer review. Maybe the point is just that, having been on the front lines of the war against popular-science hype for 15 years, I’ve resigned myself to a world where, even when there is peer review, many people will simply tone down their claims by the minimum amount needed to get them published, and then go right back to saying whatever the hell they want about their work in their press releases, TED talks, etc., which is all the public will pay attention to anyway, and which leaves blogs like this one as one of the last lines of defense. Having said that, the (vanished?) world that you describe sounds like it had some real advantages! 😀

  43. Scott Says:

    ppnl #36: Extremely well said!

    While it ought to be possible, in principle, to design a cellular automaton that would implement superdeterminism, such a CA would be unbelievably contrived and ugly (e.g., would it need an unbounded amount of information per cell?). This probably explains why, for all their philosophizing, I’ve never once seen the superdeterminists put even a concrete toy model forward.

    (This is also my answer to Bunsen Burner #38.)

  44. Scott Says:

    Anonymous #37:

      Don’t you find holding a talk in virtual reality a rather silly kind of skeuomorphism? Why go to the trouble of digitally reproducing all the annoying limitations of a physical meeting (limited space, not being able to hear from a distance) instead of doing the talk as a regular video stream?

    LOL, I had to look up that word.

    The truthful answer is that this is my first time ever trying it!

    Yes, it does seem a little silly, but maybe also fun? Since the lockdown started, I’ve attended dozens of Zoom talks, so I can report that compared to RL talks, the central thing they’re missing is a hallway—a place where people can form and join little impromptu conversation clusters before and after (and even during…) the talk. I’d love to know whether “skeuomorphism” could help replicate that! Hopefully I’ll know more after I’ve tried it!

  45. Scott Says:

    David Speyer #39: Yeah, you’ve put your finger precisely on one of the main exercises one needs to go through to gain enlightenment about the nature of quantum query complexity!

    For (1), yes, exactly as you said, once the data becomes non-periodic you violate the promise that Shor’s algorithm needs to produce a single output w.h.p. In some sense, the whole upshot of the Beals et al. result (D(f) = O(Q(f)^6)) was that there’s not going to be any clever way around that.

    For (2), let x be a random string of length N/2, and consider xx. Then almost certainly d=N/2, but if I flip just a single random bit, then almost certainly d=N. But detecting that single random flipped bit requires (at least) a Grover search. Therefore quantum can give at most a quadratic speedup.
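
    (To make the example in (2) concrete, here is a brute-force sketch of the problem, purely as an illustration; the helper below just checks each divisor directly.)

      # Toy illustration of the problem in (2): find the smallest divisor d of
      # N = len(x) such that x is d-periodic, i.e. x[i] == x[i % d] for all i.
      import random

      def smallest_period_divisor(x):
          N = len(x)
          for d in range(1, N + 1):
              if N % d == 0 and all(x[i] == x[i % d] for i in range(N)):
                  return d

      half = [random.randint(0, 1) for _ in range(16)]
      xx = half + half
      print(smallest_period_divisor(xx))   # typically N/2 = 16 for a random half
      xx[random.randrange(len(xx))] ^= 1   # flip a single random bit
      print(smallest_period_divisor(xx))   # typically jumps to N = 32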

  46. David Griswold Says:

    It seems to me that superdeterminism in some sense simply doesn’t matter even if it is true. I personally have no problem with a deterministic universe and lack of free will, so that is not a valid objection; we don’t choose theories because we like or don’t like the consequences.

    But the point of science is to make predictions, either about past events we haven’t examined yet, spacelike events we can’t have gotten information about yet, or future events. If superdeterminism adds a “conspiracy” to uphold Bell’s theorem, and all predictions that Bell’s theorem will be upheld are thus confirmed, then the predictions still work, and Bell’s theorem is still required to make those predictions, *whatever* the reason.

    In other words, Bell’s theorem still acts like an immutable part of the laws of nature. If superdeterminism *specifically* conspires to uphold Bell’s theorem, then Bell’s theorem is still baked into the world, in that very specific way, and any complete superdeterministic theory will have to explicitly require somewhere that events that don’t uphold Bell’s theorem do not occur. Measurements don’t have to involve conscious observers: any interaction that *occurs*, observed or not, will still obey Bell’s theorem, so it is simply a difference that doesn’t make a difference.

  47. Metanoid Says:

    Scott #41: I knew of your paper but had not previously read it. The “gerbil objection” seems pretty clinching unless one either outright rejects Church-Turing or allows for actual p-zombies.

    What is the physical motivation to prefer causal arrows that originate at the subjective moment of choice? Without one it all seems a bit like using GR to defend the Tychonian model of the solar system.

    Why does it repeatedly use free will as a proxy for all things “spooky” including continuity of identity, when it was the latter concern–certainly related, but still quite distinct–that seems to have been the primary motivation for writing it?

  48. Joshua B Zelinsky Says:

    @Scott #43

    ” This probably explains why, for all their philosophizing, I’ve never once seen the superdeterminists put even a concrete toy model forward.”

    But this is for the obvious reason that the superdeterministic setup of the universe prevents any such model ever being produced.

  49. ppnl Says:

    Scott #43

    Yes, I think unbounded local information will be needed. And that creates problems with the Bekenstein bound. Up until now we have only had to worry about high-energy colliders accidentally creating world-ending black holes. Now we have to worry about quantum computers creating black holes because of the universe’s pointless and stupid attempt to maintain the illusion of QM. Unbelievably contrived and ugly? Yeah, I think so.

    Anyway I think it is the recognition of the limits on local information that caused Gerard ’t Hooft to predict a failure of quantum computers.

  50. David Griswold Says:

    Scott, as a follow-on to my last comment, I would think that a quantum complexity theorist is in kind of a perfect position to formally prove that a superdeterministic QM is unreasonably more complex than plain QM, if, as ppnl and you point out, an unbounded or exponentially growing amount of information must be tracked in the underlying “true” theory (and probably computation performed as well, since the theory must actually compute, for each decohered measurement, what decision is required to uphold Bell’s theorem for all eventual observers), compared to unadorned QM. That seems like a solvable problem.

  51. Eric Says:

    Would’ve loved to see your talk, Scott, but sadly didn’t get in in time and the lobby mechanism seems to have really struggled. Was there a recording made?

  52. Mateus Araújo Says:

    Bunsen Burner #38:

    It is trivial to construct a superdeterministic model that can reproduce Bell correlations. You just need to push the correlations into the distribution of hidden variables. I’ve done it here, for example, but this has been known for decades. I don’t think anybody bothered to publish such a thing because it’s pointless.
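
    (For concreteness, here is a minimal toy version of that trick, my own sketch rather than the linked construction: the “hidden variable” is just a prerecorded pair of outcomes whose distribution is allowed to depend on the measurement settings, which is exactly the conspiracy.)

      # Toy "superdeterministic" reproduction of the singlet correlations:
      # the hidden variable lam = (A, B) is drawn from a distribution that
      # already depends on the settings (a, b) -- the conspiratorial step --
      # and the "local measurements" merely read off the prerecorded outcomes.
      import math, random

      def sample_hidden_variable(a, b):
          c = math.cos(a - b)
          outcomes = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
          weights = [(1 - A * B * c) / 4 for A, B in outcomes]   # quantum probabilities
          return random.choices(outcomes, weights=weights)[0]

      def correlation(a, b, trials=100_000):
          return sum(A * B for A, B in
                     (sample_hidden_variable(a, b) for _ in range(trials))) / trials

      # CHSH with standard angles: a non-conspiratorial local model obeys |S| <= 2.
      a1, a2, b1, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
      S = (correlation(a1, b1) + correlation(a1, b2)
           + correlation(a2, b1) - correlation(a2, b2))
      print(abs(S))   # approaches 2*sqrt(2) ~ 2.83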

    On the other hand, constructing an actual superdeterministic theory, one that would take an initial state and evolve it through dynamical equations, is extremely hard, probably even humanly impossible. The problem is that you would need to be able to encode the conspiratorial correlations into this initial state in such a way that they would show up in an experiment afterwards, and the dependency of the experiment on the initial state can be extremely complicated. For example, you would need to encode an initial state that would determine the bits appearing in a particular encoding of Back to the Future, conspiratorially correlated with the outcomes of the loophole-free Bell test done at NIST. It will never be done.

  53. Bunsen Burner Says:

    Mateus #52

    Thanks, that’s quite illuminating.

  54. Mateus Araújo Says:

    I feel I have to defend peer-review here. First of all, it is actually useful for filtering out the garbage. To give a concrete example, Joy Christian has been trying for more than a decade to publish his denial of Bell’s theorem in a reputable journal, without success. It has been rejected over and over again, by several different referees working for several different journals. It’s the system working as it should. He recently managed to get it published in Royal Society Open Science, despite the referees explaining very well why it’s nonsense (we know this because the reports are public), which only shows what kind of crap this journal is. I have myself rejected several nonsensical papers from other authors. Of course passing peer-review is no guarantee of correctness (nothing is), but it dramatically increases the likelihood.

    Secondly, it is very useful for non-experts to know whether a work passed peer-review. For example, crazy people try to add their crazy theories to Wikipedia all the time. The overwhelming majority of the nonsense can be discarded simply by noting it hasn’t been peer-reviewed. It is then much easier to deal with the nonsense that has been peer-reviewed. In these cases the “peer-review” happened in a predatory journal, or there will be a Comment explaining why it’s wrong, when it was published in a serious journal.

    Thirdly, suppose you are a researcher who wants to use a result from another field, and you either don’t have the background to assess its correctness, or you don’t have the time to go through the details of the proof (let’s be realistic, who has?). If I see that it has been published in a reputable journal, that’s good enough for me. For example, right now I’m working on a paper that uses Anup Rao’s concentration bound. The argument makes sense to me, but I’m definitely not an expert, and it’s a nontrivial piece of work. I saw that it was published in the SIAM Journal on Computing, and thought great, I’m using the bound (incidentally, the published version remarks that a referee found a mistake in the original proof).

    (Unrelated rant: why do computer scientists seem to completely ignore the concentration bound, and focus only on parallel repetition theorems? You’re never going to win all the games in any remotely realistic scenario! Bounding that probability is just irrelevant for physics; you really need to get a bound on the probability of winning slightly more games than you would expect from the Bell bound.)

  55. ppnl Says:

    David Griswold #50,

    Yes but what do you mean unreasonable? To me it seems hopelessly Kafkaesque from the start. But if you hate the loss of determinism enough then anything can seem reasonable.

  56. Scott Says:

    Mateus #54: OK, you raise some good points. Yes, submitting for peer review would’ve been a good way for Wolfram to learn in advance what people will now write on the Internet. It’s just that one doesn’t need to spend too much time critiquing process when there’s so much to critique in the outcome! 🙂

    I’m a big fan of Anup’s concentration bound, which actually solved an open problem that I had raised! My wife Dana could answer your question better than me, but briefly, I think computer scientists care about parallel repetition theorems mostly for purposes of PCP amplification, so they’re happy with whatever works well enough for that purpose.

  57. Scott Says:

    Eric #51: Sorry about that! Yes, I believe there was a recording. In any case the PowerPoint slides are now linked from the update at the top of the post.

  58. ppnl Says:

    Mateus #52

    I liked the blog post. It gets to the heart of the matter.

    Superdeterminism is the claim that the macro-universe looks classical and deterministic but if you look deeper you see it is actually quantum and nondeterministic. But if you look deeper still it becomes deterministic again.

    But that means that the underlying deterministic process has to simulate a quantum process. That cannot be efficient in either time or space complexity. For example, if I am running a quantum computer to factor a million-digit number, it is actually a classical program running on a deeper deterministic process. The underlying classical computer must be vastly more powerful than the quantum computer it is simulating.
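
    (For a rough sense of scale, a sketch of the memory needed just to store a full n-qubit state vector in brute-force classical simulation, assuming ~16 bytes per complex amplitude and nothing cleverer than keeping the whole vector.)

      # Rough memory cost of brute-force classical state-vector simulation:
      # 2**n complex amplitudes at ~16 bytes each (complex128).
      def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
          return (2 ** n_qubits) * bytes_per_amplitude

      for n in (20, 30, 40, 50):
          print(n, "qubits:", state_vector_bytes(n) / 2**40, "TiB")
      # 50 qubits already needs ~16,000 TiB just to hold the state.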

    And how do the parts of the deep classical computer know that they are part of a quantum computer? It seems that they must store a record of their past interactions in order to correlate their current interactions so as to produce, in aggregate, results consistent with valid factors of my million-digit number.

    That’s just insane, but worse, it is totally pointless.

  59. ppnl Says:

    Scott,

    Any good questions on quantum Darwinism at the meeting? I admit I don’t really get the point of it. You look at the world around you and thus carve off the coherent part of it, and the decohered part will look very classical. What more do you need?

    I have always been fascinated by the idea of using a quantum cellular automata to explore these issues.

  60. Scott Says:

    ppnl #59: No, I didn’t bring up “quantum Darwinism” and there were no questions about it. I’m sure there’s good work that’s been done under that slogan, but I don’t sufficiently understand it, or how it differs from plain old decoherence, so I’m not a good person to answer questions about it.

  61. DR Says:

    I was thinking that if people wish to push these Turing machine engineering problems in a slightly different direction, one could aim for a TM that halts iff BZ is inconsistent. This is, roughly, ZF – F (or less, minus Replacement and some quantifiers becoming bounded). This system of axioms is sufficient for everything in ‘generic’ mathematics. That is, everything not including axiomatic set theory and Friedman’s Borel Determinacy theorem. If BZ is inconsistent, then so is ZF (and hence ZFC). Or one can try variations on this theme, like taking NBG, with its finite axiomatisation (it’s not clear to me if this helps reduce the number of states, though). There will of course be some overhead, but trimming down the contentful part of the TM while still giving a `halts => math is inconsistent’ result seems interesting to me. At least mildly.

  62. Scott Says:

    David Griswold #50: I think it would depend enormously on how you phrase the problem! On the one hand, the BPP≠BQP conjecture at the core of quantum complexity theory should already suffice to rule out some versions of superdeterminism, at least those that are concrete enough to say anything definite at all. On the other hand, don’t underestimate superdeterminism’s near-infinite power, as the deus ex machina par excellence (pardon my Latin/French mixture), to accommodate any demand and wriggle out of any no-go theorem, through yet more conspiratorial rigging of our brains by way of the universe’s initial conditions!

  63. mjgeddes Says:

    Interesting talk Scott!

    Based on this, there’s been a slight reduction in my confidence in ‘Many Worlds’ Interpretation, dropping from probability of 55% that it’s true, to new probability of 45%.

    I am also now slightly more confident about the metaphysical musings I posted previously, where I suggested that what it *means* for something to exist is related to complexity measures of some sort.

    I liked the ‘Interpretation Decision Tree’, nice and straight-forward. I do think you made a slight dent in the puzzle, but it’s still just very very frustrating that we still can’t seem to get a definitive resolution here.

    I think perhaps we should think of the wave function as ‘Information’, and the confusion is arising because information underpins both physics and cognition – so there’s a blurring of some kind between epistemology and ontology.

    I think ‘Information’ *may* be the fundamental reality, and physical reality is in some sense ‘information squared’, but I’m not confident I’m making sense.

  64. William Gasarch Says:

    Finding smaller and smaller TM’s whose behaviour is ind of set theory: is this
    1) a fun game but not important mathematically
    2) important mathematically because … (please fill in)
    3) not important mathematically yet but might be because … (please fill in, might be that
    if we get the number of states very low then …)

    I’ve thought (1) in the past but I am happy to be told otherwise–if this email gets
    through your spam filters, which sometimes they do not. No capital letters or exclamations so should get through, though the question of which of my emails gets filtered may be ind of ZF, though not ZFC.

  65. Rainer Says:

    I am not convinced that superdeterminism is insane while “normal” determinism is sane.
    In fact, I think “normal” determinism is insane too.
    Normal determinism says that everything (in particular what I’m just writing here in this blog) has been determined from the Big Bang on.
    That’s so highly improbable and absurd that only an anthropocentric argument works: “it is as it is, because it happens that I live in this n-th random trial of the universe”.
    But with this argument quantum conspiracy can be included easily.

  66. I Says:

    Thanks for the talk, Scott. If anything, your oration seems to have improved. Anyway, a quick question: do you think the universe is computable? And what of ‘other realities’, if you believe in them; e.g., would the world of Forms consist of computable structures?

  67. Scott Says:

    William Gasarch #64: Let me give you my own reasons for trying to find smaller and smaller TMs whose behavior is independent of ZFC.

    (1) If nothing else, it’s a fun game, and certainly no worse than many other games of trying to optimize some quantity that people play in math!

    (2) I actually care whether the value of (say) BB(6) is already independent of ZFC, or whether you need to go all the way up to BB(200) or whatever before you get independence. This is one way to try to make more quantitative an extremely old question, of whether the Gödel incompleteness phenomenon is confined to rare and bizarre reaches of math, or whether it rears its head even for questions that might “naturally arise” in mathematical research. (I.e., the question that Harvey Friedman made his life’s work—it’s probably not by coincidence that Harvey got extremely interested in our project!)

    (3) More broadly, there’s a question about the origin of life. Bear with me here! One wants to know: just how unlikely is it that a self-replicating molecule, like RNA, would arise from the primordial soup? Just how many planets do we need in the universe, before it’s likely to happen on at least one of them? A very abstract mathematical analogue of that question would be: if I throw together Turing machines randomly, just how long will it take before I get a machine that does something of genuine mathematical interest? The consistency of ZFC is not the only question of genuine mathematical interest, but it’s certainly one of them… 🙂

  68. Scott Says:

    I #66: I think that the physical Church-Turing Thesis—by which I mean, the statement that any device that you can build in the physical world, to whatever extent it behaves in a predictable way at all, can be simulated (given input data that it’s actually possible to collect) by a Turing machine to any desired precision—remains on extremely solid ground. So in that sense, yes, I think “the universe is computable.” On the other hand, I also believe quantum mechanics, according to which amplitudes form a continuum, and storing even a single quantum state to unbounded precision would take infinitely many bits. But I don’t count that as “real” uncomputability, since amplitudes are never directly observable (they’re only used to calculate probabilities), and since they’re governed by a linear differential equation.

    I have no problem with ascribing some abstract, Platonic existence to universes where you could solve the halting problem or other uncomputable problems, any more than I have a problem with ascribing a Platonic existence to e and π.

  69. Rainer Says:

    Scott #68:
    How does your device simulate “Process 1”, i.e., the measurement selection and the wave collapse?

  70. Gerard Says:

    Scott #67

    > One wants to know: just how unlikely is it that a self-replicating molecule, like RNA, would arise from the primordial soup?

    That seems like only a small part of the real question, which is how unlikely is it that organisms as complex as humans would arise. Just based on the information content of the human genome it seems to me extremely likely that this probability is the inverse of some super-astronomical number (i.e. one that far exceeds the cardinality of any set of objects in the observable universe). Further, it seems very likely that finding such solutions constitutes an NP-hard search problem. Since in many years of research on genetic algorithms, they have never been found to solve NP-hard problems efficiently, I conclude that our existence only makes sense if a virtually infinite number of trials were available.

  71. Bunsen Burner Says:

    Scott #68

    So what do you think about the complexity bounds on simulating any physical device? Do you think every device can be simulated by a TM running in a particular complexity class, or do you think the complexity is unbounded?

    How would your simulation deal with singularities, or geodesic incompleteness say. Or do you think these are impossible to physically exist?

  72. Scott Says:

    Rainer #69: As long as the device is allowed to be a randomized Turing machine, simulating the Born measurement rule is trivial. Even if you force the Turing machine to be deterministic, it can still calculate a complete list of probabilities, leaving only the final draw to be done with (say) a spin of a roulette wheel. Either way, I personally see the introduction of randomness as a relatively minor step, and I strongly prefer to reserve “falsifying the Church-Turing Thesis” for possibilities like being able to solve the halting problem in the physical world. (To do this, we simply need to phrase the Church-Turing Thesis appropriately, for example in terms of randomized Turing machines.)
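
    To make the “compute the probabilities, then do one final draw” step concrete, here is a minimal Python sketch of Born-rule sampling from a finite list of amplitudes (just an illustration of the idea, nothing more):

        import random

        def born_sample(amplitudes):
            # The deterministic part: square the amplitudes and normalize.
            # This is what a (deterministic) Turing machine can compute.
            probs = [abs(a) ** 2 for a in amplitudes]
            total = sum(probs)                  # ~1 for a normalized state
            probs = [p / total for p in probs]
            # The one "roulette-wheel spin": a single uniform random draw.
            r = random.random()
            cumulative = 0.0
            for outcome, p in enumerate(probs):
                cumulative += p
                if r < cumulative:
                    return outcome
            return len(probs) - 1               # guard against rounding error

        # e.g. born_sample([2**-0.5, 2**-0.5]) returns 0 or 1 with probability ~1/2 each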

  73. Scott Says:

    Bunsen Burner #71: Given our current knowledge, I think the hypothesis to beat is the Quantum Extended Church-Turing Thesis—i.e., the hypothesis that everything that can be feasibly done in the physical world, can be simulated in BQP (i.e., polynomial time on a quantum computer). At any rate, if there’s any counterexample to this, it looks like it will need to come from quantum gravity (see e.g. the recent work of Bouland, Fefferman, and Vazirani about black holes and AdS/CFT).

    Singularities, and incomplete geodesics, simply represent the places where classical general relativity breaks down. They’re part of why we know GR has to be superseded by a quantum theory of gravity, which should (among its other tasks) cure the singularities, presumably via new effects at or near the Planck scale.

  74. Bunsen Burner Says:

    Scott #73

    Can you please elaborate what you mean by feasibly done? Do you mean by a person, or are you talking about any physical process? If the latter how would you characterize the weather? Do you think it’s possible to simulate and predict the weather in BQP?

  75. Scott Says:

    Bunsen Burner #74: By “feasible,” I mean using time that scales polynomially with the size of the system being simulated. I was careful to add a clause about “to whatever extent the system behaves in a predictable way at all” to exclude cases like the weather, which are typically hard to predict simply because
    (1) the systems are chaotic, and
    (2) you can’t possibly learn the initial state to the requisite precision.
    In other words, if the difficulty of prediction has nothing to do with computational complexity, and is just about gathering the input data, then I consider it not a fair fight and not a counterexample to the quantum ECT.

  76. Filip Says:

    Scott #68: I don’t understand the physical CT thesis (“finitely realizable physical system”?!) nor your explanation (“to whatever extent”?! “given input data”?!).

    How do you simulate Google’s Sycamore chip?

  77. Filip Says:


    Edit to my comment #76: I was confused. Doing the same thing that IBM proposed would suffice.

  78. Scott Says:

    Gerard #70:

      That seems like only a small part of the real question which is how unlikely is it that organisms as complex as humans would arise.

    Once you have a self-replicating molecule, Darwinian selection takes over, which can massively raise the probability of interesting behavior—by how much, no one knows how to calculate from first principles. Genetic algorithms actually do work (I’ve coded them up and messed around with them), for limited purposes like finding decent approximate solutions to NP-hard optimization problems. And of course, the 4.6-billion year history of the earth is a vastly bigger stage for a genetic algorithm than any that was ever tried, or is likely to be tried, in a computer.
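
    For what it’s worth, here is a minimal sketch of that kind of genetic algorithm (a toy example for MAX-CUT, which is NP-hard; it only aims at decent approximate cuts, exactly as described above):

        import random

        def ga_maxcut(edges, n, pop_size=60, generations=200, mutation_rate=0.02):
            # A bitstring assigns each vertex to one of two sides; fitness = edges cut.
            def fitness(bits):
                return sum(1 for u, v in edges if bits[u] != bits[v])

            pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]          # keep the fitter half
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n)          # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [bit ^ (random.random() < mutation_rate) for bit in child]
                    children.append(child)
                pop = survivors + children
            best = max(pop, key=fitness)
            return best, fitness(best)

        # e.g.: n = 30
        #       edges = [(i, j) for i in range(n) for j in range(i + 1, n)
        #                if random.random() < 0.2]
        #       ga_maxcut(edges, n)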

  79. Scott Says:

    Rainer #65:

      I am not convinced that superdeterminism will be insane but “normal” determinism will be sane.

    Even if normal determinism is “insane,” it’s a form of insanity that’s been part of humanity’s philosophical environment for thousands of years, and to which we’ve therefore developed some immunity. Whereas superdeterminism is a new, much more virulent insanity against which many otherwise intelligent people seem to have no antibodies. 😉

  80. Craig Gidney Says:

    Finding out superdeterminism was true would be akin to finding out that the universe had a holomorphic-esque property allowing you to infer the entire state of the universe from the state of a local spacetime chunk (as in Devs). To me it just seems like the constraints you would have to place on the state of the universe to make it work are fundamentally incompatible with complex phenomena like computers and physics experiments. The search for an initial state with the property that in its future the Bell inequalities were satisfied across all of space and time would simply fail and yield the empty set.

    The problem really comes down to the fact that there are asymptotically more events in spacetime than there is information capacity in the initial state. Superdeterministic systems are hugely overconstrained, so it’s unlikely that a solution exists unless the local-predicate-across-all-of-spacetime that you care about is being enforced by the mechanical rules of the universe instead of by the initial state being a particular way.

    You could work around this problem by allowing unbounded information into the initial state, or capping the amount of time you care about (e.g. only until heat death), but those workarounds create the vacuous kind of theory that Scott is rightly mocking.

  81. Scott Says:

    Metanoid #47: In my way of thinking, I suppose the connection between free will and continuity of identity is as follows. If there’s no “free will” (or more precisely, Knightian unpredictability), then given a sufficiently advanced brain-uploading technology, there’s no continuity of identity either, since you just need to make copies of the code for predicting everything that the person in question will do, and you’ve split their identity. If you want an unsplittable locus of identity, then it should be physically impossible to make a perfect copy of it without destroying it, so it should remain Knightianly unpredictable by any means other than iterating it forward, which is all that’s meant in GIQTM by a system’s being “free.”

  82. Ronald Monson Says:

    Scott, would I be right in guessing that when you say that that TM’s behaviour is independent of ZF, you actually *believe* that its behaviour is therefore fundamentally unknowable (assuming ZF’s consistency, which presumably would also be part of the belief)?

  83. Tommaso Says:

    Hi Scott, on SciRate there is this paper that seems to attract attention:

    https://arxiv.org/pdf/2005.03791.pdf
    The Power of Adiabatic Quantum Computation with No Sign Problem
    Matthew B. Hastings
    We show a superpolynomial oracle separation between the power of adiabatic quantum computation with no sign problem and the power of classical computation.

    So that means that D-Wave machines *can* actually solve NP problems, right?

    (joking aside, can you see any useful implications of this? Useful as in “here is a somewhat-natural problem that can be efficiently solved by an adiabatic QC”)

    Thanks

  84. Bunsen Burner Says:

    One other thing I don’t get about superdeterminism is if the correlations between particles are now all in the initial conditions, wouldn’t we expect them to disappear after a while? In particular, what does it even mean for there to be correlations between particles in QFT? Particles can be created and destroyed after all. The relevant dynamic equations, namely the Standard Model + General Relativity, seem to me to be too complex and too non-linear to allow highly fine-tuned correlations in the initial conditions to exist for all time.

  85. I Says:

    Scott #68

    Re continuity: do you accept Buridan’s principle? That is, that choosing between two options, when there is a continuum of intermediate states you must traverse, is impossible in bounded time?

    Also, aren’t your comments about how every feasible physical system has an effective BQP simulation almost the claim “BQP is not NP complete in our universe”?

  86. Hans Wurst Says:

    The so-called “Buridan’s Principle,” as well as Zeno’s paradoxes, may not have been meant to be descriptive of actual experience. They may have merely been meant to show that knowledge through the senses is not always in accord with knowledge through pure thought.

  87. Scott Says:

    Ronald Monson #82:

      Scott, would I be right in guessing that when you say that that TM’s behaviour is independent of ZF you actually *believe* that its behaviour is therefore fundamentally unknowable (assuming ZF’s consistency which presumably would also be part of the belief).

    No, I wouldn’t go that far. I’m quite confident (if not absolutely certain) in Con(ZF), Con(ZF+Con(ZF)), etc., in which case the Turing machines encoding these statements all run forever.

  88. Scott Says:

    I #85: To whatever extent I understood your questions, no and no. With a simple use of randomization, Buridan’s ass need not starve, and physics in BQP is a completely different hypothesis from NP not in BQP.

  89. fred Says:

    Scott, did you actually wear a VR headset?
    Or was it VR in a broader sense?

  90. Scott Says:

    fred #89: No, I didn’t wear a headset, although apparently it’s possible to with Mozilla Hubs. It was “VR” only in the sense that you have an avatar that walks around in a 3D room. Which was indeed massive overkill for a lecture like this one, but interesting to try once. 🙂

  91. ppnl Says:

    Gerard #70

    I don’t think the evolution of human-like complexity is at all improbable given a stable environment that lasts long enough. I think this is proven by how fast life appeared and developed on earth. If we were the result of a Boltzmann-brain-like improbable event then you would not see such steady progress.

    The information content of our genome just isn’t that large. I could store my genome on my hard drive a thousand times over. When they sequenced the human genome one of the surprises was how few actual functional genes were there. Most of it is junk, the equivalent of the broken and deleted files on my hard drive.

    So I just don’t think human complexity is gated by evolutionary improbability. It might be gated by astrophysics however. I don’t expect human complexity to be common on any humanly reasonable scale of time and space. But on a cosmic level I would not be surprised if it were very common.

  92. ppnl Says:

    Rainer #65

    Determinism seems to be wrong but I don’t see how it is insane. In fact it is a very useful concept even while being wrong.

    In fact one could argue that it makes more sense than the weird nondeterminism that we seem to have. We replace “it is as it is, because it happens that I live in this n’th random trial of the universe” with “it is as it is, because it happens that I live in this branch of the quantum multiverse.”

    People accept superdeterminism exactly because this seems perverse to them. It shocks them into retreating to determinism.

    At first sight the universe is a scary place with things happening for no apparent reason. You invent gods and demons and throw virgins into volcanoes to appease them. Then you start to see patterns. Rocks fall with a predictable path. This pattern is so pervasive that you can use it to predict and explain the movement of planets and comets with wild precision. And those deterministic patterns give you the power to make steel and plow fields and understand the world around you. No more virgins need die.

    Then you find that that determinism is an illusion built on indeterministic chaos and nobody can explain how the illusion works.

    Superdeterminists really just want to make the world safe for virgins.

  93. I Says:

    Scott #88

    1) Your stance on things seems pragmatic: if a flaw in a procedure only prevents us from doing some task in an infinitesimal fraction of cases, it’s not a problem. So would you say the arbiter problem is not a problem, even though there’s no theoretical solution due to Buridan’s principle?

    2) Sorry about the “BQP is not NP complete in our universe thing”, that didn’t make sense. Take two: you’re saying any reasonable physical computation can be simulated in BQP. Which seems totally fair.

  94. Sniffnoy Says:

    Scott #88:

    While obviously the Buridan’s Ass thought experiment is really old, the term Buridan’s Principle was AFAIK coined by Leslie Lamport in this paper (see also this older one). (Note: the actual download links for the papers are off to the side…)

    As Lamport points out, just adding an “OK, if the options are evenly balanced, randomize” branch doesn’t get rid of the fundamental problem, because now you have some point where your decider stalls between randomizing and not randomizing! (Although I’m assuming this is what you meant; if you meant the other possible meaning of “simple use of randomization”, well, that maybe works, I don’t know, but I’ll get to that in a bit.)

    I guess I didn’t state what the fundamental problem is, but it’s already been mentioned by I #85: You have a decision system which is supposed to land in one of a finite number of discrete states; but this decision system lives in a world of continuous physics, and therefore must itself operate in a continuous manner. Therefore, there has to be some intermediate point at which it will fail to reach a decision, since you can’t continuously map a connected set onto a disconnected set. And if you’re not on that boundary but just near it, then (I guess? Not sure he does this formally) it may not take forever but it will take longer than the time available.

    So, as Lamport says, even though digital electronic components (which are what he’s interested in) are, as the name suggests, supposed to be digital, the fact that they ultimately have to operate by analog means ensures that there will always be some inputs that cause glitches, where a component doesn’t return a clean 0 or 1, or at least fails to do so by the next clock tick when it’s expected.

    The thing I note Lamport doesn’t consider, even though he does talk about how adding noise can’t change this, is that he doesn’t talk about true randomness. Now it seems this shouldn’t make a difference, just on the general principle that deterministic classical systems, and ones with true randomness, shouldn’t be so different. But it does mean the argument is not so simple. Because while you can’t continuously interpolate between two decisions (within the discrete space of decisions), you can continuously interpolate between their corresponding delta distributions, obviously. (Kind of like spontaneous symmetry breaking if I understand correctly; the continuity is maintained, not in the individual worlds that result, but in the whole ensemble.) And if this is what you meant by “a simple use of randomization”… maybe that works? IDK? He doesn’t address it.

    But also maybe it doesn’t work? The problem here is I don’t really know much of anything about continuous dynamics so I can’t really say. (I’m hoping somebody does?) Like, yeah, it’s easy to postulate that oh at this point it has a certain probability of ending up falling into a certain decision, but it still has to actually get to that decision via the continuous dynamics of the world, and it’s possible that rules out such schemes. Like, maybe just the simple requirement of linearity on how the distribution evolves causes a problem here for all I know. Like I said, I don’t know this subject.

    And then of course there’s the question of QM — Lamport briefly addresses this but pretty obviously in an ad-hoc and slapdash manner. It would be nice to see this in general just with, y’know, what happens if we allow amplitudes. Unlike simple randomness, using amplitudes really can make a difference in general, as you obviously well know. But if linearity does cause a problem, well, amplitudes are linear in the same way, so it would likely come up with QM too.

    Anyway I don’t know if linearity, or anything else for that matter, does cause a problem. I’m kind of hoping somebody does? Like someone must have studied this right? Regardless I’m not sure getting around Buridan’s principle is as simple as just randomizing!

    (And yeah, this is basically just a copy of my latest Dreamwidth entry. 😛 Wonder if I #85 and I were both thinking about this because we both saw Lamport’s article on Hacker News at the same time. 😛 )

  95. Ronald Monson Says:

    Scott #87 “No, I wouldn’t go that far.”
    In which case, I’m curious as to what part of a conceivable proof of that TM’s non-halting might *not* be formalizable in ZF.

  96. Scott Says:

    I #93 and Sniffnoy #94: OK, apparently you were both riffing on some essay by Lamport that I hadn’t seen and knew nothing about!

    In reality, we appear to live in a universe governed by quantum gravity, which means: continuous at the level of amplitudes, discrete at the level of measurement outcomes. And that almost certainly changes this discussion. But I’m fine with the possibility that, in a classical and continuous universe, any Buridan’s ass would have some tiny but nonzero possibility of getting stuck between the options and starving. It’s like how a pencil has a tiny probability of balancing on its point, or how a transistor has a tiny probability of getting stuck between the “0” and “1” voltage levels—possibilities that we almost always neglect in practice.

  97. Scott Says:

    Ronald Monson #95: The part that asserts the existence of a large cardinal, with which to construct a model for ZF.

  98. Sniffnoy Says:

    Scott #96: Yeah that’s why I wrote the long comment — I saw I #85 talking about Buridan’s principle and was like, OK this is going to need some explanation. 😛 Whereas they seemed to assume they could just say “Buridan’s principle” and you’d know what was meant. (Also it was a chance to ask some questions about it I’d been wondering about just in case anybody had answers. 😛 ) But I think part of the point of what Lamport wrote is that you maybe shouldn’t neglect the case of the transistor not giving a clean output; like I mentioned, it’s digital electronic components (considered as decision systems) that he’s actually primarily concerned with in those papers! 🙂

  99. Ronald Monson Says:

    Scott #97: But if another proof then arrives, one that asserts the existence of another large cardinal and proves that that TM halts – what is the fact of the matter?

  100. Scott Says:

    Ronald Monson #99: The only way for what you said to happen—i.e., ZF plus a large cardinal LC1 proves that a TM runs forever, but ZF plus an even larger cardinal LC2 proves that the TM halts—is if ZF+LC2 was actually inconsistent (since any theorem of ZF+LC1 will also be a theorem of ZF+LC2).

    This would surely force a huge reconsideration of the foundations of set theory, perhaps even bigger than the one caused by Russell’s Paradox. I don’t expect it to happen; certainly nothing like it has happened. Yes, adding more large cardinal axioms lets you prove more arithmetical statements (by Gödel’s Theorem), but it’s never been found to invalidate the statements that were previously proved.

    In slightly more detail, there’s obviously some fact of the matter about what happens when you run the TM in question: either it halts, or it doesn’t halt! (Set aside for one moment what we can prove about it.)

    If the TM halts, then in the scenario you describe, even ZF+LC1 would be inconsistent (so in particular, unsound). For whenever a TM halts, it’s provable in ZF that it halts, by just simulating the TM step-by-step until it halts. So ZF+LC1 would prove both that the TM halts and that it runs forever.

    If, on the other hand, the TM runs forever, then ZF+LC1 might be consistent and even sound, with only ZF+LC2 being inconsistent. (And the inconsistency would arise only because ZF+LC1 proved that the TM runs forever. If not for that, the theorem of ZF+LC2 that the TM halts might be totally unfalsifiable, so that ZF+LC2 would be unsound but would still remain consistent.)

  101. LGS Says:

    Scott #100, small correction: if the TM halts, then not only would ZF+LC1 be inconsistent, even ZF would be inconsistent – because the TM is searching for an inconsistency in ZF!

    In particular, if ZF+LC2 proves that the TM halts, then we know for sure that ZF+LC2 is unsound, regardless of whether the TM truly halts or not. So even if you give me a large-cardinal-assisted proof that the TM halts, I wouldn’t believe the TM actually halts, because all you’ve done is show your proof system to be unsound!

    You’d have to give me something like a PA proof that the TM halts for me to believe that it halts. Of course, that’s equivalent to giving me a PA proof that ZF is inconsistent.

  102. Scott Says:

    LGS #101: Oh right! I’d completely forgotten what the TM in question was doing. 🙂

    So, please interpret my comment as being about what we can conclude if it’s an arbitrary TM, not necessarily one that searches for inconsistencies in ZF.

  103. Ronald Monson Says:

    Scott #100
    Ok, so ZF+LC2 proving that the TM halts wouldn’t shake your faith in ZF+LC1’s proof of its non-halting because of ZF+LC2’s necessary inconsistency.

    But what about an axiom system whose theorems weren’t a superset of ZF+LC1’s? Suppose ZF+X proves that the TM halts but X doesn’t assert anything about higher cardinals. Yes, you did originally stipulate that it must, but what if it was then pointed out that X actually captures a reasoning step you’ve used hundreds of times in previous papers?

  104. Scott Says:

    Ronald Monson #103: But I’ve never used any mathematical reasoning step in any of my papers that was outside of ZF—indeed, I don’t think I’ve even used anything that couldn’t be formalized in PA, or some small fragment of PA. (No, not even in my paper with Adam Yedidia, though you could view the interest of that paper as conditional on a belief in the consistency of set theory. Naturally, I’m not counting informal remarks, e.g. about P vs. NP and its possible independence from set theory.)

    And I’m not at all unusual here: the vast majority of mathematicians and computer scientists—especially those who (like me) ultimately care about extremely concrete questions—never need more than tiny fragments of ZF in their research. Were your questions motivated by the mistaken assumption that needing to add a new axiom to set theory is some kind of everyday occurrence in math?

  105. Ronald Monson Says:

    Scott #104: No, I am aware that most mathematicians/computer scientists work (almost?) always within ZFC; my questions were motivated by trying to get a grasp on to what extent the halting/non-halting of TTM (that TM whose non-halting is equivalent to ZF’s consistency) is fundamentally unknowable or provable via larger cardinals (and in the latter case these proofs’ soundness). Either of these possibilities I find very striking, so striking that the last question was aimed at eliminating any loopholes including the existence of hidden reasoning steps outside ZF.

    The subtle but consequential difference between believing and knowing that all past (or acceptable) proof-steps are formalizable in ZF is that for the former, however unlikely, one can then at least conceive of a new “concrete step” being formalizable as X. This could conceivably then lead to a ZF+X proof of TTM halting in contradiction to a purported ZF+LC1 proof of TTM running forever. In such a scenario, X’s concreteness would surely trump a belief in LC1 thereby questioning the soundness of any set-theoretic proof using higher cardinals in relation to the fact-of-the-matter (non)halting of TTM.

    Anyway, accepting ZF’s ability to capture standard reasoning about TM’s, there remains something intriguing about the celestial world of set-theoretic arguments involving multiple infinities having something to say about the (albeit infinite) behaviour of a very concrete, finitely describable machine (or at least apparently being the only remaining avenue for potentially having something to say).

    Of course, this is hardly new given the known existence of such TTM’s but it seems to come into greater focus the smaller, more graspable the TTM while raising in my mind further questions/applications (proofs of ZFC’s consistency in stronger systems translating into insights into TTM’s non-halting or even other types of long-term behaviour in general TM’s?). Here’s hoping TTM can be further reduced!

  106. LGS Says:

    Scott #104, I don’t believe you’ve never used arguments outside of PA. Have you never said “for all subsets of this infinite language L, …”? That’s already not something you can formalize in PA.

    It might be true that, with sufficient hard work, all the theorems you’ve proven can also be proven in PA. But in many cases that will require non-trivial additional work!

    And if you go much below PA, you stop being able to even define the Ackermann function, which you’ve used in your “who can name the bigger number” essay, and which comes up naturally in computer science.

    Ronald #103, if ZF+X proves the TM halts, then ZF+X is unsound. This is true for any X. That’s because if ZF+X shows the TM halts, then it just proved its own inconsistency! So I will disbelieve any proof in ZF+X that the TM halts, for any set of axioms X. Now, what if X was something very believable, such as an empty set of axioms? That is, what if ZF proved the TM halts? In that case we would know ZF is unsound, and I’d stop using it and search for a better axiom system. But I’d still not be sure if the TM halts (that would require ZF to be *inconsistent*, not merely unsound).

  107. Alexei Says:

    Scott #87:

    Could you please explain the source of your confidence (if not absolute certainty) in Con(ZF), etc.?

    I do find such questions very mysterious and I would be inclined to agree with a suggestion by Ronald Monson #82 on “fundamental unknowability”.

  108. Scott Says:

    LGS #106: The issue is that I’ve never tried to optimize for using the weakest possible axioms, or even formalized my proofs at all. That being so, the only way I can make sense of the question is, “have I proved theorems that would inherently require powerful set-theoretic principles, even to the limited extent that (say) the Robertson-Seymour Theorem did?” And I believe the answer is no. As I said, informal remarks (so in particular, things in popular articles) don’t count. But if anyone doesn’t believe me, they’re welcome to comb through my research papers and suggest counterexamples!

  109. Gerard Says:

    ppnl #91

    > I think this is proven by how fast life appeared and developed on earth. If we were the result of a Boltzmann brain like improbable event then you would not see such steady progress.

    I don’t think it’s an either/or choice between the spontaneous appearance of a fully functional mind and a process that allows such to occur from fundamental physics with reasonable probability. I also don’t think you can use the observation that a certain event occurred once to make any assertions about its probability, not when the possibility of making the observation is conditioned on the occurrence of the event.

    That said, it’s possible that my intuition about the improbability of the evolution of complex, intelligent organisms is incorrect. It assumes, in particular, that the number of individual organisms that have ever existed on earth is bounded by some physically reasonable number (like the number of atoms on earth, for example). However, reproduction is an exponential process, and organisms could be thought of as subsets of the set of all atoms on earth; then you’re talking about a potentially much larger number, which conceivably could have allowed the exploration of a significant fraction of the search space in 4-5 billion years.

    ppnl #92

    > On first sight the universe is a scary place with things happening with no apparent reason.

    From my experience of life that remains true, the existence of patterns for sufficiently simple systems notwithstanding.

  110. Scott Says:

    Alexei #107: Heuristic and empirical, just like with the Goldbach Conjecture or any more quotidian open problem. When I try to picture a countable model for ZF, like Gödel’s constructible universe L, I find it almost impossible to imagine anything breaking: more (secretly countable, even if not in the model) sets just keep getting constructed forever. And while this intuition wouldn’t count for much by itself, no contradiction has been found in more than a century of extensively working with ZF. Of course I might be wrong.

  111. ppnl Says:

    Gerard #109

    First I think you vastly overestimate the complexity of humans. The Kolmogorov complexity of the human body cannot be that great. After all we are built with a recipe that is only a gigabyte or so in size. Most of that is junk and most of what isn’t junk is identical to the DNA of a redwood tree. We are but minor variations on a theme.

    I think it was Chaitin who pointed out that “complexity” as it is used in chaos theory is really about simplicity. It is about how a tiny, simple set of rules can lead to behavior that is so complicated that it is impossible to predict.

    The human brain contains a hundred billion neurons. There simply isn’t enough genetic information to define the exact arbitrary location and connection pattern of all those neurons. Instead your genes only provide a fractal seed that grows your brain as a fractal. It’s like the Mandelbrot set. It looks infinitely complex but a child could understand the rules that create it. Finding and modifying these fractal seeds should be simple for an evolutionary algorithm.
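
    To illustrate the “simple rules, apparently limitless complexity” point, here is a minimal sketch (a toy example only) that draws a crude ASCII picture of the Mandelbrot set; the entire rule is the iteration z -> z*z + c:

        # Escape-time rendering of the Mandelbrot set in plain ASCII.
        for row in range(24):
            line = ""
            for col in range(64):
                c = complex(-2.0 + 2.8 * col / 63, -1.2 + 2.4 * row / 23)
                z = 0j
                for _ in range(40):
                    z = z * z + c
                    if abs(z) > 2:          # escaped: not in the set
                        line += " "
                        break
                else:                       # never escaped: (probably) in the set
                    line += "#"
            print(line)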

    There are an estimated 1.5 trillion species alive on the planet now. Never mind the number of individuals alive today and over the last 4 billion years.

    The first life we have evidence for was 4.28 billion years ago. That’s only a few hundred million years after the earth formed. About 750 million years later we had photosynthesis. Must have been a harder problem than the origin of life itself.

    The first Eukaryotes (more complex cells) appeared several billion years later. An even harder problem to solve apparently.

    It was only 150 million years after this that multicellular life appeared. Apparently not a hard problem for evolution to solve.

    Algae happened about 700 million years later and complex land plants were 150 million years later. The first vertebrates were something like 200 million years later still.

    The human brain evolved from primitive tiny brained protomammals in a mere 65 million years. Easily the simplest problem of the bunch for evolution to solve.

    But my point is that none of these problems was particularly hard for evolution to solve. Once a replicator appears it will in time explore all the paths open to it. The biggest impediments to this progression are astronomical events like supernovae and asteroid impacts. In fact there were several mass extinctions on earth that nearly ended it all.

  112. Anon Says:

    I have some interesting arguments in favor of superdeterminism, mainly around the information-theoretic meaning of true “free will”.

    Maxwell’s demon is an example where, if the demon had “free will”, he could have potentially used his “free will” to break the second law of thermodynamics. So it shows that in some circumstances, physical theories already object to the idea that you can make some kinds of decisions.

    There’s still a long way to go to show any meaningful superdeterministic toy theory, though. My intuition of what it would mean is some theory where keeping the detectors in the maximally violating settings, for many particles, over a long enough time, would be just as impossible as Maxwell’s demon keeping its door working correctly. Just as it’s possible to implement a Maxwell demon with a few particles but not indefinitely, you could violate Bell’s inequality a few times but not indefinitely, because consistently violating it would hit a similar information-theoretic obstacle.

  113. Gerard Says:

    ppnl: #111

    > After all we are built with a recipe that is only a gigabyte or so in size.

    A gigabyte is a tremendous amount of information to discover by random chance or by an algorithm that does a bit better than random chance. The search space has a cardinality on the order of 10^(2.4e9).

    For comparison I think there are less than 10^100 atoms in the observable universe.

  114. ppnl Says:

    Anon #112

    No, Maxwell’s demon cannot violate the second law of thermodynamics with or without free will. It turns out the demon would need a flashlight in order to see what it is doing. Do the math and the flashlight will always use more energy than the demon can gain.

    Entropy is about information. You use the flashlight to gain information and reverse entropy but that is overmatched by the entropy increase caused by the flashlight using power.

    Free will really has nothing to do with it at all.

  115. Gerard Says:

    Follow up to #113

    To further illustrate the scales involved, suppose every atom in the universe were a computer that could check one candidate solution every nanosecond. Then the number of solutions that could be tried since the universe began would be on the order of 10^126, which is a completely negligible fraction of the total search space.
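
    For a quick check of the two exponents (the 10^(2.4e9) from #113 and the 10^126 above), here is a short back-of-the-envelope calculation in Python; the age-of-the-universe figure is a rough assumption:

        import math

        genome_bytes = 1e9                                   # "a gigabyte or so" (#111)
        search_space_exp = 8 * genome_bytes * math.log10(2)  # ~2.4e9, as in #113

        atoms = 1e100                                        # upper bound used in #113
        checks_per_second = 1e9                              # one candidate per nanosecond
        age_of_universe_s = 4.35e17                          # roughly 13.8 billion years
        trials_exp = (math.log10(atoms) + math.log10(checks_per_second)
                      + math.log10(age_of_universe_s))       # ~126.6

        print(search_space_exp, trials_exp)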

  116. Ronald Monson Says:

    LGS #106:
    No, you are right: I overlooked the same step you previously pointed out. ZF+X is not a viable loophole; it is going to be either inconsistent or unsound (or both).

    The intuition I was trying to get at is perhaps better expressed with the following example.

    With π(x) the number of primes less than x, li(x) the logarithmic integral function,
    and Δ(x) = π(x) - li(x):

    All the early numerical evidence suggests that Δ(x) is permanently negative (including the belief of a young Ramanujan, no less). Littlewood, however, famously showed that Δ(x) crosses 0 infinitely often; let C be the integer corresponding to the first crossing.

    Now consider the TM that counts upwards through x, halting only if x = C and Δ(x) is still negative.

    (Curiously, no explicit C is known, but it is known to be bounded above by C1 = e^727.9513468.)

    Let C1* be the TM step number corresponding to the testing of C1. A young Ramanujan -“The Man Who Knew Infinity”- would have claimed that such a TM would halt at C1* (although he might have become a bit suspicious if an explicit C1* was given but perhaps only momentarily given his deep [often justified] faith in his special intuition).

    We now “know” however, that this TM will run forever and a proof of this fact-of-the-matter could be formalized in ZFC. Even though such a machine could never practically be run beyond a number as massive as C1*, the vast majority of mathematicians/computer scientists would feel in their bones that this TM would still be beavering away at C1* and beyond.

    Contrast this with the “provable” non-stopping of Z (the ~1000-state TM that halts iff ZF is inconsistent). Proofs of Z’s non-halting must occur in extensions of ZF, but I’m not sure they would inspire anywhere near the same increase in confidence that Z would still be chugging away beyond C1*. In fact, proofs of ZF’s consistency do exist in some stronger systems, but these don’t seem to have translated into any extra conviction, much less certitude, about Z’s non-stopping (see Scott’s comment #110, for example). But if this is the case, in what sense can they even be considered proofs of, or even relevant to, Z’s long-term behaviour?

    This is what I mean about the “fundamental unknowability” of Z’s assumed non-halting and therefore, its almost axiom-like status (axiom-like because axioms are generally accepted at face value whereas this “axiom” is more one-sided; falsifiable but “unknowable”).

    All of the popularizations of Adam Yedidia’s and Scott’s paper seem to have employed the click-bait heading

    “This Turing Machine Should Run Forever Unless Math is Wrong”

    whereas I think the bigger, epistemological point is roughly expressible as

    This Turing Machine is believed to run forever but Math will never be able to prove it!

  117. ppnl Says:

    Gerard,

    Evolution is not a random search. It contains a random process but that does not make it a random search. Many algorithms employ a random process and a lot of them are inspired by evolution. You have genetic algorithms, evolutionary algorithms, simulated annealing, neural nets…

    These are used in the real world to solve real problems. That last one was used to create the AlphaZero program, which taught itself to play both chess and Go at superhuman levels.

    This is an old and silly argument from creationists. Richard Dawkins countered it almost 40 years ago with his weasel program.

    You talk about our genome as if it is the only string that would allow intelligent life. That is nonsense. There are probably more ways to create an intelligent being than there are atoms in the universe. And surrounding each solution is a vast cloud of partial solutions that point the way. The problem has structure. That structure can be explored algorithmically. Evolution is the algorithm for the job.

    You need to stop thinking of evolution as a random search. It. Just. Isn’t.

  119. Gerard Says:

    ppnl #117

    Genetic algorithms (i.e. evolution) aren’t a random search, but in all probability they are not exponentially better than random search. If they were, they should be able to solve NP-hard problems in polynomial time, which would imply P = NP.

    A polynomial time speedup probably isn’t going to change that exponent of 2.4 billion by all that much.

    Yes, there are probably many solutions that generate intelligence, and no one knows how many, so I admit that is a loophole in my argument. Nonetheless, intuitively I still find the numbers against the evolution of intelligent life forms being a likely process overwhelming. Another factor to consider is that evolution isn’t even searching for complex or intelligent life forms, just ones that are good at reproducing, and bacteria and insects perform much better by that criterion than do humans.

  120. ppnl Says:

    Gerard #119

    Let me be clear, by random search I just mean guessing at random. If you could never get an exponential speedup over this then you could never make any decision. Ever.

    To see how this works, imagine trying to guess this week’s big-game lotto numbers. For simplicity, let’s call it a six-byte string. To correctly guess the string I would expect to have to make one guess every second for millions of years. Because you are simply trying to guess a random number, a random guess is the best you can do. The problem has no structure that an algorithm can take advantage of. Most problems of interest do have structure. This structure can, and often does, make them exponentially faster to solve.

    Dawkins’ program found a 28-character string in eleven seconds on hardware from more than thirty years ago. A similar program today could find a gigabyte-long string in no time. Yes, that is an exponential speedup over random guessing. It can do this because the program knows when it makes a bad guess at any individual letter. That gives structure to the problem that lets it beat random guessing. Exponentially.

    Evolution works similarly. An animal with a bad guess (a bad mutation) dies without producing many, if any, offspring. Just like the Dawkins program, it exploits this structure to get an exponential speedup over random guessing. And yes, it will be exponential.
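
    Since the weasel program keeps coming up, here is a minimal reconstruction of it (a sketch, not Dawkins’ original code): random mutation plus cumulative selection toward a 28-character target, which converges in a tiny number of generations compared to blind guessing:

        import random
        import string

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = string.ascii_uppercase + " "

        def score(s):
            return sum(a == b for a, b in zip(s, TARGET))

        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while parent != TARGET:
            generation += 1
            # 100 mutant copies per generation, ~4% per-letter mutation rate
            children = ["".join(c if random.random() > 0.04 else random.choice(ALPHABET)
                                for c in parent)
                        for _ in range(100)]
            parent = max(children + [parent], key=score)     # cumulative selection
        print(generation, parent)   # typically finishes in well under 100 generations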

    Your comment about implying P = NP is total nonsense. Many problems can be solved exponentially faster than random guessing. Why you think that this one particular problem implies P=NP is difficult to imagine. Where did that even come from?

    It is true that intelligence isn’t the explicit goal of evolution. But that, my friend, is what makes evolution so beautiful. Instead of one arrogant species that thinks everything is all about it, we have diversity. We have fish that can live under a thousand atmospheres of pressure. We have birds that fly high and worms that go low. We have bacteria that live in boiling water and worms that live in Arctic ice. Intelligent life is just one tiny corner of a vast diversity. One minor solution to survival that has not yet proved itself over time. Darwin himself said it best:

    “There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been and are being evolved.”

    You: “Intelligent life is probably impossible elsewhere because evolution takes like a googolplex years.”

    Me: “It didn’t take a googolplex years here.”

    You: “Statistical fluke.”

    That’s what it boils down to unless you want to argue for some kind of intelligent design. Your arguments were in fact created by intelligent design proponents.

  121. Gerard Says:

    ppnl #120

    > Many problems can be solved exponentially faster than random guessing.

    Yes, those problems are in the complexity class P.

    > Why you think that this one particular problem implies P=NP is difficult to imagine. Where did that even come from?

    It comes from the observation that most optimization problems that don’t have some very simple structure such as linearity or convexity appear to be NP-hard. In fact many of them have been proven to be NP-hard. If you can solve NP-hard problems in polynomial time then P = NP.

    Now I don’t think that the evolution of intelligent life has actually been proven to be NP hard. That would be extremely difficult to do since we don’t even have a clear definition for what constitutes intelligence but it certainly seems reasonable to believe this problem – the search for an intelligent organism – is at least as hard as any optimization problem we are familiar with.

    There are countless applications where solutions far less complex than that are required. If genetic algorithms were actually able to solve them efficiently I would expect to have heard about it and I would expect many programmers to be losing their jobs to GA’s. As far as I can tell none of that has been happening.

    > You: “Statistical fluke.”

    Yes, but it’s a very special kind of statistical fluke, one which has to have occurred for the fluke to be observed in the first place. In other words my “explanation” is to invoke the anthropic principle. I think that without that it’s very difficult to explain our existence in any kind of remotely naturalistic way.

  122. ppnl Says:

    Gerard,

    It doesn’t matter if any or even all optimization problems are NP-hard for finding perfect solutions. I could well believe that finding the perfect god brain could take countless googolplexes of years. But we don’t have a perfect god brain. We have a hodgepodge of sub-optimal choices. We have horrible memories, yet the brain reconstructs memories in a way that gives us the illusion that we have good memories, for example.

    Crappy design is common in other parts of our body as well. Our eyes have the neural wiring in front of the retina leading to a big blind spot. This is then fixed with a software patch in which the brain edits in an extrapolation of what it thinks should be there. The recurrent laryngeal nerve goes from the brain to the neck by way of the heart, wrapping around some large arteries. Why? Because fish don’t have necks. Seriously.

    Evolving a perfect god brain may or may not be hard. Evolving the brain we have clearly isn’t.

    To see how this works, look at the traveling salesman problem. This is known to be NP-hard. But there are algorithms that run in polynomial time and, in practice, get within 3% of the optimal solution. I would be happy with a brain within 3% of godlike. I’m guessing I fall short.
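
    As a concrete example of such a heuristic, here is a minimal 2-opt local-search sketch for random Euclidean instances (a toy illustration of “fast and close to optimal in practice”; no worst-case guarantee is claimed):

        import math
        import random

        def tour_length(points, tour):
            return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def two_opt(points, tour):
            # Repeatedly reverse a segment of the tour whenever doing so shortens it.
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                        if tour_length(points, candidate) < tour_length(points, tour):
                            tour, improved = candidate, True
            return tour

        points = [(random.random(), random.random()) for _ in range(60)]
        tour = two_opt(points, list(range(len(points))))   # start from an arbitrary tour
        print(tour_length(points, tour))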

    >”Yes, but it’s a very special kind of statistical fluke…”

    It isn’t just one fluke but literally millions of them. We evolved brains that can comprehend the world. But eyes also evolved many different times independently. The power of flight evolved many different times independently. Evolution has solved vast numbers of problems, many of which have nothing to do with us. Even the brain evolved in many steps. Sixty-five million years ago the largest brain was the size of a squirrel brain. Average brain size and intelligence have vastly increased in many lineages since then. My dog is more intelligent than anything alive back then. Was that also just a fluke? We are just a statistical outlier in a process that was happening generally.

    It is as if you watch a cake being produced and insist that it is a fluke while ignoring the fact that you are in a bakery where hundreds of different kinds of cakes are being baked. The earth has an estimated 8.7 million species but somehow it is all about you.

  123. Gerard Says:

    ppnl: #122

    > But we don’t have a perfect god brain.

    Yes, that’s quite the understatement.

    Still, as far as I know, no GA has ever come close to discovering even the most basic of intelligent algorithms from scratch, though I’m sure it’s been tried many times (in fact, in a previous job I once used a tool based partly on that principle; it didn’t work too well).

    For that matter, where’s the GA that can generate a website that’s within 3% of a spec?

    > It isn’t just one fluke but literally millions of them.

    Sure, but many of those occurred on the chain of development that led to humans, so they are themselves covered by the anthropic principle. Once that direct chain is in place I don’t think it’s surprising that various offshoots of it should occur.

    > The earth has an estimated 8.7 million species but somehow it is all about you.

    I’ve never understood this need for self-denigration that many people have. The human brain is by far the most complex known object in the universe. For all we know universes that contain such a thing could be incredibly rare occurrences in an infinity of universes that have arisen. So we may indeed be by far the most interesting things around, the fact that we are profoundly flawed and that it would probably have been better if we (or anything else) had never existed notwithstanding.

  124. Tracy Hall Says:

    Scott #44: There is a Zoom seminar I have started attending (in Australia, from Utah) in which the organizers have taken pains to provide for a hallway of sorts: A few minutes after the presentation and questions are over, they explain that they are going to assort the participants randomly into breakout rooms of five or six participants each, and also make each participant a host of the meeting so that he or she can see who is in each room and move between them. The first time I landed in a breakout room where someone asked about a possible generalization of a construction mentioned in the seminar, and a few of us started working together on a common whiteboard. I contributed for a while, left the room to greet the speaker in another room and discuss something else that had come up during his presentation, and came back to see the conclusion that the generalization didn’t provide anything beyond the original theory. The second time I landed in a room where everyone else happened to know each other and started discussing teaching conditions in Australia, and I left after a short while without joining any other breakout rooms. Neither of these might sound like much of a “success” story, but I think that the organizers have succeeded fairly well at replicating what can happen in the hallway.

  125. Sept Says:

    Gerard@123:

    Get me a machine with the computational power of Earth, and a smooth fitness function, and I’ll get you a GA that will produce websites within 3% of spec without difficulty.

    More broadly, I note that you seem to be taking some very odd positions based on a combination of overconfidence in your intuitions and some elementary misunderstandings (such as the definition of superdeterminism, and the suspiciously creationist misunderstanding that biological evolution is a process of random search).

  126. Anon Says:

    ppnl #114: You didn’t understand me. I’m assuming a demon that doesn’t look at what it’s doing but just chooses a sequence of choices. What prevents a demon from choosing the correct choices without looking?

  127. Sept Says:

    Anon@126:

    I’m not ppnl, but if I understand your question correctly, it’s equivalent (after removing the blind and therefore superfluous demon) to that of why a container of gas in equilibrium doesn’t spontaneously separate into two regions of different temperature before a separator closes down between them at some time t0.

    In most contexts, the answer is that nothing prevents that separation except probability.

  128. Chris Says:

    Could anybody give me a hint why sometimes (e.g. Rainer #23) randomness in nature is seen as an enabler for free will?
    Obviously randomness affects whether I can know in principle what the mind will decide next.
    But in everyday language I would assume most people associate with “free” something more like “free of external influences / factors that influence the result” rather than “not known in advance”.
    So if it’s nature/physical laws that determine the decisions of your brain and not your mind,
    does that make you any more free if nature uses randomness and not determinism?
    Or in other words:
    If you want a truly free will (at least to my naive understanding), then you would need some kind of mind which is an entity outside of the universe and therefore free of its laws.
    To me it would not improve my freedom at all if my slave master picked his next command for me by rolling dice instead of looking it up on an already-written list 🙂

  129. James Gallagher Says:

    Chris #128

    Could anybody give me a hint why sometimes (e.g. Rainer #23) randomness in nature is seen as an enabler for free will?

    I think it’s because the randomness is not just “chaotic”: there is a deterministic equation describing its evolution (the Schrödinger equation), which would allow a process like (biological) evolution over billions of years to develop the ability to “load the dice” to enable certain macroscopic outcomes that have no (entirely) deterministic explanation, but are “intended” by the subject.

    Modern experiments show that it does take on the order of milliseconds to “make a decision”, and the physical outcome of the “decision” may precede our conscious recognition of it being made, but that could still be “free will”.

    I do wonder how much “free will” is actually present in animals, though; even humans hardly do anything unexpected, and mental disturbances due to brain chemistry account for much of the crazy behaviour.

  130. Andrei Says:

    Scott,

    “Whereas superdeterminism is a new, much more virulent insanity against which many otherwise intelligent people seem to have no antibodies.”

    As you probably remember I strongly disagree with this view. Let me explain.

    1. It is important to have in mind the “minimal” superdeterministic theory. This theory has the minimal number of assumptions that would still make the theory superdeterministic. Fighting against a specific superdeterministic model proves nothing because that model could be flawed, yet superdeterminism could still be true.

    2. A minimal superdeterministic theory must deny Bell’s independence assumption, that the settings of the detectors at the time of detection must be independent of the properties of the entangled particles. If we agree that:

    a. the settings of the detectors are nothing but their physical state (position/momenta of particles + field configuration for example), and

    b. the properties of the entangled particles are determined at the source at the moment of emission,

    It follows that the minimal superdeterministic theory requires that the state of the source at the moment of emission (position/momenta of particles + field configuration) and the state of the detectors at the moment of detection (position/momenta of particles + field configuration) must not be independent.

    It is easy to notice that if minimal superdeterminism is true, Bell’s theorem is busted, as one cannot arrive at Bell’s results: one needs to check which states of the detectors are possible for each possible state of the source and then add up those states. This, of course, is not possible without doing a lot of calculations, which would be different for different theories.

    So, if you still hold to your position, provide a good reason for believing that minimal superdeterminism, as defined above, necessarily implies a “virulent insanity”, fine-tuning of the early universe, or any of the other assertions you have made against superdeterminism.

  131. Andrei Says:

    Bunsen Burner,

    “One other thing I don’t get about superdeterminism is if the correlations between particles are now all in the initial conditions, wouldn’t we expect them to disappear after a while? In particular, what does it even mean for there to be correlations between particles in QFT? Particles can be created and destroyed after all. The relevant dynamic equations, namely The Standard Model + General Relatively seem to me to be too complex and too non-linear to allow highly fine-tuned correlations in the initial conditions to exist for all time.”

    It is not true that superdeterminism requires “highly fine-tuned correlations in the initial conditions”; that is a straw-man argument. Think about a system of two massive objects orbiting each other. Their positions are not independent variables (they are correlated), the objects can be arbitrarily far away, and there is no nonlocality involved. We don’t explain such correlations by postulating finely-tuned conditions at the Big Bang, but by understanding that distant objects interact (locally) through fields (gravitational, electromagnetic) and that those interactions cause correlations.
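
    To make the orbiting-bodies example concrete, here is a minimal numerical sketch (the masses, separation and orbital frequency are arbitrary, assumed values): both positions are driven by the same orbital angle, so they come out perfectly (anti-)correlated even though each body only ever responds to the field at its own location.

        # Two bodies in a circular orbit about their common center of mass.
        # Both positions are functions of the same orbital angle, so they are
        # perfectly (anti-)correlated without any nonlocal influence.
        import numpy as np

        m1, m2, d = 5.0, 1.0, 1.0e3             # assumed masses and separation
        M = m1 + m2
        t = np.linspace(0.0, 10.0, 1000)        # a few orbital periods
        omega = 2.0 * np.pi                     # orbital angular frequency

        x1 = -(m2 / M) * d * np.cos(omega * t)  # body 1, x-coordinate
        x2 =  (m1 / M) * d * np.cos(omega * t)  # body 2, x-coordinate

        print(np.corrcoef(x1, x2)[0, 1])        # -> about -1.0: perfectly anti-correlated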

    In the case of a Bell test we also have interactions between source and detectors (both EM and gravitational), so their states have to be correlated in some way. Nobody has tried to perform the required calculations, so we don’t know whether those correlations could give you QM’s statistics or not, but the superdeterministic hypothesis says they might. Obviously, one is not limited to EM/gravitational interactions. ’t Hooft proposes a model based on discrete physics (his Cellular Automaton interpretation), but the basic mechanism is the same: you have distant objects interacting through some fields.

    In regard to your point about the Standard Model, I need to say that superdeterminism implies that the Standard Model is an emergent, statistical theory, based on a deterministic fundamental theory. So the mathematical aspects of the Standard Model could be just an artifact of the information loss implied by the move from the exact theory to the statistical one.

  132. Andrei Says:

    Craig Gidney,

    “The problem really comes down to the fact that there’s asymptotically more events in spacetime than there is information capacity in the initial state.”

    This is false. In a deterministic theory (say, classical electromagnetism) there is no increase of information: you can calculate any future (or past) state given the present one, so one does not need an infinite amount of information (assuming, of course, that the universe is finite). There are also no “events” in such a theory; it’s just charges moving around and fields changing magnitude and direction.

    Superdeterministic theories are just a subclass of deterministic theories so the above applies.
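
    A toy illustration (a sketch using an arbitrary reversible map, not any particular physical theory): in a reversible deterministic system the present state fixes both the future and the past, so running the dynamics forward and then backward recovers the initial state exactly, and no information is ever created or destroyed.

        # A reversible deterministic toy dynamics (Arnold's cat map on a
        # discrete torus).  The present state determines past and future alike,
        # so evolving forward and then backward recovers the initial state.
        N = 101                                   # assumed torus size

        def forward(x, y):
            return (2 * x + y) % N, (x + y) % N

        def backward(x, y):
            return (x - y) % N, (-x + 2 * y) % N  # exact inverse of forward

        state = (17, 42)
        for _ in range(1000):
            state = forward(*state)               # run forward 1000 steps
        for _ in range(1000):
            state = backward(*state)              # run the same dynamics in reverse
        print(state)                              # -> (17, 42): nothing was lost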

  133. Andrei Says:

    ppnl,

    “Superdeterminism is the claim that the macro-universe looks classical and deterministic but if you look deeper you see it is actually quantum and nondeterministic. But if you look deeper still it becomes deterministic again.”

    Correct.

    “But that means that the underlying deterministic process has to simulate a quantum process. That cannot be efficient in either time complexity or memory space complexity. For example if I am running a quantum computer to factor a million digit number it is actually a classical program running on a deeper deterministic process. The underlying classical computer must be vastly more powerful than the quantum computer it is simulating.”

    I think that you are placing an equal sign between classical physics (General Relativity, classical electromagnetism) and a “classical” computer (a device working with a limited number of bits). Probably the word “classical” is at the root of this confusion. Classical theories based on continuous space-time “process” an infinite amount of information in a limited time: the velocity of an object is a real number with an infinite number of digits, and it is transformed instantly into another such number when a force is applied. So a classical theory could very well “simulate” a quantum one.
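
    For reference, the cost you are pointing to is real for a bit-limited machine (a rough sketch, assuming a brute-force statevector simulation at 16 bytes per complex amplitude); the claim above is that a continuous classical field theory is not such a machine.

        # Memory for a brute-force digital statevector simulation of n qubits:
        # 2**n complex amplitudes at an assumed 16 bytes each.
        for n in (10, 30, 50):
            gigabytes = (2 ** n) * 16 / 1e9
            print(f"{n} qubits: ~{gigabytes:.3g} GB")
        # 10 qubits: ~1.64e-05 GB; 30 qubits: ~17.2 GB; 50 qubits: ~1.8e+07 GB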

    “And how does the parts of the deep classical computer know that they are part of a quantum computer? It seems that they must store a record of their past interactions in order to correlate their current interactions so as to produce on aggregate results consistent with valid factors of my million digit number.”

    As shown above, classical physics does not work like a classical computer. And your question makes no sense: if there is a device built to calculate the factors of a number, it will do so, regardless of whether it is “quantum” or “classical”. And, sure, a classical deterministic state does “remember” the past: by evolving the theory’s equations you can get from the present state to any past state (if the theory is reversible, as classical EM is) or future state. Nothing is lost.

  134. Andrei Says:

    David Griswold,

    “Bell’s theorem still acts like an immutable part of the laws of nature. If superdeterminism *specifically* conspires to uphold Bell’s theorem, then Bell’s theorem is still baked into the world, in that very specific way, and any complete superdeterministic theory will have to explicitly require somewhere that events that don’t uphold Bell’s theory do not occur.”

    Bell’s theorem is based on an unjustified assumption: that the state of the detector at the time of detection and the state of the source at the time of emission (which determines the hidden variable) are independent (not correlated). There is absolutely no reason to accept this assumption as true. The physical systems that we call “source” and “detector”, being large aggregates of massive/charged particles (electrons and quarks), interact both gravitationally and electromagnetically and, as a consequence of this interaction, are expected to display some correlations. The burden of proof falls on Bell’s theorem’s supporters to show that those correlations still leave the states independent enough to justify the assumption. Until such a proof is presented, the most reasonable position is to reject Bell’s theorem as unsound, being based on a premise of unknown truth value. Hence, superdeterminism is perfectly reasonable.

  135. James Gallagher Says:

    Hmm, arguing that free will is not possible is so much easier than arguing how it is possible.