Dismantling Quantum Hype with Tim Nguyen

Happy Thanksgiving to my American readers! While I enjoy a family holiday-week vacation in exotic Dallas—and yes, I will follow up on my old JFK post by visiting Dealey Plaza—please enjoy the following Thanksgiving victuals:

I recently recorded a 3-hour (!) YouTube video with Timothy Nguyen, host of the Cartesian Cafe. Our episode is entitled Quantum Computing: Dismantling the Hype. In it, I teach a sort of extremely compressed version of my undergraduate Intro to Quantum Information Science course, unburdening myself about whatever Tim prompts me to explain: the basic rules of quantum information, quantum circuits, the quantum black-box model, the Deutsch-Jozsa algorithm, BQP and its relationship to classical complexity classes, and sampling-based quantum supremacy experiments. This is a lot more technical than an average podcast, a lot less technical than an actual course, and hopefully just right for some nonempty subset of readers.

Outside of his podcasting career, some of you might recognize Nguyen as the coauthor, with Theo Polya, of a rebuttal of “Geometric Unity.” The latter is the proposal by the financier, podcaster, and leading “Intellectual Dark Web” figure Eric Weinstein for a unified theory of particle physics. Now, I slightly know Weinstein, and have even found him fascinating, eloquent, and correct about various issues. So, in an addendum to the main video, Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

Enjoy!

74 Responses to “Dismantling Quantum Hype with Tim Nguyen”

  1. gentzen Says:

    Dismantling … hmm

    Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

    Stephen Wolfram is not a coauthor of any of the following papers, but I still wonder whether this should remind me of Nelson Tansu.

    Hypergraph Discretization of the Cauchy Problem in General Relativity via Wolfram Model Evolution (https://arxiv.org/abs/2102.09363) p. 90: “The default choice of evaluation order, as shown above, therefore corresponds to a particular path in the multiway evolution graph, as demonstrated in Figures 59 and 60, with vertices merged on the basis of hypergraph isomorphism, using a slight generalization of the algorithm presented in [96].”

    ZX-Calculus and Extended Wolfram Model Systems II: Fast Diagrammatic Reasoning with an Application to Quantum Circuit Simplification (https://arxiv.org/abs/2103.15820) p. 48: “Generic Wolfram model evolutions are therefore described in terms of multiway evolution graphs, with the canonical updating order shown previously hence corresponding to a single path through this graph, as demonstrated in Figures 9 and 10, with state vertices merged on the basis of hypergraph isomorphism, using a generalization of the algorithm presented in [46].”

    Pregeometric Spaces from Wolfram Model Rewriting Systems as Homotopy Types (https://arxiv.org/abs/2111.03460) p. 20: “In the Wolfram model these are specified via approximate hypergraph isomorphisms, which are based on extensions of standard error-tolerant graph matching algorithms [28], [45]”

    [28] Horst Bunke (1998): Error-tolerant graph matching: a formal framework and algorithms. In: Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), Springer, pp. 1–14.

    [45] ([46]/[96]) Jonathan Gorard (2016): Uniqueness Trees: A Possible Polynomial Approach to the Graph Isomorphism Problem. arXiv:1606.06399.

    So either these are just self-citations without much consequence, or their implementation really uses “a slight generalization of the algorithm presented in [96]”: one that may be polynomial and correct, polynomial and incorrect, or not polynomial in general but still correct. [96] seems to have 5 citations in total. Not bad. For comparison, Deep Weisfeiler Leman has 11 citations, and many of those are self-citations or not-independent citations too.

  2. Johnny D Says:

    Ha, here again you use the word simulate in a funny way (funny to me at least; ‘again’ referring to my comments on non-Abelian exchange stats from a previous post, yes, a shameless plug!). 😜 You refer to error correction as simulating a logical qubit. In this case the logical qubit would exist and not just be simulated. Just because something is in the state space of a qc doesn’t imply it is a simulation. The states actually exist.

  3. Scott Says:

    Johnny D #2: In your worldview, it seems like there’s no need for the word “simulate” at all, since every “simulation” (as I’d call it) just brings about whatever it’s simulating. Thus, simulated annealing should just be called annealing. Weather simulations should just be called weather enactors. And Second Life should just be called … Life!

    I fear that, if we changed language in this way, we’d end up reinventing the concept of “simulation,” just using a different word for it. That’s what almost always happens when you try to ban a concept that practitioners in some field were actually using.

  4. Johnny D Says:

    I am certainly against banning in general. 😀

    Simulations are simulations and existence is existence. Computers simulating weather is simulation. It is not weather. Qcs simulating weather is simulation. Qcs creating quantum states with well-defined mathematical properties is existence, not simulation.

  5. Spencer Hargiss Says:

    I don’t even get how Wolfram can claim to explain a simple interferometer with multi-way graphs if the different branches of “branchial space” don’t interact. Let alone Bell’s inequalities. How can he explain any quantum interference pattern at all?

  6. Michael Vassar Says:

    Honestly Scott, I can’t see how you could disagree with Johnny D here.

  7. mls Says:

    I had been somewhat surprised to discover that Dr. Wolfram responded to Dr. Aaronson. Earlier this year, he chose a different course of action in a different forum.

    Since there are questions about how a Wolfram affiliate might understand the relationship with physics, I will indicate a partial history. I must speak of an affiliate because Dr. Wolfram himself chose not to participate in any of the ensuing discussion.

    In March, Dr. Wolfram made the post,

    https://cs.nyu.edu/pipermail/fom/2022-March/023177.html

    to the FOM mailing list. As the title of this post suggests, his computational perspective relates “physics” to “metamathematics.” Naturally, this post motivated some interesting discussions. The significant challenge to Dr. Wolfram’s broad claims was eventually made by Dr. Friedman,

    https://cs.nyu.edu/pipermail/fom/2022-March/023199.html

    Dr. Wolfram never answered this challenge.

    One highlight of Dr. Friedman’s response is his position that “metamathematics” is a vague notion. I have to agree with this insofar as it is now loosely described as mathematics used to study mathematics. However, it has its origins in Hilbert’s hope to realize a consistency proof by analyzing the syntactic form of proofs. The path to computation follows directly from this isolation of syntactic forms. And, unfortunately, this emphasis on syntactic forms “feeds” right into the manner by which analytical philosophers “define” mathematics. So, I disagree with the character of Dr. Friedman’s challenge about metamathematics when the historiography is taken into account.

    Otherwise, the challenge is proper and ought to have been answered.

    One of the FOM participants whose original interest had been physics is Vaughan Pratt. He has written an excellent SEP entry on algebra, should anyone be interested. At one point, he issued a challenge concerning physical theory which he took as significant. It involved calculations for two fundamental constants.

    Juan Caballero, an affiliate of the Wolfram project, did attempt a reply. Respectfully, he reminded readers that he had only been speaking for himself. His response to Dr. Pratt can be found at the link,

    https://cs.nyu.edu/pipermail/fom/2022-August/023533.html

    Because of my personal interests, I would have liked to see a reply to Dr. Friedman. I have provided the exchange between Dr. Pratt and Juan Caballero without bias or judgement because some of Dr. Aaronson’s readers may be curious about how others see a relationship between Dr. Wolfram’s program and physics.

    Understanding the tension between physicists making claims about mathematical foundations and mathematicians making claims about physics, I acknowledge that a focus on Bell’s theorem is far more pertinent.

  8. Ilio Says:

    Michael Vassar & Johnny D #2-4-6, Out of random curiosity, do you also reject the name « quasiparticles » because of their well defined mathematical properties?

    The point is, if you reject this name, then you’ll soon need to differentiate between « particles from emergent states » and « particles not from emergent states ». Which is why the convention of calling the latter « real » and the former « quasireal » may be useful, at least conditioned on agreeing that naming conventions are just conventions.

  9. gentzen Says:

    Michael Vassar & Johnny D & Ilio #2-4-6-8: But isn’t this just the “old” confusion why the word/concept “emulation” is needed in addition to the word/concept “simulation”? If a modern computer provides an environment able to play old retro-console games, then it is called an “emulation” and not a “simulation”. So if a physical quantum computer provides an environment to run a “logical” quantum algorithm (i.e. one where the physical details have been idealized away), then calling it an “emulation” seems more appropriate than calling it a “simulation”.

    However, both words/concepts suffer from the fact that normally they “emulate” or “simulate” something which has physically existed at some point (or at least is planned to physically exist at some point in the future, for some application cases of “emulation”). So the worry by Johnny D, that “emulation” or “simulation” is the only way to ever bring that something into existence, remains in a certain sense. But I am pretty sure that this has happened multiple times with “emulation” in the past.

  10. Martin Mertens Says:

    Hey Scott, watching the episode I get the impression you’re not someone who likes to say “A tensor is an object that transforms like a tensor”. I’m curious, how would you put that idea into words in a less annoying and circular way?

    (Others are welcome to reply)

  11. Scott Says:

    Martin Mertens #10: A tensor is a k-dimensional array of real numbers, for some k=0,1,2,…, which in physics can typically take a different value at each point in spacetime, and which has special rules for manipulating it to help you keep track of which functions of the real numbers depend on your choice of coordinates for spacetime and which functions don’t. (The latter functions, of course, are the ones of actual physical interest.)
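    In code, that bookkeeping might look like the following (a minimal numpy sketch; the Jacobian and the vector components are made-up illustrative values, not anything canonical):

```python
import numpy as np

# Toy illustration: tensor components are arrays of numbers that change
# under a change of coordinates, but certain contractions of them don't.
# J is an arbitrary (invertible) Jacobian of a linear coordinate change.
J = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v = np.array([1.0, 2.0])    # contravariant (upper-index) components
w = np.array([3.0, -1.0])   # covariant (lower-index) components

# The "special rules": upper indices transform with J, lower indices
# with the inverse transpose of J.
v_new = J @ v
w_new = np.linalg.inv(J).T @ w

# The components themselves are coordinate-dependent...
print(v, "->", v_new)

# ...but the contraction w_a v^a is a scalar, identical in both frames,
# i.e. one of the coordinate-independent quantities of physical interest.
print(np.dot(w, v), np.dot(w_new, v_new))
```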

  12. Mitchell Porter Says:

    I’ll put in a word in favor of Eric Weinstein’s “Geometric Unity” theory, not out of conviction that it’s correct, but simply because I think it’s far more technically interesting than anyone else seems to realize. The most famous part of Tim and Theo’s refutation is the argument that the “shiab operator,” which couples the 4d metric to a Yang-Mills field on the 14d metric bundle, requires the Yang-Mills gauge group to be complex, and that this is physically problematic. But Eric wants to bite that bullet; it’s known you can get away with this in lower dimensions, along the lines of Hitchin’s construction of “Higgs bundles”. Whether or not this can be generalized to higher dimensions, someone ought to make a lower-dimensional toy model of Geometric Unity which *does* directly use a Hitchin-type construction. And this is just one of many lines of investigation arising from GU; there are also connections to supercritical string theory and Kirill Krasnov’s work on generalized spinors in 14 dimensions.

  13. Ilio Says:

    Martin Mertens #10: A tensor is a type of number whose structure helps to describe, interpret, or organize neural information flow (artificial, biological, or mixed). It is also rumored to have some applications in physics and other simple (non evil) problems.

  14. Martin Mertens Says:

    Thank you Scott and Ilio!

  15. Bob Says:

    Martin Mertens #10: A tensor is a multilinear map. It is often represented as a matrix, though this would be cumbersome for higher-order tensors.
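    To make the multilinear-map view concrete, here is a minimal numpy sketch (the matrix G and the order-3 array T are arbitrary illustrative values):

```python
import numpy as np

# An order-2 tensor as a bilinear map g(u, v) = u^T G v, represented
# by the (arbitrary, illustrative) matrix G.
G = np.array([[1.0, 2.0],
              [0.0, 1.0]])

def g(u, v):
    return u @ G @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
w = np.array([1.0, 1.0])

# Multilinearity: linear in each argument separately.
assert np.isclose(g(u + w, v), g(u, v) + g(w, v))
assert np.isclose(g(u, 3.0 * v), 3.0 * g(u, v))

# Higher-order tensors are the same idea with more slots; a matrix no
# longer suffices, but einsum contracts an order-3 array directly:
# T_{ijk} u^i v^j w^k.
T = np.arange(8.0).reshape(2, 2, 2)
contraction = np.einsum('ijk,i,j,k->', T, u, v, w)
print(contraction)
```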

  16. Johnny D Says:

    Ilio and gentzen #8 #9: I don’t object to quasiparticle. Ultimately these words help us to understand phenomena, but reality often requires us to look at the mathematical model to understand better. Words pale compared to the math description of reality.

    This conversation was accidental. I asked Scott why he used ‘simulate’ when referring to Google Quantum AI et al.’s observation of non-Abelian exchange stats. I thought he might respond with some subtle math argument that I missed. I should have asked everyone directly instead.

    To everyone, what do you make of this work?

    The importance to me is: the extended Church-Turing thesis is that reality is in BQP. Evaluating knot polynomials is BQP-complete. Ergo, if you accept the ECTT, in one way of thinking, everything is knot polynomials. We are all knot polynomials. In qc this is realized by braiding anyons. All math, well and good, but now we have a physical realization of this braiding phenomenon.

    This is a realization of holography, as these braids are in 3d spacetime. I always imagined that holography would be like science through the 20th century, where the math is so concise that fundamental theories fit on a t-shirt. Alas, the 21st century likely teaches us that no concise holography exists, but holography does exist. Enter the qcs!

  17. manorba Says:

    To me the difficulty about tensors was the fact that they are a very convenient and economical notation, but in reality they represent a bunch of distinct linear equations. Scalars were trivial, and vectors and forms required just a little more thinking. But tensors!
    I remember I had the classic enlightenment moment when, after days of struggling with the fact that the definitions and transformation laws were pretty clear but I still couldn’t really grasp the mechanism, I watched the pilot episode of the “General Relativity” season of the series “Continuing Education” by some “Prof. Leonard Susskind”. He just introduced the GR tensor and began to fiddle with some of the equations. Ten minutes later I was asking myself what my problem with tensors had ever been.

    Bear in mind that my school math never went beyond the parabola equation. I had to pick my scientific education back up by myself, well over forty, and by Cthulhu I was and still am rusty.

    By the way, did I read somewhere that tensors are hats on top of points?

  18. Bob Says:

    gentzen #9:

    Both terms are a problem if you’re trying to be precise. In computational physics we talk of computational models. This makes it clear that we are dealing with a representation of a phenomenon rather than the phenomenon itself. Maybe this is all that a simulation should mean: the evaluation of a representation. Emulation is usually used in the context of trying to imitate the behavior of a system on a platform different from the original.

  19. Danylo Yakymenko Says:

    Along with the refutation of automata-like theories of everything, science should drop the abstract idea that everything in the universe can be explained by the interaction of its tiniest parts. We should look for a bigger picture where the whole pattern makes sense, not just the bits of it. Quantum mechanics already suggests this, because measurements of parts of an entangled state correlate with, but do not influence, each other.

    By the way, a lot of introductory courses on quantum computing avoid (or postpone) the discussion of density matrices. I find that somewhat lacking, since the measurement of parts of entangled states is a very natural question that arises when you first learn QM. For example, people might ask what the state of a particular qubit is in the middle of a quantum computation.
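    For concreteness, the density-matrix answer to that question can be computed in a few lines of numpy (a minimal sketch for one Bell pair, nothing specific to any real device):

```python
import numpy as np

# State of one qubit of a Bell pair: take the partial trace of the
# two-qubit density matrix. The result is a mixed state, which is why
# density matrices are needed in the first place.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

rho = np.outer(bell, bell.conj())                    # full 2-qubit state

# Reshape indices to (A, B, A', B') and trace out qubit B.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)                      # maximally mixed state I/2
print(np.trace(rho_A @ rho_A))    # purity 1/2 < 1: not a pure state
```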

  20. Random person Says:

    Hey Scott, I thought you’d be the perfect person to ask about this, since I’m having an extremely difficult time understanding randomness but I’m completely open to changing my mind. This is not a challenge but rather an inquiry, since you have a lot more knowledge and education than me.
    I saw your smaller interview video with Nguyen (not the long-form lecture one), where you talk about the universe “generating randomness” and so on.

    Well, to me, randomness at a fundamental level, the idea that the universe is itself random beyond the limits of our knowledge, seems completely incomprehensible.
    I’m sure I don’t have to go to great lengths about this, since it seems to be a pretty common intuition in the physics community that everything should have an explanation; but in the case of randomness, this is not so. A random event is by definition unexplainable, since the physical stuff that produces the event does not contain the information about the event’s outcome; if it did, it would not be random. The common intuition is that everything is deterministic, like a video tape being played back, with no information or “physical stuff” deleted from or added to the process; but in the case of random events, you /have/ to add new physical stuff to the process to have a random event.

    My first question, though, is: how do you empirically prove that an event is truly random? If an event is random, there are by definition unknown things happening in the system, and as such you don’t have a complete account of the event, and thus no proof that it is random. And that’s the catch-22 of the whole thing, because you can say “well, that’s why it’s random,” but then the question remains.

    The second question is: why are you, and seemingly a lot of other physicists, satisfied with the statement that the universe has randomness/random events? Why isn’t there an intuition to ask how that randomness works, or why it’s there? Or: why is that the final answer, when it goes against all the other instances where physicists want to ask “why” and give a deeper explanation?

    I’ve also thought it could be my brain, and the way human brains always want an explanation for things; but then I would say that even if our brains are unable to understand it, we still lack the explanation either way, so it isn’t known either way. Saying our brains are unable to understand it isn’t an answer.

    Also, one last thing to close my own thought “loopholes”: with the Bell inequality experiments, have they also done the “inverse” of the inequality experiment, where they predict and measure /non-entangled/ particles to make sure they can always predict non-entangled ones as well as entangled ones? I’m sure they have, but I’m just asking for my own “database of knowledge,” and Wikipedia didn’t give any particular info.

    If that’s the case, then at least there is a very strong case that something true is going on, in the math and in reality.
    Thanks for reading, and I hope this isn’t too obnoxious, but I am /really/ struggling with this idea, to the point of going insane, as it is completely off the walls to me.

  21. JimV Says:

    We have had long discussions of determinism versus “free will”, in the sense that the free-willers seem to think they can overcome determinism by some sort of intrinsic magic, which I object to on the grounds that without determinism nothing can be determined/explained. Also, some determinists seem to think that determinism does away with responsibility, whereas I think that determinism is the basis of responsibility. The issue of whether some randomness makes sense is a new one.

    Many things seem random to us because of chaotic sensitivity to initial conditions, or lack of precise knowledge of the initial conditions which may have determined a result, such as coin flips and die tosses. In general, if we can’t precisely predict something and it has no discernible pattern, we call it random. For all we can prove, that might be the only sort of randomness that exists. I think Einstein may have thought so. (Not bad company to be in.)
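    That kind of apparent randomness is easy to demonstrate; here is a toy sketch with the logistic map (the initial conditions and step counts are arbitrary illustrative choices):

```python
# Deterministic chaos: two runs of the logistic map x -> 4x(1-x) from
# initial conditions differing by 1e-12 track each other at first, then
# diverge completely, so the outcome looks random even though there is
# no randomness anywhere in the rule.
def logistic_orbit(x, steps):
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-12, 60)

print(abs(a[9] - b[9]))                        # still tiny after 10 steps
print(max(abs(x - y) for x, y in zip(a, b)))   # order-1 divergence later
```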

    Personally I am comfortable with the notion that pure randomness exists, as long as it averages out to mostly deterministic behavior at the macroscopic level. That gets back to the question of why anything exists, and I think randomness is a possible answer to that. After all, starting with pure nothingness there would be no rules, no laws of nature, so something could emerge randomly. Having emerged, though, it would need consistent rules to continue to exist and evolve. Then the Anthropic Principle comes into play. The universe could perhaps be full of randomly emerging particles and forces which don’t interact with us at all. The ones we do interact with support our existence. For now, anyway.

    Jack Vance wrote a short story long ago about life in a completely random pocket of the universe (which the Earth wandered into). There was no determinism and the result was not pretty. I have felt grateful for what determinism we have ever since.

  22. Ilio Says:

    Gentzen #9, I (#8) shouldn’t speak for #0-2-4-6, but many related words are best interpreted as low-dimensional tensors (often as low as vectors) to/from which one can attribute/infer meaning based on one’s own set of experiences and training. For example, suppose on my left I can see a mesh of cellular automata emulating the same classical pendulum placed on my right; then I could use the simulation/emulation direction to classify and attribute order to what I’m seeing (is the right mimicking the left or vice versa, or neither, or both?). With that in mind, I read #0’s use of « simulation » as a way of saying that what matters is the logical qubit, whether it is perfect by itself or built by heavy error correction from imperfect qubits. So, just relax your kernels, and you’ll see #0 probably meant very little harm, if any, to your thought set.

    Johnny #16, in my book words are a mathematical description of reality, but I don’t know yet how to turn that into a useful contribution to Google Quantum AI et al.’s observation of non-Abelian exchange stats. Actually I have two questions: first, shouldn’t you say that reality is more likely in PSPACE (because it seems exponentially hard to simulate the boundaries of any black hole)? Second, what ideas make you say no concise holography « exists »?

  23. Scott Says:

    Random person #20: To answer your last question first, people have by now done basically every experiment you can possibly imagine involving at most a few qubits. And the results of such experiments are always completely 100% explained by quantum mechanics. Accept QM and none of it is a surprise; reject QM and none of it makes sense.

    Meanwhile, the whole point of the Bell inequality is that, if you want your universe to have even the appearance of being quantum-mechanical—never mind the “underlying reality”—and you also want it to be deterministic, then you pay a very steep price, involving nonlocal hidden variables that seem among other things to flagrantly violate the spirit of special relativity. And even once you pay that price, you get a theory with no extra explanatory power, one that’s empirically equivalent to just saying that QM is true and the measurement outcomes are random the way QM says they are, full stop. Given this, the overwhelming majority of those of us who work on these topics consider that price not worth paying (although a minority still does want to pay it).
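    For readers who want the Bell-inequality trade-off made numerical, here is a minimal sketch of its CHSH form (standard textbook angles, nothing specific to any one experiment):

```python
import numpy as np

# CHSH form of the Bell inequality: for the singlet state, the
# correlation of spin measurements along angles a and b is -cos(a - b).
# Any local deterministic model obeys |S| <= 2; QM reaches 2*sqrt(2).
def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2             # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))                      # 2*sqrt(2) ~ 2.828, violating |S| <= 2
```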

    Incidentally, as you might know, the Many-Worlders would say that the “randomness” of quantum measurement outcomes isn’t really “out there” in the world, but is just an artifact of our subjective experience of a deterministic but constantly-branching wavefunction. In other words, they treat it as not essentially different from the “randomness” that makes you you rather than some other person, which presumably you don’t have a problem with?

    Even if you reject the Many-Worlders’ account, I personally don’t see anything wrong with the basic laws of the universe containing a quantifiable stochastic component — it makes them no less “lawlike”! But if you do have a problem with it, then Many-Worlds might be the best way to reconcile yourself to the known facts of physics.

  24. mls Says:

    Random Person #20

    When you speak of a demand for explanation, you are speaking about a “trilemma.” One option of this trilemma is “foundationalism,” and this option is significant for theories/conceptions of truth. The other components of the trilemma are “circularity” and “infinite regress.”

    Foundationalism involves making stipulations about the use of symbols to indicate that particular symbols shall be interpreted as referring to an “individual,” however that may be intended. These stipulations are needed because we cannot see through each other’s eyes. If one were to make a declarative statement about the device you are using to read this, the truth or falsity of the statement would be determined in relation to this reference. This is called a correspondence theory of truth.

    Considerations like this lead to the notorious distinction between syntax and semantics in first-order logic. When semantics is deprecated, you obtain a constructive logic called Herbrand logic,

    https://www.cs.uic.edu/~hinrichs/herbrand/html/herbrandlogic.html

    Note, however, that truth is important if one perceives proofs as using truth to justify the syntactic transformations corresponding to steps in reasoning.

    First-order logic is not the only logic that uses a stipulation of this kind.

    Years ago, when Alexander Grothendieck passed away, David Mumford was asked to write an article for Science magazine about his importance for mathematics. Dr. Mumford describes his experience on his blog. The work that mathematicians have done involving logic is largely intractable. Grothendieck worked in a branch of mathematics called category theory. Largely because of William Lawvere, category theory develops logic in a manner different from first-order logic. Dr. Lawvere has a set theory different from the Zermelo-Fraenkel set theory claimed to be foundational for mathematics. And Dr. Lawvere has developed smooth infinitesimal analysis as a means to understand the real numbers differently from the classically developed real number system as a complete ordered field.

    Besides unfamiliarity with this mathematics, the science community is faced with the dilemma that their classical reasoning is “realist” in the sense of first-order semantics. Whenever category theorists attempt to challenge the commonly accepted Zermelo-Fraenkel set theory, they must confront the philosophical debates which portray the internal logic of toposes as “anti-realist.”

    Physicists abhor these debates. That they may even broach discussions in the physics community is because of the incompatibility of quantum mechanics with general relativity.

    But, the correspondence theory of truth is a high hurdle.

    In part, the issue with physics is that the “identity” of first-order logic is different from “identity in time.”

    Non-truth in propositional logic is in opposition with truth.

    Existence in first-order logic is in opposition with contradiction.

    Unfortunately for the internal logic of toposes, the intuitionistic character of its logic still involves an intolerance for contradiction.

    The problem for physics is to understand a similar dichotomy for “persistence.”

    Completed infinities enter physics through the law of inertia. Pedagogically, this law is introduced through Newtonian mechanics with two cases: “at rest” and “in motion.” Insofar as it relates to purely deductive reasoning, the deduction uses the relativity of space with respect to rectilinear motion. When the theory of relativity transforms the opposition of “at rest” and “in motion” into the idea that “everything moves,” one is effectively declaring that zero and non-zero are “the same.” And, because “at rest” and “in motion” in the initial pedagogical explanation are described with an implicit “forever,” the analysis of “identity in time” as applied in physics carries an essential contradiction from the vantage point of logics which portray contradiction as fatal.

    This is why your innate sense of logic and reasoning has such difficulty with the evidence for quantum mechanics.

    I pointed out above how there are logics different from first-order logics which use the same syntactic stipulations as first-order logics. Among them is negative free logic which contains a logical principle called the indiscernibility of non-existents. For reasons similar to those above, analytical philosophers would find this logic problematic. But, there is some history to be considered.

    Thoralf Skolem had been a finitist in certain respects. Finitists speak in terms of “potential infinities.” Technically he was a semi-intuitionist, with ideas arising from the influence of Poincaré. Poincaré argued as an intuitionist on the basis of how mathematical induction yields knowledge claims concerning an unrealizable number of cases. One equivalent formulation of induction is that every nonempty set of natural numbers has a least element.

    Naturally, with a fixed labeling, that least element could be any natural number.

    Or, with the indiscernibility of non-existents in mind,… Poof! an initial finite sequence of numbered cases “vanishes” to the oblivion of non-existence.

    I am hoping that this sounds like the Copenhagen interpretation to you.

    By contrast, insistence upon the principles of first-order semantics will drive one to the modal semantics of Kripke frames. Or, in the parlance of physics, “many-worlds,” or “string landscapes.”

    For what this is worth, Dr. Aaronson did invite a discussion of paraconsistency in the past. Although tentative, this may be a means of understanding why it is applicable.

    If physics is telling us that the universe uses a different logic, then we ought to consider it.

    I hope this gives you some idea of what may lie behind your consternation.

  25. Random person Says:

    Scott #23: Thanks for the reply and thanks for being so explicit.

  26. Johnny D Says:

    Ilio #22: The argument for reality being in BQP is that in any finite volume of space there are finitely many degrees of freedom, so they can be mapped to a finite number of qubits. The interactions all have finite rates over some minimal distance, so the state’s complexity should evolve at most linearly in time. (Note that I said IF you believe this. I did not say I believe it in a philosophical sense, but I believe it in all practical situations!)

    How concise holography can be is still being worked out. I use ‘likely’ as my estimate that the result will not be concise. It is also fun to imagine. The t-shirt would say, “My ai/qc hybrid system calibrated a TOE and all I got was this lousy t-shirt.”

  27. JimV Says:

    “…then you pay a very steep price, involving nonlocal hidden variables that seem among other things to flagrantly violate the spirit of special relativity. And even once you pay that price, you get a theory with no extra explanatory power, one that’s empirically equivalent to just saying that QM is true and the measurement outcomes are random the way QM says they are, full stop.”

    Except (minor quibble) for the stated position (at one time) of some such as Dr. Sabine Hossenfelder, that there could conceivably be another way that does not pay that price and adds some explanatory power. Which does not mean that she is correct, just that the above statement may not be all-inclusive. However I am not sure that she still considers her initial position (chaotic sensitivity to detector settings) to be viable. Last I knew though, she did not believe that sufficiently sensitive, very-low-energy, repeated experiments had been done to rule that out.

    It seems to be too late for any reconciliation of the differing views, as everyone is too busy concentrating on other things. I acknowledge that the consensus is as Dr. Aaronson has stated it, and that is the way a layperson should bet.

    (As always, a moderator might well decide this comment is not worth publishing without any justifiable complaint from me. If anything, I think moderation is generally too accepting at this site.) (Not heretofore in this thread though, with the possible exception of my previous comment.)

  28. Scott Says:

    JimV #27: If you want both determinism and the appearance of QM, then you can either pay the billion-dollar price of nonlocal hidden variables, or you can pay the quadrillion-bajillion dollar price of the sort of superdeterminism that Sabine wants. Whereas QM is available to you for the low, low price of understanding that “determinism” applies only at the level of the wavefunction, not the level of observers’ experiences. 😀

  29. Ilio Says:

    Johnny D #26, Yeah, QM is good evidence for reality in BQP, but my question was more like: is it wrong to interpret Harlow-Hayden 2013 as evidence for reality in PSPACE? If yes, why? If not, why QM over GR? I got that you may be more interested in practical situations than in pulling hairs out of black holes, but hey: « Shut up and don’t calculate black hole radiation! »… you should definitely make t-shirts. 🙂

  30. Benjamin Says:

    Scott, anything to say about this nerdphobic shit in the Atlantic? https://www.theatlantic.com/ideas/archive/2022/11/economics-sexual-harrassment-women-sexism/672239/

  31. Johnny D Says:

    Ilio #29: I don’t understand how a black hole interior can be entangled with Hawking radiation. I subscribe to the view that GR is thermodynamics. In particular, BHs must have many dof. Quantum systems decohere when they interact with many dof. Ergo quantum systems decohere when they are at the center of a BH. I understand that you can ignore this decoherence and do lots of math, but I don’t understand how it could be physical. So to me HH doesn’t affect my view of reality. BHs can no more be entangled than neurons in my brain. Am I missing something?

    If you assume spacetime is discrete, it is, in any finite volume, finite. This finite set is where quantum fields can exist. Assume you smash 2 particles together with gargantuan energy. There will be only a few spacetime points around this. A story that makes sense to me is that these few points cannot form a BH before the energy scatters. BHs require many dof.

    An analogy: one gas atom in a box still has an average kinetic energy, so maybe a temperature. The box has a volume, so you have everything for the ideal gas law except pressure. To define pressure in this case, you have to average over long periods of time. With many particles, essentially no time is needed. A BH likewise requires some time to form if it has few dof. So the large energy of 2 particles won’t work, as the energy won’t hang around long enough to form a BH. BHs are not in superpositions any more than other systems of many dof, like people, planets, etc.

    My bet is that quantum gravity doesn’t exist in any meaningful way. QFT in 4d spacetime also doesn’t exist in any exact way. However, holography saves the day as now you are in 3d spacetime with no gravity! And, of course, here, we are all knot polynomials.

  32. Scott Says:

    Benjamin #30: Sorry, I’m going to decline to comment, especially since I have no firsthand knowledge of what things are like in the economics profession.

  33. Ilio Says:

    Johnny D #31: > Am I missing something?

    Maybe the inherent tension between the idea that QM applies and the idea that a black hole « interior » can decohere without entanglement with Hawking radiation. If the latter is true, then BH evolution is not unitary, and then QM is wrong… which would be very interesting, but then we would no longer have any reason to believe spacetime is discrete.

  34. Jump Rope Robot Says:

    Scott 32:

    It’s no coincidence that you never put your neck out for incels, or for men’s issues more generally. You “decline to comment” on any of society’s attacks on our existence. You pretend to care but you’re actually an incelphobic asshole. I hate my life.

    JRR

  35. Get Vaxxed and Save Lives Says:

    Scott, not trying to sneer here, but do you really have “nothing to say” about women being harassed in academia? You’ve taken political positions on so many issues outside your expertise, but not the sexual harassment and abuse directed against female academics? There is no “moral ambiguity” here. It’s a cut-and-dry issue of protecting women vs. misogynistic sexist assholes. Maybe during the 1930s you’d have nothing to say about segregation in the south and the KKK either, because it’s outside your expertise.

  36. Scott Says:

    Annnnnnd … comments #34 and #35, in combination, have now perfectly, unintentionally illustrated why I decided to sit this one out! 🙂

  37. Welcome to jurassic park Says:

    Scott, standing up for marginalized groups, while they’re oppressed by society, has historically been a dangerous proposition. You’re sympathetic to the incels, but you refuse to stand up for them, because you’re afraid of “social backlash.” In simple words: you’re a coward. There hasn’t been a worthwhile cause, in history, that didn’t risk social backlash. Incels are suffering every day and you do nothing—because you’re scared of r/sneerclub or twitter or whatever. It’s so pathetic. You have no spine and no integrity. Not trying to be mean, I’m just sick of this shit.

  38. Scott Says:

    Welcome to jurassic park #37: Do you still not see the problem? For most of the educated world, sexually harassed women are the relevant “marginalized group” here, the one for whom we all need to stand up courageously. And … they’re right! Sexual harassment really does derail careers and ruin lives. I know this from countless female friends and colleagues. I’m determined to give it no quarter or sympathy.

    The trouble is, the people who point to the vast ocean of suffering of the well-meaning, nice, and nerdy, who say that no civilization like ours can long survive if it treats a large and growing fraction of its male population (and a smaller fraction of its female population) as fundamentally losers, creeps, disposable, and undateable—those people are also right. I know this as well from the experience of countless friends and colleagues, and of course from my own narrow escape, in which I leaned heavily on the intellectual world to reassure me that my life had value, even as the social world seemed to be telling me that it had none.

    Granted, it might be impossible to find a solution that both

    (1) upholds female agency and autonomy, “no means no,” as the inviolable absolutes and moral triumphs of liberal modernity that they are, and also

    (2) socializes nerdy guys so that they can succeed in a brutal dating market for which nothing in their nature or upbringing prepared them, and in which they can either initiate and risk being called creeps, or else not initiate and die alone.

    But it’s definitely impossible if we don’t at least see the contours of the problem!

    And so, much like with the Israeli-Palestinian conflict, I’m determined to acknowledge the correctness of both moral claims even if doing so rips me in two. Even if one side calls me a sellout and the other side calls me a Nazi. This is the “cowardice” of which you accuse me.

  39. Andrei Says:

    Scott,

    „If you want both determinism and the appearance of QM, then you can either pay the billion-dollar price of nonlocal hidden variables, or you can pay the quadrillion-bajillion dollar price of the sort of superdeterminism that Sabine wants.”

    The problem is that with randomness you cannot have a local theory. It is logically impossible for two genuinely random events (the A and B measurements in an EPR setup) to be perfectly correlated. The only way you can have randomness is to postulate that one of them is random (the one which comes first – and in a relativistic context, deciding which is first is not possible) and the other one is determined by the „first” through a superluminal signal. That’s it.

    The excuse provided by the non-determinists is that such a signal cannot be used to send a message. True, but irrelevant. Such a signal is still incompatible with relativity. Like it or not, the only way to consistently combine relativity with QM is superdeterminism.

    I don’t think that Hossenfelder does a good job in promoting superdeterminism. Honestly, I don’t really understand her models. She also claims that she is not a realist, that superpositions are real, etc., which seems to be at odds with superdeterminism.

    I think the correct way to approach superdeterminism is to ask what is the minimal condition that would be required. And the answer is simple: long-range interactions between the experimental parts. If such interactions between the source and detectors exist, their states would become correlated. Do we already have such theories? You bet. Classical electromagnetism is one. General relativity is another one. Fluid mechanics is another. If you measure the velocities of two interacting objects (the distance between them is arbitrary), you will see they are correlated (the vectors will be coplanar, for example) even if the measurements are space-like. There is no reason to believe that Bell tests are any different.

    You may agree that accepting Maxwell’s theory or GR does not require a „quadrillion-bajillion dollar price”.

    It’s worth noting that cellular automata are used to simulate field theories, like classical EM and GR, so such models can’t be ruled out by Bell’s theorem either. I don’t think Wolfram understood that, but ’t Hooft did. His model is presented here:

    Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy

    https://arxiv.org/abs/2103.04335

    If you (or Tim Nguyen) find some time, you may try to write a rebuttal of his model.

  40. Andrei Says:

    JimV,

    “Personally I am comfortable with the notion that pure randomness exists, as long as it averages out to mostly deterministic at the macroscopic level.”

    Such pure randomness leads directly to non-locality, as the EPR-Bohm setup shows.

    If you want locality, then a measurement at A should not change B (the A and B measurements are space-like separated).

    QM tells us that the state of B after the A measurement is a spin eigenstate (if A got spin-UP, B would be spin-DOWN). But if the A measurement did not change the state of B, what was the state of B before the A measurement? Obviously spin-DOWN, otherwise it changed. So B was spin-DOWN all along, and the superposed/entangled state only reflected our lack of knowledge regarding the true state of the particle.

    So, locality is incompatible with randomness.
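
    For what it’s worth, the standard counterpoint (Bell’s theorem) is easy to check numerically: predetermined values reproduce the perfect anticorrelation at equal settings, but no assignment of predetermined outcomes can match the quantum correlations across several settings. A minimal Python sketch of the CHSH version (illustrative only; the angles are the textbook optimal choice):

```python
import math
import itertools

# Local hidden variables: each particle carries predetermined outcomes
# (+1 or -1) for each of two measurement settings. Enumerate all
# assignments and compute the CHSH combination
#   S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
best_lhv = max(
    abs(A0*B0 - A0*B1 + A1*B0 + A1*B1)
    for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4)
)
print(best_lhv)  # 2 -- the CHSH bound for any local hidden-variable model

# Quantum singlet correlations E(a,b) = -cos(a-b) at the standard
# optimal angles reach magnitude 2*sqrt(2), Tsirelson's bound.
E = lambda x, y: -math.cos(x - y)
a, ap, b, bp = 0.0, math.pi/2, math.pi/4, 3*math.pi/4
S_qm = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(round(abs(S_qm), 3))  # 2.828
```

    At a single pair of equal settings the predetermined-value story works perfectly, which is exactly the EPR argument above; the contradiction only appears once multiple non-commuting settings are compared.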

  41. Random person Says:

    mls #24: That’s very interesting thanks, but I must admit it will take me a while to understand all of that fully.

    I have thought about things like infinite regress, but in a much simpler way. For example, our brains seem to see things as objects, and then we have transformations and interactions we can apply to those objects: we can visualize a cube, transform the cube into a sphere, rotate the cube, or try to combine shapes so that they fit. This even seems to be how the brain physically works, in that there are dedicated neurons and pathways in the brain that represent different objects in the world and the different mental properties of those objects.

    But the problem with infinite regress is that whenever we have a mental object or a physical object, we can always apply all of these properties to it. We have no way of mentally visualizing an object that cannot have something outside it, or that can’t be built of smaller objects, or that can’t transform into something else. So in that sense, “the smallest possible object” in the universe can always contain smaller objects, and there can always be something outside the universe. Whether physicists tell me the universe is self-contained and infinite, or the smallest scales are made of small parts, or spacetime is smooth and continuous, none of those explanations stops my brain from asking more questions. I can’t envision a smallest possible object that is not made of smaller objects, and I cannot visualize a smooth and continuous “infinitely small” space either. Nor a universe that is infinite, or a universe that is finite. At this time I consider that a flaw of my brain, not of physics or the universe. It just took me a long time to get here mentally, as I spent considerable time thinking about all of these things and only recently realized the shortcomings.

    So when I posted my earlier thing about an explanation, I could feel that this too was a byproduct of how my brain thinks. The cool thing, though, is that science works even without this. Even if these mental concepts aren’t enough, the math works, and the science and technology either work or they don’t, because otherwise the universe would not allow us to do the things we do. So I’m just going to let the professionals do their thing and stop thinking about this for a while. It’s very interesting though, and thanks for your reply too.

  42. Johnny D Says:

    Ilio #33: The usual study of unitary evolution of black holes makes the following assumption: when the counterpart of the Hawking radiation hits the center of the black hole, it doesn’t decohere. With this assumption you can imagine collecting Hawking radiation in a coherent way and get things like firewalls and HH. If the system decoheres, then the collection of the radiation will happen on a branch of the universe where only info consistent with the decoherence exists. There is still no info lost or lack of unitarity; it is just that the harvested radiation is not entangled with the interior.

    If you assume that black holes have to have many dof and they behave as all other systems of many dof, no HH, no firewalls, no info lost, no prob. This is my assumption.

  43. mls Says:

    JimV #27

    You may find the paper about Einstein’s personal view of general relativity at the link,

    http://philsci-archive.pitt.edu/9825/1/Lehmkuhl_Einstein_Geometrization.pdf

    to be of interest.

    The SEP entry on Einstein’s philosophy of science observes that Einstein would have subscribed to a view of individuation corresponding with set diameters converging to zero. According to such a view, non-locality is counterintuitive. What is in fact the case for classical real analysis is the statement of Cantor’s theorem for nested *closed* sets of vanishing diameter.
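
    To make the reference explicit (standard statement, my paraphrase): Cantor’s intersection theorem for a complete metric space says that if \(C_1 \supseteq C_2 \supseteq \cdots\) are nonempty *closed* sets with \(\operatorname{diam}(C_n) \to 0\), then

    \[ \bigcap_{n=1}^{\infty} C_n = \{x\} \quad \text{for a unique point } x. \]

    The closedness hypothesis is what ties the individuation-by-shrinking-sets picture to a classical topology.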

    This, then, carries a presupposition of the classical definition for a topology. According to the first-order paradigm, topology is a higher order discipline for which advocates of the first-order paradigm claim there are no effective logical semantics.

    I suspect that Lehmkuhl’s thesis, that Einstein rejected interpreting general relativity as “geometrization,” is correct, because I once ran across an anecdote of Einstein having a similar attitude toward von Neumann. Physicists remind mathematicians that physics is not mathematics with the same ferocity that first-order logicians remind ordinary mathematicians not to confuse an individual, say \(a\), with the singleton {\(a\)}. This is a typical ambiguity in topology textbooks.

    Probabilities related to the mathematics of physics arise relative to the historiography of mathematics, because Cantor’s transition from the study of trigonometric functions to that of transfinite arithmetic occurs with the identification of sets of uniqueness. Cantorian sets are not necessarily comprehensionalistic in the sense of logicism. William Lawvere has written about this when describing coherent toposes. When Cantor abandoned the study of sets of uniqueness, it was resumed by the French semi-intuitionist Henri Lebesgue. It is from his work that the measure-theoretic account of probability theory arises.

    Descriptive set theory is more closely aligned with the French semi-intuitionists than is the commonly taught Zermelo-Fraenkel set theory.

    Dr. Hossenfelder questions interpretations of Bell’s theorem on the basis of the probabilistic assumption in Bell’s work. This may, in fact, be related to how the axiom of choice is responsible for non-measurable sets in classical real analysis. Correspondingly, one may study models of Zermelo-Fraenkel set theory in which every set is measurable. And, the axiom of choice may simply be denied by assuming the axiom of determinacy, which is motivated by the study of games with strategies. Along similar lines, probability is related to the status of the continuum hypothesis through Freiling’s axiom of symmetry,

    https://en.m.wikipedia.org/wiki/Freiling's_axiom_of_symmetry

    Before Dr. Hossenfelder moved to Patreon, she reprimanded a commenter on how “the law of identity” is “basic logic.” This is simply not the case. On its own, Tarski’s semantic conception of truth does not entail the necessary truth of reflexive equality statements.

    Because physics is described with mathematics, how physicists understand mathematics may very well influence the claims they make.

  44. Ilio Says:

    Scott #38, you do realize #30, 34, 35 & 37 is most likely the same person, right?

    Johnny D #42, no problem, but again I think you’re missing that this assumption actually means « ordinary QM doesn’t describe black holes ».

  45. Scott Says:

    Ilio #44:

      Scott #38, you do realize #30, 34, 35 & 37 is most likely the same person, right?

    (sigh) if anyone still doesn’t understand why I blog a lot less than previously…

  46. Johny D Says:

    Ilio #44: Thanks for the conversation. I am sure you are correct when you say ordinary QM. I need to go through everything carefully.

  47. Adam Treat Says:

    Scott #28,

    Re: superdeterminism and “… you can pay the quadrillion-bajillion dollar price”

    How sure are you that any-and-all superdeterministic theories incur such a price? Consider the Invariant Set Theory (IST) proposal by S. Hossenfelder and T. N. Palmer: https://arxiv.org/abs/1912.06462

    “More specifically, IST is a deterministic theory based on the assumption that the laws of physics at their most primitive derive from the geometry of a fractal set of trajectories, or histories, IU, in state space. States of physical reality – the space-time that comprises our universe and the processes which occur in space-time – are those and only those belonging to IU; other states in the Euclidean space in which IU is embedded do not correspond to states of physical reality.”

    In my very primitive understanding, what they have shown here is that a superdeterministic theory does not necessarily require immense conspiracies, and it is still possible that a superdeterministic theory could be written down on a single blackboard. Fractal theories produce complexity, but the mathematics can be very beautiful and simple, no?

    I’m assuming the “quadrillion-bajillion dollar price” you speak of involves the immense conspiracies in the initial state of the universal system that is often ascribed to such theories, but doesn’t this lone paper provide a counterpoint that such prices are not *always* required?

  48. Triceratops Says:

    Great podcast Scott! The “more technical than an average podcast, less technical than an actual course” audience is definitely not an empty set — although it seems this middle ground between popsci and grad school doesn’t get as much traction online as flashier, short-form science communication. You did a great job striking that balance.

    I particularly liked your explanation of qubit superposition as “not AND, not OR, but a secret third thing”, framing superposition as an ontological category of its own.

    As for this comment section? I beg you to ignore the trolls, and consider the nuclear option when bottom-feeders like #34 and #35 crawl out of the muck. Your energy is better spent elsewhere — like appearing on more podcasts!

  49. Scott Says:

    Adam Treat #47: No, the Hossenfelder-Palmer thing still involves what, to my understanding, is an utterly insane conspiracy in the initial conditions … and this is not some technical problem, but inherent to what they’re trying to do. But there’s then a gigantic pile of fancy language about p-adic numbers, chaotic attractors, and phase space that they would say makes the conspiracy in the initial conditions no longer matter, because you’ve adopted such a radically new perspective, and that I would say just obscures the fact of the conspiracy. In any case, as long as all experiments continue to confirm QM, I don’t see where you go from here: the whole enterprise strikes me as scientifically a complete dead end.

  50. SR Says:

    Scott, if it’s any consolation, I feel like the critical comment #115 on your previous blog post was also likely written by the same troll. Compare the wording there (“I’m not “sneering” here, but it sounds to me like you do nothing and just get paid lol.”) with the wording here (“Scott, not trying to sneer here, but do you really have “nothing to say” about women being harassed in academia?”).

    To the troll– with the deepest respect, please go away and do something better with your life. You clearly are suffering, and I’m sorry for that, but what are you hoping to accomplish with any of this? Your comments are so caustic that any sympathy people had towards incels to begin with starts to wither away. If you just want an outlet for your anger, there are better places on the internet that could serve that function. And if you really want to find a partner, note that this kind of anger and resentment is not an attractive quality. Work on yourself instead of caring so much about society. Maybe things are unfair, and that is sad, but you still have it far better than most people worldwide today, as well as most people in history. Don’t squander that privilege.

  51. JimV Says:

    It seems to me the cause of science is incrementally advanced whenever two good scientists reach agreement on a debated point of fundamental interest. Dr. Aaronson strongly believes, as stated in this thread, that a huge price must be paid to abandon randomness in QM, and that no new predictive ability could possibly result. Dr. Hossenfelder has stated, in blog posts cited by me in a previous thread on this site, that it is conceivable that no price needs to be paid (i.e., no need for a conspiracy), and that new predictions, in the form of correlations not now known to QM, might result from a series of experiments designed to explore the possibility.

    I have been convinced that Dr. Aaronson is so firm in his conviction that he will not put any time and energy into explaining in detail to Dr. Hossenfelder how and why he finds her position incorrect. Which might well be justified by the relative importance of his other efforts. As a long time enjoyer of both blogs, though, it is discomforting to me, but I will have to live with it, along with other problems I have.

    I thank mls for his contributions and will look at his links.

  52. Scott Says:

    JimV #51: I indeed might eventually have no choice except to study the Hossenfelder/Palmer paper and blog about it in detail! But it will be a sad and thankless task. Experience has shown that one can make as much progress arguing with superdeterminists as one can with JFK conspiracy buffs: if you expect a meeting of minds, then you expect something that will not happen.

    In the meantime, the one point on which I need to correct you is the framing of this as some sort of disagreement between “Dr. Aaronson” and “Dr. Hossenfelder.” No, it’s a disagreement between

    (a) the entire community of thousands of people who understand Bell’s Theorem—a community for which, because of this blog, I sometimes serve as reluctant spokesperson—and

    (b) well, Sabine, Tim Palmer, Gerard ‘t Hooft, and maybe 2 or 3 others.

    In other words, this is not a 1:1 debate, but 100:1 or 1000:1. Speaking for myself, if I were outnumbered 100:1 or whatever among experts, I wouldn’t even consider making my case to the wider world before I’d first convinced more experts.

  53. Adam Treat Says:

    Scott #52,

    Well, it wouldn’t be entirely thankless. I would certainly welcome (and thank!) your considered reading of Dr. Hossenfelder’s papers, and it seems I’m not completely alone. In the interest of making it constructive, I’d encourage you to start with these two papers, as they are more detailed (i.e., they contain math, haha) and the toy-model paper specifically tries to address the conspiracy critique, which seems to be the biggest thorn for you in superdeterministic theories.

    1) “A Toy Model for Local and Deterministic Wave-function Collapse” -> https://arxiv.org/abs/2010.01327

    2) “A Future-Input Dependent Path Integral for Quantum Mechanics” -> https://arxiv.org/abs/2110.07168

    It seems the purpose of these papers is to at least provide a framework for describing a superdeterministic theory that does not involve any grand conspiracies in the initial state of the universe. Whether she succeeds or not I have no idea, but that seems to be the claim:

    “It would be possible to quantify the predictability of our model, but this is both unnecessary and uninstructive because any such quantifier would be arbitrary anyway. We have instead compared our model to standard quantum mechanics and demonstrated that they come out in a tie. Our toy model, hence, is not any more or less fine-tuned than ordinary quantum mechanics.”

    Of course, one could ask what *benefit* does the model have to extending or making predictions *beyond* QM. The answer with this model at least is nothing whatsoever. But that’s not the point of the model. Does it succeed in its limited goals? Would love to hear your thoughts…

  54. Topologist Guy Says:

    SR:

    How do you know that this troll (assuming that there is a troll) is an “incel”? Reading the comments in question (namely 35 here, 115 on previous post) they strike me more as a SneerClub-type woke just trying to make Scott upset. Why would an “incel” troll be attacking incels and MRAs in Scott’s comment section? It doesn’t make sense.

  55. mls Says:

    JimV #51

    If the Einstein paper does instill any interest, then, at least, I ought to offer a little more guidance about what I am trying to convey.

    Relative to the standard teaching of the real numbers as a complete ordered field in the context of first-order model theory one has the result that real closed fields are decidable, but that the extension with unrestricted trigonometric functions is undecidable.

    https://en.m.wikipedia.org/wiki/Decidability_of_first-order_theories_of_the_real_numbers

    https://en.m.wikipedia.org/wiki/Richardson%27s_theorem

    For foundational studies based upon arithmetization, the reverse mathematics initiated by Harvey Friedman and Stephen Simpson provide certain weakenings of second-order arithmetic to investigate certain classical results from analysis. What are weakened in these systems are axioms relating to mathematical induction. A very weak notion of infinity is that of Koenig’s lemma.

    The system they describe with arithmetical comprehension can prove Koenig’s lemma. The system they describe with recursive comprehension cannot. By augmenting recursive comprehension with the weak Koenig’s lemma, they obtain an intermediate system. You can find a tabulation of these in the link,

    https://en.m.wikipedia.org/wiki/Reverse_mathematics

    The distinction here is the difference between the complete binary tree and the full binary tree. A full binary tree only admits finite strings.

    To be somewhat precise here, the singular notion of a tree in both cases assumes an empty string. If one considers an empty string to be a platonic abstraction, one is speaking of forests with only two trees, distinguished by different symbols for roots.

    One may compare a Stern-Gerlach apparatus with the full binary tree composed of finite strings. Susskind and Friedman describe such a Stern-Gerlach experiment as their motivating example in the theoretical minimum series. And, they point out that these experiments are statistically correlated with trigonometric functions (cosines).
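
    To spell out that cosine correlation (the standard spin-1/2 statement, given here only for concreteness): for a particle prepared spin-up along one axis and measured along an axis at angle \(\theta\) to it,

    \[ \Pr[\text{up}] = \cos^2(\theta/2), \qquad \langle \sigma \rangle = \cos\theta. \]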

    Clearly, no sequence of experiments of infinite extent can be performed. So, correlation with classicality must be an “inductive generalization” in the sense of traditional logic.

    Setting aside the presumed ontology of real numbers, an epsilon-delta proof may be understood as a game between players with a winning strategy. Defensibility of a statement is different from truth grounded upon an ontology.

    I see many discussions about quantum strangeness grounded upon probability theory originating from the axiomatization using sets given by Kolmogorov. My questions arise from a different perspective.

    I am not a physicist. Without question, I could be dead wrong.

  56. SR Says:

    Topologist Guy #54: I believe it was established with reasonable certainty that there was a troll who was bothering Scott in multiple comments sections on previous blog posts, and that this troll was typing comments from both “woke” and “incel” perspectives. He admitted previously that the woke comments were trolling, but (as far as I know) never made a similar admission about the incel comments. I thus think that it is more likely that he is a genuine incel, but I could be wrong.

    I think the same troll wrote comments #34, 35, and 37 here. Note that these comments have a mix of woke and incel talking points, were all made near the same time, all are written by people with somewhat strange usernames, all contain certain stylistic similarities (note the similar use of quotation marks in all 3: “decline to comment”, “nothing to say”, “social backlash”). Note also that comment #34 here and comment #115 on the previous post share the feature that they end with the commenter signing off with their username. Combining this with the other piece of evidence noted in my previous comment, and the recent history of this blog, I am reasonably certain that these were all written by the same troll from before.

    As to your question of why an incel would write trolling comments from a woke perspective, here is my best guess. Scott is predisposed to disagree with certain kinds of woke rhetoric, namely that which attacks Enlightenment values or demonizes nerds. The incel recognizes this, and so riles up Scott by writing such fake woke screeds. His hope is that if he does this enough, Scott will become totally disillusioned with defending progressive values and will think that incels are more threatened and deserving of his compassion and advocacy.

    This is not far-fetched, btw. I have seen several instances of conservatives on Twitter delighting in making up fake woke talking points for amusement, deception, or in order to point out the conclusion of (what they perceive to be) a slippery slope.

  57. Topologist Guy Says:

    SR:

    Interesting points. Re-reading the relevant comments, I think you’re probably correct in inferring that they’re penned by the same individual. But I also recall that the incel troll who bothered Scott a couple months back made amends with him and promised not to troll the blog again. Didn’t he post here apologizing and begging forgiveness? It just seems more likely to me that this is somebody else. Surely checking the IP addresses could settle this?

    Scott:

    Have you considered an IP ban to deal with some of these more persistent trolls?

  58. SR Says:

    Topologist Guy #57: It’s certainly possible that it is a different individual from the one who trolled the blog a few months ago. This was probably not the best use of time, but I am incorrigibly nosy when it comes to certain things so I tracked down the sequence of events:

    * The original troll did apologize for his behavior initially (https://scottaaronson.blog/?p=6546#comment-1941644)
    * He afterwards started trolling again (e.g. https://scottaaronson.blog/?p=6576#comment-1942273)
    * He semi-apologized a second time after that (https://scottaaronson.blog/?p=6576#comment-1942310)
    * He then requested that Scott spend more time on incel issues despite his admitted earlier promise not to comment on the blog (https://scottaaronson.blog/?p=6635#comment-1942381)
    * He then apologized fully (https://scottaaronson.blog/?p=6645#comment-1942551).

    The fact that he broke his promises twice before makes me think there is still a reasonable chance that he was responsible for subsequent trolling as well. But I could be wrong.

    Funnily, we also had a similar exchange on a post last month, where I claimed a commenter there was the same as the original troll (https://scottaaronson.blog/?p=6778#comment-1943438). I stick by that analysis, although I am not as confident in its accuracy as I am in the comment I made above. In addition to the points listed there, note that the troll on that post and the troll at the first link above both claim to be 27 yrs old. The other details are slightly different (e.g. one claims to be a virgin; the other to have had one experience with a prostitute) but that might just be a difference in what is considered to “count”.

    Finally, the current troll was also the one who commented here (https://scottaaronson.blog/?p=6813#comment-1943968) by again comparing the wording (“I really don’t want to come off as a “sneerer” here”) with what I pointed out above in comment #50. I suspect the trolling in between that and the previous post was then also the current troll.

    Combine all of this with the presumably-low base rate of self-identified incels who read TCS blogs. I would then estimate the true number of trolls over the past few months is somewhere between 1 and 3. If I had to give my personal probabilities, I would put a 75% chance on 1, 20% on 2, 4% on 3, and 1% on 4+.

  59. Johnny D Says:

    The future: A giant solar panel orbits the sun. In its very cold shadow exists an enormous computational structure. The structure is a cubic lattice made of tubes of grids of scc. The interior of this cubic structure fits a gazillion silicon chips and controllers.

    The tubes themselves are grids of scc, all controlled locally, which support anyons that are not tied down but float through the global structure. These anyons are qubits that are for all practical purposes eternal (any local hardware failure can be fixed without disturbing the qubits) and, although only so many can occupy any one area of the grid, they are in an all-to-all interaction topology.

    The structure contains gazillions of anyonic qubits. What world can be simulated??? Someone prompt GPT-3 to write a science fiction story set inside this world!

    Thank you google quantum ai et al!!!

  60. LK2 Says:

    Dear Scott, in a previous comment, I stated that I did not understand your “wasting” this year at OpenAI.
    Well, after reading (not even all) your transcript here, I must say that now I understand.
    And you were right. Apologies.
    LK.

  61. fred Says:

    The difference between existence and simulation is in the eye of the beholder, whether the conscious observer is inside or outside the system.

    As soon as we create a game of life type of simulation that’s large enough to implement consciousness (assuming consciousness is the result of information processing), then, for those creatures there’s no simulation, only reality.

    Another partial illustration of this is when you (in the US) and your friend (in Europe) both put on VR headsets and then play a game of virtual ping-pong with one another (with haptic feedback that recreates the nuances of hitting the ball). There’s an immersion threshold beyond which the two brains get fooled, and the simulation becomes reality.

  62. fred Says:

    Scott, I really enjoyed listening to you in this discussion.
    I think that because you’re a professor and you’re so used to writing papers, when you write things down, it tends to be a bit too compressed for the general public.
    But when you’re talking (and given enough time), you’re really great at explaining things even better (also little touches of humor and sarcasm come through very nicely). Really excellent, thanks!

  63. fred Says:

    Scott #23

    “they treat it as not essentially different from the “randomness” that makes you you rather than some other person, which presumably you don’t have a problem with”

    But if you imagine for a moment that, tomorrow morning, I’d wake up as you and you’d wake up as me, it would be a NO-OP because, as you, I would only have access to your memories, and, as me, you would only have access to my memories, so we wouldn’t notice anything different. And for all we know, that swapping is constantly happening, a million times a second, between all the conscious beings in the universe (maybe across space, time, and metaverse branch).
    Of course it all depends on what we mean by “me” and “you”. But it’s clear that the idea of some sort of core identity (aka a soul) being “me” independently of my brain would be ruled out in that picture.
    Another interpretation is that there’s really only one universal consciousness (we’re everyone all at once… or we’re constantly being reincarnated into everyone, every millisecond, but we have amnesia).
    An analogy is a CPU+registers that runs a single thread of execution (the consciousness), and multiple processes can run (apparently) independently with their own state and memory and (apparently) simultaneously, but in reality they all take turn to run on the CPU and its registers for a very tiny time slice, and during that time the CPU really thinks it “is” that process.

  64. WA Says:

    Adam Treat #53:

    I would also love to see Scott go at these papers, but I sympathize with his reluctance. I haven’t read the papers, but from what I understand about superdeterminism, it can be painful to argue against. People need to agree on certain axioms before engaging in scientific debate, such as assuming that there are indeed consistent laws of nature that we can deduce things about by experimenting. Something at the heart of superdeterminism seems to go against this axiom. Maybe this is an oversimplification on my part, but I reject this view for the same reason I would reject the view of someone who believes there are no consistent laws of physics and the universe is a simulation that makes us think there are laws without actually adhering to any.

  65. Scott Says:

    WA #64: Exceedingly well-said; thanks very much!

  66. manorba Says:

    regarding prof. Hossenfelder’s SDeterminism, i’ve always been struck by 2 things:

    1) The long journey of moving goalposts from “the measurement problem” to “the non-linearity of wavefunction collapse” to, finally, superdeterminism. It seemed to me at the time that some pious person took on the job of explaining to her what was wrong in her theory, so she adapted it along the way. the only sure thing: “probabilities bad”.
    2) Ain’t superdeterminism what you come up with if you look at QM with relativistic eyes? After all there’s only the block spacetime with its lightcones and trajectories.

  67. manorba Says:

    by the way Scott may i ask you one thing about the google project?

    Why is the supremacy part so important?

    Not to belittle it, actually the contrary! to me the most relevant thing is that we’re actually able to create qubits, programmable gates, and now even working circuits with them. Using old nomenclature, i feel like we’re still in the proof-of-concept stage, and i would rather insist on how big these kinds of accomplishments are, and on the hurdles and the relevant roadmap that still await us ahead.
    Even more so when you teach me that every classical counter-project is possible only because sycamore used only 50 qubits, which is only a tiny fraction of what is needed for any practical computation.
    So, in the end, even the supremacy thing is just a matter of scaling up, right? well, and the error-correction thing too…

  68. mls Says:

    WA #64

    First, let me agree with your empathy toward Dr. Aaronson’s preference to avoid this issue.

    With that said, Dr. Hossenfelder has correctly stated that compatibilism is a philosophical position which has not been ruled out in philosophical discourse. Moreover, she correctly traces the common interpretation to an influential philosophy of the late 19th and early 20th centuries called positivism.

    Is positivism conspiratorial?

    I have studied the continuum question for over 35 years. In contrast to what Dr. Aaronson had learned (or verified) in consultation with Timothy Chow and others with similar views, I learned that “mathematics” is characterized by “paradigms” with incompatible presuppositions. Needless to say, the influence of positivism is ubiquitous across the rhetoric defending the first-order paradigm.

    I have no objection to the first-order paradigm as a way of studying mathematics. But, I have great reservations about its foundational claims.

    Do you believe physics to be explanatory? Positivism rejects your belief. The outcome of physical investigation is lawfulness. Explanations are dispensable epistemology.

    “Shut up and calculate!”

    Are causality and lawfulness the same? It is true that a physicist seeks to describe invariant phenomena with statements referred to as laws. Of course, they also reserve the right to amend them. By contrast, whatever constitutes identity in time for relations between phenomena would seem to be immutable by assumption, an assumption grounded in the experience of human agents through which it becomes a conviction.

    Causality and lawfulness appear different to me, at least.

    Now, the exact relationship between mathematics and physics is not quite clear to me (please avoid correcting an agnostic with beliefs I have been questioning for 35 years). From what I can gather, physics had not entirely been touched by the program to make mathematics “more logical.” Consequently, the use of differential equations in physics need not have the same import as in those mathematical paradigms which address their use.

    If you justify differentials with the first-order paradigm (positivistically defended), you must tell me about all of those non-standard real numbers that I “need to understand” because I cannot “really” understand quantum mechanics if I do not “know the math.”

    What about Lawvere’s smooth infinitesimal analysis? Damn, physicists appear to reason classically, and they do their calculations according to the arithmetic of a complete ordered field. So the mathematics of physics cannot be based on Lawvere’s work.

    Let us ignore the things mathematicians have tried to do in support of physics.

    Recently, I have been illustrating Cartesian products of orthomodular lattices because every Boolean lattice is an orthomodular lattice. “Squaring” a given lattice in this way reproduces the given lattice as a suborder comprised of “identity pairs.” Naturally, the constructions generate other interesting suborders.

    Now, I think of these suborders as “internal relations” associated with a “point.” There is no such thing in physics. So, I look at what might be comparable in the physics lexicon.

    Physics has “virtual particles.” Does positivism admit an ontology of objects with no empirical evidence of existence? No. Does positivism admit “mathematical artifacts” which exist only for the sake of “explanation”? No.

    Can you see how arguing incoherently with views grounded in a dubious philosophy also appears conspiratorial?

    Now, I am an Aristotelian brute. When investigating “virtual particles” recently, I learned of virtual photons and the classification of “on shell” and “off shell” distinctions. If I concede that the mathematics of physics is fundamentally different from the attempts which had been made on behalf of physics, this situation appears very similar to the “gaps” in rational numbers which are “filled” through completion by Dedekind cuts or Cauchy sequences to have “an ontology of points.”

    The fact of the matter, to the best that this poor soul can discern, is that you cannot defend a quantum reality without a conspiracy of “virtual particles” lacking empirical evidence.

    Physics quantifies the phenomena experienced by sentient human agents. Its mathematics supports the defensibility of its causal deductions. It does not make them “true” because the associated ontology is indeterminate (and, probably, indeterminable).

    To Dr. Hossenfelder’s credit, she is painfully aware of the need to identify testable hypotheses.

    To Dr. Aaronson’s credit, there is no reason to engage in the debate.

  69. 🤯 Says:

    Did you go to grad school when you were 19 years old? 🤯

  70. Scott Says:

    #69: Yes, long story.

  71. manorba Says:

    mls #68 Says: The fact of the matter, to the best that this poor soul can discern, is that you cannot defend a quantum reality without a conspiracy of “virtual particles” lacking empirical evidence.

    oh we have empirical evidence! you have to include the loop integrals, otherwise your calculation will turn out wrong.

  72. mls Says:

    manorba #71

    I have studied mathematics for a long time. Some children cannot learn the arithmetic of fractions and others can. That the latter can multiply and divide fractions does not mean that everything they write and say is true. The ability to calculate is neither an explanation nor a proof.

    I knew, when I wrote that statement, that referring to virtual particles is problematic. Search carefully enough and you will find Neumaier’s attempts to correct perceptions. Yes, a virtual particle is “merely” a component of the system of path integrals needed to do a calculation. Of course, the fundamental dispute between category theory and first-order set theory is precisely that the morphisms described as “arrows” are not extensionally grounded by individuals. This creates problems for naive conceptions of truth taken to be correspondence of words with objects.

    If category theorists can assert an ontology of “arrows,” why can Wikipedia writers not assert an ontology of Feynman diagram components without being called ignorant by the academic community?

    Now, the problem of speaking about virtual particles is also seen in the discussion of vacuum energy. On that subject, I have actually found meaningful references.

    When you look up “vacuum expectation value” at ncatlab, the link,

    https://ncatlab.org/nlab/show/quantum+probability+theory

    actually explains that the mathematics involved AXIOMATIZES expectation values and leaves the underlying probability measure IMPLICIT.

    Do you think I am impressed by “Look at me, I can calculate!” when, within your mathematics, you are saying nothing at all (first, by the notion of implicitness, and second, by the received opinions of what it means to axiomatize)?

    If you continue through certain links of that ncatlab entry to find the relationship to typical coursework in analysis, you will pass through the links,

    https://ncatlab.org/nlab/show/star-algebra

    https://ncatlab.org/nlab/show/state+on+a+star-algebra

    which will get you to the Hahn-Banach theorem,

    https://ncatlab.org/nlab/show/Hahn-Banach+theorem

    whose relationship speaks directly to the use of the axiom of choice in relation to the real numbers. And, as found in that link, the theorem admits some relative weakenings (that only the mathematics community seems to care about).

    The physics blogs (and books) I read speak ad nauseam about probabilities as if the word were not being used vaguely. Clearly, if it is effectively axiomatized when speaking of vacuum expectation values and classically understood when describing experiments, the word is not being used coherently.

    “Look at how smart I am (but do not notice that what I say is incoherent)!” Is that how one is to understand physics?

    Apparently, there is a construction relating the links above to language commonly used by physicists. It is the GNS construction,

    https://ncatlab.org/nlab/show/Gelfand-Naimark-Segal+construction

    I will gladly admit that it will take some time for me to work through the implications of the mathematics in these links.

    Thank you, Dr. Aaronson. The liberality with which you manage this blog is the reason I have found these links.

    And, manorba, I respect the Galilean tradition that we must learn by actually interacting with our experience of an external world. If physicists do not think they need to answer to mathematicians, they are sorely mistaken. What mathematicians do is try to understand which uses of numbers and geometry are sound and which are not. That, in turn, leads to difficult questions about logic.

    Math belief, logic belief, and science belief are not fundamentally different from theological belief in the eyes of an agnostic.

  73. manorba Says:

    mls #72 Says: Math belief, logic belief, and science belief are not fundamentally different from theological belief in the eyes of an agnostic.

    well, in the eyes of another agnostic there is a “rule zero” of research: “put away your beliefs”.
    Some ten years ago i was talking to a math graduate friend about how i discovered that complex numbers are so important in electromagnetism (my science education is an adult endeavour, i studied political science at university), and when we got to the point where you discard the imaginary part at some stage in the calculations, he literally got mad. And that is only one small example. And step functions…
    I understand that, but physics aims at describing phenomena. If the description fits the data, i can see why for a physicist formal mathematical correctness takes a step back (“someone else will take care of it” 😉 ). That’s what happened with QFTs, right? first came the feynman diagrams, but it took some years to give everything a solid formal background, mainly by Weinberg (volume 1 of his textbook).
    to me the relation between math and physics is akin to that between grammar and literature. You need the former to do the latter but it doesn’t end there.
    On the topic of virtual particles, we even know situations where they can become real (ie. when a pair is created right near the schwarzschild radius and one goes inside while the other stays outside, is that correct?), but to me the strongest point is the concept of bare couplings. How can you have them w/out virtual particles?
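    (A minimal sketch, in Python, of the “discard the imaginary part” trick described above: in phasor analysis of an AC circuit, complex amplitudes carry both magnitude and phase through the algebra, and the physically measured voltage is recovered by taking the real part only at the end. The RC filter and its component values are illustrative, not from the discussion.)

```python
import cmath
import math

# Phasor analysis of a driven RC low-pass filter (illustrative values).
# Complex arithmetic carries amplitude AND phase through the calculation;
# the physical, measurable signal is the real part, taken at the very end.
R = 1e3                      # resistance, ohms
C = 1e-6                     # capacitance, farads
omega = 2 * math.pi * 500.0  # drive frequency, rad/s
V_in = 1.0                   # drive amplitude, volts (phase 0)

Z_C = 1 / (1j * omega * C)   # capacitor impedance: 1j encodes a 90-degree phase shift
H = Z_C / (R + Z_C)          # complex transfer function V_out / V_in
V_out = H * V_in             # output phasor (complex amplitude)

gain = abs(V_out)            # measurable amplitude ratio
phase = cmath.phase(V_out)   # measurable phase lag, radians (negative: output lags)

def v_out(t):
    """Physical output voltage at time t: the real part of the rotating phasor."""
    return (V_out * cmath.exp(1j * omega * t)).real
```

    The imaginary parts were never “wrong”: they bookkeep the phase, and dropping them at the end is just the projection from the convenient complex representation back to the real-valued observable.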

  74. shaoyu Says:

    Scott, fantastic video with Tim. I totally believe both of you did a great service to the world! Thank you so much!
    Just two tiny suggestions, I really hoped you and Tim could have talked a bit about Bell’s inequality, and the current experimental landscape on realizing quantum computing. : )
