Just two tiny suggestions: I really hoped you and Tim could have talked a bit about Bell’s inequality, and about the current experimental landscape for realizing quantum computing. : )

Well, in the eyes of another agnostic, there is a “rule zero” of research: “put away your beliefs”.

Some ten years ago I was talking to a math-graduate friend about how I had discovered that complex numbers are so important in electromagnetism (my science education is an adult endeavour; I studied political science at university), and when we got to the point where you discard the complex part at some stage in the calculation, he literally got mad. And that is only one small example. And step functions…

I understand that, but physics aims at describing phenomena. If the description fits the data, I can see why, for a physicist, formal mathematical correctness takes a step back (“someone else will take care of it” 😉). That’s what happened with QFT, right? First came the Feynman diagrams, but it took some years before everything was given a solid formal foundation, mainly by Weinberg (volume 1 of his textbook).

To me, the relation between math and physics is akin to that between grammar and literature: you need the former to do the latter, but it doesn’t end there.

On the topic of virtual particles, we even know situations where they can become real (i.e., when a pair is created right near the Schwarzschild radius and one falls inside while the other escapes — is that correct?), but to me the strongest point is the concept of bare couplings. How can you have them without virtual particles?

I have studied mathematics for a long time. Some children cannot learn the arithmetic of fractions and others can. That the latter can multiply and divide fractions does not mean that everything they write and say is true. The ability to calculate is neither an explanation nor a proof.

I knew, when I wrote that statement, that referring to virtual particles is problematic. Search carefully enough and you will find Neumaier’s attempts to correct perceptions. Yes, a virtual particle is “merely” a component of the system of path integrals needed to do a calculation. Of course, the fundamental dispute between category theory and first-order set theory is precisely that the morphisms described as “arrows” are not extensionally grounded by individuals. This creates problems for naive conceptions of truth taken to be correspondence of words with objects.

If category theorists can assert an ontology of “arrows,” why can Wikipedia writers not assert an ontology of Feynman diagram components without being called ignorant by the academic community?

Now, the problem of speaking about virtual particles is also seen with the discussion of vacuum energy. On that subject, I have actually found meaningful references.

When you look up “vacuum expectation value” at ncatlab, the link,

https://ncatlab.org/nlab/show/quantum+probability+theory

actually explains that the mathematics involved AXIOMATIZES expectation values and leaves the underlying probability measure IMPLICIT.
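That distinction can be made concrete even in finite dimensions, where a “state” is nothing but a positive linear functional on a matrix algebra: expectation values are defined algebraically, and no sample space or probability measure appears anywhere. A minimal sketch (the particular state and observable below are my own hypothetical choices, not taken from the nLab entry):

```python
import numpy as np

# A "state" on the algebra of 2x2 complex matrices: a density matrix rho
# (positive semidefinite, trace 1). Note that no sample space or
# probability measure is written down anywhere.
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])

# An observable: a self-adjoint element of the algebra (here, Pauli Z).
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# The expectation value is defined purely algebraically: <A> = tr(rho A).
expectation = np.trace(rho @ A).real
print(expectation)  # 0.5

# Only when we diagonalize A do classical probabilities appear,
# as the weights attached to the spectral projectors of A.
eigvals, eigvecs = np.linalg.eigh(A)
probs = [float((eigvecs[:, i].conj() @ rho @ eigvecs[:, i]).real)
         for i in range(2)]
print(probs)  # non-negative weights summing to 1
```

The underlying measure stays implicit exactly as the nLab entry says: the axioms constrain the functional, and classical probabilities only emerge after a choice of observable.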

Do you think I am impressed by “Look at me, I can calculate!” when, within your mathematics, you are saying nothing at all (first, by the notion of implicitness, and second, by the received opinions of what it means to axiomatize)?

If you continue through certain links of that ncatlab entry to find the relationship to typical coursework in analysis, you will pass through the links,

https://ncatlab.org/nlab/show/star-algebra

https://ncatlab.org/nlab/show/state+on+a+star-algebra

which will get you to the Hahn-Banach theorem,

https://ncatlab.org/nlab/show/Hahn-Banach+theorem

whose relationship speaks directly to the use of the axiom of choice in relation to the real numbers. And, as found in that link, the theorem admits some relative weakenings (which only the mathematics community seems to care about).

The physics blogs (and books) I read speak ad nauseam about probabilities as if the word were not being used vaguely. Clearly, if it is effectively axiomatized when speaking of vacuum expectation values and classically understood when describing experiments, the word is not being used coherently.

“Look at how smart I am (but do not notice that what I say is incoherent)!” Is that how one is to understand physics?

Apparently, there is a construction relating the links above to language commonly used by physicists. It is the GNS construction,

https://ncatlab.org/nlab/show/Gelfand-Naimark-Segal+construction
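In finite dimensions the GNS construction is small enough to write out: the algebra itself becomes the Hilbert space, with the inner product supplied by the state, and the representation acts by left multiplication. A hedged sketch (the specific state and algebra element are my own illustrative choices; I use a faithful state so no null-vector quotient is needed):

```python
import numpy as np

# GNS in miniature: from a state phi on the 2x2 matrix algebra,
# build a Hilbert space and a representation, recovering the
# "vector state" picture physicists use.

rho = np.array([[0.6, 0.0],
                [0.0, 0.4]])   # faithful state: phi(a) = tr(rho a)

def phi(a):
    return np.trace(rho @ a)

# The algebra becomes a pre-Hilbert space with <a, b> = phi(a^dagger b).
def inner(a, b):
    return phi(a.conj().T @ b)

# The representation is left multiplication, pi(x): a -> x @ a,
# and the cyclic vector is the identity matrix.
omega = np.eye(2)

x = np.array([[1.0, 2.0],
              [2.0, -1.0]])    # some self-adjoint element

# Check: the vector state <omega, pi(x) omega> reproduces phi(x).
print(inner(omega, x @ omega).real, phi(x).real)
```

The point of the construction is exactly this round trip: an abstract algebraic state becomes an ordinary expectation value of an operator in a concrete Hilbert space.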

I will gladly admit that it will take some time for me to work through the implications of the mathematics in these links.

Thank you, Dr. Aaronson. The liberality with which you manage this blog is the reason I have found these links.

And, manorba, I respect the Galilean tradition that we must learn by actually interacting with our experience of an external world. If physicists do not think they need to answer to mathematicians, they are sorely mistaken. What mathematicians do is try to understand which uses of numbers and geometry are sound and which are not. That, in turn, leads to difficult questions about logic.

Math belief, logic belief, and science belief are not fundamentally different from theological belief in the eyes of an agnostic.

Oh, we have empirical evidence! You have to add the integrals for the loops, otherwise your calculation will turn out wrong.

First, let me agree with your empathy toward Dr. Aaronson’s preference to avoid this issue.

With that said, Dr. Hossenfelder has correctly stated that compatibilism is a philosophical position which has not been excluded in philosophical discourse. Moreover, she correctly traces the common interpretation to an influential philosophy of the late nineteenth and early twentieth centuries called positivism.

Is positivism conspiratorial?

I have studied the continuum question for over 35 years. In contrast to what Dr. Aaronson had learned (or verified) in consultation with Timothy Chow and others with similar views, I learned that “mathematics” is characterized by “paradigms” with incompatible presuppositions. Needless to say, the influence of positivism is ubiquitous across the rhetoric defending the first-order paradigm.

I have no objection to the first-order paradigm as a way of studying mathematics. But, I have great reservations about its foundational claims.

Do you believe physics to be explanatory? Positivism rejects your belief. The outcome of physical investigation is lawfulness. Explanations are dispensable epistemology.

“Shut up and calculate!”

Are causality and lawfulness the same? It is true that a physicist seeks to describe invariant phenomena with statements referred to as laws. Of course, they also reserve the right to amend them. By contrast, whatever constitutes identity in time for relations between phenomena would seem to be immutable by assumption — an assumption grounded in the experience of human agents, through which it becomes a conviction.

Causality and lawfulness appear different to me, at least.

Now, the exact relationship between mathematics and physics is not quite clear to me (please avoid correcting an agnostic with beliefs I have been questioning for 35 years). From what I can gather, physics had not entirely been touched by the program to make mathematics “more logical.” Consequently, the use of differential equations in physics need not have the same import as in those mathematical paradigms which address their use.

If you justify differentials with the first-order paradigm (positivistically defended), you must tell me about all of those non-standard real numbers that I “need to understand” because I cannot “really” understand quantum mechanics if I do not “know the math.”

What about Lawvere’s smooth infinitesimal analysis? Damn, physicists appear to reason classically (smooth infinitesimal analysis requires intuitionistic logic, in which the law of excluded middle fails), and they do their calculations according to the arithmetic of a complete ordered field. So the mathematics of physics cannot be based on Lawvere’s work.

Let us ignore the things mathematicians have tried to do in support of physics.

Recently, I have been illustrating Cartesian products of orthomodular lattices, because every Boolean lattice is an orthomodular lattice. “Squaring” a given lattice in this way reproduces the given lattice as a suborder comprising the “identity pairs.” Naturally, the construction generates other interesting suborders as well.
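For what it is worth, the “squaring” described above can be checked mechanically on the smallest interesting Boolean lattice. A sketch under my own reading of the construction (componentwise order on pairs, with the diagonal as the suborder of “identity pairs”):

```python
from itertools import product

# The Boolean lattice of subsets of {0, 1}, ordered by inclusion.
# (Every Boolean lattice is orthomodular, as noted above.)
B = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def leq(a, b):
    return a <= b  # subset order

# "Squaring": the Cartesian product, with the componentwise order.
B2 = list(product(B, B))

def leq2(p, q):
    return leq(p[0], q[0]) and leq(p[1], q[1])

# The "identity pairs" (a, a) form a suborder of the square...
diag = [(a, a) for a in B]

# ...on which the componentwise order agrees with the original order,
# so the diagonal reproduces the given lattice.
assert all(leq2((a, a), (b, b)) == leq(a, b) for a in B for b in B)
print(len(B2), len(diag))  # 16 4
```

The other suborders of the square (the non-diagonal pairs) are where the “internal relations” I mention below would live.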

Now, I think of these suborders as “internal relations” associated with a “point.” There is no such thing in physics. So, I look at what might be comparable in the physics lexicon.

Physics has “virtual particles.” Does positivism admit an ontology of objects with no empirical evidence of existence? No. Does positivism admit “mathematical artifacts” which exist only for the sake of “explanation”? No.

Can you see how arguing incoherently with views grounded in a dubious philosophy also appears conspiratorial?

Now, I am an Aristotelian brute. When investigating “virtual particles” recently, I learned of virtual photons and the classification of “on shell” and “off shell” distinctions. If I concede that the mathematics of physics is fundamentally different from the attempts which had been made on behalf of physics, this situation appears very similar to the “gaps” in rational numbers which are “filled” through completion by Dedekind cuts or Cauchy sequences to have “an ontology of points.”
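The completion analogy is easy to make concrete: a Cauchy sequence of rationals can approach the “gap” at √2 arbitrarily closely while never landing on a rational point, and the completion simply posits a new “point” to fill that gap. A small illustration (Newton’s iteration is my own choice of sequence):

```python
from fractions import Fraction

# A Cauchy sequence of rationals converging toward sqrt(2).
# Each iterate stays strictly inside the rationals.
x = Fraction(1)
for _ in range(6):
    x = (x + 2 / x) / 2  # Newton's iteration for x^2 = 2

# The square is never exactly 2 (sqrt(2) is irrational)...
assert x * x != 2

# ...yet it gets arbitrarily close: the "gap" that completion fills.
assert abs(x * x - 2) < Fraction(1, 10**20)
print(float(x))  # approximately 1.41421356...
```

Whether the “off shell” bookkeeping of virtual particles deserves the same status as these posited points is, I take it, exactly the question in dispute.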

The fact of the matter, to the best that this poor soul can discern, is that you cannot defend a quantum reality without a conspiracy of “virtual particles” lacking empirical evidence.

Physics quantifies the phenomena experienced by sentient human agents. Its mathematics supports the defensibility of its causal deductions. It does not make them “true” because the associated ontology is indeterminate (and, probably, indeterminable).

To Dr. Hossenfelder’s credit, she is painfully aware of the need to identify testable hypotheses.

To Dr. Aaronson’s credit, there is no reason to engage in the debate.

Why is the supremacy part so important?

Not to belittle it; actually the contrary! To me the most relevant thing is that we’re actually able to create qubits, programmable gates, and now even working circuits with them. Using old nomenclature, I feel like we’re still in the proof-of-concept stage, and I would rather insist on how big these accomplishments are, and on the hurdles, and the relevant roadmap, that still await us.

Even more so when you teach me that every classical counter-effort is possible only because Sycamore used just 53 qubits, which is only a tiny fraction of what is needed to do any practical computation.

So, in the end, even the supremacy thing is just a matter of scaling up, right? Well, and the error-correction thing too…

1) The long journey of moving goalposts, from “the measurement problem” to “the non-linearity of wavefunction collapse” to, finally, superdeterminism. It seemed to me at the time that some pious person took on the job of explaining to her what was wrong with her theory, so she adapted it along the way. The only sure thing: “probabilities bad”.

2) Ain’t superdeterminism what you come up with if you look at QM with relativistic eyes? After all, there’s only the block spacetime with its light cones and trajectories.