Thank you for the comment.

Ideas about consciousness and agency in computation or the limits of mathematics are way above my pay grade. They are interesting, yes, but I work at the “gate-level of reality” 🙂

I would say that, as best I understand the idea of Turing universality, numbers are the symbols we use for computation or they are equivalent to any symbols that may be used. As to what a number “really is” or what “real-world numbers are” again that’s too hard for me. I just know that in the classical model I can only put a “1” or a “0” down for each memory or input location whether we call those symbols or numbers.

Sorry if this is a repeat. I thought I replied to your reply but maybe I missed hitting submit. I’ll try to remember and rehash here.

*I’ll confess to being less impressed than you are by all your “evidence” that the brain implements QM. Linear algebra and complex numbers are both pretty ubiquitous in science and engineering, for reasons having nothing to do with QM (e.g., for complex numbers, the fact that \( e^{i\theta} = \cos\theta + i\sin\theta \) concisely summarizes all the trig identities…).*

As someone who has worked on DSP applications, I understand Euler’s formula. I have a T-shirt with the special case when \( \theta = \pi \), namely \( e^{i\pi} + 1 = 0 \).
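To spell out Scott’s parenthetical: since \( e^{i(\alpha+\beta)} = e^{i\alpha}e^{i\beta} \), Euler’s formula compresses the angle-addition identities into one line:

```latex
e^{i(\alpha+\beta)} = e^{i\alpha} e^{i\beta}
  = (\cos\alpha + i\sin\alpha)(\cos\beta + i\sin\beta)
```

Matching real and imaginary parts gives \( \cos(\alpha+\beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta \) and \( \sin(\alpha+\beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta \), with no appeal to QM anywhere.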

I wouldn’t say that I am completely “impressed” by the evidence, more like “disturbed”. I definitely subscribe to the ECREE standard (extraordinary claims require extraordinary evidence). I actually think that this hypothesis is weak in several aspects – especially in the details of normalization, which are kind of important – but the evidence is not zero.

What does impress me about synaptic-dendritic encoding is that it encodes an amplitude (a complex number) that is meaningful to the computational operation of the neuron. And if we move that synapse back and forth on the dendrite, then we are changing the amplitude that we choose to encode. The representation of orthogonal states of course makes this a discrete model of computation and not analog. So, this is not at all like an analog device where we can talk about input using Euler’s formula and complex numbers.

Yes, we can also look at the physics inside a semiconductor device and find quantum systems and classical systems that involve complex numbers. But, the key here is that those systems are just *supporting* the operational definition of the device. The quantum things going on inside a transistor have *nothing to do with* the computational class of the transistor relative to the computer, because what I am doing with a transistor is essentially giving someone a place to write down numbers, in that case only a “0” or a “1”. The placement of the synapse, at least to my simple eyes, appears to be a way to give a hardware designer the means to *write down a complex number* that is then fed into whatever operator the dendritic morphology is programmed to process. Again, at this level of a computer, it all boils down to “What kind of numbers do you allow me to write down at the input locations?”
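To illustrate the picture (a toy sketch only, not neuroscience; the mapping from position to phase and every name here is invented for this example): treat synapse placement as the means of writing down a complex amplitude.

```python
import cmath

# Toy illustration (not a model of real neurons): treat a synapse's position
# along a dendrite as "writing down" a complex amplitude, the way a transistor
# lets you write down a 0 or a 1.  Position sets the phase; synaptic weight
# sets the magnitude.  The mapping is entirely hypothetical.

def synapse_amplitude(weight, position, dendrite_length):
    """Map (weight, position) to a complex amplitude with phase in [0, 2*pi)."""
    phase = 2 * cmath.pi * (position / dendrite_length)
    return weight * cmath.exp(1j * phase)

# Moving the synapse back and forth changes the phase of the encoded
# amplitude without touching its magnitude.
a = synapse_amplitude(0.5, 10.0, 100.0)
b = synapse_amplitude(0.5, 35.0, 100.0)
print(abs(a), abs(b))                   # same magnitude
print(cmath.phase(a), cmath.phase(b))   # different phases
```

The point of the sketch is only the "what numbers may I write down" distinction: the transistor's slot accepts an element of {0, 1}, while this hypothetical slot accepts a point on a disk in the complex plane.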

In the same way, I have no interest in the atomic-scale quantum mechanics going on inside a neuron or inside the brain. That is just part of any physical device. The critical feature we care about is at the level that is computationally relevant to the input and output of the device. I’m aware of the Penrose/Hameroff proposal but that never made *any* sense to me because I can’t see how quantum effects at the atomic scale have anything to do with the computational level of the brain.

*In general, whenever someone points to some apparently classical thing and claims that it’s “just like QM,” my immediate questions include:*

Yes, I had the same questions. Everyone knows the brain is a classical model of computation, right?

*– Can the thing violate a Bell inequality?*

There are cognitive scientists who appear to be under the impression that they have data showing Bell inequality violation for human cognition. I don’t see anything wrong with the math … but maybe someone else here does?

Here’s just a random search result

https://arxiv.org/abs/2102.03847

Not all of this research is of the same caliber (not all research in any field is). But, to me, what they’ve got going for them is that they appear to explain long-standing cognitive fallacies and effects that have long resisted explanation by classical models. This next paper is a survey and also gives some criticism of the approach.

https://pubmed.ncbi.nlm.nih.gov/34546804/
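For readers who want the arithmetic behind such claims: in a CHSH test, any local hidden-variable model satisfies |S| ≤ 2, while the quantum singlet correlation E(a, b) = −cos(a − b) reaches 2√2 at the standard angles. This is the style of bound those papers claim to see violated by response statistics:

```python
import math

# CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models satisfy |S| <= 2; quantum mechanics
# predicts up to 2*sqrt(2) (Tsirelson's bound).

def E(a, b):
    """Singlet-state correlation for measurement angles a and b."""
    return -math.cos(a - b)

a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2: the classical bound is violated
```

Whether the correlations in cognitive data genuinely satisfy the no-signalling and measurement-independence assumptions that make |S| ≤ 2 meaningful is exactly where the criticism in the survey papers lands.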

*– Is measurement of the thing an inherently destructive operation?*

We need some mechanism for measurement operators. Two candidates, so far as I can tell: small-scale pruning of axon terminal arbors, and synaptic inhibition of dendritic synapses. What I understand we want is an operator that projects onto a subspace spanned by the orthogonal states. The picture is the destruction of the previously preserved superposition of amplitudes across all possible states. The latter option, synaptic inhibition, is the more interesting computational one to me, as it suggests a mechanism for suppressing coherent superpositions.

https://royalsocietypublishing.org/doi/10.1098/rsta.2018.0107

So, yes, the projection operators would be inherently destructive of the encoded superposition of phases.
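Whatever the biological mechanism turns out to be, the mathematics being invoked here, projection onto a subspace followed by renormalization, is simple to state. A sketch on a single two-state system:

```python
import math

# Projective measurement on a 2-state system: project onto a basis state,
# then renormalize.  The original superposition is destroyed; repeating
# the same measurement just returns the same state (idempotence).

def project(state, outcome):
    """Project amplitudes (alpha, beta) onto basis state 0 or 1 and renormalize."""
    alpha, beta = state
    post = (alpha, 0.0) if outcome == 0 else (0.0, beta)
    norm = math.hypot(abs(post[0]), abs(post[1]))
    return (post[0] / norm, post[1] / norm)

state = (1 / math.sqrt(2), 1 / math.sqrt(2))   # equal superposition
after = project(state, 0)
print(after)              # the amplitude on state 1, and its phase, are gone
print(project(after, 0))  # measuring again changes nothing
```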

The thing about the superposition of amplitudes in the neural model is you can actually take out your microelectrode and measure the amplitudes in orthogonal groups of coherent neurons. When a physicist is asking me to accept that the spin states of an electron are in a superposition he’s just showing me numbers on a scratchpad. Yes, yes, he can run the experiment and we can see that the results match the model. But still, where *exactly* is the universe keeping track of these amplitudes … ?

The reaction that the quantum brain hypothesis appears to get is one of incredulity or scorn. Which is funny … because that is usually the way that I feel when a physicist asks me to accept that an electron is keeping track of spin amplitudes, Hilbert space, unitary evolution, and has measurement operators at the ready just in case a physicist wanders along, sets up an experiment, and asks for its state 😉

*– Does the thing need 2^n parameters to describe a system with n components?*

If we think of the cognitive-science results above as just “tests on some unknown systems,” imagining that no one told us where the data came from, then we should conclude that the data show a violation of Bell inequalities and thus require a model with 2^n parameters (so far as we know).

Think of it the way it is physically realized. One part of the neuronal group holds the amplitude for one state and another part holds the amplitude for another state. The key is that there must be a superposition across all possible states that relies on interference. Occupying *both* states 0 and 1 means that there are non-zero amplitudes in those parts of the neuronal group, and the squared magnitude of each amplitude represents the probability of the system being in that state. Yes, if we want to write this down, then we are writing down 2^n parameters for n computational units. I mean, we are just recording a complex number here and another over there and requiring the hardware interference architecture to maintain the Hilbert state space.

And there is no “trivial lower limit” on the number of computational units required in a device in order for us to realize quantum computing. The requirement is that we can operate over the interference of amplitudes.
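For what it’s worth, the exponential bookkeeping behind Scott’s question is easy to demonstrate: n two-state units generically need 2^n amplitudes, because an entangled state does not factor into per-unit descriptions. A toy sketch:

```python
import itertools

# An n-unit state assigns a complex amplitude to each of the 2**n basis
# strings.  A product state factors into n independent pairs; an entangled
# state like (|00> + |11>)/sqrt(2) does not.

n = 3
basis = list(itertools.product([0, 1], repeat=n))
print(len(basis))  # 2**n = 8 amplitudes needed in general

s = 2 ** -0.5
bell = {(0, 0): s, (0, 1): 0.0, (1, 0): 0.0, (1, 1): s}

# If bell factored as p(x)*q(y), then amp(0,0)*amp(1,1) would equal
# amp(0,1)*amp(1,0).  It doesn't, so no per-unit description exists.
print(bell[(0, 0)] * bell[(1, 1)], bell[(0, 1)] * bell[(1, 0)])  # ~0.5 vs 0.0
```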

*– Can the thing factor integers in polynomial time?*

Do we have quantum computers yet that can factor integers? 🙂 I know, I know, we are on the way. Even once we have non-trivial quantum computers, will they be able to handle *any size* integer input?

It seems entirely possible to have trivial realizations of a class of computation that are not powerful enough to realize every conceivable algorithm or input size, due to limits on memory or the functional or architectural details of the computer. As you know, it is possible to realize Turing universality with something as trivial as the DigiComp II. Of course, who wants to implement 18×18 DSP multipliers using the DigiComp II model?!

We can have a special-purpose hardware architecture that is not optimized for running other types of applications, so a specialized processor may be less than ideal outside its domain. The brain may simply not have the functional architecture for Shor’s algorithm or quantum modular exponentiation. Maybe the brain only has functional blocks for things that were of evolutionary importance: phase estimation, period finding, amplitude amplification, or maybe even the QFT … and those functional blocks are configured over particular receptive fields for which they make sense? But I think “Why would the brain evolve features that appear to be aiming at QT?” is an interesting question.
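Incidentally, the link between period finding and factoring, the only step that Shor’s algorithm actually accelerates, is classical arithmetic. A minimal sketch for N = 15 (a quantum computer would replace the brute-force loop in `order()`):

```python
import math

# Factoring via period finding (the classical scaffolding of Shor's
# algorithm).  Find the order r of a modulo N; if r is even and
# a**(r//2) is not -1 mod N, then gcd(a**(r//2) +/- 1, N) yields factors.

def order(a, N):
    """Smallest r > 0 with a**r == 1 (mod N) -- brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = order(a, N)              # r = 4
y = pow(a, r // 2, N)        # 7**2 mod 15 = 4
print(math.gcd(y - 1, N), math.gcd(y + 1, N))  # 3 and 5: the factors of 15
```

So a device could plausibly have a useful period-finding block and still lack the modular-exponentiation machinery needed to aim that block at factoring.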

Look, Scott, I’m down with ECREE. And I feel like I selected a bad way to try to get to my central concern related to the question in this thread.

Maybe there is a way to forget the quantum brain hypothesis and think of my concern in a more general way.

Let’s say we have a finite universe simulated with (which as far as I can tell is the same thing as “running on”) computational class X. Now let’s have a finite system within that universe that is a physical realization of computational class Y. What must be true about the relationship between X and Y in order for Y to “understand” that X is the operating system of the universe? Is this *always* possible? Is there a computability/complexity reason why or why not? Is there some “minimum complexity distance” that must exist between X and Y in order for Y to be successful in learning X?

Earlier in our sub-thread you seemed to say, in effect, “Well, we don’t know, but we can try.” Seems like maybe we could do better at saying something about our chances?

How about that. I should have asked that first instead of going off the brain deep end 🙂

*– classical dynamical laws must be deterministic and reversible (each state has only one predecessor and one successor). This corresponds to the conservation of information.*

*What would’ve been wrong with that choice?*

*Wouldn’t that be super boring if the amount of information stays constant?*

Everettian QM is *also* deterministic and reversible! And it’s just one illustration of why deterministic, reversible theories can be *far* from boring: even if information stays constant, it can take billions of years for the *consequences* of given information to manifest themselves!
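The quoted condition has a precise formal content: the update rule is a bijection (a permutation) on the state space, which is what “conservation of information” means here. A toy check with a made-up 8-bit rule:

```python
# Deterministic + reversible = the update rule is a bijection on states:
# every state has exactly one successor and one predecessor, so no
# information is ever created or destroyed.  The rule below is invented
# purely for illustration.

def step(x):
    """Hypothetical reversible update: rotate the 8 bits left, then flip bit 0."""
    rotated = ((x << 1) | (x >> 7)) & 0xFF
    return rotated ^ 1

states = range(256)
successors = [step(x) for x in states]

# Bijectivity: every state appears exactly once as a successor.
print(sorted(successors) == list(states))  # True: information is conserved
```

Both pieces of the rule (a rotation and an XOR) are invertible, so their composition is too; and yet iterating even such simple bijections can take a long time to make a state’s consequences visible, which is the non-boring part.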

Having said that, I appreciate that QBists take the question seriously and think about it, and I’m actually partial to the idea of making a distinction between “this velociraptor existed” and “this velociraptor had some definite quantum state.” I explored such a distinction myself in my Ghost in the Quantum Turing Machine essay—or rather, I *did* ascribe a “state” to the velociraptor; it’s just that the exact polarization of a photon hitting the velociraptor’s tail could be in the sort of state that I called a “freebit.”

As an addendum, let me direct your attention to the MathOverflow link.

This is a discussion among category theorists who conclude that groups could be foundational. But, the debate over category-theoretic foundations and comprehensionalist set theories is treacherous territory.

The syntactic methods I described do not emphasize the fact that the 4-dimensional vector space over GF(2) is an instance of a 16-element group with signature

\( a^2 = b^2 = c^2 = d^2 = e \)
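That signature can be checked mechanically: viewing GF(2)^4 additively (bitwise XOR on 4-bit vectors) gives a 16-element group in which every element squares to the identity e. A quick verification:

```python
import itertools

# The 4-dimensional vector space over GF(2), viewed as a group under
# bitwise XOR: 16 elements, identity e = (0,0,0,0), and every element
# satisfies x * x = e, matching the signature a^2 = b^2 = c^2 = d^2 = e.

elements = list(itertools.product([0, 1], repeat=4))
e = (0, 0, 0, 0)

def op(x, y):
    """Group operation: componentwise addition mod 2 (bitwise XOR)."""
    return tuple(a ^ b for a, b in zip(x, y))

print(len(elements))                         # 16
print(all(op(x, x) == e for x in elements))  # True: every element has order <= 2
```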

Unfortunately, everyone who wishes to declare a foundation for mathematics forgets that it is a claim about how people different from themselves study mathematics. No amount of philosophical rhetoric will reconcile “the arithmetization of mathematics” with “group theory can be a foundation for mathematics.”

I do not declare paradigmatic approaches different from what I do to be invalid. I am just trying to understand in an environment of non-cooperating actors.

*Would you agree that dinosaur fossils, let’s say, were really there in the ground for 65+ million years before anyone dug them up, records of a vanished world that actually existed—that none of this was “constructed by the act of observation”?*

In order to answer this, I can’t avoid bringing QBism into the discussion, because my personal approach to “why the quantum” is a kind of QBism-phenomenology hybrid. Your point is similar to an objection that people often make against QBism, namely: if quantum states are just in the minds of agents, then how can the QBist explain the existence of things that were around before agents? The logic runs something like this:

(1) If something physically exists, then it must exist in some “state” (e.g. a classical distribution on phase space or a quantum state);

(2) QBism says that “quantum states” (and probabilities) are epistemic and subjective: they are beliefs of rational decision-making agents;

(3) There were no agents in the distant past, e.g. when the Earth was forming, or when dinosaurs roamed the Earth (assuming none of them were smart enough to count as agents);

(4) Therefore by (2)&(3) there were no quantum states in the distant past;

(5) Therefore by (1)&(4) nothing existed in the distant past before agents.

QBism accepts (2-4), but rejects (1), hence also rejects the conclusion (5). To the QBist, the word “state” has a different meaning than the conventional one. Here are roughly the two definitions:

Conventional: A “state” is a description of a system, that tells us what its properties are and how it “really is”.

QBist: A “state” is a set of probabilities \( P_a(x) \) assigned by some agent, that tells us how likely the agent thinks it is that they will get result “x” if they were to take action “a” on the system. Consequently, the QBist definition does not tie the existence of a system to its having a state: a system presumably must exist in order to have a state, but if there’s no one around to assign it a state, that doesn’t have to imply that the system doesn’t exist.
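To make the contrast concrete (a toy rendering; the action and outcome names are invented): the QBist “state” is nothing more than an agent’s table of outcome probabilities, one distribution per available action.

```python
# Toy rendering of the QBist definition: a "state" is an agent's table
# P_a(x) of outcome probabilities, one distribution per available action.
# Nothing here describes the system itself; it is the agent's bookkeeping.

state = {
    "measure_spin_z": {"up": 0.5, "down": 0.5},
    "measure_spin_x": {"left": 0.85, "right": 0.15},
}

# Coherence requirement: each action's outcome probabilities sum to 1.
coherent = all(abs(sum(p.values()) - 1.0) < 1e-9 for p in state.values())
print(coherent)  # True
```

On this reading, a system with no agents around simply has no such table anywhere, which says nothing about whether the system exists.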

So, what precisely is the nature of something that existed before agents were around to observe it? Officially, QBism mostly leaves this issue open, and individual QBists tend to give different answers. Chris Fuchs has shared this quote with me (source here) that gives a “Whiteheadian” answer:

Even a seemingly solid and permanent object is an event; or, better, a multiplicity and a series of events. Whitehead gives the example of Cleopatra’s Needle on the Victoria Embankment in London. Now, we know, of course, that this monument is not just “there.” It has a history. Its granite was sculpted by human hands, sometime around 1450 BC. It was moved from Heliopolis to Alexandria in 12 BC, and again from Alexandria to London in 1877-1878 AD. And some day, no doubt, it will be destroyed, or otherwise cease to exist. But for Whitehead, there is much more to it than that. Cleopatra’s Needle isn’t just a solid, impassive object upon which certain grand historical events – being sculpted, being moved – have occasionally supervened. Rather, it is eventful at every moment. From second to second, even as it stands seemingly motionless, Cleopatra’s Needle is actively happening. … At every instant, the mere standing-in-place of Cleopatra’s Needle is an event: a renewal, a novelty, a fresh creation.

In other words, things have a way of “getting themselves made” without needing agents to observe them; the agents can just “hitch a ride” on this process if they happen to be around.

My own view, as expressed in my comment #590, is a little more extreme: I think we ought to take seriously the idea that reality is founded upon observation. When we say that “dinosaurs existed in the past”, the conventional view interprets this statement as telling us “where” dinosaurs can be found in time, as though time were a sort of container for things, like saying the cereal exists “in the back of the cupboard”. Phenomenology rejects this view of time.

To a phenomenologist, saying that “dinosaurs existed in the past” means that they exist to us now (since we can think about them and talk about them) but that their mode of appearance has a “past tense”: they will not appear to us as living things that can eat us or stomp on us, but as sets of dry skeletons ensconced in a surrounding context of geological and paleontological evidence and theories. Time and history in phenomenology are not a container of existence but a *mode of appearance*.

I would say, then, that a velociraptor exists at many levels of reality. At the most basic level of reality, the velociraptor just “is” a collection of dried-out bones, archeological records, and so on. As we delve into the historical meanings of these, we build up layers of the velociraptor’s “historical reality” that proceed from less to more abstract. As we go up the levels, the records become more sparse and their meanings more contested, but the velociraptor becomes more alive. It will not devour me, but its existence as a living thing “in history” has very real and potentially observable consequences. For instance, by understanding the probable migration patterns of velociraptors we can predict where to dig to find more bones. In short, I’m a metaphysical pluralist about reality and about time.

As I said above, though, not all QBists would agree with me, and this is still very much a topic for QBism’s “inside baseball”. For the interested, there is a good discussion of these things in this phenomenology paper about QBism.
