Oh right, quantum computing

These days, I often need to remind myself that, as an undergrad, grad student, postdoc, or professor, I’ve now been doing quantum computing research for a quarter-century—i.e., well over half of the subject’s existence. As a direct result, when I feel completely jaded about a new development in QC, it might actually be exciting. When I feel moderately excited, it might actually be the most exciting thing for years.

With that in mind:

(1) Last week National Public Radio’s Marketplace interviewed me, John Martinis, and others about the current state of quantum computing. While the piece wasn’t entirely hype-free, I’m pleased to report that my own views were represented accurately! To wit:

“There is a tsunami of hype about what quantum computers are going to revolutionize,” said Scott Aaronson, a professor of computer science at the University of Texas at Austin. “Quantum computing has turned into a word that venture capitalists or people seeking government funding will sprinkle on anything because it sounds good.”

Aaronson warned we can’t be certain that these computers will in fact revolutionize machine learning and finance and optimization problems.  “We can’t prove that there’s not a quantum algorithm that solves all these problems super fast, but we can’t even prove there’s not an algorithm for a conventional computer that does it,” he said. [In the recorded version, they replaced this by a simpler but also accurate thought: namely, that we can’t prove one way or the other whether there’s a useful quantum advantage for these tasks.]

(2) I don’t like to use this blog to toot my own research horn, but on Thursday my postdoc Jason Pollack and I released a paper, entitled Discrete Bulk Reconstruction. And to be honest, I’m pretty damned excited about it. It represents about 8 months of Jason—a cosmologist and string theorist who studied under Sean Carroll—helping me understand AdS/CFT in the language of the undergraduate CS curriculum, like min-cuts on undirected graphs, so that we could then look for polynomial-time algorithms to implement the holographic mapping from boundary quantum states to the spatial geometry in the bulk. We drew heavily on previous work in the same direction, especially the already-seminal 2015 holographic entropy cone paper by Ning Bao et al. But I’d like to think that, among other things, our work represents a new frontier in just how accessible AdS/CFT itself can be made to CS and discrete math types. Anyway, here’s the abstract if you’re interested:

According to the AdS/CFT correspondence, the geometries of certain spacetimes are fully determined by quantum states that live on their boundaries — indeed, by the von Neumann entropies of portions of those boundary states. This work investigates to what extent the geometries can be reconstructed from the entropies in polynomial time. Bouland, Fefferman, and Vazirani (2019) argued that the AdS/CFT map can be exponentially complex if one wants to reconstruct regions such as the interiors of black holes. Our main result provides a sort of converse: we show that, in the special case of a single 1D boundary, if the input data consists of a list of entropies of contiguous boundary regions, and if the entropies satisfy a single inequality called Strong Subadditivity, then we can construct a graph model for the bulk in linear time. Moreover, the bulk graph is planar, it has O(N²) vertices (the information-theoretic minimum), and it’s “universal,” with only the edge weights depending on the specific entropies in question. From a combinatorial perspective, our problem boils down to an “inverse” of the famous min-cut problem: rather than being given a graph and asked to find a min-cut, here we’re given the values of min-cuts separating various sets of vertices, and need to find a weighted undirected graph consistent with those values. Our solution to this problem relies on the notion of a “bulkless” graph, which might be of independent interest for AdS/CFT. We also make initial progress on the case of multiple 1D boundaries — where the boundaries could be connected via wormholes — including an upper bound of O(N⁴) vertices whenever a planar bulk graph exists (thus putting the problem into the complexity class NP).
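For the CS and discrete math types: here’s a toy Python sketch (my quick illustration for this post, not code from the paper) of the easiest sanity check on the input data, namely verifying Strong Subadditivity for every pair of overlapping contiguous intervals. To have consistent entropies to feed in, I generate them from min-cuts of a hypothetical star-shaped bulk graph:

```python
import itertools

# Toy entropies: hang each boundary site i off a single hypothetical bulk
# vertex by an edge of weight w_i (a star graph).  The min-cut separating a
# contiguous region A from the rest is then min(w(A), w(total) - w(A)),
# which plays the role of the entropy S(A).
def interval_entropies(weights):
    """Return S[(i, j)] = entropy of the contiguous region of sites i..j-1."""
    total = sum(weights)
    n = len(weights)
    S = {}
    for i in range(n + 1):
        acc = 0.0
        for j in range(i, n + 1):
            S[(i, j)] = min(acc, total - acc)
            if j < n:
                acc += weights[j]
    return S

def satisfies_ssa(S, n, eps=1e-9):
    """Check Strong Subadditivity, S(A∪B) + S(A∩B) <= S(A) + S(B), over all
    overlapping contiguous intervals A = [i, k), B = [j, l) with i<=j<=k<=l."""
    for i, j, k, l in itertools.combinations_with_replacement(range(n + 1), 4):
        if S[(i, l)] + S[(j, k)] > S[(i, k)] + S[(j, l)] + eps:
            return False
    return True

print(satisfies_ssa(interval_entropies([1.0, 2.0, 0.5, 3.0]), 4))  # True
```

Here S(A∪B) = S[(i, l)] and S(A∩B) = S[(j, k)], so the famous inequality becomes a one-line index check; the actual reconstruction algorithm in the paper then builds a bulk graph from such a table of entropies.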

(3) Anand Natarajan and Chinmay Nirkhe posted a preprint entitled A classical oracle separation between QMA and QCMA, which makes progress on a problem that’s been raised on this blog going all the way back to its inception. A bit of context: QMA, Quantum Merlin-Arthur, captures what can be proven using a quantum state with poly(n) qubits as the proof, and a polynomial-time quantum algorithm as the verifier. QCMA, or Quantum Classical Merlin-Arthur, is the same as QMA except that now the proof has to be classical. A fundamental problem of quantum complexity theory, first raised by Aharonov and Naveh in 2002, is whether QMA=QCMA. In 2007, Greg Kuperberg and I introduced the concept of quantum oracle separation—that is, a unitary that can be applied in a black-box manner—in order to show that there’s a quantum oracle relative to which QCMA≠QMA. In 2015, Fefferman and Kimmel improved this, to show that there’s a “randomized in-place” oracle relative to which QCMA≠QMA. Natarajan and Nirkhe now remove the “in-place” part, meaning the only thing still “wrong” with their oracle is that it’s randomized. Derandomizing their construction would finally settle this 20-year-old open problem (except, of course, for the minor detail of whether QMA=QCMA in the “real,” unrelativized world!).

(4) Oh right, the Google group reports the use of their superconducting processor to simulate non-abelian anyons. Cool.

55 Responses to “Oh right, quantum computing”

  1. unresolved_kharma Says:

    I thought you would also share some thoughts on the new preprint by Huang, Chen and Preskill https://arxiv.org/abs/2210.14894
    From my naive point of view it seems a very deep result, and I’m quite surprised that this is possible. I’d be interested to know the opinion of someone who’s deep into the field like you, Scott.

  2. mls Says:

    Since your “surprising” result involved \(K_{3,3}\) graphs, I wonder if you might find anything useful in Erica Flapan’s work on Moebius ladders and topological symmetry groups. The work begins with Simon,


    and is continued with Flapan,


    Moebius ladders become “definite” in the sense of an intrinsic difference between “rungs” and “siderail” when one has 4 or more rungs.

    I found this work in my metamathematical investigations. As \(K_{3,3}\) is a subgraph of \(K_6\), it also relates to Kummer configurations through the work of Assmus and Salwach.

    Anyway, I need a little more of different mathematics to fully appreciate your paper. But, I take notice when papers involving quantum systems overlap with my interests. I hope your paper will be well received.

  3. Mitchell Porter Says:

    What about this claim from Ireland’s top university, of evidence of quantum computing in the brain? (Mentioned by “fred” a few weeks ago.) Is there a good critical discussion anywhere?

    I am philosophically inclined towards the idea of entanglement having a role in consciousness, but skeptical of experimental claims. Still, this came from a well-regarded institution…

    They seem to be claiming an MRI signal of entanglement among ionic spins, that is produced in association with particular conscious events, and they posit that it’s a side effect of an unknown entangling process that’s relevant for cognition.

  4. Nick Says:

    Love these QC updates! And the exposition alone in this AdS/CFT paper is awesome and appreciated. I for one understand this topic much better now. The effort put into making things like this as accessible as possible is a great service. Thanks

  5. Scott Says:

    Mitchell Porter #3: Which claim? Link to an actual paper (not popular article)? Also, universities never claim things—individuals do! 🙂

  6. Mitchell Porter Says:

    Scott: found the paper on arxiv: https://arxiv.org/abs/1806.07998

  7. Aspect Says:

    Scott #5
    I think they mean this one:

  8. Scott Says:

    unresolved_kharma #1: I saw that new paper from Caltech, of course, and we discussed it in our quantum group meeting at UT this morning. And I look forward to hearing much more about it when Hsin-Yuan visits UT in less than a month!

    I don’t want to comment in detail before having actually studied the paper. For now, I’ll simply say: I’m a huge fan of the general direction of “approximate learning” of unknown quantum states and processes (having worked on that direction myself since 2006!), and excited to see what the new work does in that direction. On the other hand, one needs to be careful in interpreting the statements of results in this area. Sometimes the “headline result,” as summarized in the abstract—even 100% accurately—comes with caveats that (when fully spelled out) seriously limit the class of learning problems for which the procedure is useful in practice.

  9. Scott Says:

    Mitchell Porter #3, #6, Aspect #7: OK, thanks. I just looked through the paper.

    It’s hard to find words to describe the degree of skepticism that I think needs to be applied to claims of this kind. It’s like reading a paper about alien abductions: even if you can’t pinpoint any specific error, the Bayesian implausibility barrier that the paper needs to overcome is still astronomical.

    In this instance, one thing that leapt out at me immediately was the complete absence of any non-brain “control.” I.e., how do I know that exactly the same sort of entanglement modulation isn’t possible using material from the liver, or even from a dead fish? If it was, that would refute the claim that there’s anything brain-specific here—it would just be that entanglement is pervasive if you look at the right scale!

    Anyway, happy to have comments from anyone who understands more.

  10. Hsin-Yuan Huang Says:

    unresolved_kharma #1, Scott #8:

    I will be talking about the learning quantum circuit & process result next week at the Quantum Colloquium: https://simons.berkeley.edu/events/quantum-colloquium-learning-predict-arbitrary-quantum-processes

    The talk will start with a classical version of the problem, which is known to be hard. I will then show that the quantum problem we consider is somehow much easier because of the quantumness in the input states.

    I am quite surprised by this phenomenon! And I think we only scratched the surface of something deeper within.

  11. Zeb Says:

    I’m confused by the statement of Theorem 4.4 of your paper on discrete bulk reconstruction. Are you saying that every graph which can be embedded on the punctured torus can also be embedded on the punctured sphere? (I don’t even see a clear connection between geodesics and min-cuts for graphs on the punctured torus.)

  12. Scott Says:

    Zeb #11: Thanks for the question. When we say “embeddable,” we mean embeddable onto exactly the same surface as the original graph. The new property is the O(N⁴) vertices.

  13. Richard M Bacon Says:

    “. . . how do I know that exactly the same sort of entanglement modulation isn’t possible using material from the liver, or even from a dead fish?”

    Does that matter? Assume not only that it’s possible, but that it’s ubiquitous. Are the processes arising in, say, a liver or dead fish, indefinite enough to be affected by entanglement modulation? Are those arising in the brain?

  14. Clint Says:

    Scott #9 (and #3):

    I agree and (respectfully) disagree.

    I agree the paper cited is unconvincing. It does not even define “consciousness” (who has?)… but then concludes “consciousness is non-classical.” What exactly then are we talking about that is non-classical???

    I’m a computer engineer. I was taught that quantum effects are at work (necessary) in the transistors/gates of classical microprocessors. The periodic lattice and quantum transport go right into the transistor equation. It would be trivial then to “discover” that there are quantum “signals” correlated with the operational states of a classical microprocessor – including its consciousness I suppose 🙂 But, seriously, for this reason, I think any proposed “black box” test that a system is “quantum computing” has fatal loopholes due to just the fact that maybe the entire universe and thus any subsystem of the universe contains “quantum signals” … if that is the proposed test condition.

    That being said …

    I find it troubling that nowhere (to my knowledge) does the brain store or use classical bits as input to computational operators.

    The operators (gates) of the brain are the dendrites. The inputs to these operators are not classical bits but rather … complex numbers (i.e. amplitudes). The simple way to prove this to yourself is to replace the amplitudes at dendrites with ones and zeros … and see what that gets you … Neuroscientists have known this since at least the 1960’s (best I can tell). These amplitudes can represent positive or negative complex numbers. And, no, it is not just that “there are complex numbers involved in the mathematical representation of the physics happening …” It is rather that the operator (the dendrite) is looking for input to be in the form of a complex number. Change the phase of the input and the computational result is different.

    Dendrites implement various mathematical or logical operations on the amplitudes (NOT gates, XOR gates, FTs, … see Christof Koch Biophysics of Computation and London and Hausser “Dendritic Computation”). The result of that operation then determines whether or not amplitudes are turned on at subsequent dendritic inputs. But that “turn on” information is not at all the limit of the information received by the downstream dendrite in the circuit – the downstream dendrite not only gets the “turn on” information but also gets the amplitude and phase information … the extra information that comes with a complex number. And it uses that information.
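    To make the phase point concrete, here’s a toy sketch (purely my own illustration, not a model from Koch or anyone else) of a “dendrite” as a linear operator summing complex-valued inputs. The output depends on the inputs’ relative phase, which plain 0/1 bits cannot express:

```python
import cmath
import math

# Hypothetical "dendrite": a linear operator that sums complex-valued
# inputs and reports the magnitude of the result.  Flipping the phase
# of one input changes the answer, even though the input "strengths"
# (magnitudes) are identical.
def dendrite(inputs):
    return abs(sum(inputs))

in_phase = [1 + 0j, 1 + 0j]                       # same phase
out_of_phase = [1 + 0j, cmath.exp(1j * math.pi)]  # phase shifted by pi

print(dendrite(in_phase))      # 2.0 (constructive interference)
print(dendrite(out_of_phase))  # ~0  (destructive interference)
```

    Replace those amplitudes with ones and zeros and the two cases become indistinguishable, which is exactly the point above.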

    It gets more interesting when we find that the brain associates amplitudes over receptive fields to represent possible states of “something”. This association would be, of course, a vector of complex numbers representing the state of “something”. Neuroscientists call these amplitudes “bumps” computed over networks of excitatory and inhibitory neurons. The key is that these “bumps” are the result of the interference of amplitudes being purposefully computed by the device. These “amplitude bumps” can be seen in many neural architectures including in the “computational substrate” of entorhinal grid cells just as they can be seen in the “computational substrate” of atoms trapped in an optical lattice.

    Christof Koch points out that (despite all the non-linear subsystems involved) neural operators appear to be naturally linear. Recent studies find normalization appears to be canonical in the brain. The entire “purpose” of the brain appears to be to represent probabilities of states – to be a “predictive engine”. Mechanisms for projection operators are well known. And architectures/mechanisms for the formation of products of state spaces also appear to be present.

    Here’s the thing … as a computer engineer … If I were given a functioning brain slice and asked, “What kind of computer do we have here?” I would say … Well, it’s nothing like a classical computer because you don’t even have storage of classical bits or input to your operators (gates) in the form of classical bits. And it is clearly not an analog computer because we have discrete representations of states.

    So … I don’t know … does anyone here know what kind of discrete computer represents states in vectors of amplitudes, uses the interference of those amplitudes in its (linear) operators, projects outcomes onto subspaces of the state spaces, and overall appears to be good for representing probabilities of states and maybe efficiently solving period and phase finding problems??

    Notice at no point are we concerned with “consciousness” or the brain being some kind of “supercomputer” or even having specialized processors for implementing every possible known quantum algorithm … The question is just: Does it satisfy the postulates?

    Therefore, I respectfully disagree that “the Bayesian implausibility barrier … is astronomical”. I would agree that the barrier is (closer to) astronomical IF the claim were instead that the brain uses atomic-scale devices to encode quantum states (like say in electron spins or something) … although still not impossible. But the quantum postulates do not require a quantum computer to use atomic scale “waves or particles”. They only require a device to:

    (1) Represent discrete information in vectors of amplitudes (complex numbers)
    (where their norm is interpreted as a probability)
    (2) Unitary operators (linear, preserves inner product lengths/angles, phase relations)
    (3) Projection operators
    (4) Products of state spaces

    Those are not “astronomical” barriers.

    Now if someone wants to add MORE hardware design specs or required postulates to the quantum postulates … things like

    (5) The logical device encoding a state vector must be less than 1nm
    (6) The device must have enough memory or specialized hardware for modular exponentiation so we can perform Shor’s algorithm

    Then … fine … maybe closer to astronomical … but THEN we are no longer talking about the usual quantum postulates 🙂

    The quantum postulates do not present an astronomical barrier to Natural (or other) innovation. Nor do they rule out finding a “Quantum Digi-Comp II” limping along between our ears.

    Finally (…) the premise that “quantum = mysterious” is overdone. The postulates are simple. Just like the premise that “consciousness = mysterious” is overdone. It is likely that “quantum computers” may be worse than classical computers in many comparisons. Specific realizations of a quantum computer (just like particular instances of a classical computer) may have very limited functionality due to arbitrary details of configuration, hardware architecture, or programming. And I don’t know about you but I’m not sure if I’m conscious of much … most of the time …

    Kind regards to all and thanks for the great posts!

  15. Scott Says:

    Richard M Bacon #13: Again, if this sort of thing were ubiquitous, it would make a mockery of the idea that we’re learning anything about consciousness in particular (as claimed in the abstract). We’d just be observing some mundane biochemistry.

    Stepping back, this paper either represents the biggest advance in neuroscience in a century, or it can be debunked to the extent it claims anything interesting. I admit the possibility of the former but would bet on the latter at 100:1 odds.

  16. fred Says:

    If brain activity were so obviously quantum at a basic level, then you’d think it would also be really easy to interfere with all this coherence/entanglement by bombarding it with particles/radiation. Then the subject would be able to report any subjective change in their consciousness or objectively perform differently at various intensive cognitive tasks.

  17. Clint Says:

    Scott #15:

    Interesting that your brain reported a probability encoded for the state vector

    $$ \vert TruthValueOfQuantumBrainProposal \rangle $$

    I know I know … you can come up with probabilities with your classical brain … but still 🙂

  18. Enkki Says:

    @Clint 14

    The entire “purpose” of the brain appears to be to represent probabilities of states – to be a “predictive engine”

    does anyone here know what kind of discrete computer represents states in vectors of amplitudes, uses the interference of those amplitudes in its (linear) operators, projects outcomes onto subspaces of the state spaces, and overall appears to be good for representing probabilities of states and maybe efficiently solving period and phase finding problems??

    (1) Represent discrete information in vectors of amplitudes (complex numbers)
    (where their norm is interpreted as a probability)
    (2) Unitary operators (linear, preserves inner product lengths/angles, phase relations)
    (3) Projection operators
    (4) Products of state spaces

    You raise a very interesting question that is new to me and maybe deserves to be investigated as a computation paradigm. Question, is what I quoted above discussed in Koch or are these your own conclusions? Being a mathematician, I am curious if the above has ever been written about.

  19. WA Says:

    @Clint 14:

    ”does anyone here know what kind of discrete computer represents states in vectors of amplitudes…”

    A quantum computer certainly does not represent states in terms of amplitudes. I think that representation is what we use to describe it. If the brain stored and manipulated wave functions as 2^n complex amplitudes it would be very inefficient as a quantum computer.

  20. Clint Says:

    Hi Enkki #18:

    First, I don’t at all subscribe to the proposal that the brain having a quantum model would necessarily say anything at all about “consciousness” except in a very trivial way as being the model that all of the brain’s computations run on. I have an extremely skeptical view of “consciousness” as something “special” or “magical”. My favorite theory from the cognitive scientists is that it is just a form of short term memory the brain puts together after it decides whatever it decides or experiences whatever it experiences. My brain suspends the “consciousness thread” every single night for 6 to 8 hours of maintenance functions … so clearly my “consciousness” is not in control. From a purely computational standpoint I don’t see “consciousness” as being especially interesting or complex either.

    Second, I subscribe to the viewpoint that “quantum computation” is easy to understand, mathematically simple, and has nothing (necessarily) to do with “atomic physics”. My middle school daughter understood quantum computation after about 30 minutes of explanation. I credit Scott with enlightening me that quantum computation is basically a probability theory and then reading some others (Pitowsky, etc. …). However … atomic physics … is definitely NOT easy to understand or mathematically simple … and God bless those who have to do it.

    Third, I subscribe to the “universal” view of computation that it can probably be found (even if in some weird or trivial way) almost everywhere you look. This view is presented well by Moore and Mertens. The idea can be conveyed by understanding that even “trivial toy” models of computation can be universal computers with some additional configuration or architecture. But we can consider things that aren’t universal as still “computing” … even if in simple or limited degrees. The human brain may have evolved to be a “simple computer model programmed for certain kinds of problems but not so good at others” – in other words, to “be what’s it like to be human”. So, “computing” or “a computer” is a concept that quickly takes flight and escapes our very limited conceptualizations of what it “has to be”.

    I’m not a theoretician but just a “doofus hardware guy”. I got into this “quantum brain rabbit hole” when I was in graduate school in the early 2000’s after reading some article in which a neuroscientist proclaimed that the brain was a “classical computer”. I thought naively … “Oh, cool, then I can look at the neural hardware and see the bits, the gate/transistor models for bit I/O, etc.” Well … I still haven’t found those …

    Which finally gets to answering your question (…)

    Christof Koch’s book The Biophysics of Computation was good from my “computing hardware” perspective. It has references to earlier papers that developed the modeling of amplitudes in dendrites in which the inputs must be, in general, modeled as complex numbers. That is … the magnitude and phase information is what matters to the computation. Is it positive or negative amplitude? What is the phase? The synaptic/dendritic mechanisms are fantastically complex … but at the end of the day … again as a hardware guy … I’m just asking … “Look, just tell me what is the logical primitive here??” … The answer is that it is NOT a classical bit. It is a complex number input to the operator (which is the dendrite). We HAVE to tell the dendrite the complex number (amplitude) we will input. That’s how it works.

    If you’ve worked through the quantum postulates then you know that once we’ve made the “design decision” to encode information in our computer in the form of complex numbers (amplitudes) then the rest of the work we have to do is put together some specific relationships and operator architectures. Interestingly, once we’ve landed on using amplitudes with the intention of coming up with a probabilistic computer (a “predictive engine”) … the rest seems to just fall into place. Again, see Scott’s Lecture 9. (There’s also other work of Scott’s and others on WHY we should choose to use complex numbers.)

    Following the postulates … We need a state vector. Well, there is evidence all over the place in the neuroscience literature that the brain associates amplitudes for different possible (discrete) states of “something”. Then we need to be able to take inner products of those vectors. That’s just multiplication and phase operations with amplitudes. Again, see Christof Koch for the evidence that these are ubiquitous neural functions. At this point we have what is called a Hilbert space for our “state vector”. Technically finite … at least in my cortex 😉 The 2-norm of those amplitudes then is interpreted as the probability of the states. I am NOT aware of clear neuroscience evidence that the brain uses the 2-norm. However, there has been recent evidence that normalization is ubiquitous in the brain. Can we get to the 2-norm because in finite complex spaces all norms are equivalent? Or does the oscillatory nature of neural systems get us there? Or maybe Gleason’s theorem? Or eventually imaging amplitudes at the dendritic scale gets good enough that we see it’s the 2-norm? I don’t know.

    Then we need unitary operators. To me this just means “rotate the vector but don’t break the Hilbert space you just set up!” Good evidence for this appears in the brain as “amplitude bumps” programmed over receptive fields (meaning just “some input”) of (positive and negative) excitatory and inhibitory neurons. The interference of positive and negative amplitudes is the essential operational mechanism. That sort of thing just does not happen in a classical model of computation. These “bumps” can then move over “rings” or other topologies. But the key appears to be, to my doofus understanding, that the brain can “point the state vector (the collection of amplitudes) this way or that way” in the state space without destroying or distorting the underlying character of the state space. I know that’s a very non-mathematical way to say it. But it at least looks reasonable that this might be going on. Christof Koch again makes the observation in his book that it is “as if Nature intended, despite all the neural non-linearities, to produce neurons (operators) that behave in a perfectly linear manner”. That linearity, like the discreteness above, is important for us to have a “well-behaved” model of computation.

    Then we need projection operators. These are mechanisms that project the state vector onto a subspace in a way that is “destructive” of that previous state. Neural candidates for this appear to be dendritic pruning (literally cutting away connections/associations between amplitudes) or inhibition (suppressing those associations). The latter actually reminds me of descriptions in “quantum Darwinism” explaining decoherence in the model as the suppression of other possibilities … But in any event, either pruning or inhibition appear to be good candidate hardware mechanisms that I could use as a hardware designer/programmer to project a state vector onto a subspace. Of course, we could program that to be either “deterministic” or “probabilistic” using some distribution or even sampling some Natural signal. Notice we didn’t even mention “measurement” … because we don’t have to 😉 … we’re just looking for hardware devices that look like the projection postulate.

    Finally, we need to be able to put together systems as subsystems. That can be done through the tensor product of the spaces described above. And again the brain appears to have the mathematical operators available for the necessary calculations in the tensor product. So, we could spin up the hardware description of tensor products of spaces – just by multiplying and combining amplitudes which we have available to us in the neural hardware toolbox.
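    For concreteness, here’s how those four postulates look in a few lines of numpy (again, just my toy rendering of the postulates themselves; it claims nothing about neural hardware):

```python
import numpy as np

# (1) State: a vector of complex amplitudes with 2-norm 1;
#     |amplitude|^2 is read as a probability.
psi = np.array([1, 1j]) / np.sqrt(2)
assert np.isclose(np.linalg.norm(psi), 1.0)

# (2) Unitary operator: rotates the state vector without breaking
#     the Hilbert space (norms and inner products are preserved).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.isclose(np.linalg.norm(H @ psi), 1.0)

# (3) Projection operator: destructively project onto a subspace,
#     then renormalize what survives.
P0 = np.array([[1, 0], [0, 0]])
collapsed = P0 @ psi
collapsed = collapsed / np.linalg.norm(collapsed)

# (4) Composite systems: tensor (Kronecker) product of state spaces.
pair = np.kron(psi, psi)
assert pair.shape == (4,)
```

    Any physical mechanism realizing those four operations, at whatever scale, would satisfy the postulates; that’s the whole claim.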

    So … yes, I agree, it seems worth investigating. Although I don’t know what good it would do us other than satisfying some curiosity …

    I mean … if the brain is a “classical computer” … then where exactly are the classical bits stored or input? Where are the operators accepting classical bit inputs? Why complex number inputs instead? Why the interference of amplitudes at the heart of representation of and operations on states?

    By the way, the interference of individual amplitudes (dendritic/synaptic potentials integration) in individual dendritic “cables” is an entirely classical (physics) phenomenon – classical wave interference (best I understand). What is NOT classical is the computational interference possible in the representation of the state using a vector of positive or negative complex numbers. Which is interesting …

    To my knowledge I’m not aware of anyone approaching the “quantum brain question” from this “just find the postulates realized in the hardware” approach. The only “quantum brain” proposals I’m aware of are based on trying to use quantum physics – atomic scale systems – which I see both zero evidence for and zero possibility for due to environmental decoherence.

    If this thesis is correct then I would expect neuroscientists to reply, “Yes, yes, we’ve known about excitatory and inhibitory complex amplitudes for some time now.” And I would expect the computational theorists to reply, “Yes, well, we’ve known for some time now that quantum computers aren’t all powerful magical devices that solve all problems instantly …” In other words, ok, fine, your brain is a quantum computer, but it’s probably like the Digi-Comp II of quantum computers. So, no, this doesn’t explain consciousness or collapse the universal wave function. It probably actually makes many things harder for you than if your brain was a classical computer. Now get back to work.

    (Scott you need a “blog post character limit” so I stop typing at some point … clearly I need to learn how to “just summarize” … or maybe this fails the “simple explanation test” …)

  21. Clint Says:

    Hi WA #19:

    I 100% agree with you that if the brain is a quantum model of computation then it is likely on the low end of the spectrum of potential quantum hardware. Certainly mine would be 🙂 But I don’t really know. The ONLY reason why I can imagine that Nature would choose this model in evolution is that it seems to be more efficient than classical at phase and period finding. And maybe primitive life forms needed an efficient model for period and phase finding in Nature??

    Remember that I see the Digi-Comp II as being a full-fledged member of the “universal classical computers” club just as much as the Cray at Oak Ridge. So, a computer being “trivial” or “a toy” or “the most inefficient and stupid example I can imagine” … wouldn’t be a disqualifying argument.

    On using amplitudes to represent states I defer to Scott and to Mike and Ike to argue that one.

    Or forgive me if I misunderstand … Maybe you are arguing an interpretational point and presenting something like the epistemic model interpretation? If so, then I’m fine agreeing with you there also … or at least not disagreeing. I don’t think that someone giving me a computational device (whether a Dell laptop or a brain slice) helps us resolve the measurement problem. From my doofus hardware point of view there is no measurement problem 😉

    Best regards!

  22. JimV Says:

    Wow, thanks to Clint for some interesting and impressive comments!

    Does this complex-computation model suggest that neural networks could be made more efficient by using complex numbers and phase calculations?

    I read somewhere about a study which concluded it takes about 1000 nodes in a conventional neural network to simulate all the properties of a single neuron. Maybe that is what it takes to approximate complex arithmetic using a network of computations involving one-dimensional (real) numbers (approximated by discrete floating-point values).

  23. Vincent Says:

    On the general theme of ‘development[s] in QC, [you] feel completely jaded about [but that] might be actually exciting’ (to someone like me, who didn’t spend decades studying QC), can I ask you something about your old post on Shor’s algorithm? (https://scottaaronson.blog/?p=208)

    To me it sounds like you have three steps:

    1) compute \(x^r \mod N\) for all \(r\) in the range from 1 to N
    2) put them all together in a long superposition
    3) apply some quantum magic to extract the multiplicative order of \(x\) mod N from this superposition.

    But… if step 1 was easy (or doable at all) then we wouldn’t need any step 2 or 3: we could just look at the outcome of our 300 gazillion computations and see which r gives outcome 1!

    So obviously I am misinterpreting something, but what? Can you create the superposition without doing the computation first? But how?

    Even creating a superposition of N quantum states that need not be computed first but are just given to me for free by nature seems to require N steps (of adding them together one by one) and therefore to be infeasible. What am I missing?

  24. Scott Says:

    Vincent #23: So this is, like, the central point in all of quantum computation. Yes, you can do a computation on 300 gazillion different r’s in superposition, for the cost of only one. That’s what superposition means, in a sense. But then you’re not done!! The trouble is that, if you then look, without having done anything else, the entire superposition will collapse to a single random r. So, you’ll get no benefit over just having picked r randomly.

    So what’s the point then? It’s that, once you’ve got the computations for all the r’s in superposition, you can then cleverly try to measure in a different basis. I.e., you can try to choreograph a pattern of interference in the amplitudes, such that for an answer you want to see, the contributions from all the different r’s reinforce each other, whereas for the answers you don’t want to see, the contributions from all the different r’s interfere destructively and cancel each other out. That’s exactly what Shor’s algorithm manages to do. It’s very specific to factoring, discrete logarithms, and some other problems with similar group structure—it wouldn’t work for some arbitrary problem where you’re searching for a single r out of gazillions in the haystack. Read the post again!

  25. Vincent Says:

    Hi Scott,

    thanks for the extremely swift reply! (Isn’t it the middle of the night where you are?) I still feel, however, that I already knew everything you write starting at ‘the trouble is’. (Already knew from your earlier writings, that is, so thanks for that too!) My problem is with the sentence before that: “Yes, you can do a computation on 300 gazillion different r’s in superposition, for the cost of only one.” Come to think of it, two problems actually.

    The first problem is rather basic: okay, (safely) assuming that you are right and I can do that computation, how do I get the superposition in the first place? Why does adding a gazillion pure states not take a gazillion additions? You sometimes speak about ‘preparing a quantum state’ but where is the time that that costs factored in?

    The second point is perhaps more philosophical. Suppose I have a superposition of all r. Then intuitively it is believable I can do something like ‘square them all’ or ‘multiply all of them by 17 mod N’ or whatever in this magical simultaneous quantum way, because this computation is just something I can do to any r that is part of some suitable algebraic structure (e.g. a ring). So yes I believe I can ‘represent’ elements of arbitrary rings by smartly chosen vectors in some Hilbert space and then represent the algebraic manipulation of them with operators on that Hilbert space, and if I am really smart and manage to do that in a way where these operators are linear then your claim about superpositions follows immediately. So far so good.

    However, I feel that the use of the r’s here is a bit different. The operation you want to do is ‘multiply x with itself r times’. So here r is not an element of an abstract algebraic structure, it is something in the real, physical world. A quantity, the number of times I do something. 3 being more than 2 has an actual meaning here, which gets lost when I represent 3 and 2 with vectors named |3> and |2> that are sitting somewhere in a Hilbert space. So while I feel comfortable with doing, say, matrix multiplication on a superposition of matrices, raising some fixed number x to a superposition of exponents feels like a different beast. Does this distinction make sense, or am I overlooking something stupid?

  26. Clint Says:

    Hi JimV #22:

    You ask a great question!

    If there is benefit to using complex numbers in neural networks then it could be the case that the brain is a classical model of computation but exploiting that complex computation benefit. Just because the brain encodes information in complex numbers does not at all (of course) prove that it is a quantum model of computation.

    There is some exploration in that direction with complex-valued neural networks, and I’m sure there’s more I’m not aware of.
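    (To make the idea concrete, here is a minimal, purely illustrative NumPy sketch of a single complex-valued “neuron”, using a modReLU-style nonlinearity that rectifies the magnitude of the pre-activation but preserves its phase. All the names and numbers here are invented for illustration; nothing about the brain is implied.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def complex_neuron(x, w, b=0.0):
        """One complex-valued 'neuron': a weighted sum of complex inputs,
        followed by a modReLU-style nonlinearity that rectifies the
        magnitude of the pre-activation but preserves its phase."""
        z = np.dot(w, x)                 # complex pre-activation
        mag = max(abs(z) + b, 0.0)       # nonlinearity acts on |z| only
        return mag * (z / abs(z)) if abs(z) > 0 else 0.0

    # Two inputs with identical magnitudes but different phases will
    # generally give different outputs: the phase, not the amplitude,
    # carries the information here.
    w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    x1 = np.exp(1j * np.array([0.0, 0.5, 1.0, 1.5]))
    x2 = np.exp(1j * np.array([0.0, 1.5, 0.5, 1.0]))
    y1, y2 = complex_neuron(x1, w, -0.5), complex_neuron(x2, w, -0.5)
    ```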

    And I have no idea what the actual algorithms might be that are realized in the neural substrate. (My understanding is our imaging/testing limits at the dendritic scale still leave us guessing.) There could be some kind of evolved specialized period- or phase-finding algorithm in there. Or, maybe Nature painted itself into a corner when it evolved the first animals with neurons, and it is “stuck using the interference model” evolved for some long-forgotten purpose and no longer even uses anything we would consider an advantageous “quantum algorithm”. Maybe it’s just a very BAD quantum model trying to implement classical logic.

    To emphasize again, in case it wasn’t clear in the posts above, just how weak this claim is, an analogy might be best. Let’s say someone claims that their hand is a “tunnel boring machine”. Well, how is that defined? If the definition is “a device that can dig a tunnel”, then, yes, we have to allow the hand into the set of things that can dig a tunnel. But … come on … right?

    Well, that’s the point being made about computation. Again, see Moore and Mertens or see The Power of the Digi-Comp II for examples of full-fledged universal computer “curiosities”. Those examples surprise us with “machines” that we (most of us, anyway) would never expect to be capable of serving as a model of computation … but that nevertheless get over the bar. It is surprising (at least to me) what a low bar these examples reveal “computing” to be.

    Which is my original point above that … while there is a lot of excitement about “quantum computing” (and that’s fine) … we should be reminded that “computing” just as a general concept (of which quantum computing is an example) has this kind of crazy universality about it such that the actual design space for models of computation may be very large, have unexpected entry points, be full of surprises, and be filled with examples of models of computation that are “limited, inefficient, pointless, etc.”.

    If we just focus on the postulates of quantum computation … the hardware design requirements if you will … then those are not crazy or “astronomical” requirements just in themselves. As Scott has pointed out … the core “operating system” is quite simple … it’s the apps that run on it that can be complex or difficult to control.

    Thanks again for the great questions!
    Kind regards

  27. Scott Says:

    Vincent #25: Both of your questions have straightforward technical answers.

    To prepare an equal superposition of 2^n strings, you can start with n qubits each in the |0⟩ state, and then apply the Hadamard gate (which maps |0⟩ to (|0⟩+|1⟩)/√2) to each of them. Each of those 2^n strings can then be interpreted as a possible r, written in binary notation. The mapping from Σ_r |r⟩ to Σ_r |r⟩|x^r mod N⟩ is then done using a network of CNOT and Toffoli gates, via the same sort of algorithm that a classical computer would use—except now it’s acting on the entire superposition over r’s.
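    In NumPy, with toy sizes (the choices of n, x, and N below are arbitrary small values, just for illustration):

    ```python
    import numpy as np
    from functools import reduce

    n = 5  # qubits in the exponent register (toy size)
    ket0 = np.array([1.0, 0.0])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    # Apply H to each of n qubits starting from |00...0>: the result is
    # an equal superposition over all 2^n bit strings, each with
    # amplitude 2^(-n/2).
    state = reduce(np.kron, [H @ ket0] * n)

    # Classically, the map r -> x^r mod N is just modular exponentiation;
    # in the quantum circuit the same arithmetic runs on every branch of
    # the superposition at once.
    x, N = 7, 15
    table = [pow(x, r, N) for r in range(2**n)]
    ```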

    Rather than continuing to teach my undergrad course via blog comment, 🙂 at this point I’m going to refer you to my Intro to QIS lecture notes—Chapters 19-21, and refer back to the earlier chapters as needed.

  28. Vincent Says:

    Great, thank you!

  29. Vincent Says:

    I did a bit of reading in your notes and just came back here to tell you how great this Hadamard trick is now that I finally understand it. Best use of tensor products I’ve seen all year. It probably looks totally lame and obvious to you, but for me something finally fell into place.

    For years I have watched you make arguments that sounded to my ears like ‘these stupid journalists make it sound like having a quantum computer is like having a room with one googol classical computers in it, so that if you have a huge search problem you can give each computer one instance of it and then one computer will find the answer. But what they are forgetting is that, if you do that, you still have to walk along each computer and read off their output one by one and this will still take you a googol seconds, so you gain nothing’.

    And I was thinking: yes, clever, walking and reading is indeed annoying, but why does Scott not make the much more obvious point that getting the googol computers set up in this room in the first place will take an even more annoying amount of time, and space, and sweat and tears, etc., before anyone has computed anything? But now I finally see that in the quantum world, this would only take log(googol) blood, sweat and tears, which is indeed comparatively negligible. A true miracle of quantum mechanics, even if for most applications it is completely useless.

    I’ll leave this here for if some reader was struggling with the same issues and go back to reading your notes on the special situations where this miracle actually can be useful… Thanks again!

  30. JimV Says:

    I hesitate to clutter this thread with somewhat off-topic persiflage, but if permitted I would like to thank Clint for his generous reply and links (which I have accessed briefly and filed for further review).

  31. Mitchell Porter Says:

    Clint –

    You seem to be missing entanglement, or superposition of product states, in your concept of quantum computing. It’s not enough to just have individual complex-valued registers, you need complex-valued superpositions of sets of register values. Something analogous to |0> |1> + |1> |0>… and that has to be different from |0> |1> – |1> |0>, which creates difficulties for any attempt to regard a quantum superposition as simply a probability distribution.
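    (If it helps make that last point concrete: a quick NumPy check, writing the two states as 4-vectors over the basis |00>, |01>, |10>, |11>. Their computational-basis statistics are identical, but a Hadamard on each qubit before measuring exposes the relative sign.)

    ```python
    import numpy as np

    # The two superpositions of product states, as 4-dim vectors over
    # the basis |00>, |01>, |10>, |11>:
    psi_plus  = np.array([0, 1, 1, 0]) / np.sqrt(2)   # |01> + |10>
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)  # |01> - |10>

    # As probability distributions over computational-basis outcomes
    # they are identical...
    p_plus, p_minus = np.abs(psi_plus) ** 2, np.abs(psi_minus) ** 2

    # ...but measuring both qubits in the rotated (Hadamard) basis
    # tells them apart: the relative sign is physical.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    HH = np.kron(H, H)
    q_plus  = np.abs(HH @ psi_plus) ** 2    # supported on |00>, |11>
    q_minus = np.abs(HH @ psi_minus) ** 2   # supported on |01>, |10>
    ```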

  32. Ilya Zakharevich Says:

    Clint #14 #20

    You seem to be very excited by the existence of reports claiming that the functioning of neurons has some aspects reminiscent of quantum mechanics. However, biology is not like CS (or math), and having one voice claiming (and/or “proving”) something does not mean that it has any basis in reality, and/or is crucial enough to be taken into account.

    For example, a few months ago JimV claimed here (more or less) that “digital models of 200-neurons ‘brains’ of nematodes” can navigate labyrinths as well as the actual nematodes. I’m pretty sure¹⁾ that these models were 100% classical. If so, then this more or less establishes the fact that quantum effects are irrelevant for the functionality of neurons. (At least for nematodes!)

     ¹⁾ I could not find the reports from which JimV got his claim. However, OpenWorm uses the Hodgkin–Huxley model of neurons in its emulation of nematodes.

    I’m all for simplicity, and IMO the reasonable “everything is simple” a priori conjecture would go along these lines:

    • No matter how complicated real neurons are, only the simplest aspects of their “internal mechanics” are needed for the correct functionality of nets made of neurons.
    • There is a number \(N\) s.t., on average, one can successfully replace neurons by their digital circuit models with \(N\) gates.
    • I would expect \(N\) to be about 20…100 (and would guess it to be closer to 20). (Well, maybe one would need multi-input AND- or OR-gates.)
    • There is \(N’∼N\) and a reasonably simple algorithm which given a complicated “working” model of a neuron, produces a digital model with \(N’\) gates (on average) which functions “more or less as effectively” as the complicated model.
    • I presume that I’m not alone in entertaining the claims above. So if in the timeframe of ∼10 years the conjecture is not confirmed, one may consider it refuted!

    (Indeed, the assumptions above [if true] should be relatively easy to verify now — when the needed “complicated models of a neuron” are known, and there is a test environment to check whether “those simplified models work as well as this”.)

    This provides a refutable conjecture. It may be considered as the (Bayesian) null-hypothesis to bounce experiments off. And IMO only after it is refuted the “quantum brain” claims can gain ground!
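    For concreteness, the Hodgkin–Huxley model mentioned in the footnote is just four coupled ODEs per neuron. A minimal forward-Euler sketch with the textbook squid-axon parameters (all constants are the standard ones; the drive current is an arbitrary choice) is enough to produce a spike:

    ```python
    import numpy as np

    # Standard Hodgkin-Huxley squid-axon parameters;
    # units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4

    # Voltage-dependent opening/closing rates of the gating variables.
    def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
    def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

    # Forward-Euler integration from rest, constant 10 uA/cm^2 drive.
    dt, T, I = 0.01, 50.0, 10.0
    V = -65.0
    m, h, n = 0.053, 0.596, 0.317   # approximate steady state at rest
    trace = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)   # sodium current
        IK = gK * n**4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                 # leak current
        V += dt * (I - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    ```

    With this drive the membrane voltage overshoots 0 mV, i.e. the model fires action potentials, which is the behavior any \(N\)-gate digital surrogate in the conjecture would have to reproduce.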

  33. JimV Says:

    Confession and apology: I got the (supposed) information on the C elegans experiment from another commenter at Back Reaction in a thread in which we were debating Lorraine Ford on whether AI was possible. The impression I got from that comment was that an artificial worm had been built with 200 neuron-emulating electronic circuits and that it had duplicated real worm behavior. I had previously read that such worms were able to navigate and memorize simple mazes placed between them and food.

    The closest to that which I could find online today is

    JIAN-XIN XU and XIN DENG https://www.worldscientific.com/doi/abs/10.1142/S0218339010003597

    “With the anatomical understanding of the neural connection of the nematode Caenorhabditis elegans (C. elegans), its chemotaxis behaviors are investigated in this paper through the association with the biological nerve connections. The chemotaxis behaviors include food attraction, toxin avoidance and mixed-behaviors (finding food and avoiding toxin concurrently). Eight dynamic neural network (DNN) models, two artificial models and six biological models, are used to learn and implement the chemotaxis behaviors of C. elegans. The eight DNN models are classified into two classes with either single sensory neuron or dual sensory neurons. The DNN models are trained to learn certain switching logics according to different chemotaxis behaviors using real time recurrent learning algorithm (RTRL). First we show the good performance of the two artificial models in food attraction, toxin avoidance and the mixed-behaviors. …”

    I think I read about the 1000-node study in Wired, but the quickest source I found today is

    New Study Finds a Single Neuron Is a Surprisingly Complex Little Computer
    Jason Dorrier, Sept. 2021 https://singularityhub.com/2021/09/12/new-study-finds-a-single-neuron-is-a-surprisingly-complex-little-computer/

    “In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. While they expected the results would show biological neurons are more complex—they were surprised at just how much more complex they actually are.

    In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

    Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI. …”

    Like early neural networks (I’m thinking of the one that was trained to interpret written decimal digits and always returned a best guess to random squiggles) (sorry, no source link) my brain takes what little information it has and produces a best guess, which it considers solid information until it learns better.

    As always, I leave it to moderators’ judgement as to whether this comment is worth publishing.

  34. fred Says:

    It seems to me that a good grasp of both theoretical and experimental QM is required to really understand quantum computing well (e.g. my recent confusion that the physical gadgets implementing QC gates had to also be in a state of superposition, which isn’t true… I had to read actual papers about how those gates are realized to understand it).

    One thing I’ve noticed is that the MIT QM undergrad courses (from the videos that are online) don’t spend much time talking about the actual measurement process; the emphasis is (understandably) put on computing the wave function.

    The few times measurement is brought up is to illustrate peculiarities, like manipulation of the uncertainties by repeated measurement, but the relation between operators and observables tied to actual measurement processes isn’t emphasized all that much (often a bunch of very abstract operators are conjured to manipulate the state more easily, and it’s not clear at all if this means anything from a practical/physical point of view).

    Maybe it comes from the fact that the professors for those undergrad classes are theorists and not experimentalists.
    But I guess the (MIT) undergrad students have opportunities to do more hands-on labs to understand better the role of measurement/decoherence, etc.

  35. Scott Says:

    fred #34: FWIW, the vast majority of quantum computing theorists I know have never done experimental work in quantum information. I personally need a guide on lab tours to tell me what’s the photon source and what’s the coffee machine. 🙂

    What quantum computing theorists do have is years of experience proving theorems about models of computation. Empirically, it seems that this and physics experience often substitute for each other, producing the same QI insights from totally different directions.

  36. fred Says:

    Scott #35
    thanks. Interestingly I read that Peter Shor’s background was originally in applied mathematics (his PhD at MIT was on bin packing).

  37. Ilya Zakharevich Says:

    JimV #33

    A pity! I’ve lived with this particular¹⁾ “everything-is-simple” conjecture for about 20 years now²⁾. Finally, after your few-months-old post mentioned above, I started to think that there was now a real framework³⁾ to test it!

     ¹⁾ In math, I would say that my “everything-is-simple” conjectures have ≳30% chance of success. In non-math, it seems to be (understandably?) lower…

     ²⁾ Assuming that this conjecture holds, and given the algorithms from the conjecture (and given the network topology): a digital model of the neural network of a human brain has already been within reach with the hardware of 20 years ago. (Within ∼ a couple of orders of magnitude of “real-time speed”. And this assumes “sufficient — but reasonable⁴⁾ — funding”.)

     ³⁾ It should have consisted of two parts: a (successful) digital model of a nematode, and a non-trivial task (like navigating a labyrinth) which a real nematode can perform.

     ⁴⁾ “Reasonable” means “of the same order of magnitude as the current QC funding”.

    Well, this means that the timeframe in my conjecture above (#32) is not 10 years, but “10 years after Footnote (3) above becomes reality”!

  38. Clint Says:

    Hi, Mitchell Porter #31:

    Great point! This proposal is missing a lot of details!! It’s important to say that I do not support this proposal on the grounds that there is “evidence of entanglement in the brain.”

    Where is the support?

    1) The peculiar universality of computation that allows even “curious, pointless, inefficient, bad, stupid, crazy” models of computation to exist.

    2) The postulates of quantum mechanics NOT requiring logical primitives/operators to be realized in atomic-scale systems, but ONLY requiring some “linear algebra” to be realized in “some kind of hardware”.

    3) The features of the neural model of computation that (at least faintly) resemble (even if in a weird way) features that are central to the quantum model of computation: complex numbers (amplitudes) for encoding information and input to operators, interference of amplitudes, normalization, discreteness, linear operators, ability to represent states in superpositions (including for multi-qubit states), projection operators, …

    But … while all of that might sound intriguing … it should, of course, be met with the usual skepticism and demands for empirical support.

    The original point, the reason I’m taking up space in Scott’s thread (acknowledging Scott’s nearly infinite capacity for grace in allowing ideas to be expressed) … it is common to hear that “quantum computing is HARD” or “quantum computing is about quantum physics”. But the universality of computation and the simple quantum postulates don’t force that upon us … I know MAYBE the universe or physics does … but we should leave open the possibility for “curious quantum computing models” to show up.

    To emphasize, even IF we have a quantum neural model of computation … I don’t see that as ANY more incredible or amazing than if we have a classical model of computation (maybe even less remarkable in some ways). I have no idea why anyone would want to connect this to “consciousness” or “quantum gravity”. It’s just a linear algebra model. In my view, the quantum model of computation is not mysterious or magical nor would it give us some kind of superpower of cognition. If given the choice between the two I would CHOOSE myself to have a classical model of neural computation because I’m familiar with how powerful it can be and prefer classical logic.

    But, to the demand for entanglement. My first response would be … IF we’ve satisfied the quantum postulates in our device hardware description then … ALL of the consequences of the model of computation are “baked in” … although they might depend on the details of a specific configuration.

    The second response would be … I’m no expert on entanglement but my best impression of the “state of the field” is that at this point we have some proposed “measures” for entanglement and ideas about “where it comes from” and maybe this person over here will say “oh, sure we understand it perfectly” but someone else might say “our full grasp is incomplete”. So … I would just go back to the postulates and say it looks like we are allowed to encode multi-qubit states including states that are not separable. I know that sounds like weaseling out of the question but, again, I ONLY want to satisfy the postulates and not get drawn out into applications that run on the postulates. Entanglement, I would argue, is one of those (interesting) “immediate consequences” baked into the postulates that is nevertheless NOT a postulate. Again, I’m going by Mike and Ike here.

    From a hardware design point of view … to program entanglement we need a CNOT gate, right? So, I need some evidence that we might have the devices, mechanisms, or architecture for that. (I guess ignoring that multi-qubit states are usually entangled anyway, right?)

    Koch’s book mentions work using four arborized neurons to evaluate the exclusive-or (XOR) function (classical). Can we then propose using a superposition state (amplitudes) for the control input?

    Although certainly we don’t have evidence the brain actually does this, it at least looks like there might be a design path for us to generate a Bell state since the XOR is possible and representing superposition states may be possible.
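    (In gate terms, that design path is just the textbook two lines: a Hadamard on the control, then a CNOT, which is exactly “XOR the control into the target”. A NumPy sketch follows, with the obvious caveat that nothing here claims the brain actually implements these matrices:)

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT:
                     [0, 1, 0, 0],                 # XOR the control
                     [0, 0, 0, 1],                 # qubit into the
                     [0, 0, 1, 0]])                # target qubit

    # Control qubit in superposition (H|0>), target in |0>:
    ket0 = np.array([1.0, 0.0])
    state = np.kron(H @ ket0, ket0)     # (|00> + |10>)/sqrt(2)

    # Running the XOR on the superposed control entangles the pair:
    bell = CNOT @ state                 # (|00> + |11>)/sqrt(2)
    ```

    The output is a Bell state: neither qubit alone has a definite value, yet their measurement outcomes are perfectly correlated.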

    At least, maybe we can say the possibility that the CNOT function is realizable in the brain is not for sure zero.

    But, yeah, lots of “maybes” in there …

    Let me share this. Before I read Christof Koch’s book (and some other neuroscientists like Mel) I thought (not sure where I got this from) that neurons were just sort of these dumb threshold sum operators. Koch actually says in his book that this is a “moronic” view of the computational capabilities of neurons and the brain. I was shocked to discover instead that almost any conceivable mathematical or logical operator I would want as a hardware designer could be available using computational capabilities in the dendrites, synapses, intracellular medium, etc: Phase operators, gain operators, FTs, multiplication, XOR, OR, and-NOT, weird computations of high dimensional sigmoidal curves in dendritic sub-units, … Now, does the brain really use all of those? Or when does it use them and for what operations?

    The position we are in is having some tantalizing evidence for the postulates being satisfied at the register and gate level but we have no idea if the brain has something like the power of a hardware description language for synthesis and compilation into the substrate or if the brain is just a collection of “cobbled together” less-than-optimal classical algorithms that evolution randomly selected.

    So, again, the point of the original reply was … just to argue for the position that there is reasonable, conservative, non-crackpot evidence that the possibility is “less than astronomical” from a “postulates at the hardware level” approach.

    Best regards! 🙂

  39. WA Says:

    Clint #21:

    If the computations in the brain are quantum, where are the qubits? My understanding is that it is hard to find a candidate for the qubits because the brain computations happen on large time/space scales, and at a high temperature.

    On the other hand, like you I wonder if quantumness can creep into the brain computations in a more subtle way. For example, exotic quantum behaviour at room temperature is known to be possible, like in topological quantum materials, so even though the brain is an incoherent hot mess, its thermal state could display strong quantum effects.

  40. Nerd Who Works on Game Design Says:

    Here’s the truth about working in programming or “computer science.” As I get closer and closer to turning forty, the sense of realizing how alone, unsuccessful, and clearly unlovable I truly am is becoming more and more amplified. I now know I am at the point of no return, where no woman of any age will want anything to do with me. Younger women will simply think I’m a creepy old guy, and women closer to my age that are still single are becoming increasingly rare. Seeing families and couples everywhere I go is starting to become unbearable to the point where I literally wake up not wanting to be alive. I’m becoming more and more of an emotional wreck over stupid shit. A week ago I woke up in the morning after drinking heavily and threw up a bunch of Xanax and Klonopin I don’t even remember taking and passed out again, and when I woke up and realized I could have died but didn’t, I literally started crying because I realized that I still have to continue existing in this brutal and uncaring world, despite the fact that I’m losing the will to, bit by bit. I would never take my own life because suicide is a sin and the chickenshit way out, but truth be told, I find myself slowly but surely losing the will to live, and more often than not, secretly hoping that the more reckless and unhealthy aspects of my life finally catch up to me and put me out of my misery. Thanks for listening, Scott, it really does mean a lot to me to know that there are people I can say these things out loud to and not bottle up and let eat me inside…

  41. Scott Says:

    Nerd Who Works on Game Design #40: I’m sorry, and I hope things turn around for you.

    I won’t give you the standard advice about how to turn things around since you’ve surely heard it BB(1000) times, but for what it’s worth, consider me to endorse almost all of that advice. Or rather: I both “give you permission” and urge you to redouble your efforts on Match and eHarmony and Tinder and whatever else. If you won’t do it for your sake, do it for mine.

  42. Another programming nerd Says:

    Game Nerd, and scott:

    Here’s my own experience of being a college student pursuing a degree in CS, while being an awkward nerd.

    I live by myself, I drive to my university all by myself, I eat by myself, hell, I even end up talking to myself in my own apartment. I don’t get phone calls from friends (don’t really have friends) or family (my parents would call, but just to scream at me about spending money; my sister doesn’t call, as she has everything: friends, boyfriend, involvement); my cousin doesn’t call unless it’s for access. I’ve never really had emotional connections with girls, never slept with a girl, never got intimate with a girl, never really hung out with guys here (besides my one friend, but he lives far away now), and so on. I spend most of my time reading self-improvement books and articles, watching HIMYM or any other TV series, or sleeping. When I am on campus, I read books, go to class, do my homework, study, or do my most favorite activity: talk to people, especially girls. It’s mainly just for fun, unless it’s an interesting conversation, in which case I’ll ask for their number and share how much I’d like to meet them later and get to know them. Unfortunately, none of them ever respond. (So I spiral into the same habits.) Nights like Thursday–Saturday, I go out to the bars and clubs nearby by myself and try to have fun by amusing myself and those I know, as well as new people I meet. Okay, maybe I am a little needy and desperate for connections, because I can’t always be comfortable being by myself all the time.

    Today, while I was in the business building restroom, I got an e-mail from Campus Police to call them, as they received complaints against me. I went to the Campus Police Station, since it’s nearby, to deal with it. As much as there was fear and anxiety on my mind, I went there and asked to speak to the Sergeant responsible for the case. He made it clear that I am not in trouble and no charges or report would be officially filed, but that this was more to make me aware of it. We moved to his office and talked about it. Some of the girls felt I came on too strong, didn’t show my intentions clearly, considered me creepy, and questioned if I even go to the university I’ve been at for 3 years. The sergeant told me that one girl thought I was following her by car, which was absurd, because that goes against my morals and my own conscience. We talked for an hour, and within that hour we talked about my loneliness and social isolation, where I’m from and how I’ve moved around as a kid, when my bubble broke, as well as other things. He understood that I was seeking connections and fun, and I actually cried about how this scares me now and how much I crave fun out there. In the end, we had a good conversation in which we talked about him, myself, and how the world works. I sincerely told the sergeant to give my apologies to the girls I creeped out, and he said he would let them know that I’m not a danger and explain my true intentions, which were (a) confidence work, (b) learning to communicate with people, and (c) making friendships/relationships.

    The complaints make me feel like I’ve been branded a creep, a weirdo, and an anomaly. I also wonder whether I’m unsafe to society, since I apparently creep some of these girls out. I just can’t get it out of my mind. It is really killing me inside that I was seen this way. How do I get through this pain inside my mind?

    I feel like nerds like us are treated as second class citizens.

  43. Clint Says:

    Hi WA #39:

    Great question!

    If I may quote from Mike and Ike:

    “Quantum mechanics is easy to learn, despite its reputation as a difficult subject. The reputation comes from the difficulty of some applications, like understanding the structure of complicated molecules, which aren’t fundamental to a grasp of the subject; we won’t be discussing such applications. The only prerequisite for understanding is some familiarity with elementary linear algebra. Provided you have this background you can begin working out simple problems in a few hours, even with no prior knowledge of the subject.”

    and also from Scott:

    “But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves, or particles — then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.”

    Now, let’s imagine we are in a different branch of the multiverse, the year is 1827, and Gauss, who first called complex numbers “complex numbers” (I think), after publishing three papers on probabilities as the foundation for the Gaussian law of error propagation, decides to play around with complex numbers one Sunday afternoon for a few minutes and see if he can construct a probability theory based on using positive AND NEGATIVE complex numbers. He decides to represent states as vectors of complex numbers (amplitudes), employs unitary operators for rotating the vectors in the state space, projection operators for “measurements”, and tensor products for putting together multi-state spaces. He comments in a footnote that the 2-norm MUST be used to represent the probabilities of outcomes, that he has a proof in mind for why that is obvious and hopes to get around to publishing it at some point …

    PRESTO! In that alternative universe Gauss just “invented” or “discovered” quantum mechanics almost a hundred years before physics experiments existed that would lead physicists to “find it”.

    By the way, all credit goes to Scott for pointing out that QM COULD have been spelled out just as a mathematical exercise in probability theory with NO input from physics. That’s not the way it happened in our (branch of the multi-) universe but it COULD have happened that way.
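    The hypothetical Gauss construction above is easy to make concrete. Here is a minimal NumPy sketch (purely illustrative, and obviously not anything Gauss wrote) of states as complex amplitude vectors, a unitary rotating them, and the 2-norm rule turning amplitudes into probabilities:

```python
import numpy as np

# A "state" is a vector of complex amplitudes with unit 2-norm.
state = np.array([1, 0], dtype=complex)  # definitely outcome 0

# A unitary operator rotates the state vector in state space.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# The footnote's 2-norm rule: the probability of each outcome is the
# squared magnitude of its amplitude.
probs = np.abs(state) ** 2
print(probs)  # ~[0.5 0.5]: either outcome equally likely

# Unitarity preserves the 2-norm, so the probabilities always sum to 1.
assert np.isclose(probs.sum(), 1.0)
```

    Swap H for any other unitary and the probabilities still sum to 1; swap the 2-norm for the 1-norm (and unitaries for stochastic matrices) and you recover classical probability theory.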

    I say all of that in order to say …

    The postulates of quantum computation do not require or depend upon atomic-scale physical systems. Physics is an app that runs on this model.

    By the way, we should probably call this something like “The interference model” of computation or maybe “The negative amplitude” model of computation instead of the “quantum” model … leaving that for when it is applied to physics … but that’s what our history left us.

    Now, shift gears … We are back to the present. Ask any neuroscientist the following questions:

    (1) Does the brain represent the probable “state of something” as an amplitude (complex number)? Specifically, are the magnitude and the phase information essential when characterizing the input to a dendrite? For example, if we change the phase have we changed the computation? Or, is only the ON/OFF information important for the computation? [Answer: Almost certainly both magnitude and phase of inputs are essential.]

    (2) Does the brain represent orthogonal (distinguishable) “states of something” as “associated” amplitudes such that if one of the possible states has all the amplitude then the other one is inhibited (or maybe pruned off) or that they can “share” the amplitude between them with each having some percent of what would represent a “definite” outcome? In other words, the amplitudes for possible orthogonal states over receptive fields in the brain may be thought of as a vector in state space. [Answer: Possibly, although the identification, isolation, and measurement of such computationally linked amplitudes is an open area of research.]

    Now, put Gauss in the time machine and bring him forward to hear those two answers from the neuroscientists and he would say: Well that’s interesting because a vector of amplitudes in state space is … exactly the core definition of a qubit.

    Qubits are NOT DEFINED as being “matter, or energy, or waves, or particles” but as … A vector of amplitudes, a complex vector, in state space.

    Could the neural architecture support a vector of amplitudes representing a state space? I don’t think any neuroscientist would feel like that is a controversial possibility.

    A qubit is a list of amplitudes representing a vector in state space.

    The state space does further need to support an inner product. It is believed the brain is capable of multiplication and phase-shift operations on lists of amplitudes (complex vectors), outputting another amplitude. In fact, after reading Koch’s book you’ll come away appreciating that multiplication of amplitudes and phase-shift operations are all over the place in dendritic units! And, if this is a FINITE state space (pretty sure that’s our brains), then … PRESTO, we’ve got a state vector (qubit) in Hilbert space 🙂
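    For concreteness, here is a hedged sketch of the two operations invoked here, inner products and phase shifts on complex amplitude vectors, with no claim about how (or whether) neurons actually implement them:

```python
import numpy as np

# Two unit vectors of complex amplitudes ("states").
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Inner product: the overlap between two states. Zero overlap means the
# states are orthogonal, i.e. perfectly distinguishable.
print(abs(np.vdot(psi, phi)))      # ~0: psi and phi are orthogonal

# A phase shift multiplies an amplitude by e^{i*theta}. It leaves every
# outcome's magnitude unchanged, yet it changes the computation:
theta = np.pi
shifted = psi * np.array([1, np.exp(1j * theta)])
print(abs(np.vdot(psi, shifted)))  # ~0: a pi phase shift made the state orthogonal to its old self
```

    This is exactly why "only ON/OFF information" would not suffice: the magnitudes of psi and shifted are identical, but interference tells them apart.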

    We don’t need to find an atomic-scale system candidate for the qubit because the brain appears capable of maintaining a vector of amplitudes in state space over receptive fields with this dendritic input architecture.

    To say again what I said above: I see NO WAY for the brain to be a model of atomic-scale quantum computing. Decoherence of atomic-scale systems prevents that (best I understand). So, I agree with you 100% that there is NO WAY the brain is using entangled atoms or whatever … except in the same sense that my Pentium processor (and all other physical objects) relies on the laws of atomic physics to be what it is – so maybe for some randomness sampling … maybe. No, the computational model must consider only the input/output at the device level of the logical representation. The logical representation level of the brain is way above the atomic scale.

    The real … Emperor’s New Clothes … question here is … Dude, where are the classical bits in the brain ???? I can point to where amplitudes are stored – right there at the synaptic connections and how the magnitude and phase of those amplitudes can be modified in myriad ways … But I don’t see classical bits stored or transmitted or input to operators … anywhere. If I’ve missed the news … I’m all ears.

    I actually feel much more confident in the “reality” of amplitudes representing orthogonal states in the brain than in the “reality” of amplitudes representing orthogonal spin states of an electron. At least I can show you the actual computational encoding of the former. I don’t know where the amplitudes for spin states of an electron are stored … somewhere inside the electron? God’s scratchpad 😉

    Out of respect to Scott I’m going to stop taking up his spacetime with more of me saying the same thing here. The above several posts are more than enough to lay out the argument for this “simple postulates in the hardware” proposal. Just wanted to say please no disrespect intended if I don’t reply further.

    Thank you again for the engaging discussion.

    Kindest and best regards

  44. Raoul Ohio Says:

    Fred #36:

    All this stuff is applied math.

  45. mls Says:

    Nerd who works on game design #40

    In my case, at least, biology had made everything easier by the early to middle 50’s. There are also societal reasons for this. With a little more life experience, you might realize that the expectations of a young adult may be skewed for the reality of the late thirties and early 40’s. I can recall, from my late 20’s, women speaking of “settling.” For some, that is enough; for others it is not. People are different. But, a significant other who has merely “settled” for you could bring unwanted complications into the relationship you may have hoped to have had.

    What I would suggest to you is that you make your immediate priority the halting of substance abuse. At the age of 21, a simple evasion of personal responsibility led me to realize that substance abuse was contributing to turning me into a person I did not want to be. With my youthful expectations, I asked what kind of example I would be setting for my “future children.”

    The abuse of “hard” substances ended immediately. It took more time for marijuana, beer, and cigarettes. About 10 years, in total.

    The substance abuse will make all of the psychological symptoms worse.

    I hope that your remarks about suicide continue to hold. People who commit suicide in relation to psychological symptoms do so for only one reason — to end the pain. So, again, substance abuse makes this worse. Moreover, impulse control is much harder under the influence of drugs and alcohol. The resolve you now claim may not be so resolute when you are not thinking straight.

    If you are not without family, know that the average time for grieving is about 5 years for most people. When there is a suicide, it is about 10 years. That data is a little old. But knowing that suicide would harm the ones I loved helped me.

    I think about suicide every single day. And, I have learned to ignore those thoughts as a mere symptom with no import. There are many people with disabilities that introduce inconveniences into their lives. Suicidal ideation is an “inconvenience” in a life plagued with depression. Because depression is normally episodic, relief will come of its own accord. You must simply not act on such thoughts.

    This is incredibly difficult for young people. Depression is personal. Knowledge of this periodicity comes from surviving multiple bouts. Young people do not have this experience. They can be told, and maybe they will believe. But they do not know from personal experience.

    On that, I can only wish you luck.

    I work in a politically incorrect workplace. I explain depression to my co-workers in terms of their “women-watching.” Whatever youthful thoughts may motivate their admiration (objectification) for a woman walking down the street, they know that acting on those thoughts will lead to divorce, humiliation in the eyes of their children, and embarrassment in the general community. So, they do not act upon those thoughts. If you think about it, you will find many situations of a similar nature. Living with suicidal ideation is similar in many respects.

    The important thing is that you do not act on suicidal thoughts. So, maximize the behaviors which will support that priority.

  46. Gabriel Says:

    I don’t accept the premise. Democracies don’t end the moment they elect one bad leader, even if that leader is a populist dirtbag who endangers the system. Trump was a historically bad president, but he wasn’t the only bad or corrupt president America has ever had. We can still, and probably will, recover. Yesterday was a remarkably good day for U.S. democracy, and a pretty bad day for election deniers and authoritarians!

    America has problems, but reports of its death are greatly exaggerated.

  47. Johnny D Says:

    Why do you use the word ‘simulate’ when referring to the Google et al observation of non-Abelian exchange statistics?

    Curiously that word does not appear in the paper.

  48. Scott Says:

    Johnny D #47: I use that word because I’d still insist on a conceptual distinction between creating nonabelian anyons as excitations in an “actual” condensed-matter system, versus using general-purpose superconducting qubits to “simulate” such excitations. It’s like the distinction between building a wind tunnel, versus simulating airflow in a computer.

    Why does this matter? For one thing, a central reason why people care about nonabelian anyons, is as building blocks for a universal quantum computer. But if you already need a universal quantum computer before you can see the anyons, that use is pretty much ruled out! 🙂

  49. Johnny D Says:

    Scott #48: The point of computing with anyons or the surface code is the same: to use a large Hilbert space, bubbling with interactions, to create a safe place inside which a much smaller space of computational ‘states’ can float. In both cases, the gates performed on the safe states are modeled as topological braiding, which is rich enough to preserve universality. Does this work not show that, in the case of the surface code, these states and the non-Abelian operations on them exist? Certainly, for computational purposes, the surface code has yielded the essential defining characteristics.

    These results seem ground breaking in the largest sense. Braiding operations exist on quantum systems!!!!!! It is not a simulation of braiding, it is braiding. Am I missing something?

  50. Johnny D Says:

    Scott #48: Sorry for the continuation. A better analogy than your wind tunnel is this: does a computer do arithmetic or simulate arithmetic? Does a computer compute or simulate computation? While that may be a rabbit hole for philosophers, I would say computers compute.

    In the case of the surface code, the safe qubits exist in every sense that electrons do, or in the way non-Abelian anyons are hypothesized to exist. So the case seems stronger for surface-code anyons than for computers computing.

  51. Scott Says:

    Johnny D: Here’s a different thought experiment then. If (what I’d call) a simulation running on a quantum computer is able to bring nonabelian anyons into existence, then why not also a simulation running on a classical computer? In which case, nonabelian anyons were actually created a long time ago.

    Rather than going down this metaphysical rabbit hole, an alternative is to ask what a given advance is actually good for. In this case, the problem with using these nonabelian anyons for QC is presumably that they won’t survive for very long … because the underlying superconducting qubits that constitute them won’t survive for very long either! And if you solved that problem, then you’d already have a scalable superconducting QC and wouldn’t need the nonabelian anyons. How am I wrong?

    It’s become clear to me that we’re going to see more and more claims of “bringing something into actual physical existence” because someone simulated it on a small number of qubits in some superconducting or ion-trap system. I think we need to get into the habit, fast, of not according those claims a special metaphysical status just because the computer doing the simulation happens to be quantum rather than classical.

  52. Johnny D Says:

    Scott #51: I know that current QCs cannot scale or preserve these states for extended periods, due to the underlying below-threshold hardware, but is that part of the definition of an anyon?

    I am not denying the difference between simulation and actuality. A hydrogen atom simulated on a qc is not hydrogen. The defining property of hydrogen is a proton and an electron. A qc simulation would not have these.

    In the case of an anyon, the defining property is the braiding statistics of quantum states. You may define “anyon” as specific to the solid state, in which case I agree that these types of anyons are not in the QC. Isn’t the important thing for QC that someone create braiding statistics, so that a necessary but not sufficient condition for this type of error-corrected QC is demonstrated?

    Did the work create the braiding stats that are necessary for Google’s approach to error-corrected QC? Is it a demonstration of braiding stats as solid-state anyons would have, or something different? If it is different, what is different?

  53. Johnny D Says:

    My take on the significance of Google Quantum AI et al observation of non-Abelian exchange stats:

    The Extended Church–Turing Thesis says reality is in BQP. Evaluating knot polynomials is BQP-complete. So it follows, if you assume the ECTT, that from one point of view reality is knot polynomials! More specifically in QC, these are knot polynomials on braids of paths of anyons moving through 3d spacetime (holographically correct). Of course, all of this was just math. No one had ever implemented these types of transforms of quantum states, until now!

    Congratulations to all involved. History will view this work as of highest importance.

    In the 80s, the Jones polynomial was a lightning bolt that shocked the field of topology. I heard one topologist describe it as if he had looked up at the night sky and a new star had appeared. Even that topologist would not have imagined its physical significance.

    I am sure a book will be written: Jones, Witten, Frohlich, Kitaev, Freedman, etc… I cannot wait to read it. Beautiful. We are all knot polynomials.

  54. Johnny D Says:

    All error-correcting codes may not be equal in implementability. The power of the surface code is that it uses the same math as the solid-state anyons that are hypothesized. In the paper under discussion, search for the word “emphasize”; it takes you to the second page of the pdf. That sentence emphasizes the non-local character of the transforms produced. I believe this critical point differentiates the surface code from many other error-correction schemes.

    Microsoft and Google take different physical approaches to accurate QC. However, the approaches use isomorphic math: topological protection. I strongly suspect the stability and accuracy of their logical qubits will be similar.

    An analogy for topological protection:

    The cells of a tree are connected in a way that can easily be seen as a mathematical graph where the vertices are the cells and the edges are the physical connections of the cells. When a modest wind blows, the tree cells adjust so that the tree sways and doesn’t break. That is, the graph is protected. The topology is preserved. If a tornado hits the tree, well so much for the topology. The tree is topologically protected naturally for winds below some energy threshold. This is the Microsoft path to qc.

    Now imagine if Boston Dynamics built a ‘tree’. It would have a trunk and branches made of small rigid pieces connected through joints whose angles are controlled by computers fed stresses from sensors throughout the tree. This tree would also be topologically protected from the wind below a threshold. This is the Google way to QC.

    The difference is Microsoft is creating a physical system that is protected by natural forces in the system. Google controls its system with a computerized correction mechanism. They are using the same mathematical, physical principles.
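    The “Boston Dynamics tree” side of the analogy, active sensing and correction that succeeds below an error threshold, can be illustrated with the simplest classical stand-in: a 3-bit repetition code with majority-vote correction. This is only a toy; the surface code protects quantum information, while this protects one classical bit:

```python
import random

def encode(bit):
    # Protect one logical bit by copying it into three physical bits.
    return [bit, bit, bit]

def wind(bits, p):
    # Each physical bit flips independently with probability p (the "wind").
    return [b ^ (random.random() < p) for b in bits]

def correct(bits):
    # The computerized correction mechanism: majority vote.
    return int(sum(bits) >= 2)

random.seed(0)
p, trials = 0.05, 100_000
failures = sum(correct(wind(encode(0), p)) != 0 for _ in range(trials))

# The logical bit fails only if 2+ of the 3 physical bits flip: roughly
# 3*p**2, well below the bare error rate p. That gap is the "below
# threshold" protection at work; raise p far enough and it disappears.
print(failures / trials)
```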

  55. Jonathan Says:

    Hi JimV #22:

    Do you happen to have a source on the simulation piece? Curious what the criteria were for a single functional neuron.

Leave a Reply

You can use rich HTML in comments! You can also use basic TeX, by enclosing it within $$ $$ for displayed equations or \( \) for inline equations.

Comment Policies:

  1. All comments are placed in moderation and reviewed prior to appearing.
  2. You'll also be sent a verification email to the email address you provided.
  3. This comment section is not a free speech zone. It's my, Scott Aaronson's, virtual living room. Commenters are expected not to say anything they wouldn't say in my actual living room. This means: No trolling. No ad-hominems against me or others. No presumptuous requests (e.g. to respond to a long paper or article). No conspiracy theories. No patronizing me. Comments violating these policies may be left in moderation with no explanation or apology.
  4. Whenever I'm in doubt, I'll forward comments to Shtetl-Optimized Committee of Guardians, and respect SOCG's judgments on whether those comments should appear.
  5. I sometimes accidentally miss perfectly reasonable comments in the moderation queue, or they get caught in the spam filter. If you feel this may have been the case with your comment, shoot me an email.