The Winding Road to Quantum Supremacy

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more.

And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI.

And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

74 Responses to “The Winding Road to Quantum Supremacy”

  1. Andrew G Says:

    Hi Scott,

    On the QIP schedule, the speaker for your talk was listed as your co-author Xinyi Chen. While it was certainly a pleasant surprise to get a Scott Aaronson talk at this year’s QIP, I am curious why the organizers didn’t make an announcement about the change. Perhaps because, if they had, the audience would not have been able to fit in the ballroom?

    Also out of curiosity: at how many QIPs have you given talks thus far, and what’s your longest consecutive streak?

  2. Scott Says:

    Andrew #1: The schedule was finalized more than a month ago, I think. Then Xinyi decided she couldn’t come because of a paper deadline, so I was deputized to give the talk instead. I don’t think the information ever made it to the QIP organizers. It happens.

    Looking at my talks page, I see that I’ve spoken at the following QIPs: 2002, 2003, 2004, 2005, 2007, 2008, 2009, 2010. So two consecutive runs of four each. (Note that this includes rump session talks and a still-notorious after-dinner talk. Also, since QIP straddles the December/January boundary, it’s not always actually held in the listed year.)

  3. Tamás V Says:

    It seems the director has cut out the part about whether the universe is discrete or continuous, what a pity. Is there an uncut version?
    Another question: how much faith do you have in PsiQ’s 1-million-qubit quantum computer in 5 years?

  4. Scott Says:

    Tamas #3: Adam seems to be posting one clip per day (in discrete chunks, you might say…). I’m sure he’ll get to the discrete vs. continuous bit soon.

    I have no faith that PsiQ or anyone else will be able to build a million-qubit QC in 5 years. I hope that they or others will someday be able to do such things, whether in 5 years, 10, 20, or however long it takes.

  5. Adam Ford Says:

    Scott: Many thanks for the interview, it was fascinating to see you in person! I will have transcripts sometime soon.

    Tamás V: my internet is slow, as is the pace of my editing – I am staggering the release of each of the interview segments. I just released a segment on AI and XRisk: https://www.youtube.com/watch?v=gi67h6v-6fc

  6. JimV Says:

    A few years ago, I brought up one of your comments (credited to you) on the subject in a thread at Dr. Coyne’s website on free will, and Dr. Coyne responded with a lecture on how Bell’s Theorem ruled out your comment (!) and accused me of rudeness for prefacing my views with “I think” or “In my opinion”. Bad memories … anyway, that may be why he seemed to be predisposed against you. (I think that was the first time anyone mentioned you on his website.)

    As it is his website and his rules, I respected his prerogatives and have never commented there since.

  7. Richard Gaylord Says:

    “My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.” Well, at least it didn’t make your mind seem out of shape.

  8. Jules-Pierre Mao Says:

    @Jerry Coyne

    >If you [read the GIQTM paper], weigh in

    That sounds like an amusing task.

    Imho the first important step to understanding SA on these matters is that he’s taking determinism much more seriously than you are. No kidding! What you care about is whether determinism is true. What he cares about is what *kind* of determinism is true and whether we can prove it. With insights from complexity theory, he invites us to consider three serious candidates. The first candidate is the P world: basically the good old Laplace world. The second candidate is the BPP world – basically the former plus randomness. The third and last is the BQP world – our world, if it obeys quantum mechanics. [From time to time he also talks about PSPACE and CTCs, aka what happens if we’re more serious about general relativity than about quantum mechanics, but let’s leave that for another day.] Of these three worlds, SA finds BPP kind of boring (because it’s probably impossible to separate candidate BPP from candidate P using the outcome of an experiment), and predicts P will be excluded some day (all you need is to construct a quantum computer, plus prove some math widely believed to be true – piece of cake), so that leaves us with BQP.

    Now the second step is, SA doesn’t like that state of affairs. Specifically, he doesn’t like that one could predict what he’ll do or think before he does or thinks it himself. Fortunately, and although this conclusion is impossible to escape if we’re in a P world, there is a subtle trick that can do the job if we are in a BQP world. Namely: don’t touch the laws of physics themselves; just specify that the initial state (all the way down to the big bang, or before if there’s some before) includes some special qubits that the human brain (or any well-designed brain, artificial or not) can later use to make its dynamics unpredictable. Putting flesh on this bare-bones idea is what GIQTM is all about.
    My two cents: it’s far from complete (how are these freebits supposed to impact brain activity in any meaningful manner?) and one can question his notion of freedom, but at least he respects the laws of physics as we know them. In the long run I’d bet it’s false (because if old photons traveling from deep space were to have any non-trivial effects on our brains, why don’t we notice anything each time we mess with the quantum states of the brain using an MRI scanner?), but even if false, his attempt would still teach us something non-trivial: one can respect both the laws of physics and some interesting definition of freedom.

  9. ppnl Says:

    I have participated in several of the free will debates at WEIT. His argument seems to be that since our brains are deterministic, we can’t be held accountable for bad acts like murder. True enough, given determinism. But then he argues that we should, therefore, structure our justice system around this fact. He rejects compatibilism, yet his argument here is a compatibilist argument.

    If determinism holds then society is no more liable for its bad acts such as the election of Trump than the individual is for murder. If determinism holds then what we do, what society does and what the human species does was set by the initial conditions of the big bang. The universe may as well be a movie. There is no “ought”, there cannot be any “should”. There are only the consequences of the initial state of the big bang.

    He is a compatibilist but a compatibilist on the societal level. This highlights the ironic fact that we have little choice but to act as if we have free will.

    And the wild card is the fact that we experience the movie. Who ordered that?

    As for quantum mechanics, I think it is clear that some observables of the universe simply cannot be in a particular state until observed. Thus causality dissolves at that level. Given Bell’s inequality, how can you preserve causality?

  10. Vadim Kosoy Says:

    Hi Scott!

    In your interview with Adam Ford, you say that you don’t work on AI safety because you wouldn’t know how to test that you are making progress. However, it seems like the same can be said about quantum computers? Neither quantum computers nor superintelligent AI exist at present, and yet nothing impedes you from working on the former. Of course, quantum computers do not have the issue of “having to make it work on the first try,” like you said in the interview, but for current theoretical research that seems irrelevant, since they don’t exist anyway.

    Also, you said that humanity has never before succeeded at doing something on the first try, except for examples like the Apollo program where the individual components could be tested up front. However, it seems plausible that for AGI you will also have some ability to test individual components up front. Clearly, not every code segment in a putative AGI will be an AGI unto itself.

  11. Scott Says:

    Vadim #10:

      you say that you don’t work on AI safety because you wouldn’t know how to test that you are making progress. However, it seems like the same can be said about quantum computers?

    I’d say that quantum computing is different because there, unlike with AI safety, for 25+ years we’ve had a very precise mathematical theory that delineates what can and can’t be done, and in some sense our job as theorists is “merely” to figure out the interesting logical consequences of that theory.

    I don’t personally work on the experimental side of QC (except indirectly, as a “theoretical consultant” to experimental groups)—but there as well, there are extremely clear metrics to tell you how much progress you’re making, for example the gate fidelities, the coherence times of the qubits, and the number of qubits you’ve successfully integrated. And we also know theoretical bounds on what gate fidelities and coherence times would suffice for universal, fault-tolerant QC, and we can see that in systems with many integrated qubits, the experimentalists are still far from those bounds but are steadily progressing toward them. In that respect, QC is more similar to (say) fusion power than to superhuman AI. It’s not an a-priori unbounded and undefined engineering target, which is the property that makes the AI problem seem so vertiginously terrifying.

    Yes, I agree that one could imagine an AI where the individual components could be tested prior to being integrated—and in such a case, of course that could, should, and hopefully would be done! More concerning is the so-called “foom” scenario, where you’d have something analogous to a deep network that could simply be trained and run as a whole. In that case, there might indeed be substructure, but if so, it would be emergent substructure, rather than anything the designers had engineered by hand or necessarily understood. It’s in this latter case that I don’t even really know how to begin to think about safety, though I respect the efforts now underway to try to figure out how to begin to think about it.

  12. I Says:

    I genuinely think this is the best you’ve looked for a while. Interpret that for what you will.

    Anyway, you mention the noise problem. Everyone keeps saying “100+ noisy qubits = 1 logical qubit.” Where do those numbers come from? Nowhere could I find a reference for this, besides “some scientists in Quanta Magazine said so.”

    My master’s project was about modelling noise in adiabatic computation, and whether we can mitigate it, or even make it beneficial. For a single qubit with linear evolution, it looks like we might be able to. Can we do something similar with gate- or measurement-based systems? Is it scalable?

  13. Scott Says:

    I #12: The best reference I can think of offhand is this paper by Fowler et al.—does anyone else know a better one? Briefly, though, the numbers of physical qubits per logical qubit that you see quoted are going to come from simply looking at what that number would be in the best existing fault-tolerance schemes, assuming that you’re executing a computation with such-and-such numbers of qubits and gates and you have such-and-such error!
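    To give a feel for where such numbers come from, here’s a rough back-of-the-envelope sketch in Python. The constants are illustrative assumptions (a surface-code-style heuristic with a threshold around 1%, logical error rate falling like (p/pth)^((d+1)/2) with code distance d, and about 2d² physical qubits per logical qubit), not figures taken from the Fowler et al. paper:

    ```python
    def physical_per_logical(p_phys, p_target, p_th=1e-2):
        """Rough surface-code overhead estimate (heuristic constants assumed,
        not taken from Fowler et al.).

        Assumes the logical error rate scales like
        (p_phys / p_th) ** ((d + 1) / 2) for odd code distance d, and that a
        distance-d surface-code patch uses about 2 * d**2 physical qubits.
        """
        if p_phys >= p_th:
            raise ValueError("physical error rate must be below threshold")
        ratio = p_phys / p_th
        d = 3
        # Find the smallest odd distance whose logical error rate is good enough.
        while ratio ** ((d + 1) / 2) > p_target:
            d += 2
        return d, 2 * d * d

    # Example: 0.1% physical error rate, and we want ~1e-15 logical error rate
    # (enough headroom for circuits with ~10^12 logical operations).
    d, n = physical_per_logical(1e-3, 1e-15)
    print(d, n)  # d = 29, i.e. roughly 1700 physical qubits per logical qubit
    ```

    Plug in worse error rates or bigger computations and the ratio quickly climbs into the thousands, which is where the quoted numbers come from.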

  14. mjgeddes Says:

    Good critique of IIT, yet another theory of consciousness I think we have to consign to the scrap-heap. Such a huge volume of wrong theories about consciousness piling up! Although, if the person proposing the consciousness theory is smart, it’s at least *interesting* nonsense. No offence Scott, but I fear your own wacky ideas in GIQTM may fall into the same category 😉

    Consciousness, I’m now reasonably confident, is a symbolic language for reasoning about cause-and-effect via imagining counter-factuals; its purpose is communication, planning, and reflection. And it works via some sort of extension of modal logic, which is in the form of a tree data structure. I call my developing theory ‘TPTA: Temporal Perception & Temporal Action’.

    Coming to AI, excellent comments from Scott there. Yes, I think CS=AI (computer science is in some sense equivalent to artificial intelligence). My tentative theory is that you have two main components to AI: (1) reasoning under uncertainty with limited resources, which is what computer science is all about, and the *extension* of that, (2) harnessing reasoning to achieve goals, which yields the algorithms comprising AI. So AI is simply a natural extension of computer science, I think.

    Coming to the very difficult question of AI values, I favor an approach to multi-level modeling based on an extension of the ideas of Robin Hanson; Hanson proposed two main cognitive-modes: (far), which is top-down abstract reasoning, and (near), which is bottom-up detail-oriented reasoning. I suggested adding a third mode: (procedural), which is the actual algorithms for doing things, and I proposed that procedural mode is based on the integration or *balance* between (far) and (near) modes. This yields a recursive procedure for knowledge representation that can be applied to all cognition.

    So for values, we see that we can make a division into *near mode*, which refers to our emotions and instincts (cognitive psychology), and *far mode*, which refers to our ideals and ethical/aesthetic philosophies (axiology). Then *decision theory* would refer to *procedural mode*, the ‘middle-out’ between axiology (top) and cognitive psychology (bottom). So the role of decision theory is to integrate our emotions and instincts that come from the ‘bottom-up’, with our ideals, coming from the ‘top-down’. Thus, we see a single unified ‘stack’ of knowledge domains:

    Axiology
    Decision Theory
    Cognitive Psychology

  15. James B Says:

    Who decided this interview should be filmed outside in very bright sunlight?

  16. Joshua Zelinsky Says:

    Scott #13,

    Marginally on topic, but do we have non-trivial lower bounds on how many error-correcting qubits are needed for a given error rate?

  17. Scott Says:

    mjgeddes #14: If, instead of writing a one-off and very tentative essay to explore the ideas in GIQTM, I had fixated on some specific numerical measure for how many freebits there are (even in the teeth of counterexamples showing that the measure produced absurd-seeming results); and if I’d pursued a decades-long research program around the measure with students, postdocs, etc … then maybe it would become reasonable to compare the two things.

  18. Scott Says:

    James #15: Adam and I discussed where to film and jointly decided on the park. I actually thought it worked kind of well…

  19. Tamás V Says:

    Can it be that intelligence has nothing to do with consciousness, in that the latter uses the former as a mere tool? That would be great news for the prospects of true AI, but also bad news, suggesting that researching intelligence will not get us any closer to the secrets of consciousness.

  20. I Says:

    Scott #13:

    Thanks for the reply. I should have thought to check there.

    As an aside, someone here linked to a lecture Susskind gave a little while ago, but I can’t seem to find it. It was something about how black hole volumes evolve such that their complexity increases at the greatest rate possible.

    Does anyone have a link to that? Honestly, I think I lack the imagination to think of something like that, so I’m probably not imagining it.

  21. Vadim Kosoy Says:

    Scott #11:

    I agree that, as opposed to quantum computing, in AI safety there is still no consensus on the mathematical formalism within which the questions should be studied. In this sense, the field is, at this stage, more similar to physics than to mathematics. On the other hand, theoretical computer science does have an impressive track record of coming up with mathematical models for initially informal concepts (“algorithm”, “complexity”, “randomness”, “proof system”…)

    Regarding “a deep network that could simply be trained and run as a whole,” I don’t think it’s *that* different from science and engineering as we know it. We certainly can test the network’s training algorithm (in the sense of searching for bugs in the implementation) without running the AI “live” or feeding it real data. The problem is producing theoretical guarantees for what the network will do. As you mentioned in the interview, this is something we don’t know how to do very well right now, although IMO there were some promising results recently (arxiv.org/abs/1811.04918, proceedings.mlr.press/v54/zhang17a.html).

  22. fred Says:

    Scott, great to see that you got in shape!

    About the progress of AI, the fact is that you don’t need generalized AI or some form of singularity for a group of humans to start dominating the rest of the planet by using breakthroughs in AI.

    Creating AIs that can beat humans at any game would be a huge win, and there’s been a natural progression in the classes of problems AI researchers have been tackling.
    It started by applying self-learning algorithms to simple 2D platformer video games (with no opposing intelligence).
    Then AlphaGo used inputs from human knowledge to beat the best human player, but AlphaGo Zero was able to learn Go from scratch on its own, just by playing itself with zero human intervention, in a few hours (with smaller resources)… and then the same algorithm worked on other board games (beating the best custom-made programs that play chess, for example).
    AI has also basically “solved” poker (introducing partial knowledge and psychological bluffing), beating teams of the best players.
    The latest transition has been to apply AI to more free-form strategy video games, where it was able to beat the best human players one-on-one.
    The next step will be to add human language to those games so that machines can learn psychological manipulation—most humans aren’t particularly good at detecting that they’re being manipulated, and at the same time the advertisement/video game/social networking industries have developed clear methods to manipulate the human mind.

    Another interesting trend is the research trying to understand why deep learning is working so well.

  23. fred Says:

    mjgeddes
    “Consciousness, I’m now reasonably confident, is a symbolic language for reasoning about cause-and-effect via imagining counter-factuals”

    You’re confusing consciousness and the content of consciousness (thoughts).
    No scientific theory can explain consciousness (as a truly emergent phenomenon) because none of the atom/symbol manipulation processes you can come up with will ever require consciousness – as the subjective experience of being something, our ground level truth (the only thing I can’t doubt/deny is that I’m conscious).

    There’s no such thing as a truly emergent phenomenon in science, because it’s all a bottom-up approach, and what we take as emergent processes are all based on the fact that we’re conscious; we’re the ones projecting them onto nature.

  24. fred Says:

    I think many people confuse “free will” (an illusion) with “degrees of freedom”.

    Take two round marbles.
    Drop marble A in a smooth inclined pipe, it will describe a very nice and predictable sine path along it.
    Drop marble B on an inclined hill full of rocks of all size, it will describe a very chaotic and unpredictable path along it.
    Both systems are deterministic, but somehow marble B has “free will” while marble A hasn’t?

  25. Andrew Krause Says:

    I #20

    I believe the lecture you were thinking of was recorded in this arXiv paper, and I imagine you can find it online if the video version is more useful to you than the transcript. https://arxiv.org/abs/1810.11563

  26. Neil Says:

    I think your remarks about predictability and free will raise some interesting questions. Suppose a machine could be built that could predict someone’s future actions. That would seem to drive a stake through the heart of free will. But would it? First, the prediction would need to be withheld from the subject; otherwise he or she could just negate the prediction by taking another action. Note how this differs from predicting other events, like predicting an eclipse. Any prediction about the actions of an intelligent being could affect how it acts if the prediction is known. But what if the prediction is not known by the subject before it takes an action? Might the subject’s action now depend on whether the subject knows of the existence of the prediction machine? Perhaps it will try to outsmart the prediction machine even without knowing its prediction. After all, the subject should know itself as well as the machine does. Finally, what about prediction paradoxes, like Newcomb’s paradox?

  27. Tamás V Says:

    Let’s assume there is a computer with AI software running on it, and an operator who gives it incentives (rewards to pursue) from time to time. Whenever the operator gives an incentive, the AI software will do something that is useful for him/her. Now, replace “AI software” with intelligence, and “operator” with consciousness. In this sense, intelligence is really just a tool (i.e. like an arm), and searching for consciousness within it is doomed to failure. (Ok, it may be all commonplace; I’m not following AI and consciousness research at all, I have to admit.)
    I think this goes along well with what Einstein said: “Most people say that it is the intellect which makes a great scientist. They are wrong: it is character.” Although I’m not sure he had the exact same analogy in mind 🙂

  28. Jules-Pierre Mao Says:

    Tamás V #19,

    The FiLM network may convince you that intelligence does not require consciousness. I doubt it’s bad news.

    https://arxiv.org/pdf/1709.07871.pdf

  29. I Says:

    Andrew #25

    Thanks, that’s exactly what I was looking for.

    By the way, since you’re a mathematician studying non-linear systems: why do strange attractors of a system twist continuously when some parameter of the system is altered? That is, whilst the system is chaotic. I’ve encountered this behaviour a couple of times in simple systems, but I don’t know why it occurs.

    Sorry that this is so off topic Scott.

  30. Tamás V Says:

    Jules-Pierre Mao #28,
    Thanks for the link, here is another one that’s even more convincing for me, although not a scientific paper:
    https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/

  31. Tamás V Says:

    Hi Scott, when you say “most fundamental description of nature”, what structure do you have in mind? Is it like a theory that has a finite number of axioms (e.g. like Newton’s laws of motion or the constancy of the speed of light in special relativity)? If yes, then my concern is that the theory could not explain its own axioms, and that would imply that other “most fundamental” descriptions would also exist, starting from different axioms about possibly very different concepts. So there would exist equally correct “most fundamental” descriptions: one where reality is discrete, one where it’s continuous, one where it’s hybrid, one where there is space, one where there is no space, etc. Or do you believe that one and only one “most fundamental” description of nature exists?

  32. Scott Says:

    Tamas #31: Sorry, no questions about the interview that would require me to re-watch it for the context.

    In general, though, yes, I agree that it’s possible that you can have two different equally correct and equally fundamental mathematical descriptions of the same physics, as long as they agree in their predictions for observed phenomena. We even have some examples where that happens (e.g., the Heisenberg, Schrödinger, and Dirac-Feynman pictures of quantum mechanics). It’s even conceivable that continuous degrees of freedom would exist in one description but not in another—although if continuous quantities were easy to eliminate with a simple change of description, many of us might prefer to do that (by contrast, any theory will involve discrete choices, like how many dimensions of space or how many particles or whatever).
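
    (If you want to see the sort of agreement I mean in miniature, here’s a toy NumPy sketch, with an arbitrary qubit Hamiltonian and observable: evolving the state with the observable fixed, as in the Schrödinger picture, and evolving the observable with the state fixed, as in the Heisenberg picture, yield identical predictions.)

    ```python
    import numpy as np

    # An arbitrary single-qubit Hamiltonian and observable (toy choices).
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = 0.7 * X + 0.3 * Z
    A = X                     # the observable whose expectation we predict
    t = 1.234                 # an arbitrary evolution time

    # The time-evolution unitary U = exp(-iHt), built by diagonalizing H.
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

    psi0 = np.array([1, 0], dtype=complex)    # start in |0>

    # Schrödinger picture: the state evolves, the observable stays fixed.
    psi_t = U @ psi0
    schrodinger = (psi_t.conj() @ A @ psi_t).real

    # Heisenberg picture: the observable evolves, the state stays fixed.
    A_t = U.conj().T @ A @ U
    heisenberg = (psi0.conj() @ A_t @ psi0).real

    print(np.isclose(schrodinger, heisenberg))  # True: identical predictions
    ```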

    My discussion in the interview was not a-priori, but was framed around the best theories we actually have, namely quantum mechanics, quantum field theory, and quantum gravity (the latter as partially realized by, e.g., the Bekenstein-Hawking entropy calculation and AdS/CFT). As far as I know, my comments would apply regardless of how you mathematically formulate those theories.

  33. Nicholas Teague Says:

    Scott #13
    Another paper that comes to mind from around the same time period, which was perhaps a little more mainstream, was the Nature article on D-Wave, reprinted in Scientific American: “D-Wave’s Quantum Computer Courts Controversy,” which included the quote “Like a rocket that requires tons of fuel to hoist a tiny payload, a gate-model quantum computer might need billions of error-correcting qubits just to get 1,000 functional qubits to do something productive.” Although now that I look at it, perhaps this estimate was a little less optimistic.

  34. Tamás V Says:

    Scott #32:
    Sure, thanks for the answer, I did not mean for you to re-watch the interview; I thought you use that term every day anyway 🙂
    To me a more interesting question is whether a “theory that explains everything” could exist, and if yes, what form it would take. Because naively, if it has an axiom, it would not be able to explain that very axiom, which would mean it’s not a theory that explains everything. (I assume this is one reason why we have to be careful and talk about a theory fully “describing” nature, as opposed to fully “explaining” it.)
    Because of this, I became interested in philosophies that say “nonsense” like that things exist and don’t exist at the same time… and the people who developed those thoughts did not even know about quantum mechanics (thousands of years ago). How interesting.

  35. Andrew Krause Says:

    I #29:

    There’s a lot of complexity there that I am not really an expert in (regarding strange attractors). A good PRL paper discussing some aspects of these things is “Classification of strange attractors by integers,” which discusses (only in the context of a specific subclass of attractors) how control parameters can leave the topology of a strange attractor fixed, but change geometric things, likely leading to the twisting you describe. I don’t know any reason offhand why one would expect twisting of the orbits, though presumably playing with Smale horseshoes etc can give you some intuition why such attractors themselves should always contain orbits which wind around one another. This is related to the density of unstable periodic orbits and the topological template approach which is discussed in that article.

    While this is a bit off-topic, there are of course nice connections between chaotic dynamics, predictability, free-will, and computational complexity. As always there are many things we don’t understand yet, and much work to do.

  36. mjgeddes Says:

    fred #23

    “You’re confusing consciousness and the content of consciousness (thoughts).
    No scientific theory can explain consciousness (as a truly emergent phenomenon) because none of the atom/symbol manipulation processes you can come up with will ever require consciousness – as the subjective experience of being something, our ground level truth (the only thing I can’t doubt/deny is that I’m conscious).”

    It’s an open question as to whether science can explain consciousness. Let’s see how far the scientific method can take us before jumping to conclusions. I’d point to the proposed equivalence Scott mentioned ‘CS=AI’ (Computer Science = Artificial Intelligence). Let’s take that as the starting point and see what this would imply.

    If CS=AI, then the methods of comp-sci can’t be separated from cognition itself. This admittedly does lead to some strange, counter-intuitive conclusions. It implies that each ‘element of cognition’ is equivalent to a modeling method from comp-sci. And consciousness itself would be no exception. In this picture, you can’t separate consciousness from the contents of consciousness (there would be no such thing as ‘content-free’ consciousness).

    So if we assume that CS=AI, we can put a modeling method from comp-sci in equivalence with each element of cognition. For example, I could propose:

    Comp-Sci Model = Element of Cognition

    Stochastic Model (Probability Theory) = Perception ?
    Cellular Automaton (Information Theory) = Optimal Action Selection ?
    Grammar Model (Modal Logic) = Planning (Reflection & Consciousness) ?

    If CS=AI, then a table like the above implies that there’s no difference between the entries on the left (comp-sci) and the entries on the right (cognition).

    So if I’m right, there’s no difference between the correct grammar model (modal logic) and consciousness itself. If CS=AI, there’s simply no separation between modal logic (the contents of consciousness) and consciousness itself.

  37. Jalex Stark Says:

    Fred #24:

    Actually, in the specific case of “marbles on hills”, the situation is worse than it appears. There are easy-to-describe hills such that Newton’s equations of motion for a marble on the hill are non-deterministic!
    https://en.wikipedia.org/wiki/Norton%27s_dome

  38. fred Says:

    Jalex Stark

    Ah, yes, I read about that a while back.

    My point was more about Scott’s idea that the brain is special in that it can’t be duplicated, practically. But we also can’t build two double pendulums that would behave identically for any extended amount of time either (because of chaotic behavior), so non-duplication is a trivial fact of the physical world, not something special to brains.
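
    (To illustrate numerically how quickly chaos ruins duplication, here’s a toy sketch using the logistic map as a stand-in for the double pendulum: two copies prepared one part in 10^12 apart are fully decorrelated within a few dozen steps.)

    ```python
    # Sensitive dependence on initial conditions: two copies of the chaotic
    # logistic map x -> r * x * (1 - x), started a mere 1e-12 apart.
    r = 3.9                      # parameter value in the chaotic regime
    x_a, x_b = 0.2, 0.2 + 1e-12  # "identical" systems, tiny preparation error

    for step in range(60):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)

    print(abs(x_a - x_b))  # order 0.1 to 1: the two copies have fully diverged
    ```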

    As I was saying before, I think the misunderstanding here is that many tend to conflate the content of consciousness with consciousness itself, because the self/ego gets in the way.
    Consciousness isn’t the author of its own contents; those come from the brain.
    By content, I mean thoughts, intentions, volition, desires, feelings, emotions, sounds, images, pain, pleasure, the sense of self, … Consciousness is the space where those appear.
    We think that we are riding in our heads as an observer, and that this observer is the author of thoughts and some sort of permanent self. But this sense of self can’t hold up to scrutiny, because everything that makes up that sense of self is observable (just like any other content of consciousness).

    That’s not to say, either, that there is consciousness on one side and its content on the other. There’s no duality here: consciousness is “the knowing,” “the noticing” of whatever appears in the present moment.
    I personally think that consciousness is linked to the formation of memories. When we perceive something, it’s because the brain is actually committing it to memory.
    That’s why I think it’s a more universal property that maybe runs all the way down to atoms – all things are as conscious as we are; it’s just that the quality/quantity of content that is perceived varies wildly.

  39. Craig Says:

    Scott,

    Are you a quantum supremacist? If so, how do you reconcile this with your left-leaning world-views?

  40. Scott Says:

    Craig #39: It’s not the term I would’ve chosen, but sure, I’m someone who actively works on the theoretical foundations of quantum supremacy experiments and who’s excited about them. And I suppose I reconcile that with my vaguely center-left worldview, more-or-less the same way I reconcile my support for the protection of right whales with my being left-handed. 😉

  41. fred Says:

    Scott,

    you often say that QM can be viewed as just another type of probability theory (at least as far as building a QC goes).
    But is there a point in this approach where the quantum aspect of QM (in the sense of all physical properties being discrete) comes in, besides the fact that the state vectors are finite/discrete?
    I guess for QC theory all you need is something that has two states, without ever mentioning the more general puzzling aspects of quantization in physics, or the Heisenberg uncertainty principle.

    So, those aspects of QM would never bring any extra computational power?
    Can’t the quantum tunneling effect be used to help solve optimization problems? (It wouldn’t be a programmable digital machine, but more like some specialized “analog” machine, I guess.)

  42. Scott Says:

    fred #41: Popular accounts have failed you. The things you mention—all of them, quantization and the Heisenberg uncertainty principle and all the rest—are just different logical consequences of the more fundamental axioms of QM, or else relate to the details of how those axioms get implemented in our universe. Those axioms are the principle of superposition, the principle of unitary evolution (i.e. the Schrödinger equation), the Born rule, and I guess the tensor product rule if you want to be pedantic. 🙂
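
    Those axioms are compact enough to demo in a few lines of NumPy. A toy sketch (illustrative, nothing more), walking through superposition, unitary evolution, the Born rule, and the tensor product:

    ```python
    import numpy as np

    # Superposition: a state is a unit vector of complex amplitudes.
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

    # Unitary evolution: states change by applying unitary matrices.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
    psi = H @ psi                                         # sends |+> back to |0>

    # Born rule: measurement probabilities are squared amplitude magnitudes.
    print(np.abs(psi) ** 2)      # ~[1, 0]: measuring now yields |0> w.p. 1

    # Tensor product rule: two-qubit states live in the 4-dimensional
    # tensor product space; this one is the entangled Bell state.
    ket00 = np.kron([1, 0], [1, 0])
    ket11 = np.kron([0, 1], [0, 1])
    bell = (ket00 + ket11) / np.sqrt(2)
    print(np.abs(bell) ** 2)     # [0.5, 0, 0, 0.5]
    ```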

  43. mjgeddes Says:

    fred #38

    “I personally think that consciousness is linked to the formation of memories. When we perceive something, it’s because the brain is actually committing it to memory.”

    Consciousness is closely connected to time and/or our perception of time in my view. Scott also postulated a connection to the flow of time. I’m just not so sure we need to bring in any wacky quantum stuff…

    Consciousness is surely also closely linked to language, communication and knowledge representation.

    If you combine the two links (time and language), and stick to ordinary computer science as much as possible (assuming consciousness is just computation), avoiding wacky theories, you’re led naturally to modal logic, specifically some sort of temporal logic. My wagons circle around something like computation tree logic (CTL). This sort of logic is how you represent the flow of time.

  44. Gerard Says:

    mjgeddes #43

    If you believe that the execution of a (classical) computation can create consciousness (i.e., just so we’re clear on definitions, by consciousness I mean subjective experience), what if that computation is carried out by a person in a room with pencil and paper?

    If this process creates subjective experiences, what is having those experiences? Surely not the human “computer,” since the contents of his experiences are clearly very different from those that would be implied by the computation itself.

    It seems to me that there are only two possible answers to this question, either:

    A) Computation alone cannot produce conscious experience.

    or

    B) Making marks on a piece of paper can somehow lead to the creation of a new, completely unobservable, conscious entity.

    To me believing (B) seems very similar to believing in unseen spirits or gods.

  45. fred Says:

    Scott #42

    Thanks.
    Somehow the only class I took on QM, back in 1990 at engineering school (in Brussels), was pretty weird and old school.
    The professor took a historical approach to it (he must have learned it in the ’40s), starting with the blackbody radiation conundrum, a ton of stuff about “wave packet” math, and then focusing almost entirely on solving the hydrogen atom model using the Schrödinger equation, skipping all sorts of steps because we didn’t have the right math knowledge yet (only the engineers who specialized in physics, to work in nuke plants, later learned the math tools necessary to “work” the Schrödinger equation).
    A few years later I read the Feynman book on QM and was pretty shocked at how different it was.

  46. Scott Says:

    fred #45: Yup. 🙂

  47. Adam Ford Says:

    The last section of the interview, ‘On Suffering, Utopia, Radical Uncertainty & Free Will’, is up – I hope to churn out transcriptions by the end of the week.

  48. Yoni Says:

    Hi Scott

    I liked the clip on the discrete vs. continuous universe (can’t pretend I followed all of it). My question is: you describe the amplitude functions (and, if I understand correctly, then also the probability function of the observable) as continuous. Is there good evidence that this is the case, or could it just as easily be that the functions are actually discrete, and the continuous functions you use are just approximations to the actual underlying discrete reality? If that is correct, would it then imply that some events that have a very low calculated probability are actually impossible?

  49. Yoni Says:

    Scott

    On the free will issue (and admittedly I haven’t read your GIQTM essay):

    How could such a machine – even in theory – be possible? Surely, as time moves on, we are all affected by our environment – and at the quantum level this would include quantum fluctuations, both from within our brain structure and from without. I thought there were rules against copying quantum states (or even reading them precisely).
    You may say that the quantum fluctuations don’t have an effect on decision making, and that may be true over short time periods, but surely over time they will have an impact. If so, your hypothetical machine may be able to determine my actions with high accuracy over a given period, but as you extend the period, the accuracy will drop off no matter how accurate the initial inputs.
    The two potential responses I can think of are:
    1 – there is some sort of error-correcting mechanism (as with computers) that filters out the quantum randomness. However, to this I will respond: a) is there any reason to think this is happening, and b) that would only work with a high probability that again tails off over time.
    2 – you get to keep updating the machine with the new information – but then the machine isn’t really doing the predicting.

  50. Jules-Pierre Mao Says:

    Gerard #44,

    This so-called paradox is famous and was considered solved by many almost since it first appeared. Searle discusses that solution under the name “the system reply,” and briefly explains why he’s not convinced. Notice that his counterargument *postulates* that “[the whole system] has no way to understand the meanings of the Chinese symbols from the operations of the system.” One counter-counter-answer is to *postulate* that his counterargument is made of cheese, then dismiss it because it’s then made of cheese.

    http://www.scholarpedia.org/article/Chinese_room_argument

  51. Yoni Says:

    Jules-Pierre Mao #8

    “why don’t we notice anything each time we mess with the quantum states of the brain using an MRI scanner”

    How would you expect to notice them? What sort of non-trivial effects are you talking about? Surely, given enough time, the brain’s state is significantly different after these effects than it would have been had they never occurred.

  52. Yoni Says:

    Jules-Pierre Mao #50

    Thanks for the link.

    It seems to me that the problem with the thought experiment is basically one of timescale. If you slow the brain down to the speed where you can recognise each individual calculation, then – at that timescale – the brain doesn’t “understand” anything; it just looks like a machine. It is only at the sped-up timescale (the one we perceive in real life) that we can have these feelings.

  53. Gerard Says:

    Jules-Pierre Mao #50

    The problem I have with Searle’s Chinese Room experiment is that it includes too many ill-defined anthropomorphic concepts such as “cognition”, “to think”, “to understand”, etc. This makes it difficult to understand exactly what Searle is claiming (hence the misinterpretations section in the article you cite).

    I think my argument is much clearer because it addresses only a very specific question: whether or not a computational process is a sufficient condition for the existence of subjective experience.

    Can a computer “think”? It all depends on how you understand what is meant by “to think.” By this we usually mean those mental processes of which we are conscious (as opposed to all of the stuff our neural nets are doing below the level of consciousness), and we usually think of these as something that can be expressed in human language. A computer could certainly represent such “thoughts” and process them to produce other “thoughts” based on some form of deductive or probabilistic reasoning. However, I don’t believe that a computer could “experience” such thoughts the way we do, any more than it could experience pain.

    However when we use the word “to think” we are often thinking not only of the process of thinking but also of the fact that we experience our own thinking. When Descartes said “I think, therefore I am” I think what he really meant was “I experience myself thinking, therefore I am.”

    In my view intelligence and conscious experience are two distinct things and neither one implies the other.

    Unfortunately, the words we use to discuss cognition tend to mix these two concepts and lead to an anthropomorphization of intelligence, due to the fact that our language developed millennia before we could conceive of any form of artificial intelligence.

  54. Jules-Pierre Mao Says:

    Yoni #51

    >How would you expect to notice them?

    Each time we put someone in a scanner, we mess with their EM field. The only known effect is that (if you move too fast) it can activate your photoreceptors. No other effects have been attributed to MRI, even though we know it deposits some energy (usually limited so that the temperature increase is less than 0.1 K). So, if you say that freebits interact with brains, then there must be some principled reason why they are unaffected by MRI. For example, you could suppose that any effect would come from forces other than electromagnetic ones (Penrose played with this idea in some – not all – of his books on the topic), or that it is based on a frequency channel too low to get noticed in an MRI scan.

    >What sort of non-trivial effects are you talking about?

    Any distortion in the results one can obtain from within versus from outside a scanner. Specifically, if one theory suggests that freebits from space help free decisions, then I would expect some predictions on, say, results on the Iowa gambling task. Either that or a principled reason why there should be no effect.

    >Surely given enough time passing the brain’s state is significantly different post-these-effects than had they never occurred.

    Indeed, but the difference should also be significant in the sense of *neither trivial nor random*. I think that’s why SA disregarded “usual” randomness and turned to “Knightian uncertainty.” (But I’m not sure I fully understand the way he uses this notion.)

  55. Jules-Pierre Mao Says:

    Yoni #52,

    That’s a very interesting thought, thank you.

    Gerard #53,

    Indeed. Yes, your question is much better, imho, too. But I regard it as solved, because cognitive science (see Dehaene, for examples of results that support this conclusion) agrees with AI that at least some cognitive processes can be done without consciousness (it’s impressive that the time course for reaching this conclusion was somewhat similar in both fields). So, if there’s an algorithm or a computation that produces or detects consciousness, it’s not because we rely on it for cognitive purposes.
    However, I don’t see how this conclusion turns into “a computer cannot experience pain.”

  56. Gerard Says:

    Jules-Pierre Mao #55

    I’m not quite sure what you’re saying here. What exactly do you regard as solved? Because I think it’s safe to say that it’s not generally accepted that the nature of consciousness, or the causal factors necessary for its existence, is “solved.”

    That some cognitive processes do not involve consciousness seems evident but I don’t see much of a connection between that and my claim that no computational process is a sufficient causal factor for the existence of subjective experience.

    Again my claim is that no computational process alone can generate subjective experiences, because the contrary belief leads to what to me is the absurd conclusion that making marks on paper can create a new consciousness.

    Feeling pain is a subjective experience. If computers cannot have subjective experiences then they cannot feel pain.

  57. Jon K. Says:

    Hi Scott,

    You look great in the videos, especially considering the harsh lighting.

    I thought your classical computing (history, hardware, and software) analogies were very helpful in talking about where QM has been, where it is now, and your hopes for where it will be at some point in the future.

    With regard to IIT’s idea of asking whether or not the system under consideration can be decomposed without harming its functionality/nature/phi metric, do you see any connection to the QM distinction between quantum states that can and can’t be decomposed?

    Are classical states only cloneable (most of the time, unless your computer has an issue) because the state you are interested in is actually some high-level average state, made stable through error correction or some sort of fault-tolerant redundancy engineered in at a lower level?

    I hope these questions make sense 🙂

  58. Bennett Standeven Says:

    @Gerard #56:

    I think a similar argument shows that no cognitive process is a sufficient causal factor for consciousness either. Since otherwise, an imaginary cognitive process should still generate a real consciousness. (For example, imagining Sherlock Holmes making a deduction would mean that a real Sherlock Holmes is experiencing the deduction.)

    @Jon K. #57:

    Yeah, classical states can be cloned and deleted because they are composites of many quantum states. Although their fault tolerance doesn’t need to be engineered; it’s pretty much automatic that macroscopic states will behave this way.

  59. Yoni Says:

    Jules-Pierre Mao #54

    “the difference should also be significant in the sense of *neither trivial nor random*”.

    I may be misunderstanding the point you were trying to make, but for the purposes of SA’s point about the machine being able to predict the outcome, surely even a random effect would be problematic?

  60. fred Says:

    Scott,

    Let’s say that in 5 or 10 years (or during the lifetime of your active career) there are enough breakthroughs in CS that millions of qubits suddenly become available (and quantum supremacy is verified).

    What would you be focusing on?

    Designing new quantum algorithms? For example to probe the space between P and NP? (like Shor’s)

  61. Scott Says:

    Jon #57: There’s a clear mathematical analogy between quantum entanglement and the properties that tend to produce large values of Φ, which I even talked about in my blog posts on the subject. But entanglement is a real phenomenon that’s not only experimentally measurable but connected to other things one might care about, whereas my own view is that the supposed link between large Φ and consciousness is just a pure, 100% confusion and mistake—one that arose by obsessing about one particular example (the brain) where those two properties happen to go together, while ignoring the many examples where they don’t.
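
    (To spell out “experimentally measurable” in formulas: for a pure state of two qubits, entanglement is quantified by the von Neumann entropy of either qubit’s reduced density matrix. Here’s that standard textbook calculation, as a quick NumPy sketch.)

    ```python
    import numpy as np

    def entanglement_entropy(psi):
        """Von Neumann entropy (in bits) of one qubit's reduced density
        matrix, for a pure state psi of two qubits (a length-4 vector)."""
        m = psi.reshape(2, 2)            # amplitudes m[i, j] = <ij|psi>
        rho_a = m @ m.conj().T           # trace out the second qubit
        evals = np.linalg.eigvalsh(rho_a)
        evals = evals[evals > 1e-12]     # discard zero eigenvalues
        return float(max(0.0, -np.sum(evals * np.log2(evals))))

    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    prod = np.array([1, 0, 0, 0], dtype=complex)               # |00>, unentangled

    print(entanglement_entropy(bell))  # 1.0  (maximally entangled: one full bit)
    print(entanglement_entropy(prod))  # 0.0  (product state: no entanglement)
    ```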

  62. Scott Says:

    fred #60: Well, I’d probably no longer focus so much on the foundations of quantum supremacy experiments. 🙂 But apart from that, my research interests would probably be surprisingly unaffected by the actual availability of the devices. It’s like, there are dozens of fundamental problems that are still open about the ultimate capabilities and limits of quantum algorithms. And while those problems will surely acquire more practical importance if someone actually builds a million-qubit QC, the latter achievement isn’t going to magically solve those problems—any more than the classical personal computer revolution of the past 40 years magically solved P vs. NP or the other fundamental open problems about classical algorithms. Some of the problems were solved within that time—but not primarily because classical computers were widely available—while many others are still open.

  63. Tom Oster Says:

    Computers will never be “smarter” than humans because computers can’t think.

    A human brain has as its final cause to think. So the fact that it thinks is something intrinsic and observer-independent. A computer is a collection of switches that has an externally-imposed meaning. The meaning of a computation is not observer-independent, unlike in the human brain. So the computation of a computer is more like the bits of metal on a watch face. The watch’s bits of metal don’t have any inherent purpose to tell time, because their function as time-keeping devices is externally imposed on the metal.

    And if computers can think, that sounds like a great excuse for not holding programmers accountable. “You see, your honor, it’s not that I made a flaw in the program that calculated the chemotherapy dosage; the computer had a mind of its own!” “Bailiff, take him away!”

  64. Gerard Says:

    @Tom Oster #63

    “Computers will never be smarter than humans because computers can’t think.”

    That seems a bit like saying “Cars will never be faster than humans because cars can’t run.”

  65. Scott Says:

    Tom #63: And how exactly do you decide what something’s “final cause” is? Suppose an actual human brain, or something physically indistinguishable from one, were assembled in a factory—would its “final cause” then be to think, or merely to manipulate symbols?

    I sometimes ask myself how this sort of obliviousness to the enormities of a question comes coupled with such beatific confidence about the answer, but then I reflect that that question answers itself. 🙂

    Everyone: I’ve been having problems with Akismet (my moderation queue overrun with spam), and even more problems with the start of the semester and zero free time, but I’ll have a new post up soon, promise!

  66. Ajit R. Jadhav Says:

    Gerard # 64:

    1. He had the scare-quotes around the word “smarter.”

    2. The cars analogy does not fit.

    Cars undergo *physical* displacements, and running, in the primary sense of the term, refers to the *physical* aspects of a certain kind of activity of a (living) man—one that results in physical displacements. In short (and taking the liberty to put it somewhat vaguely), running is physical.

    But thinking is not. Thinking is *primarily* a mental activity, even if, symmetrically, this activity does require certain physical apparatus (the nervous system, esp. the brain, of a *living* being), and does have certain physical (chemical, electrical etc.) correlates which go with it. In short (and vague) terms, thinking is mental.

    This difference is crucially important.

    3. A while ago, I wrote a couple of posts examining panpsychism in detail at my blog. See them if interested.

    Bye for now.

    Best,

    –Ajit

  67. Tom Oster Says:

    @Gerard that sounds like a meme, not a reason. Richard Dawkins speculated that religions are memes. But now that we have observed memes on the Internet we know that religions are systematic knowledge and memes are analytic, neutral forms of cognition and so are disjoint from religion in every way.

    Computers are switches and so are observer-dependent. The brain has thinking as its final cause and so is not observer-dependent. A brain thinks even if nobody is looking. A computer does not compute unless someone is both looking at it and interpreting its output. This is why, when a computer kills someone, judges do not accept “the computer has gained self-awareness” as an excuse. The programmer still gets punished, and not the computer. I sometimes wonder whether, because of Silicon Valley programmers, we will end up establishing computer courts, which would be an even greater farce than the animal courts of the Dark Ages, thereby proving that the Dark Age peasant was more intelligent than the modern Silicon Valley startup founder.

  68. Gerard Says:

    @Tom Oster #67

    You can call it a meme if you want, but that’s irrelevant to the fact that it’s a counterexample showing that an argument of the form:

    q(H) and not q(A), therefore p(A) < p(H)

    (where H is the human, A is the artifact, q is the capacity in question—“runs,” “thinks”—and p is the performance measure, with the background premise that not q(H) would imply p(H) = 0)

    is not valid.

    As for the rest of what you say, I don’t see how it supports the claim that “Computers will never be smarter than humans”.

    How does computation being “observer dependent” prevent a computer from achieving its goals and thus demonstrating intelligence?

  69. Tom Oster Says:

      And how exactly do you decide what something’s “final cause” is? Suppose an actual human brain, or something physically indistinguishable from one, were assembled in a factory—would its “final cause” then be to think, or merely to manipulate symbols?

    I already told you. A final cause is observer-independent. A cat sees regardless of whether anybody else interprets it as seeing, or even whether the cat is conscious of itself seeing. Artifices like a clock face are just random arrangements of bits of metal that don’t mean anything unless an observer assigns a meaning. They have no final cause. Only the human’s intention to tell time is a final cause.

      I sometimes ask myself how this sort of obliviousness to the enormities of a question comes coupled with such beatific confidence about the answer, but then I reflect that that question answers itself.

    People who are serious about computers gaining sentience should be campaigning for computer courts to be established for when they do evil. Greater-than-human sentience also implies moral agency.

  70. Scott Says:

    Tom #69: Your reply helps illustrate why Aristotle’s notion of “final cause” was abandoned with the rise of modern science.

    I asked you for a principled criterion to determine which physical systems have “final causes” and which don’t. In response, you simply repeated your original foot-stomping assertion that humans have “final causes” while computers are “random arrangements of bits of metal,” and that’s that. OK, but what about worms? Amoebas? Extraterrestrials? Part-organic, part-silicon cyborgs of the future? If we’re refusing to engage with hard cases, or even acknowledge their existence, then we’re not doing philosophy and are just wasting time.

    Banned from this blog for one year.

  71. mjgeddes Says:

    Yes, Scott, regarding Tom: it’s a shame that those who don’t even grasp the rudiments of the scientific method can’t even get out of the starting blocks. And sadly, those who philosophically object to the idea that computation generates consciousness aren’t much better off: stuck in a fruitless philosophical quagmire, and literally not able to leave the starting blocks.

    If one just begins with the assumption that computation generates consciousness as an interesting premise, and then tries to explore where this idea leads, one might just be able to achieve something glorious…

    So, extending my bayonet of rationality, and roaring with a mighty battle cry, I charge up the steps and storm the cognitive fortress! First in the world! OORAH! 😀
    —-
    Solution to consciousness in less than 200 words:

    ‘Consciousness is a symbolic language for modelling time (TPTA – Temporal Perception & Temporal Action)!

    There are two types of time: (1) logical time, a high-level abstract tree of the structure of a logical argument—call this an ‘argument tree’; and (2) physical time, a low-level tree showing counter-factual possibilities representing physical causality—call this a ‘grammar tree’. Both types of time are represented by ‘computation trees’, an extension of temporal logic (a type of modal logic).

    Consciousness arises when the argument trees (representing logical time) are integrated with the grammar trees (representing physical time) to form an internal ‘self-model’ – call this a ‘narrative tree’ (or cognitive time). The argument tree lets us plan for the future (Temporal Action), the grammar tree lets us reflect on the past (Temporal Perception), and the narrative tree (the self-model) is for communicating our intentions in the present (Choice). TPTA – Temporal Perception & Temporal Action !’

  72. Joe Says:

    A couple of hours of googling quantum computers ended here. I started the evening hearing from a friend that 2000-qubit quantum computers were being made and tons of universities were buying them. I ended up watching your video saying quantum supremacy is a hope for the future. So disappointing.

  73. The Winding Road to Quantum Supremacy – Scott Aaronson | Science, Technology & the Future Says:

    […] His primary areas of research are quantum computing and computational complexity theory. Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than […]

  74. The Ghost in the Quantum Turing Machine – Scott Aaronson | Science, Technology & the Future Says:

    […] His primary areas of research are quantum computing and computational complexity theory. Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than […]