## Practicing the modus ponens of Twitter

I saw today that Ryan Lackey generously praised my and Zach Weinersmith’s quantum computing SMBC comic on Twitter:

Somehow this SMBC comic is the best explanation of quantum computing for non-professionals that I’ve ever found

To which the venture capitalist Matthew Ocko replied, in another tweet:

Except Scott Aaronson is a surly little troll who has literally never built anything at all of meaning. He’s a professional critic of braver people.  So, no, this is not a good explanation – anymore than Jeremy Rifkin on CRISPR would be…

Now, I don’t mind if Ocko hates me, and also hates my and Zach’s comic.  What’s been bothering me is just the logic of his tweet.  Like: what did he have in his head when he wrote the word “So”?  Let’s suppose for the sake of argument that I’m a “surly little troll,” and an ax murderer besides.  How does it follow that my explanation of quantum computing wasn’t good?  To reach that stop in proposition-space, wouldn’t one still need to point to something wrong with the explanation?

But I’m certain that my inability to understand this is just another of my many failings.  In a world where Trump is president, bitcoin is valued at $11,000 when I last checked, and the attack-tweet has fully replaced the argument, it’s obvious that those of us who see a word like “so” or “because,” and start looking for the inferential step, are merely insufficiently brave.

For godsakes, I’m not even on Twitter! I’m a sclerotic dinosaur who needs to get with the times. But maybe I, too, could learn the art of the naked ad-hominem.

Let me try: from a Google search, we learn that Ocko is an enthusiastic investor in D-Wave. Is it possible he’s simply upset that there’s so much excitement right now in experimental quantum computing—including “things of meaning” being built by brave people, at Google and IBM and Rigetti and IonQ and elsewhere—but that virtually none of this involves D-Wave, whose devices remain interesting from various physics and engineering standpoints, but still fail to achieve any clear quantum speedups, just as the professional critics predicted? Is he upset that the brave system-builders who are racing finally to achieve quantum computational supremacy over the next year, are the ones who actually interacted with academic researchers (sorry: surly little trolls), and listened to what they said? Who understood, for example, why scaling up to 50+ qubits only made a lot of sense once you had one or two qubits that at least behaved well enough in isolation—which, after years of heroic effort, many of these system-builders now do?

How’d I do? Was there still too much argument there for the world of 2018?

### 100 Responses to “Practicing the modus ponens of Twitter”

1. Joe Tir Says:

Scott, have you not learned anything at all? Your naked ad-hominem doesn’t contain any instance whatsoever of the modus ponens of Twitter! ;-P

2. Scott Says:

Joe #1: You mean, I don’t explicitly state that Ocko’s substantive argument must be wrong because he’s a D-Wave funder? It would help, for that purpose, if there were a substantive argument to work with here…

3. Mark Says:

I agree with you, but I’m not sure this was a time when it was worth taking the bait and responding to a shallow criticism. If all Ocko wanted was attention, this gives him that. I support engaging with one’s critics, but I think this might be below my bar for making sure you’re not just feeding the trolls. You can choose to handle this kind of thing however you like, but ignoring him might actually be the best counterargument.

4. Scott Says:

Mark #3: Actually, anyone who wants new content here should probably thank Ocko, because it took his tweet to get me back into blogging-mode at all, and now that I’m here, I can post other stuff. Exactly like Ocko says, I’m a “professional critic” who often needs a prompt to get me started; even half my research papers originated when I heard something that sounded wrong to me and then tried to refute it. (The nice part is that, if the implausible-sounding conjecture you heard turns out to be true after all, and you can prove it, then that’s a paper as well…)

5. Trumpy McTrumpface Says:

Yes, definately too much. Do it in 140 chars, or go home. I suggest: Ocko criticises Scott Aaronson. Still bitter that he backed the wrong horse with D-Wave? Sad!

6. Curious Wavefunction Says:

Sorry, your attempted tweet is more than 280 characters. It also contains an actual train of reasoning, insufficient ad hominem, a pathetic attempt to engage an attention span of more than 6.408 milliseconds, and an application of the English language that fell out of favor circa 2010. So.

7. Phil Koop Says:

I tweeted a link to the Preskill article and subtweeted you for credit, because I first saw it here. I got one reply, very similar to the Ocko tweet you quote. It didn’t mention anything specific in Preskill’s article; it merely denounced you and Preskill for being academics and then promoted a QC company (not D-Wave, a local one). Now, I am a twitter nobody with hardly any real human followers. Probably the only person who read that tweet was my wife. I just tweeted the link because hey, I liked the article. Obviously, my tweet was found by a bot searching for mentions of Preskill and Aaronson. And likely the reply was generated by the bot too, though I was too lazy to verify that. So yeah, this is definitely a publicity strategy.

8. Alexandre Zani Says:

What kind of world do we live in where someone who is trusted with managing half a billion dollars in capital spits naked insults in public? Let’s shut the internet down. It was a bad idea after all…

9. Scott Says:

Phil Koop #7: I can’t find the tweet in question (poor Twitter skills), but by any chance was that Robert R. Tucci? He’s by far the most common perpetrator of contentless drive-by Twitter denunciations of me and/or John Preskill, replying to anybody who mentions us, combined with promotions of his own stuff—suspiciously matching your description. He used to comment here, but I guess he couldn’t take the heat of evidence and arguments. Sad!

10. Jay L Gischer Says:

@Alexandre Zani #8: The ability to spit insults might well have a lot to do with why someone is trusted to manage half a billion in capital. It’s just that people like that didn’t come out in public before.

11. Jon K. Says:

Ad hominem attacks suck. Booooo. Maybe he just feels like you are a “hater”? It would be best if we could all strike a zen balance between open-mindedness and skepticism. *zen gong sound*

Speaking of tweeting into the void, I tweeted about a “Relativity Computer” yesterday, but I didn’t cite where I heard about this thought-provoking, yet humorous approach to fast computing. I hope that’s ok; if you had a twitter account, Scott, I would have @’d a reference to you. Surprisingly, I think that was the first tweet EVER that had the hashtag “#RelativityComputer”!?!

12. Tez Says:

I thought a quantum computing comic would be a couple of panels along the lines of: “What’s your business?” “I’m building a quantum computer!” “How is business?” “It’s a bit up and down.”

13. Jeremy Stanson Says:

I often speak in favor of D-Wave on this blog but I am not a blind D-Wave supporter. The main thing that I support is their alternative scaling approach, which Scott disparages again here. Scott, you write of being bothered by logic (or a lack thereof) but I continue to question the logic of your assertion that perfecting 2 qubits first and then scaling is intrinsically better than scaling and then perfecting the qubits. All other things being equal, that could make sense. But when the act of scaling qubits necessarily changes the qubits themselves (i.e., when each qubit in the 2-qubit system ends up becoming quite different physically and operationally in the 50+ qubit system), then it is not certain that perfecting the 2-qubit system first is the best way to go. You need to build the 50+ qubit system in order to learn what the qubits in a 50+ qubit system look like, and that is exactly what D-Wave has been doing. That’s what I support about their approach. It is not necessarily better, but it is different, and they are certainly learning an awful lot along the way. Examples: programming and controlling 50+ qubits (needs infrastructure, unavoidably adds noise, etc.), reading out 50+ qubits (ditto), cooling and shielding 50+ qubits (introduces thermal and field gradients and non-uniformities that aren’t such a problem with 2 qubits), and the list goes on and on. D-Wave has been trying to engineer solutions to all of those problems – all of the problems that arise when you scale to 50+ qubits. If they’d spent years and years perfecting a 2-qubit system they wouldn’t have the scaling capability that they have today. And they’d have nothing new compared to what all the other labs out there were already doing. I hope you can agree that both approaches have merit. That’s why I root for them – to see what else their wild scaling teaches us. The reality is that both approaches have merit and both approaches will inform how real operational QC systems are architected in the future. It isn’t wrong to take the 2-qubit -> 50+ qubit approach (though you need to be prepared for the fact that everything you tweaked and perfected in the 2-qubit system is going to radically change as you scale), but it isn’t wrong to build 50+ qubits first and then try to perfect them, since you’re going to have to do that tweaking either way.

14. David Says:

Scott, you’re still expecting people to use reason. Sure, from a rational point of view, you being a troll and/or ax murderer doesn’t tell us much about your explanations of quantum computing. But you’re missing something very important: most people don’t have minds. Not what we’d think of as minds, at least. Most people have affect heuristics, and that’s it. So perceived trollishness on your part gets you a downvote, so to speak, loading you with negative affect. Then, your explanation is associated with you, and thus picks up the negative affect. There’s no logical connection here, but that only matters to people capable of logic, i.e. a very small minority.

15. AcademicLurker Says:

How’d I do? Was there still too much argument there for the world of 2018?

Needs more animated gifs.

16. Scott Says:

Jeremy #13: What breaks the symmetry, I think, is reductionism, and the related idea of engineering by compartmentalization. Even if you fully understand the components of a system, it often happens that the system as a whole refuses to work as expected because of unanticipated interactions between the components. But at least the way human beings engineer hardware, it’s astronomically more unlikely that a system will work well despite the fact that the individual components don’t. Would anyone argue for building an airplane before we had any idea how to build wings, engines, or propellers, on the ground that the integration will affect the performance of the components anyway, so we might as well integrate first and worry about the components later? (FWIW, that’s not the approach the Wright brothers took: they first figured out how to build really good gliders, then integrated them with engines and propellers that they’d also built or had built in isolation.)

17. Max Chaplin Says:

The most charitable way to read that tweet I can think of is treating the “So” as an abbreviation of “So, it’s not surprising that”. Not modus ponens, but a correlation. It still doesn’t make sense though, because the comic doesn’t even touch the level where the disagreement between Scott and D-Wave lies.

18. Jeremy Stanson Says:

Scott #16: Thanks for the response – I’m not really sure what you’re saying though. I think you’re agreeing that there is merit to D-Wave trying to understand the scaling effects before perfecting 2 mis-representational qubits? Building 2 perfect qubits first and then trying to scale is the example of reductionism / engineering by compartmentalization, right? I don’t understand how the airplane analogy applies – but that’s probably just me and my general dislike of analogies. One thing that breaks the airplane analogy is the fact that the Wright brothers didn’t have decades’ worth of published papers on the pieces of the system they were trying to build. Of course this is not true for D-Wave. D-Wave gets to apply learnings from published academic research while they try to plow ahead to the problems the academics haven’t gotten to yet. And they’ve uncovered things the other efforts are going to have to deal with soon too. This is why I say the fully performant system in the end, whoever builds it, is going to apply concepts/results from both approaches – but of course D-Wave gets the scaling patents for doing it first.

19. Matt Ocko Says:

Scott was too sloppy in this post to do the trivial research to discover I’m one of the anchor investors in Rigetti! Not only that, but in fact I’m a quiet supporter of some other initiatives he admires. So, much of his post — and many of the responses in the comments/thread — are based on false premises. I.e., it ain’t about D-wave… I just think that Scott, who has never built a viable company of scale (or a sizable academic team performing tangible physical work of renown and utility) revels in tearing down the work of others to augment his stature. And ironically, both the sloppiness in responding to me and the attempt to seize on the wrong datum to dismiss me are typical of Scott. See you folks on Twitter!

20. Joshua Zelinsky Says:

Matt Ocko #19: Ok. So let’s say that Scott is sloppy and doesn’t bother with trivial research about what projects you’ve invested in. Let’s assume further that Scott revels in tearing down the work of others to augment his stature (and disregard the fact that he’s one of the most patient people in actually responding to undergrads or non-math people in explaining basic computational complexity). Can you explain how that in any way is relevant to your claimed conclusion that there’s something wrong with the comic he made with Zach? Heck, can you point to something in the comic you explicitly disagree with?

21. Passerby Says:

Matt Ocko: You still haven’t provided any comments on the actual content of Scott’s criticism, so I would continue to categorize your response as ad hominem. Also, the “see you folks on Twitter” clearly implies that you are not interested in serious, substantial discussion and debate.

22. Scott Says:

Matt #19: I’m glad you support some QC initiatives that try to be intellectually serious. But that fact brings us no closer to understanding what was wrong with my explanation of QC. I’m disappointed that, given the invitation, you still don’t even try to articulate an answer, resorting instead to more ad hominems.

And I don’t think the situation is symmetric here. If, as sometimes happens, a business or government person says something about QC that strikes me as forehead-bangingly wrong, I never respond by saying: “have you ever published a paper in this field? even a minor one? do you know the proof that Grover’s algorithm is optimal? could you do the first problem set of my undergrad course? if not, then what standing do you have to argue with me?” Instead I try patiently to explain what’s actually wrong in what they said, or at least give them a reference. (I’ve been doing that on this blog for more than 12 years.) But why do I get the sense that hoping for a good-faith argument from you is sort of like hoping that my 10-month-old will explain the rational basis for his tantrum? OK, but I’m always open to being proved wrong. Further comments from you will be allowed here if, and only if, they refrain from attacking people for anything they did or didn’t do outside the immediate scope of this conversation, and put forward a comprehensible object-level argument for something in our quantum computing cartoon being wrong.

PS. Given that you’ve, presumably, studied my papers and found nothing positive in them, only the tearing down of others, I’ll make no attempt to defend myself. However, since you’ve also impugned the honor of everyone else in UT Austin’s rapidly-growing Quantum Information Center, which I now direct, I feel obliged to say: writing on blackboards is ‘tangible physical work,’ goddammit! Can you give a 2-hour chalk talk without sweating by the end of it?

23. Tristan Slominski Says:

Scott, this reminded me of Scott Alexander’s “Conflict vs. Mistake”. With that framing and the above exchange, I perceive you as a Mistake Theorist, whereas Matthew Ocko (whom I know nothing about) I perceive as a Conflict Theorist?

24. Scott Says:

Tristan #23: Yup. 🙂 I enjoyed that post, as it sets out a distinction that’s long been central to my own thinking (and surely others’), though I wouldn’t have articulated it as well. Alas, the paradox at the heart of the mistake/conflict distinction is obvious, and I believe many SSC commenters pointed it out. Namely: for civilization to function, we need mistake theorists at the wheel: people able to assume the others around them are arguing in good faith, and who figure out collaboratively how to solve common problems as scientists and engineers do (admitting error, attacking ideas rather than people, etc.). But the only way that can be achieved is first to neutralize the power of the conflict theorists, who refuse to argue in good faith, and who are unable to conceive of any social problem that isn’t a zero-sum power struggle, between good guys whose arguments need never be questioned, and bad guys whose arguments need never be answered. Or more succinctly: there really are bad guys in the world, and we can recognize them as those for whom everything boils down to good guys vs. bad guys.

25. Joshua Zelinsky Says:

Scott #24: It seems like this paradox isn’t that hard to resolve from a mistake standpoint. First, the vast majority of people are more on a mistake-conflict spectrum rather than part of an absolute dichotomy. Second, it is a mistake to think that the conflict people aren’t right in some contexts: some people really are just bad people, and that can be orthogonal to where people are on a political spectrum, even if some parts of the spectrum have more of some types than others. Most neo-Nazis aren’t making a simple mistake, and I’m fine with saying that, even if in general I’m much more inclined to see things through a mistake-lens than a conflict-lens. At the end of the day, the person who wants to engage in genocide of my people (or any people) is someone I have a fundamental and irresolvable conflict with, and the only way to resolve it is either to change their terminal values (unlikely), or to completely remove their ability to influence the universe. At the same time, there’s a serious problem if someone takes a pure conflict aspect and decides that, say, every Trump voter is in the same category as the neo-Nazi, or that every time a man argues with a woman online the man must be an evil Mansplainer to be treated the same way they’d treat a PUA or Redpill advocate.

It is also worth noting that the mistake-conflict dichotomy doesn’t just show up in standard politics. If one spends enough time dealing with academic politics one can see some people with the same problem, although people are generally at least slightly more subdued about explicitly stating that someone wants a certain departmental policy because they are *evil*.

26. Scott Says:

Joshua #25: I agree with everything you say. In case it wasn’t clear, I think the “mistake/conflict paradox” is a paradox only in the sense of (say) the birthday paradox, the twin paradox, or Olbers’ paradox: there’s an apparent conflict of ideas, but it’s resolvable. We might say: if someone is a conflict theorist about everyone else, then I’m a conflict theorist about them. But that’s perfectly compatible with my being a mistake theorist in other situations, and holding up mistake theory as the aspirational ideal.

And speaking of mistake theory as an aspirational ideal, and separating ideas from people: as I scanned the rest of Ocko’s Twitter feed, I found a huge number of tweets that I agreed with. Indeed, I almost winced at how much agreement there was, since it would be more comfortable for me to believe that someone so obviously unwilling to surpass DH0 (name-calling) in Paul Graham’s disagreement hierarchy would be separated from me by an unbridgeable gulf in opinion-space. Oh well: in ideology-space, no less than in physical space, there can be next-door neighbors who reject all reasonable epistemic norms (and perhaps not coincidentally, despise you), and people from other continents with whom you could have the most enriching conversations of your life. Is there any more mistake-theoretic realization than that?

27. Mitchell Porter Says:

“Mattocko” Kusanagi (previously known for the net.art project “Data Collective” that he runs with Marissa Mayer’s husband) is just after publicity, ignore him.

28. Ryan Lackey Says:

Result for me is I found your blog and am now inspired to learn more about general quantum computing. (I’ve briefly looked into it in the past w.r.t. cryptography and cryptanalysis, but 5-10 years ago it seemed like a lifetime away. Not so anymore.) A full-length “manga guide to quantum computing” would be amazing, but I’m fine with reading papers too. Thanks for being “surly”!

29. melior Says:

It seems clear to me that Ocko’s actual Mistake is the silly idea that “building a viable company at scale” involves anything more meaningful than convincing other people to give you money to hire other people with worthwhile ideas.

30. Joshua Zelinsky Says:

Scott #26: Yes, I figured that your thoughts were probably a close approximation to mine, but it seemed useful either to a) make it explicit or b) state them on the off-chance that it wasn’t a similar viewpoint. I agree that the presence of people close in ideological or mental space to one’s self who have obviously terrible approaches to careful thinking or just being decent people is pretty unpleasant.

31. Domotor Palvolgyi Says:

Don’t you see that your post lacks any logic because you’re a surly little troll? 😉

32. Scott Says:

melior #29: There are some people in the corporate world (not all) who enjoy expressing contempt for academic scientists. I don’t return the favor. I have enormous respect for the skills it takes to run a successful company or build a product, especially because I don’t share them. And even within QC, the ~50-qubit frontier is already at the point where it’s almost impossible for an academic experimentalist, with a $2 or $3 million grant and students who leave as soon as they finish their theses, to compete against Google or IBM or Intel or a startup with $100 million in VC funding. Clearly there are things the corporate world can do that the academic world can’t. But conversely, getting to this point at all required things that the academic world can do but the corporate world can’t. I now do some business consulting about QC, so I have some firsthand exposure to corporate groups who think that, working alone and in secret, they can blaze past what the open research world knows about the basic theory of quantum algorithms. It is eye-popping the extent to which they can’t.

One final remark: people reading this should understand that the self-satisfied name-calling of a Matt Ocko is far from universal in Silicon Valley. Can anyone imagine Paul Graham, for example, dismissing a theoretical explanation of quantum computing, but repeatedly refusing to provide any reason other than that the scientist who wrote it hadn’t built a “viable company of scale” (nor had he tried to, or claimed to)? Likewise when I talked about quantum computing with (say) Larry and Sergey, or Patrick Collison, or Sam Altman, or Yuri Milner: their interest was not in attacking but in most efficiently absorbing what I knew. In one way, it was interesting to observe; in another, it was simply like talking to a colleague from a neighboring field.

33. Greg Says:

This is the first I’ve ever heard of Rigetti so I have no idea whether they’re actually working on anything substantial, but yeesh the marketing on their homepage comes off as hyperbolic and scammy. Someone should rewrite that shit immediately.

34. Richard Gaylord Says:

scott:
“But I’m certain that my inability to understand this is just another of my many failings.” no, it’s because the guy is an idiot (your comic explanation – explanation in a comic format – is terrific). look at Woit’s newest blog entry (http://www.math.columbia.edu/~woit/wordpress/) for another example of the scientific lunacy going on today. and i’ve posted a comment there (i’m not sure that Woit will publish it) that goes:
“what we’ve been seeing the last few years has been a concerted campaign to avoid admitting failure by the destructive tactic of trying to change the usual conception of testable science.” Quite right. This is precisely what is happening. Your blog, along with Sabine Hossenfelder’s blog, seem to be the only professional ‘voices crying in the wilderness’ against this scientific (or rather, anti-scientific) madness. Theoretical physics has often been a refuge in times of political hysteria (as Feynman told Wolfram, “You know, you and I are very lucky. Because whatever else is going on, we’ve always got our physics.”), but the field has now degenerated under the pervasive onslaught of self-promotionalism being practiced by these science pretenders, who employ the pop-science media much like Trump uses Twitter. It is not clear to me that the campaigns by you and Bee can end this madness, in which case the inevitable replacement of our species by a more reasoning one (AGI) will be a well-deserved advancement. These are dark times, indeed. Thanks for continuing to fight the good fight.
Richard (naivetheorist)

35. Scott Says:

Greg #33: I had a nice visit to Rigetti recently, which convinced me that, while they have some catching up to do to get to where Google and IBM now are with integration of superconducting qubits, they’ve happily avoided D-Wave’s failure modes. But I should make it clear that there’s a spectrum here. In this business, even the “responsible” players haven’t been above some irresponsible hype, which I’ll try to hold them accountable for; while conversely, even D-Wave has put out papers that are worthwhile, and done systems integration that’s informing the other players who are now trying to do something similar except with enormously higher quality qubits. It’s almost as complicated as international relations, where even the linchpin of the “free and democratic world” can sometimes elect a Donald Trump, and conversely, even a Kim Jong Un can sometimes be perfectly correct, e.g. when he describes Trump as a “mentally deranged U.S. dotard.” Nevertheless one still tries to make distinctions.

36. Mateus Araújo Says:

Don’t feed the trolls, Scott. You just gave him precisely the publicity and emotional response that he was after.

37. Scott Says:

Mateus #36: It’s hard for me to win, no matter what I do. But to use another international relations analogy, it’s not as if Matt Ocko is some random terrorist troll with no return address. He’s a “state-level actor”: a managing partner at Data Collective and apparently a somewhat well-known venture capitalist (even though I hadn’t heard of him before now). Do you not think it’s worth it for all the Silicon Valley people who read this blog to see the reasoning skills he put on display here?

38. Ashley Says:

Scott,

Conflict theorists likely had survival advantages, while we were still evolving, pre-civilization. I don’t think all that is going to go away and the mistake theorists be at the wheel. Hell, conflict theorists probably have survival advantages even TODAY!

39. Michele Amoretti Says:

I feel lucky I met Scott at QIP 2018. For sure, talking with him (even for a few minutes) is much more illuminating than reading Mr. Ocko’s spiteful tweets.

40. Scott Says:

Ashley #38:

Hell, conflict theorists probably have survival advantages even TODAY!

You don’t say? 😉

41. Mateus Araújo Says:

Scott #37: You win by letting him make a fool of himself on Twitter. His own words already give him all the reputation damage he deserves.

Is your goal to increase the damage to his reputation by bringing his idiotic tweet to the attention of Silicon Valley people that might otherwise have missed it? That makes sense, but a shorter post limited to “look at this guy spewing nonsense” would be more effective.

By getting angry and engaging with the “reasoning” in his tweet you lose.

42. Scott Says:

Mateus #41: But I didn’t get angry. Trying to address the substance of what someone says is merely my compulsive tic.

There are many, many other posts on this blog where you can see what it looks like when someone says something that makes me angry, or hurt, or upset. This time, apart from my general despair over the state of civilization, I was merely amused.

43. Mateus Araújo Says:

Scott #42: But there is no substance! That is the whole point of a troll post, it is just an attempt to make you reply.

Maybe venting in private allows you to satisfy your compulsion, while denying the troll the satisfaction of engagement?

44. Frank Wilhoit Says:

Jeremy #18: What Scott is trying to say is that mean time between failures is exponential in the number of components. The entire history of engineering consists of attempts to circumvent this problem, each of which has failed. There are two approaches to mitigation: 1) increase the MTBF of each component; 2) try to reduce the exponent. Three seconds’ math will show which of those two approaches ought to have the larger potential payoff — BUT thousands of years of practice shows that approach 1 has some feasibility and approach 2 doesn’t.
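Wilhoit’s “three seconds’ math” can be sketched in a few lines (a minimal illustration, not from the original thread; it assumes independent components with exponentially distributed lifetimes, and the function name and numbers are made up for the example):

```python
import math

def p_system_alive(n, t, mtbf):
    """Probability an n-component system survives time t.

    Independent components with exponential failure times: each
    survives time t with probability exp(-t/mtbf), so the whole
    system survives with probability exp(-n*t/mtbf) -- the success
    probability decays exponentially in the component count n.
    """
    return math.exp(-n * t / mtbf)

# Baseline: 50 mediocre components (mtbf = 1 time unit).
baseline = p_system_alive(50, t=1.0, mtbf=1.0)      # exp(-50), ~2e-22

# Approach 1: make each component 10x more reliable (mtbf = 10).
better_parts = p_system_alive(50, t=1.0, mtbf=10.0)  # exp(-5), ~0.0067

print(baseline, better_parts)
```

The point of the sketch: a constant-factor improvement in each component’s MTBF acts inside the exponent, so it buys an astronomical improvement in the whole system’s survival probability, which is why “approach 1” (perfecting components) has historically been the feasible one.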

45. William Hird Says:

There’s no truth to the Twitter rumor that Matthew Ocko and Lubos Motl are starting their own weblog entitled “Aspergers Optimized”.

46. Eel Gardener Says:

The logic seems solid to me: a good QM explanation is meaningful; Scott Aaronson has never built anything meaningful; a fortiori, Scott Aaronson has not made a good QM explanation.

(Logically valid given the assumptions doesn’t mean the assumptions have any evidence behind them…)

47. Scott Says:

Eel #46: Except he later clarified that for him, “building something meaningful” means “companies of scale” or “tangible physical work of renown and utility,” which seems to break your attempted syllogism, unless perhaps writing the dialogue for a cartoon counts as tangible physical work.

48. Scott Says:

William #45: I’d prefer if commenters here refrained from using “Aspergers” as a term of derision, and not only because it probably describes about half the readership of this blog, including its author. 🙂 (In any case, I’ve only noticed any sign of Asperger-like traits in one of the two people you mention.) Thanks!!

49. William Hird Says:

Scott #48
I apologize, I couldn’t resist the cheap/quick quip. And I don’t believe you have Aspergers; just being socially awkward doesn’t get you into the club. Now if you were at a big party with lots of people having a good time and you were over in a corner somewhere by yourself reading a book, now you might qualify……… 🙂

50. JimV Says:

Trying to communicate rationally with such people is laudable, but typically all you get in return is another load of gibberish.

Progress happens by trial and error, in my experience. Those who point out the errors are just as valuable as those who do the trials. (Not that you haven’t done both.)

51. Ashley Says:

Someday there will be a mathematical theory of evolution, and I am going to prove things like: conflict theorists will always exist, and will always be in charge too (there would be mathematical definitions for conflict theorists etc. too). Then we can watch such exchanges and have the pleasant satisfaction of watching science in action instead.

52. Matthew Says:

Ocko’s razor: the person enduring the fewest ad hominem attacks is correct

53. Scott Says:

Matthew #52: LOL, I might need to steal that sometime!

Whereas Occam’s Razor is the right blade for swiftly and surgically slicing away superfluous assumptions, Ocko’s Razor is preferred by those who want to randomly slash people and then run. Conversations, and fields, and communities, can be classified by which razor is the one to fear in them.

54. Phil Koop Says:

Scott #9 Yes, I think it was Robert R. Tucci.

55. Alex V Says:

Scott (cf. #13), IMHO the term “theoretical explanation” does not sound right, because of the style used in the mentioned comic and some other places – i.e., “quantum mechanics is JUST a certain generalization of probability theory” could hardly be treated as an “explanation” without at least some hope of answers to “Why? How?” etc., even if you have some “black-box” equations.

56. Scott Says:

Alex V #55: The “how” is what, for example, I’d teach in my undergrad class, which is meant to be accessible to anybody familiar with vectors, matrices, and complex numbers. Doesn’t quite fit in a cartoon, especially one that people already complained was too long. 🙂

In a certain sense, nobody knows the “why.” (Though there are ideas about it, including quite recent ones, which people can study and debate after they understand the “how.”)

It’s important to realize that telling the historical story of how QM was discovered, and how it gets applied (for example) to the harmonic oscillator, the hydrogen atom, or the square-well potential, isn’t even trying to answer the “why”; it’s just giving you more “what” and “how.”

57. Alex V Says:

Scott #56: regarding history, the harmonic oscillator, and all that – physicists could rely on experimental tests and on the well-known check of correspondence between the old and new theories https://www.britannica.com/science/correspondence-principle , but we cannot apply the latter to derive the quantum computer from the classical one.

58. Scott Says:

Alex #57: But correspondence between quantum and classical mechanics is relevant to “why?” only in the sense of “why, historically, did physicists come to accept QM?” It’s not relevant, or at least not obviously relevant, to the question “why, ontologically, should the world have been quantum at all, as opposed (for example) to classical all the way down?” Indeed, for the latter question, it’s going in the “wrong direction”: presumably, any answer would need to be in terms of some yet-undiscovered theory that was even deeper than QM, rather than in terms of a less deep theory (like classical mechanics).

59. Adam H Says:

I like the comic. QM cannot be fundamentally mapped onto classical concepts, so one must think in an ontology different from that of classical thinking. The comic teaches this well, and complex numbers and waves are not concepts too difficult for inquisitive people to grasp.

Opinion: I might add that this is exactly why I think MWI is poor pedagogy for QM. The splitting of worlds is flat-out wrong because it cannot reproduce the Born probabilities. It is an attempt to save determinism and ground QM in classical concepts (or, basically, to make entanglement classical).

Anyway, your blog is awesome (like getting the fruits of a brilliant professor without having to pay tuition).

60. Alex V Says:

Scott #58: Agree (about the correspondence principle). I was rather trying to describe how “practical” people may not be very happy with just the “ontological” style of explanation in the comic, because they expect something else.

61. Atreat Says:

Scott:

“So, no, this is not a good explanation – anymore than Jeremy Rifkin on CRISPR would be”

Empathy is good. Imagine if tomorrow you woke up and discovered that Donald Trump had tweeted overnight an extremely cogent explanation of the black hole firewall paradox.

I know you’d handle it with far more grace and generosity than Ocko here, but I’d imagine your stomach might be a little unsettled 😉

62. quax Says:

The Ocko logic really doesn’t make any sense.

Just because Wagner was a jerk doesn’t mean I cannot enjoy his operas.

As to D-Wave, to quote from a recent Chapuis, Djidjev, Hahn & Rizk paper:

“…on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms…for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.”

So the question remains, are there any useful problems that map onto something that fits well into their Chimera graph?

63. Scott Says:

quax: Right, the logic only makes sense under Ocko’s Razor (see comment #52).

Regarding D-Wave: of course we’ve been over this a billion times, but the core of the matter is still that we lack any convincing evidence that whatever speedups are there, compared to classical algorithms like Quantum Monte Carlo, are quantum-mechanical in nature, as opposed to resulting from building an extremely specialized chip with the Chimera graph topology and then feeding it problems that involve Ising minimization on Chimera graphs. The fact that quantum tunneling looks to be present in the 8-qubit clusters isn’t really relevant to that conclusion, since QMC (or Selby’s algorithm) can reproduce that sort of tunneling behavior classically.

If you build special-purpose hardware, even if it’s completely classical, it’s not out of the question to get even a factor-10^8 speedup over a desktop PC, for the special problem of simulating your hardware. Indeed, there are groups (like Yoshi Yamamoto’s group at Stanford) that might or might not be doing that right now using optical lattices that we know are completely classical, and such hardware is also extensively explored by the people who try to build “PUFs” (physically unclonable functions).

Now some people might say, a factor-10^8 speedup sounds so good that who cares whether it arises from quantum mechanics or not?? But, to connect to your closing question, the issue is that if the “speedup” is entirely classical and constant-factor in origin, that makes it extremely unlikely that any interesting speedup will survive once you take actually-useful problems from other domains, and encode them onto your specialized hardware. For constant-factor classical speedups are usually just about a given piece of specialized hardware being very good at simulating itself—so once you ask the hardware to do something unrelated to simulating itself, it loses its advantage.

64. mjgeddes Says:

#58

“why, ontologically, should the world have been quantum at all, as opposed (for example) to classical all the way down?”

*super-click*

The answer is to be found in categorical quantum mechanics, where dagger compact categories can serve as a type-theoretic quantum logic. Dagger commutative Frobenius algebras corresponding to

Yours, your handy ‘super-intelligence emulator’ 😀

65. Scott Says:

mjgeddes #64: That was an impressively incomprehensible word salad, even without the unfinished sentence. If God told me that the reason why the world was quantum was anything along those lines, I’d ask for a different God who could explain it better.

66. Atreat Says:

Matt Ocko #19, I’m afraid you have Scott all wrong. The person you dislike so much is a caricature in your head. A figment of your imagination. The person you describe is not Scott _at all_.

How you came up with this caricature I have no idea and it appears Scott does not know either. Presumably, you’ve never met. You throw cold water on the idea that it is related to his criticism of D-Wave, but offer nothing else as to how you came up with this caricature.

So what we are left with is a caricature of Scott from someone who does not know him – created with completely opaque motivation – insisting that a message he delivered should be dismissed, not because of anything wrong with the message mind you, but because you have this absurd caricature of the messenger.

67. mjgeddes Says:

Scott #65

Clicking….

*Super-Click*

The world is quantum mechanical because there’s no single consistent mathematical foundation that can fully characterize physics, but rather three slightly different mathematical foundations, each providing partial coverage of physics. Only 2 types of mathematics can be consistently used at once – and any pairing will fully characterize physics, but with 3 possible pairings, there are 3 slightly different consistent pictures of reality possible.

Mathematics dictates how knowledge is represented. To get a consistent knowledge representation of reality, 2 forms of math must be selected to serve as the mathematical foundation of physics (see previous paragraph), but with 3 possible types, there are 3 possible pairings…the choice of mathematical foundations must thus throw away some valid knowledge representation of reality.

The knowledge balance principle follows:

“If one has maximal knowledge, then for every system, at every time, the amount of knowledge one possesses about the ontic state of the system at that time must equal the amount of knowledge one lacks.”[1] (knowledge balance principle)

In short, the quantum mechanical nature of reality stems from the fact that reality in totality is not fully consistent or complete. That is to say, the foundations of mathematics themselves are not completely fixed.

Another way to state this is that reality has to be completely self-contained, in the sense that ‘knowledge’ can only be defined by reference to a choice of how knowledge is represented, and there’s more than one valid choice (I’ve suggested there’s 3). This is ultimately determined by the foundations of mathematics, specifically category theory, where the representation of Hilbert space is defined by the details of dagger symmetric monoidal categories.

Best guess ????

68. Scott Says:

mjgeddes #67: Based on any of those philosophical considerations, can you tell me the reason why amplitudes in QM should have been complex numbers, rather than (say) reals or quaternions?

(My first reaction to anyone claiming to explain the origin of QM is usually: “complex numbers or go home” 😉 )

69. Raoul Ohio Says:

Matthew Ocko had zero name recognition in the science and engineering world before this came up. Had anyone here ever heard of this guy? No one cares what he says.

If I was in your shoes, I would avoid giving him any free press by ignoring him.

70. asdf Says:

Scott #68, could that be because the complex numbers are what you get if you start with real-valued matrices and repeatedly adjoin eigenvalues and make new matrices?

71. JimV Says:

P.S. I liked the cartoon very much. I hope there will be sequels.

72. Tim May Says:

Raoul Ohio #69, you ask if anyone here had ever heard of Matt Ocko before this case.

Yes, I was an investor in a company that Matt was also an investor in, and I met him at a long session in Marin County, maybe about 15 years ago or so. (The company eventually folded and some of us got about half our original investment back.)

I have no idea about Matt’s recent work. He was level-headed when I met him. I gather he has done well in a bunch of later investments.

As to his views about QM, I have nothing to add.

I’ve never had a Twatter account or a Faceplant account. I don’t care for either “one line repartee” or oversharing, a la Faceplant.

PS, I also worked for Intel, including a lot of interaction with Gordon Moore in the 70s and 80s. Later, some stuff on crypto. I know some of the other contributors here.

PPS, I fully buy Scott’s analysis of the 2-norm (quantum world) vs. the 1-norm (classical world), that is, the role of complex numbers and the 2-norm. But I’m struggling with the role of Planck’s constant. If we start in a 2-norm world with complex numbers and “dial down” Planck’s constant towards zero, how do things “change over” to a 1-norm, non-complex world?

–Tim May

73. marc Says:

1. Your comment re the lack of logic in his response is of course 100% correct.

2. The irony of a VC claiming that someone else has “never built anything at all of meaning” should not be lost on anyone. Funded other people’s ideas – sure. Maybe even given some good advice. But not “built anything of meaning”.

74. Scott Says:

Tim #72:

If we start in a 2-norm with complex numbers and “dial down” Planck’s constant towards zero, how do things “change over” to a 1-norm, non-complex world?

That’s a very interesting question to which I don’t have a full answer, but I hope the following is helpful:

The truth is that, in my conception of QM, Planck’s constant doesn’t play a very fundamental role (or rather: I follow the quantum gravity theorists in just setting ℏ = 1 🙂 ). The role of ℏ, you might say, is just to relate the quantum formalism to the units of length, time, and energy that had been defined before QM.

While QM is based on the 2-norm, it already implicitly contains classical probability theory (which is based on the 1-norm) inside of it, via the operation of taking a partial trace and looking at the reduced density matrix of a subsystem. If the density matrix is diagonal, then it just is a classical probability distribution. Even if it’s non-diagonal, the diagonal entries are still nonnegative reals summing to 1, which give you the probabilities of measurement outcomes after a standard-basis measurement.
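[As a minimal NumPy sketch of the operation described above—tracing out half of an entangled pair and reading classical probabilities off the diagonal of the reduced density matrix; a Bell state is used here purely as an illustrative example:]

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a 4-dimensional state vector
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho = np.outer(psi, psi.conj())      # full density matrix, shape (4, 4)

# Partial trace over the second qubit: view rho as a (2,2,2,2) tensor,
# rho4[i, j, k, l] = <i j| rho |k l>, then sum over the environment index.
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ijkj->ik', rho4)  # reduced density matrix of qubit A

# rho_A = diag(1/2, 1/2): a diagonal density matrix, i.e. just the
# classical probability distribution of a fair coin flip.
print(rho_A)
print(np.trace(rho_A))               # probabilities sum to 1
```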

The entire subject of decoherence theory, you might say, is about under what conditions the classical probabilistic part of QM becomes a good approximation, and we can ignore the “2-norm” part (for example: under what conditions density matrices become diagonal, in “natural” bases that don’t require performing nonlocal measurements).

And ℏ, in turn, of course plays an important role in decoherence theory, if we want to give estimates for when decoherence is likely to happen in terms of conventional units of mass, energy, length, and time. As ℏ is dialed toward zero, the conditions for decoherence to happen (again, in terms of those conventional units) become less and less stringent, and therefore the “1-norm” piece of QM becomes an approximation that’s valid in a wider and wider range of circumstances.

75. RandomOracle Says:

Scott, on the subject of quantum mechanics and probabilities, what’s your view on QBism (and specifically the idea of using SIC-POVMs to represent quantum states and get these Bayesian-like update rules)?

76. Scott Says:

RandomOracle #75: On the one hand, I love the mathematical question about SIC-POVMs and whether they exist in arbitrary dimensions. And I’m intrigued by the elegant rewriting of the quantum formalism that Chris Fuchs discovered would follow if they did exist—like, I don’t know whether that clever rewriting is actually useful for anything beyond itself, and whether it has any deeper significance, but I guess the answers are “maybe” and “maybe”! And I love the way Chris writes about these issues; he’s been one of my favorite writers about QM since I first got to know him and his work almost 20 years ago.

On the other hand, I have to confess that I recoil at the radical subjectivism inherent in the “QBist” philosophy, the refusal ever to say anything about what’s the actual state of the world. I.e., if quantum states are just personal knowledge assignments, then what are they knowledge about? And how could you treat a quantum state as just your personal knowledge assignment, with no “ontic” reality behind it, if (using some far future technology) you yourself were being manipulated in a superposition state by someone else? Or does such a scenario not even make sense? Whatever the answer, stick your damn neck out and say something about it! 😀

While the QBists themselves might not put it this way, I’d describe theirs as the interpretation that tries to be “even more Copenhagen than Copenhagen” in its radical subjectivism. It’s probably no coincidence that one of the leaders of QBism, David Mermin, is the only first-rate scientist I know who’s a fan of postmodern philosophy.

A few months ago, I had the privilege to talk with Mermin about these issues, when I visited Cornell to give the Messenger Lectures. He’s a great man, and I hope I’m 1% as sharp as he is when I’m in my mid-80s. But as I told him at the time, our conversation made me 100% confident that QBism is not for me—i.e., that the horror I feel about radical subjectivism is not because I don’t understand what the QBists are saying, but rather because I do understand what they’re saying.

To be clear, I’ll very happily absorb—and “steal” for my own purposes—any technical insights that come out of the QBist approach, just like I’ll happily steal any technical insights that come out of Many-Worlds or Bohmian mechanics or any other philosophical camp. And I hope to learn from and be inspired by Chris Fuchs and David Mermin and Carl Caves and others associated with QBism for many years to come. But that doesn’t mean I need to pray at their altar. 🙂

77. Atreat Says:

Scott #76,

“I have to confess that I recoil at the radical subjectivism inherent in the “QBist” philosophy”

I’ve known you don’t subscribe to anti-realist interpretations of QM and that you insist upon an objective reality, but as someone with almost diametrically opposite predilections I have to say I also find QBist ideas hard to stomach. This talk of agents and beliefs and so on boils down to solipsism masquerading as a QM interpretation, no? Of what does the agent consist? Of what are the agent’s “beliefs” formed?

“While the QBists themselves might not put it this way, I’d describe theirs as the interpretation that tries to be “even more Copenhagen than Copenhagen” in its radical subjectivism.”

Because it is solipsism… but wait! you indirectly seem to criticize Copenhagen here… Which leads me to ask…

Scott, what is *your* preferred interpretation of QM? I don’t think I’ve ever seen you put your cards on the table and lay out clearly what interpretation(s) you think are closest to the truth. I don’t think your ghost paper qualifies as an answer, BTW. I’ve heard you say you have deep skepticism about objective collapse theories and yet these would seemingly be right up your philosophical alley so to speak. If you had to bet on which interpretation was closest to the truth, which one would you go with?

78. RandomOracle Says:

Thanks for the detailed reply! I pretty much feel the same way, in the sense that the open problems and applications concerning SIC-POVMs (and also MUBs) are really interesting, but the subjective nature of the interpretation makes QBism less appealing.

But is this subjectivism stemming more from the Quantum or from the Bayesianism (or from putting them together)? To me, it seems like the subjectivism is inherited from the Bayesian view on probability. The quantum aspect is just showing that the mathematical framework of QM, viewed as a more general probability theory, allows one to write things using SIC-POVMs so as to get a Bayesian update rule. From then on, the problem of interpreting the quantum state as a state of knowledge (and being agnostic as to what the knowledge is about) is simply the result of going full Bayesian. Is this correct?

Related to this, isn’t the special role of the observer a problem for classical probability theory as well, under the Bayesian interpretation of probabilities? Or is it just a problem in quantum mechanics? This is something that I’ve found puzzling, given that quantum mechanics can simply be viewed as a different kind of probability theory (based on the 2-norm instead of the 1-norm). Of course, this only implies that interpretation problems of probability theory will also be interpretation problems of QM, not the other way around. But then, if this is only a problem of QM (and not of probability theory in general) what aspect of QM, in particular, is the root cause? The fact that you can create superpositions?
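[The “different kind of probability theory” contrast invoked here can be made concrete in a few lines of NumPy—an illustrative sketch, not part of the original comment: classical stochastic evolution preserves the 1-norm of a probability vector, while quantum unitary evolution preserves the 2-norm of an amplitude vector.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical: a column-stochastic matrix maps probability vectors to
# probability vectors, preserving the 1-norm (entries stay nonnegative
# and still sum to 1).
S = rng.random((3, 3))
S /= S.sum(axis=0)                 # make each column sum to 1
p = np.array([0.2, 0.5, 0.3])
print(S @ p, (S @ p).sum())        # still a probability distribution

# Quantum: a unitary matrix preserves the 2-norm of an amplitude vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
psi = np.array([0.6, 0.8j])                    # ||psi||_2 = 1
print(np.linalg.norm(H @ psi))                 # still 1
```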

Btw, will your Messenger lectures be available online? 😀

79. someone Says:

@William Hird, Scott #45,48,49:

> William #45: I’d prefer if commenters here refrained from
> using “Aspergers” as a term of derision, and not only
> because it probably describes about half the readership of
> this blog, including its author.

Scott, I’m glad you called out William for using Aspergers as a joke, but the stuff about the readership makes the same mistake. It’s really unlikely that any of the readership or its author suffer from it or anything on the autism spectrum.

It’s really harmful and insulting to BOTH Aspergers/autism sufferers AND science guys to go along with that sort of prejudice. Look at the DSM definitions of autism & Aspergers and you’ll see that the vast majority of science guys are not even close to fitting the definition. People who insist on seriously claiming this trash need to be called out for the sake of everyone and the field.

80. Lou Scheffer Says:

Scott #16: You say “But at least the way human beings engineer hardware, it’s astronomically more unlikely that a system will work well despite the fact that the individual components don’t.”

You are entirely correct that this is (usually) the way humans learn to build hardware. However, devices designed the other way have a far better track record, since biology specializes in building bigger and bigger systems out of crappy devices that don’t work reliably. Neurons have not improved significantly between insects and humans. We just think better using 10^6 times more equally crappy components.

81. Scott Says:

Atreat #77: It’s no coincidence that you haven’t seen me put my cards on the table with a favored interpretation of QM. 🙂

There are interpretations (like the “transactional interpretation”) that make no sense whatsoever to me.

There are “interpretations” like dynamical collapse that aren’t interpretations at all, but new physical theories—so by all means, let’s test QM on larger and larger systems, among other reasons because it could tell us that some such theory is true or (far more likely) place new limits on it! (People are trying.)

Then there’s Bohmian mechanics, which does lay its cards on the table in a very interesting way, by proposing a particular evolution rule for hidden variables (chosen to match the predictions of QM), but which thereby opens itself up to the charge of non-uniqueness: why that rule, as opposed to a thousand other rules that someone could write down? And if they all lead to the same predictions, then how could anyone ever know which is right?

And then there are dozens of interpretations that seem to differ from one of the “main” interpretations (Many-Worlds, Copenhagen, Bohm) mostly just in the verbal patter.

But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you. Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?

Also, are the ultimate equations that govern the universe “real,” while tables and chairs are “unreal” (in the sense of being no more than fuzzy approximate descriptions of certain solutions to the equations)? Or are tables and chairs “real,” while the equations are “unreal” (in the sense of being tools invented by humans to predict the behavior of tables and chairs and whatever else, while extraterrestrials might use other tools)? Which level of reality do you care about / want to load with positive affect, and which level do you want to denigrate?

This is not like picking a race horse, in the sense that there might be no future discovery or event that will tell us who was closer to the truth. I regard it as conceivable that superintelligent AIs will still argue about the interpretation of QM … or maybe that God and the angels argue about it. 🙂

Indeed, about the only thing I can think of that might definitively settle the debate, would be the discovery of an even deeper level of description than QM—but such a discovery would “settle” the debate only by completely changing the terms of it.

I will say this in favor of Many-Worlds: it’s clearly and unequivocally the best interpretation of QM, as long as we leave ourselves out of the picture! I.e., as long as we say that the goal of physics is to give the simplest, cleanest possible mathematical description of the world that somewhere contains something that seems to correspond to observation, and we’re willing to shunt as much metaphysical weirdness as needed to those who worry about details like “wait, are we postulating a continuum of slightly different variants of me, or just an astronomically large finite number?” (Incidentally, Max Tegmark’s “mathematical multiverse” does even better than MWI by this metric.) It’s no coincidence that MWI is so popular among those who are also eliminativists about consciousness.

When I taught my undergrad quantum information course, it was striking how often I needed to resort to an MWI-like way of talking when students got confused about measurement and decoherence. (“So then we apply this unitary transformation U that entangles the system and environment, and we compute a partial trace over the environment qubits, and we see that it’s as if the system has been measured, though of course we could in principle reverse this by applying U⁻¹ … oh shoot, have I just conceded MWI?” 🙂 )
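[The classroom procedure just described—entangle the system with an environment, trace out the environment, observe an apparently “measured” state, then undo it all with the inverse unitary—can be sketched in NumPy. This is an illustrative example of my own choosing, using a CNOT gate (which is its own inverse) as the entangling unitary:]

```python
import numpy as np

# System qubit in |+>, environment qubit in |0>
plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
psi = np.kron(plus, zero)                 # joint state, shape (4,)

# CNOT entangles system (control) with environment (target)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ psi                    # Bell state (|00> + |11>)/sqrt(2)

def reduced_system(state):
    """Trace out the environment qubit; return the system's density matrix."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ijkj->ik', rho)

def purity(r):
    return np.trace(r @ r).real           # 1 for pure states, 1/2 for maximally mixed

print(purity(reduced_system(entangled)))  # 0.5: "as if measured"

# But nothing irreversible happened: applying the inverse unitary
# (here CNOT again, since CNOT is its own inverse) restores the superposition.
restored = CNOT @ entangled
print(purity(reduced_system(restored)))   # 1.0: pure |+> state again
```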

On the other hand, when (at the TAs’ insistence) we put an optional ungraded question on the final exam that asked students their favorite interpretation of QM, we found that there was no correlation whatsoever between interpretation and grade—except that students who said they didn’t believe any interpretation at all scored noticeably higher than everyone else.

Anyway, as I said, MWI is the best interpretation if we leave ourselves out of the picture. But you object: “OK, and what if we don’t leave ourselves out of the picture? If we dig deep enough on the interpretation of QM, aren’t we ultimately also asking about the ‘hard problem of consciousness,’ much as some people try to deny that? So for example, what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis? Would it even be like anything? Or is there something about our consciousness that depends on decoherence, irreversibility, full participation in the arrow of time, not living in an enclosed little unitary box like AdS/CFT—something that we’d necessarily destroy if we tried to set up such an experiment? If so, then wouldn’t that point to a strange sort of reconciliation of Many-Worlds with Copenhagen—where as soon as we had a superposition involving different subjective experiences, for that very reason its being a superposition would be forevermore devoid of empirical consequences, and we could treat it as just a classical probability distribution?”

I’m not sure, but The Ghost in the Quantum Turing Machine will have to stand as my last word (or rather, last many words) on the subject for now.

I guess I’ll promote this to a top-level post…

82. Mateus Araújo Says:

RandomOracle #78:

In my opinion there is only a problem with interpretation of probabilities in quantum mechanics. Without quantum mechanics, the world is fundamentally deterministic, and therefore the only kind of probabilities that can exist are subjective probabilities. And Bayesian probabilities are a perfect formalisation and interpretation of subjective probabilities. The observer role in the Bayesian framework is not mysterious at all, as the probabilities are really just the beliefs of an observer.

The problem appears because in quantum mechanics the probabilities are, as far as we can tell, objective. And making sense of objective probabilities is something that the philosophers have, again in my opinion, failed to do. There are several interpretations of probability that try to deal with them, which are unsatisfactory for various reasons.

The fundamental problem with the interpretations of objective probability that I know is that they are all classical, whereas the origin of the objective probabilities is quantum. On the other hand, most interpretations of quantum mechanics do not even try to address the question of what are these objective probabilities.

The only attempt I know of solving this problem is the Deutsch-Wallace theorem in the Many-Worlds interpretation, and that’s why I’m so interested in investigating it.

83. Scott Says:

Lou Scheffer #80:

You are entirely correct that this is (usually) the way humans learn to build hardware. However, devices designed the other way have a far better track record, since biology specializes in building bigger and bigger systems out of crappy devices that don’t work reliably. Neurons have not improved significantly between insects and humans. We just think better using 10^6 times more equally crappy components.

OK, but maintaining a quantum superposition with crappy components is a very specific challenge—harder than doing classical computation with crappy components—and we have theorems that give us some indication of just how non-crappy the components should be before it starts to work. And the difference is, D-Wave didn’t care if it was orders of magnitude away from the known limits. Meanwhile, Google, IBM, Intel, Rigetti, and IonQ at least seem to be closing in on the limits, though it remains to be seen how much coherence they’ll be able to maintain as they scale up.

84. Scott Says:

someone #79: Well, the whole thing about a “spectrum” is that it can be really hard to know for sure whether someone is on it. I’m curious: if someone is nerdy, uncomfortable in many types of social situations, and fond of deploying logical arguments in situations where most people aren’t—and if (let’s suppose) they also rock back and forth and fidget a lot, make funny hand movements, and are hypersensitive to starchy or uncomfortable clothes—then what else could you learn about the person that would let you say with confidence that they were not on the Asperger spectrum?

85. Scott Says:

RandomOracle #78: From my standpoint, the reason why classical probability theory is so much less problematic is that, even in a world where all the particles and fields had a definite “real” configuration at any given time—indeed, even in a world that was completely classical and deterministic—it would still be easy to tell a story about why rational agents would develop and use Bayesian probability theory to manage their uncertainty. Whereas, for it to be a good idea for those agents to use quantum mechanics, something different needs to be true about the basic architecture of the world—so what is that something?

I think video of the Messenger Lectures should be available at some point, but I don’t know where or when.

86. Michael Says:

@Scott #84: the counterargument is that overusing it trivializes what people are going through. Look at your own life: there are forms of OCD that cause boys and girls to obsess about sexually harassing or raping people. If you had known that growing up, you might have felt less alone. But you didn’t know that, since everyone said “I’m so OCD” when they liked to keep their socks in the drawer a certain way.

87. Scott Says:

Michael #86: Actually, I’m not sure at all that that’s why I didn’t feel less alone, but OK, point taken. 🙂

And I have it on no less than the authority of Scott Alexander—maybe the one guy who’s infiltrated professional psychiatry while maintaining a worldview fundamentally similar to mine—that the DSM criteria for “spectrum disorders” are indeed just as arbitrary and subjective as they look to an outsider, and that psychiatrists are as confused about them as anyone else.

88. gerben Says:

The Many-Worlds and Bohmian people seem to me to be playing the same game as the people who historically tried to interpret special relativity as an apparent effect due to contraction of atoms, in order to save some pre-conceived notion of absolute time, thereby missing the fundamental point. In the case of quantum mechanics, that point is that it is a probabilistic theory with respect to the knowledge of an observer. Now, in my view, QBism == Copenhagen under the isomorphism of replacing oracle sentences by Bohr with oracle sentences by Fuchs. But both at least take this fundamental point seriously.

Which brings me to your “complex numbers or get out” question. Suppose you accept observers (consciousness, Bayesian agents, or whatever term you use) as a fundamental part of reality/science; rephrasing it: “there is no universe/reality if there isn’t something that experiences it.” That is already an accepted line of reasoning in multiverse arguments, so why not apply it to the fundamental nature of physical law? Then the question becomes: what is the most general form of physical law for such an observer?

Simplifying the question of what an observer is to a Bayesian agent, you can ask: what is the most fundamental mathematical building block needed for a Bayesian agent? This is a question much like: what is the most fundamental building block of space? Initially you think of vectors, but a more precise mathematical analysis reveals the existence of something more fundamental, the spinor (in a way, the square root of a vector). In the same way, a more detailed analysis of probability theory reveals the existence of something more fundamental with a 2-norm (again, in a way, the square root of a classical probability). Now I would like to see some classification of probability theories. I suspect it’s true that the complex vector with the 2-norm is, in a definite way, the most elementary representation. But maybe by letting go of some assumptions we can get even more fundamental representations, which would be a promising starting point for theoretical physics.

To my mind this way of thinking resolves a lot of fundamental problems. All the questions that plague the classical notion of physics—“why is there something?”, “who/what created it, and why does that exist?”, “are we living in a simulation?”, “does it matter what substrate/encoding is used for the simulation?”—evaporate by placing observer/experience at the same level as reality. Which is what quantum mechanics seems to tell us.

89. Gatekeepers is back Says:

I agree with Matthew Ocko on this one.

What he -Matthew- doesn’t realize is that most academics are this way; only Scott has the guts to say in public things that most academics only say in “petit comité”. So in a way, just as Trump’s is the most transparent White House in history, one could make the argument that Scott is one of the most transparent academics we have.

Keep up with the good trolling Scott!!

90. Aula Says:

Scott #85:

I think there’s some confusion here that should be cleared up. When you say you are fine with Bayesian probability in the classical case, you are pretty clearly talking about the objectivist Bayesian interpretation of probability, and I’m guessing you also have no problem with objectivist Bayesianism in the quantum realm. However, QBism specifically uses the subjectivist Bayesian interpretation (also known as “personalist” or “Dutch book” Bayesianism), and almost all of your earlier arguments against QBism are really arguments against subjectivist Bayesianism which apply equally well in a purely classical setting. Note that “Bayesian probability” by itself doesn’t imply any interpretation of probability, it just means probabilities are updated according to Bayes’ theorem. Actually it would probably be more accurate to refer to QBism as “Quantum subjectivism” because (as far as I understand) the subjective interpretation of probability is essential to the theory while the specific rules for updating probabilities are merely an implementation detail and could just as well be non-Bayesian.

91. Scott Says:

Aula #90: No, I don’t see how subjective vs. objective is relevant to my point. I was simply saying: take a deterministic cellular automaton, like Conway’s Game of Life. Let it run until intelligent beings arise in the life-world. Then it’s very easy to understand why those intelligent beings would most likely be approximate Bayesians about whatever they didn’t know. Subjective Bayesians, of course, because there’s no objective probability in their world. By contrast, it seems exceedingly unlikely to find these beings using the Born rule, unless something about their world is actually quantum.
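Scott’s contrast here can be made concrete in a few lines: a stochastic matrix preserves the 1-norm of a probability vector (the world of a classical Bayesian agent), while a unitary matrix preserves the 2-norm of an amplitude vector and allows destructive interference, which no stochastic evolution can mimic. A minimal sketch of my own (not from the thread), using the standard Hadamard gate as the “quantum coin”:

```python
import math

def apply(matrix, vec):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [matrix[0][0] * vec[0] + matrix[0][1] * vec[1],
            matrix[1][0] * vec[0] + matrix[1][1] * vec[1]]

# Classical: a doubly stochastic "coin randomizer" (preserves the 1-norm).
stochastic = [[0.5, 0.5],
              [0.5, 0.5]]
p = [1.0, 0.0]                 # start: definitely heads
p = apply(stochastic, p)       # -> [0.5, 0.5]
p = apply(stochastic, p)       # still [0.5, 0.5]: randomness never "undoes" itself

# Quantum: the Hadamard gate, a unitary 2-norm analogue of that coin.
h = 1 / math.sqrt(2)
hadamard = [[h,  h],
            [h, -h]]
a = [1.0, 0.0]                 # start: all amplitude on |0>
a = apply(hadamard, a)         # measurement probs |a_i|^2 would be [0.5, 0.5]
a = apply(hadamard, a)         # interference: amplitudes recombine to ~[1, 0]

probs = [abs(x) ** 2 for x in a]
print(probs)                   # ~[1.0, 0.0], up to floating-point rounding
```

Two classical coin flips in a row leave you just as uncertain as one; two Hadamards in a row return you to the starting state. That kind of “undoing” via cancellation of amplitudes is exactly what the 1-norm world of a Bayesian agent can never produce.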

92. someone Says:

Scott #84,87 Michael #86

> I’m curious: if someone is nerdy, uncomfortable in many
> types of social situations, and fond of deploying logical
> arguments in situations where most people aren’t—and if
> (let’s suppose) they also rock back and forth and fidget
> a lot, make funny hand movements, and are
> hypersensitive to starchy or uncomfortable clothes—
> then what else could you learn about the person that
> would let you say with confidence that they were not on
> the Asperger spectrum?

Nerdy: DEFINITELY NOT AT ALL an Asperger symptom.
uncomfortable in social situations: NOT AT ALL
deploys logical arguments: NOT AT ALL

rock & fidget, make funny hand movements: this is more like it — depends on how vigorous and uncontrollable. If they have vigorous frequent hand flapping that they cannot stop and do in front of everyone, yeah, that’s a sign.
hypersensitive to clothes: also a sign.

The thing is, I’ve never ever seen a computer scientist or university prof show the last two. Never. Maybe they could hide hypersensitivity to clothes, but I’ve never noticed someone look physically uncomfortable in their skin.
Flapping, well, you can’t hide that. Come on.

The first 3: not related at all. That’s just the prejudice I’m talking about.

And about the ‘spectrum’, yeah, it’s a copout to hide the fact psychiatrists don’t know anything, as you point out. No one talks about a spectrum of pneumonia or cancer.

Don’t know who Scott Alexander is. Who is he?

93. Jeremy Stanson Says:

Scott # 83 says “D-Wave didn’t care if it was orders of magnitudes away from the known limits”

This betrays that either you have a clear bias against D-Wave, or you’re surprisingly naive! Or maybe both. It’s one thing to be skeptical of D-Wave’s claims and to be critical of their approach (those are both productive forms of engagement), but counterproductive to make sensational and baseless assertions like “D-Wave doesn’t care about noise.” I really have never understood why people who are critical of the fact that D-Wave’s qubits are noisy go on to claim that D-Wave doesn’t care about this. Of course D-Wave cares about this! They aren’t villains. They’re not trying to put one over on everybody. They’re trying to make the best system they can. I think everyone engaged in D-Wave skepticism and criticism should continue to do so, but would do well to stop assuming D-Wave is trying to cut corners and instead look at the work with the assumption that D-Wave is trying to optimize everything (including noise) throughout their development.

The main difference from other projects is that, for D-Wave, scale / number of qubits carries a higher weighting in that optimization than noise – but this is fundamentally rooted in what seems to me to be a very logical and practical view: noise at 50+ qubits will be intrinsically higher than noise at ~2 qubits, so it doesn’t really matter what you can do at ~2 qubits if it’s not representative of the system at 50+ qubits. They’re building the 50+ qubit system to see what it looks like, characterize the noise in a system at that scale, and heavily iterate on that design to vastly improve the noise along the way. Compare the design of their system at 128 qubits to that of ~2000 qubits today – the qubits are radically smaller. The whole system is significantly optimized. Each qubit in the 2000-qubit system sees considerably less noise than each qubit in the 128-qubit system – but it was building and testing the 128-qubit system that taught them how to do this.

“D-Wave doesn’t care about noise!” What nonsense. They’re battling noise just like everyone else – they’re just tackling the problem in a different way.

94. Scott Says:

Jeremy #93: All else equal, of course D-Wave prefers to have lower noise rather than higher noise. Especially now that Geordie Rose is gone from the company, and with him his claims like “the ideas behind gate model QC are not good ideas,” which are a thousand times more sensational than anything I’ve ever said, not that people like you would ever hold him responsible for them.

But the key problem is that D-Wave didn’t care enough about low noise to do the one thing that mattered: namely, refraining from claiming they had a “useful QC” before they got the noise low enough that they could scale to at least a few dozen qubits and clearly see that a quantum computation (in the sense of something hard to simulate classically, for reasons of interference and entanglement) was happening.

95. Aula Says:

Scott #91:

No, I don’t see how subjective vs. objective is relevant to my point.

Let me try again. I didn’t write anything about subjective (specific to each individual) versus objective (independent of any individual) probability, except once when I sloppily wrote “subjective” instead of “subjectivist”. I wrote about subjectivist versus objectivist interpretation of the meaning of the subjective probability. QBism is manifestly subjectivist, and (to repeat myself) your arguments against QBism in #76 are mostly arguments against subjectivist (not subjective!) probability. Then in #78, RandomOracle – apparently ignorant of the significant difference between “subjective” and “subjectivist” in this context – asked rather confused questions. Your response in #85, far from trying to understand and remove this confusion, seems to me like it’s not even meant to answer any particular question that was asked; I don’t think that’s very helpful, hence my comment #90.

I was simply saying

Yes, I understood (and mostly agree with) what you were saying, I just don’t get why you were saying it.

96. Ruth Kastner Says:

Hi Scott:

You might want to consider the Transactional Interpretation–it actually works. See, e.g.:
https://arxiv.org/abs/1709.09367 (published in IJQM);
and with Cramer: https://arxiv.org/abs/1711.04501

Best wishes,
RK

97. Rick Mayforth Says:

Back to the original tweet, what I see is a standard leftist Alinsky-style attack trying to discredit Scott’s work by attacking him personally. Such should never be tolerated.

98. Scott Says:

Rick #97: I agree about not tolerating personal attacks—especially about not tolerating the widespread yet idiotic presumption that they suffice to answer an argument—but I don’t see what it has to do with leftism. Ocko’s attack on me—as a pointy-headed intellectual whose criticisms of a company’s claims need not be answered if I’m not founding companies myself—echoes a familiar right-wing trope. And of course, the right, including the current “leadership” of the US, has not been entirely innocent of personal attacks, in the same sense in which Jabba the Hutt is not entirely innocent of gangsterism, gluttony, or female enslavement.

99. Kenny Says:

The really important question is whether Randall ever out-nerded Zach!

100. Jeremy Stanson Says:

Scott #94,

Whoa, I missed your reply so long ago.

I agree that D-Wave jumped the gun in their announcements about having a QC, but their motivations for doing so are obvious and the field clearly has not suffered.

My point is that “before they got the noise low enough that they could scale” is not a thing. Scaling adds noise. Always. It always will. You can’t take a system with noise X and then scale it while keeping noise X. Two “noiseless” qubits are just as useless as 1000 noisy qubits, probably more so. This is a simple but critical insight that D-Wave had, and it has led them to build the most sophisticated superconducting integrated circuits ever.