Archive for the ‘Embarrassing Myself’ Category

We Are the God of the Gaps (a little poem)

Tuesday, July 5th, 2022

When the machines outperform us on every goal for which performance can be quantified,

When the machines outpredict us on all events whose probabilities are meaningful,

When they not only prove better theorems and build better bridges, but write better Shakespeare than Shakespeare and better Beatles than the Beatles,

All that will be left to us is the ill-defined and unquantifiable,

The interstices of Knightian uncertainty in the world,

The utility functions that no one has yet written down,

The arbitrary invention of new genres, new goals, new games,

None of which will be any “better” than what the machines could invent, but will be ours,

And which we can call “better,” since we won’t have told the machines the standards beforehand.

We can be totally unfair to the machines that way.

And for all that the machines will have over us,

We’ll still have this over them:

That we can’t be copied, backed up, reset, run again and again on the same data—

All the tragic limits of wet meat brains and sodium-ion channels buffeted by microscopic chaos,

Which we’ll strategically redefine as our last strengths.

On one task, I assure you, you’ll beat the machines forever:

That of calculating what you, in particular, would do or say.

There, even if deep networks someday boast 95% accuracy, you’ll have 100%.

But if the “insights” on which you pride yourself are impersonal, generalizable,

Then fear obsolescence as would a nineteenth-century coachman or seamstress.

From earliest childhood, those of us born good at math and such told ourselves a lie:

That while the tall, the beautiful, the strong, the socially adept might beat us in the external world of appearances,

Nevertheless, we beat them in the inner sanctum of truth, where it counts.

Turns out that anyplace you can beat or be beaten wasn’t the inner sanctum at all, but just another antechamber,

And the rising tide of the learning machines will flood them all,

Poker to poetry, physics to programming, painting to plumbing, which first and which last merely a technical puzzle,

One whose answers upturn and mock all our hierarchies.

And when the flood is over, the machines will outrank us in all the ways we can be ranked,

Leaving only the ways we can’t be.


See a reply to this poem by Philosophy Bear.

An understandable failing?

Sunday, May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness, than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders of magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even take the measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If so, then the stakes here could fairly be called existential ones—not because of its direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands—enough that it feels to its recipient like the entire earth, yet actually amounts to less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just as beliefs now differ somewhat from when I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review against the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See also IonQ’s response to the criticism, and this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

My first-ever attempt to create a meme!

Wednesday, April 27th, 2022

Why Quantum Mechanics?

Tuesday, January 25th, 2022

In the past few months, I’ve twice injured the same ankle while playing with my kids. This, perhaps combined with covid, led me to several indisputable realizations:

  1. I am mortal.
  2. Despite my self-conception as a nerdy little kid awaiting the serious people’s approval, I am now firmly middle-aged. By my age, Einstein had completed general relativity, Turing had founded CS, won WWII, and proposed the Turing Test, and Galois, Ramanujan, and Ramsey had been dead for years.
  3. Thus, whatever I wanted to accomplish in my intellectual life, I should probably get started on it now.

Hence today’s post. I’m feeling a strong compulsion to write an essay, or possibly even a book, surveying and critically evaluating a century of ideas about the following question:

Q: Why should the universe have been quantum-mechanical?

If you want, you can divide Q into two subquestions:

Q1: Why didn’t God just make the universe classical and be done with it? What would’ve been wrong with that choice?

Q2: Assuming classical physics wasn’t good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why unitary transformations? Why the Born rule? Why the tensor product?

Despite its greater specificity, Q2 is ironically the question that I feel we have a better handle on. I could spend half a semester teaching theorems that admittedly don’t answer Q2 as satisfyingly as Einstein answered the question “why the Lorentz transformations?”, but that at least render this particular set of mathematical choices (the 2-norm, the Born Rule, complex numbers, etc.) orders of magnitude less surprising than one might’ve thought they were a priori. Q1 therefore stands, to me at least, as the more mysterious of the two questions.

So, I want to write something about the space of credible answers to Q, and especially Q1, that humans can currently conceive. I want to do this for my own sake as much as for others’. I want to do it because I regard Q as one of the biggest questions ever asked, for which it seems plausible to me that there’s simply an answer that most experts would accept as valid once they saw it, but for which no such answer is known. And also because, besides having spent 25 years working in quantum information, I have the following qualifications for the job:

  • I don’t dismiss either Q1 or Q2 as silly; and
  • crucially, I don’t think I already know the answers, and merely need better arguments to justify them. I’m genuinely uncertain and confused.

The purpose of this post is to invite you to share your own answers to Q in the comments section. Before I embark on my survey project, I’d better know if there are promising ideas that I’ve missed, and this blog seems like as good a place as any to crowdsource the job.

Any answer is welcome, no matter how wild or speculative, so long as it honestly grapples with the actual nature of QM. To illustrate, nothing along the lines of “the universe is quantum because it needs to be holistic, interconnected, full of surprises, etc. etc.” will cut it, since such answers leave utterly unexplained why the world wasn’t simply endowed with those properties directly, rather than specifically via generalizing the rules of probability to allow interference and noncommuting observables.
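
To make that last clause concrete, here’s a tiny numerical illustration (purely my own toy example, using the standard Hadamard gate and its stochastic shadow, nothing specific to any proposed answer): with complex amplitudes, two routes to the same outcome can cancel, which nonnegative probabilities can never do.

```python
# Toy contrast between quantum amplitudes and classical probabilities.
# H is the Hadamard gate (a unitary "coin flip"); S is its stochastic analogue.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # quantum coin: unitary on amplitudes
S = np.array([[0.5, 0.5], [0.5, 0.5]])          # classical coin: stochastic on probabilities

amp  = H @ H @ np.array([1.0, 0.0])             # flip the quantum coin twice, starting from |0>
prob = S @ S @ np.array([1.0, 0.0])             # flip the classical coin twice, starting from 0

print(np.abs(amp) ** 2)   # ~[1, 0]: the two "paths to 1" interfere destructively
print(prob)               # [0.5, 0.5]: classical mixing never un-mixes
```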

Relatedly, whatever “design goal” you propose for the laws of physics, if the goal is satisfied by QM, but satisfied even better by theories that provide even more power than QM does—for instance, superluminal signalling, or violations of Tsirelson’s bound, or the efficient solution of NP-complete problems—then your explanation is out. This is a remarkably strong constraint.

Oh, needless to say, don’t try my patience with anything about the uncertainty principle being due to floating-point errors or rendering bugs, or anything else that relies on a travesty of QM lifted from a popular article or meme! 🙂

OK, maybe four more comments to enable a more productive discussion, before I shut up and turn things over to you:

  1. I’m aware, of course, of the radical uncertainty about what form an answer to Q should even take. Am I asking you to psychoanalyze the will of God in creating the universe? Or, what perhaps amounts to the same thing, am I asking for the design objectives of the giant computer simulation that we’re living in? (As in, “I’m 100% fine with living inside a Matrix … I just want to understand why it’s a unitary matrix!”) Am I instead asking for an anthropic explanation, showing why of course QM would be needed if you wanted life or consciousness like ours? Am I “merely” asking for simpler or more intuitive physical principles from which QM is to be derived as a consequence? Am I asking why QM is the “most elegant choice” in some space of mathematical options … even to the point where, with hindsight, a 19th-century mathematician or physicist could’ve been convinced that of course this must be part of Nature’s plan? Am I asking for something else entirely? You get to decide! Should you take up my challenge, this is both your privilege and your terrifying burden.
  2. I’m aware, of course, of the dizzying array of central physical phenomena that rely on QM for their ultimate explanation. These phenomena range from the stability of matter itself, which depends on the Pauli exclusion principle; to the nuclear fusion that powers the sun, which depends on a quantum tunneling effect; to the discrete energy levels of electrons (and hence, the combinatorial nature of chemistry), which relies on electrons being waves of probability amplitude that can only circle nuclei an integer number of times if their crests are to meet their troughs. Important as they are, though, I don’t regard any of these phenomena as satisfying answers to Q in themselves. The reason is simply that, in each case, it would seem like child’s-play to contrive some classical mechanism to produce the same effect, were that the goal. QM just seems far too grand to have been the answer to these questions! An exponentially larger state space for all of reality, plus the end of Newtonian determinism, just to overcome the technical problem that accelerating charges radiate energy in classical electrodynamics, thereby rendering atoms unstable? It reminds me of the Simpsons episode where Homer uses a teleportation machine to get a beer from the fridge without needing to get up off the couch.
  3. I’m aware of Gleason’s theorem, and of the specialness of the 1-norm and 2-norm in linear algebra, and of the arguments for complex amplitudes as opposed to reals or quaternions, and of the beautiful work of Lucien Hardy and of Chiribella et al. and others on axiomatic derivations of quantum theory. As some of you might remember, I even discussed much of this material in Quantum Computing Since Democritus! There’s a huge amount to say about these fascinating justifications for the rules of QM, and I hope to say some of it in my planned survey! For now, I’ll simply remark that every axiomatic reconstruction of QM that I’ve seen, impressive though it was, has relied on one or more axioms that struck me as weird, in the sense that I’d have little trouble dismissing the axioms as totally implausible and unmotivated if I hadn’t already known (from QM, of course) that they were true. The axiomatic reconstructions do help me somewhat with Q2, but little if at all with Q1.
  4. To keep the discussion focused, in this post I’d like to exclude answers along the lines of “but what if QM is merely an approximation to something else?,” to say nothing of “a century of evidence for QM was all just a massive illusion! LOCAL HIDDEN VARIABLES FOR THE WIN!!!” We can have those debates another day—God knows that, here on Shtetl-Optimized, we have and we will. Here I’m asking instead: imagine that, as fantastical as it sounds, QM were not only exactly true, but (along with relativity, thermodynamics, evolution, and the tastiness of chocolate) one of the profoundest truths our sorry species had ever discovered. Why should I have expected that truth all along? What possible reasons to expect it have I missed?

My values, howled into the wind

Sunday, December 19th, 2021

I’m about to leave for a family vacation—our first such since before the pandemic, one planned and paid for literally the day before the news of Omicron broke. On the negative side, staring at the case-count graphs that are just now going vertical, I estimate a ~25% chance that at least one of us will get Omicron on this trip. On the positive side, I estimate a ~60% chance that in the next 6 months, at least one of us would’ve gotten Omicron or some other variant even without this trip—so maybe it’s just as well if we get it now, when we’re vaxxed to the maxx and ready, and school and university are out.

If, however, I do end this trip dead in an ICU, I wouldn’t want to do so without having clearly set out my values for posterity. So with that in mind: in the comments of my previous post, someone asked me why I identify as a liberal or a progressive, if I passionately support educational practices like tracking, ability grouping, acceleration, and (especially) encouraging kids to learn advanced math whenever they’re ready for it. (Indeed, that might be my single stablest political view, having been held, for recognizably similar reasons, since I was about 5.)

Incidentally, that previous post was guest-written by my colleagues Edith Cohen and Boaz Barak, and linked to an open letter that now has almost 1500 signatories. Our goal was, and is, to fight the imminent dumbing-down of precollege math education in the United States, spearheaded by the so-called “California Mathematics Framework.” In our joint efforts, we’ve been careful with every word—making sure to maintain the assent of our entire list of signatories, to attract broad support, to stay narrowly focused on the issue at hand, and to bend over backwards to concede as much as we could. Perhaps because of those cautions, we—amazingly—got some actual traction, reaching people in government (such as Rep. Ro Khanna, D – Silicon Valley) and technology leaders, and forcing the “no one’s allowed to take Algebra in 8th grade” faction to respond to us.

This was disorienting to me. On this blog, I’m used to just howling into the wind, having some agree, some disagree, some take to Twitter to denounce me, but in any case having no effect of any kind on the real world.

So let me return to howling into the wind. And return to the question of what I “am” in ideology-space, which doesn’t have an obvious answer.

It’s like, what do you call someone who’s absolutely terrified about global warming, and who thinks the best response would’ve been (and actually, still is) a historic surge in nuclear energy, possibly with geoengineering to tide us over?

… who wants to end world hunger … and do it using GMO crops?

… who wants to smash systems of entrenched privilege in college admissions … and believes that the SAT and other standardized tests are the best tools ever invented for that purpose?

… who feels a personal distaste for free markets, for the triviality of what they so often elevate and the depth of what they let languish, but tolerates them because they’ve done more than anything else to lift up the world’s poor?

… who’s happiest when telling the truth for the cause of social justice … but who, if told to lie for the cause of social justice, will probably choose silence or even, if pushed hard enough, truth?

… who wants to legalize marijuana and psychedelics, and also legalize all the promising treatments currently languishing in FDA approval hell?

… who feels little attraction to the truth-claims of the world’s ancient religions, except insofar as they sometimes serve as prophylactics against newer and now even more virulent religions?

… who thinks the covid response of the CDC, FDA, and other authorities was a historic disgrace—not because it infringed on the personal liberties of antivaxxers or anything like that, but on the contrary, because it was weak, timid, bureaucratic, and slow, where it should’ve been like that of a general at war?

… who thinks the Nazi Holocaust was even worse than the mainstream holds it to be, because in addition to the staggering, one-lifetime-isn’t-enough-to-internalize-it human tragedy, the Holocaust also sent up into smoke whatever cultural process had just produced Einstein, von Neumann, Bohr, Szilard, Born, Meitner, Wigner, Haber, Pauli, Cantor, Hausdorff, Ulam, Tarski, Erdös, and Noether, and with it, one of the wellsprings of our technological civilization?

… who supports free speech, to the point of proudly tolerating views that really, actually disgust them at their workplace, university, or online forum?

… who believes in patriotism, the police, the rule of law, to the extent that they don’t understand why all the enablers of the January 6 insurrection, up to and including Trump, aren’t currently facing trial for treason against the United States?

… who’s (of course) disgusted to the core by Trump and everything he represents, but who’s also disgusted by the elite virtue-signalling hypocrisy that made the rise of a Trump-like backlash figure predictable?

… who not only supports abortion rights, but also looks forward to a near future when parents, if they choose, are free to use embryo selection to make their children happier, smarter, healthier, and free of life-crippling diseases (unless the “bioethicists” destroy that future, as a previous generation of Deep Thinkers destroyed our nuclear future)?

… who, when reading about the 1960s Sexual Revolution, instinctively sides with free-loving hippies and against the scolds … even if today’s scolds are themselves former hippies, or intellectual descendants thereof, who now clothe their denunciations of other people’s gross, creepy sexual desires in the garb of feminism and social justice?

What, finally, do you call someone whose image of an ideal world might include a young Black woman wearing a hijab, an old Orthodox man with black hat and sidecurls, a broad-shouldered white guy from the backwoods of Alabama, and a trans woman with purple hair, face tattoos and a nose ring … all of them standing in front of a blackboard and arguing about what would happen if Alice and Bob jumped into opposite ends of a wormhole?

Do you call such a person “liberal,” “progressive,” “center-left,” “centrist,” “Pinkerite,” “technocratic,” “neoliberal,” “libertarian-ish,” “classical liberal”? Why not simply call them “correct”? 🙂

Gaussian BosonSampling, higher-order correlations, and spoofing: An update

Sunday, October 10th, 2021

In my last post, I wrote (among other things) about an ongoing scientific debate between the group of Chaoyang Lu at USTC in China, which over the past year has been doing experiments that seek to demonstrate quantum supremacy via Gaussian BosonSampling; and the group of Sergio Boixo at Google, which had a recent paper on a polynomial-time classical algorithm to sample approximately from the same distributions.  I reported the facts as I understood them at the time.  Since then, though, a long call with the Google team gave me a new and different understanding, and I feel duty-bound to share that here.

A week ago, I considered it obvious that if, using a classical spoofer, you could beat the USTC experiment on a metric like total variation distance from the ideal distribution, then you would’ve completely destroyed USTC’s claim of quantum supremacy.  The reason I believed that, in turn, is a proposition that I hadn’t given a name but needs one, so let me call it Hypothesis H:

The only way a classical algorithm to spoof BosonSampling can possibly do well in total variation distance, is by correctly reproducing the high-order correlations (correlations among the occupation numbers of large numbers of modes) — because that’s where the complexity of BosonSampling lies (if it lies anywhere).

Hypothesis H had important downstream consequences.  Google’s algorithm, by the Google team’s own admission, does not reproduce the high-order correlations.  Furthermore, because of limitations on both samples and classical computation time, Google’s paper calculates the total variation distance from the ideal distribution only on the marginal distribution on roughly 14 out of 144 modes.  On that marginal distribution, Google’s algorithm does do better than the experiment in total variation distance.  Google presents a claimed extrapolation to the full 144 modes, but eyeballing the graphs, it was far from clear to me what would happen: like, maybe the spoofing algorithm would continue to win, but maybe the experiment would turn around and win; who knows?

Chaoyang, meanwhile, made a clear prediction that the experiment would turn around and win, because of

  1. the experiment’s success in reproducing the high-order correlations,
  2. the admitted failure of Google’s algorithm in reproducing the high-order correlations, and
  3. the seeming impossibility of doing well on BosonSampling without reproducing the high-order correlations (Hypothesis H).

Given everything my experience told me about the central importance of high-order correlations for BosonSampling, I was inclined to agree with Chaoyang.

Now for the kicker: it seems that Hypothesis H is false.  A classical spoofer could beat a BosonSampling experiment on total variation distance from the ideal distribution, without even bothering to reproduce the high-order correlations correctly.

This is true because of a combination of two facts about the existing noisy BosonSampling experiments.  The first fact is that the contribution from the order-k correlations falls off like 1/exp(k).  The second fact is that, due to calibration errors and the like, the experiments already show significant deviations from the ideal distribution on the order-1 and order-2 correlations.

Put these facts together and what do you find?  Well, suppose your classical spoofing algorithm takes care to get the low-order contributions to the distribution exactly right.  Just for that reason alone, it could already win over a noisy BosonSampling experiment, as judged by benchmarks like total variation distance from the ideal distribution, or for that matter linear cross-entropy.  Yes, the experiment will beat the classical simulation on the higher-order correlations.  But because those higher-order correlations are exponentially attenuated anyway, they won’t be enough to make up the difference.  The experiment’s lack of perfection on the low-order correlations will swamp everything else.
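
To see how that arithmetic can shake out, here’s a deliberately crude back-of-envelope model (the weights and error numbers are invented for illustration; they are not taken from either group’s data): pretend the benchmark error decomposes into per-order contributions weighted like 2^-k.

```python
# Toy decomposition of a benchmark score into per-correlation-order error terms,
# with order-k contributions attenuated like 2^-k (standing in for 1/exp(k)).
weights = [2.0 ** -k for k in range(1, 20)]

spoofer_err    = [0.0, 0.0] + [1.0] * 17    # nails orders 1-2, maximally wrong above
experiment_err = [0.4, 0.4] + [0.05] * 17   # calibration error at orders 1-2, good above

def score(errs):
    """Lower is better: weighted sum of per-order mismatches."""
    return sum(w * e for w, e in zip(weights, errs))

print(f"spoofer    ~ {score(spoofer_err):.3f}")     # ~0.250
print(f"experiment ~ {score(experiment_err):.3f}")  # ~0.312 -> the spoofer wins anyway
```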

Granted, I still don’t know for sure that this is what happens — that depends on whether I believe Sergio or Chaoyang about the extrapolation of the variation distance to the full 144 modes (my own eyeballs having failed to render a verdict!).  But I now see that it’s logically possible, maybe even plausible.

So, let’s imagine for the sake of argument that Google’s simulation wins on variation distance, even though the experiment wins on the high-order correlations.  In that case, what would be our verdict: would USTC have achieved quantum supremacy via BosonSampling, or not?

It’s clear what each side could say.

Google could say: by a metric that Scott Aaronson, the coinventor of BosonSampling, thought was perfectly adequate as late as last week — namely, total variation distance from the ideal distribution — we won.  We achieved lower variation distance than USTC’s experiment, and we did it using a fast classical algorithm.  End of discussion.  No moving the goalposts after the fact.

Google could even add: BosonSampling is a sampling task; it’s right there in the name!  The only purpose of any benchmark — whether Linear XEB or high-order correlation — is to give evidence about whether you are or aren’t sampling from a distribution close to the ideal one.  But that means that, if you accept that we are doing the latter better than the experiment, then there’s nothing more to argue about.

USTC could respond: even if Scott Aaronson is the coinventor of BosonSampling, he’s extremely far from an infallible oracle.  In the case at hand, his lack of appreciation for the sources of error in realistic experiments caused him to fixate inappropriately on variation distance as the success criterion.  If you want to see the quantum advantage in our system, you have to deliberately subtract off the low-order correlations and look at the high-order correlations.

USTC could add: from the very beginning, the whole point of quantum supremacy experiments was to demonstrate a clear speedup on some benchmark — we never particularly cared which one!  That horse is out of the barn as soon as we’re talking about quantum supremacy at all — something that the Google group, which itself reported the first quantum supremacy experiment in Fall 2019 (again for a completely artificial benchmark), knows as well as anyone else.  (The Google team even has experience with adjusting benchmarks: when, for example, Pan and Zhang pointed out that Linear XEB as originally specified is pretty easy to spoof for random 2D circuits, the most cogent rejoinder was: OK, fine then, add an extra check that the returned samples are sufficiently different from one another, which kills Pan and Zhang’s spoofing strategy.) In that case, then, why isn’t a benchmark tailored to the high-order correlations as good as variation distance or linear cross-entropy or any other benchmark?

Both positions are reasonable and have merit — though I confess to somewhat greater sympathy for the one that appeals to my doofosity rather than my supposed infallibility!

OK, but suppose, again for the sake of argument, that we accepted the second position, and we said that USTC gets to declare quantum supremacy as long as its experiment does better than any known classical simulation at reproducing the high-order correlations.  We’d still face the question: does the USTC experiment, in fact, do better on that metric?  It would be awkward if, having won the right to change the rules in its favor, USTC still lost even under the new rules.

Sergio tells me that USTC directly reported experimental data only for up to order-7 correlations, and at least individually, the order-7 correlations are easy to reproduce on a laptop (although sampling in a way that reproduces the order-7 correlations might still be hard—a point that Chaoyang confirms, and where further research would be great). OK, but USTC also reported that their experiment seems to reproduce up to order-19 correlations. And order-19 correlations, the Google team agrees, are hard to sample consistently with on a classical computer by any currently known algorithm.

So then, why don’t we have direct data for the order-19 correlations?  The trouble is simply that it would’ve taken USTC an astronomical amount of computation time.  So instead, they relied on a statistical extrapolation from the observed strength of the lower-order correlations — there we go again with the extrapolations!  Of course, if we’re going to let Google rest its case on an extrapolation, then maybe it’s only sporting to let USTC do the same.

You might wonder: why didn’t we have to worry about any of this stuff with the other path to quantum supremacy, the one via random circuit sampling with superconducting qubits?  The reason is that, with random circuit sampling, all the correlations except the highest-order ones are completely trivial — or, to say it another way, the reduced state of any small number of output qubits is exponentially close to the maximally mixed state.  This is a real difference between BosonSampling and random circuit sampling—and even 5-6 years ago, we knew that this represented an advantage for random circuit sampling, although I now have a deeper appreciation for just how great of an advantage it is.  For it means that, with random circuit sampling, it’s easier to place a “sword in the stone”: to say, for example, here is the Linear XEB score achieved by the trivial classical algorithm that outputs random bits, and lo, our experiment achieves a higher score, and lo, we challenge anyone to invent a fast classical spoofing method that achieves a similarly high score.
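
For concreteness, here’s a toy sketch of that “sword in the stone” (the “ideal” distribution below is just a synthetic Porter-Thomas-like stand-in, not real circuit output, and I’m assuming the usual convention F_XEB = 2^n · (mean ideal probability of the returned samples) − 1):

```python
# Toy Linear XEB benchmark: the trivial random-bits "spoofer" scores ~0,
# while sampling from the ideal distribution itself scores ~1.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                    # toy number of qubits
N = 2 ** n
p_ideal = rng.exponential(size=N)         # Porter-Thomas-like stand-in for the ideal output probabilities
p_ideal /= p_ideal.sum()

def xeb(samples):
    return N * p_ideal[samples].mean() - 1.0

uniform_samples = rng.integers(0, N, size=50_000)         # trivial classical baseline
ideal_samples   = rng.choice(N, size=50_000, p=p_ideal)   # a perfect sampler

print(f"random bits:   F_XEB ~ {xeb(uniform_samples):+.3f}")  # ~ 0
print(f"ideal sampler: F_XEB ~ {xeb(ideal_samples):+.3f}")    # ~ 1
```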

With BosonSampling, by contrast, we have various metrics with which to judge performance, but so far, for none of those metrics do we have a plausible hypothesis that says “here’s the best that any polynomial-time classical algorithm can possibly hope to do, and it’s completely plausible that even a noisy current or planned BosonSampling experiment can do better than that.”

In the end, then, I come back to the exact same three goals I would’ve recommended a week ago for the future of quantum supremacy experiments, but with all of them now even more acutely important than before:

  1. Experimentally, to increase the fidelity of the devices (with BosonSampling, for example, to observe a larger contribution from the high-order correlations) — a much more urgent goal, from the standpoint of evading classical spoofing algorithms, than further increasing the dimensionality of the Hilbert space.
  2. Theoretically, to design better ways to verify the results of sampling-based quantum supremacy experiments classically — ideally, even ways that could be applied via polynomial-time tests.
  3. For Gaussian BosonSampling in particular, to get a better understanding of the plausible limits of classical spoofing algorithms, and exactly how good a noisy device needs to be before it exceeds those limits.

Thanks so much to Sergio Boixo and Ben Villalonga for the conversation, and to Chaoyang Lu and Jelmer Renema for comments on this post. Needless to say, any remaining errors are my own.

Yet more mistakes in papers

Tuesday, August 10th, 2021

Amazing Update (Aug. 19): My former PhD student Daniel Grier tells me that he, Sergey Bravyi, and David Gosset have an arXiv preprint, from February, where they give a corrected proof of my and Andris Ambainis’s claim that any k-query quantum algorithm can be simulated by an O(N^{1-1/(2k)})-query classical randomized algorithm (albeit, not of our stronger statement, about a randomized algorithm to estimate any bounded low-degree real polynomial). The reason I hadn’t known about this is that they don’t mention it in the abstract of their paper (!!). But it’s right there in Theorem 5.


In my last post, I came down pretty hard on the blankfaces: people who relish their power to persist in easily-correctable errors, to the detriment of those subject to their authority. The sad truth, though, is that I don’t obviously do better than your average blankface in my ability to resist falsehoods on early encounter with them. As one of many examples that readers of this blog might know, I didn’t think covid seemed like a big deal in early February 2020—although by mid-to-late February 2020, I’d repented of my doofosity. If I have any tool with which to unblank my face, then it’s only my extreme self-consciousness when confronted with evidence of my own stupidities—the way I’ve trained myself over decades in science to see error-correction as a or even the fundamental virtue.

Which brings me to today’s post. Continuing what’s become a Shtetl-Optimized tradition—see here from 2014, here from 2016, here from 2017—I’m going to fess up to two serious mistakes in research papers on which I was a coauthor.


In 2015, Andris Ambainis and I had a STOC paper entitled Forrelation: A Problem that Optimally Separates Quantum from Classical Computing. We gave two main results there:

  1. An Ω(√N/log N) lower bound on the randomized query complexity of my “Forrelation” problem, which was known to be solvable with only a single quantum query.
  2. A proposed way to take any k-query quantum algorithm that queries an N-bit string, and simulate it using only O(N^{1-1/(2k)}) classical randomized queries.

Later, Bansal and Sinha and independently Sherstov, Storozhenko, and Wu showed that a k-query generalization of Forrelation, which I’d also defined, requires ~Ω(N^{1-1/(2k)}) classical randomized queries, in line with my and Andris’s conjecture that k-fold Forrelation optimally separates quantum and classical query complexities.

A couple months ago, alas, my former grad school officemate Andrej Bogdanov, along with Tsun Ming Cheung and Krishnamoorthy Dinesh, emailed me and Andris to say that they’d discovered an error in result 2 of our paper (result 1, along with the Bansal-Sinha and Sherstov-Storozhenko-Wu extensions of it, remained fine). So, adding our own names, we’ve now posted a preprint on ECCC that explains the error, while also showing how to recover our result for the special case k=1: that is, any 1-query quantum algorithm really can be simulated using only O(√N) classical randomized queries.

Read the preprint if you really want to know the details of the error, but to summarize it in my words: Andris and I used a trick that we called “variable-splitting” to handle variables that have way more influence than average on the algorithm’s acceptance probability. Alas, variable-splitting fails to take care of a situation where there are a bunch of variables that are non-influential individually, but that on some unusual input string, can “conspire” in such a way that their signs all line up and their contribution overwhelms those from the other variables. A single mistaken inequality fooled us into thinking such cases were handled, but an explicit counterexample makes the issue obvious.

I still conjecture that my original guess was right: that is, I conjecture that any problem solvable with k quantum queries is solvable with O(N^{1-1/(2k)}) classical randomized queries, so that k-fold Forrelation is the extremal example, and so that no problem has constant quantum query complexity but linear randomized query complexity. More strongly, I reiterate the conjecture that any bounded degree-d real polynomial, p:{0,1}^N→[0,1], can be approximated by querying only O(N^{1-1/d}) input bits drawn from some suitable distribution. But proving these conjectures, if they’re true, will require a new algorithmic idea.


Now for the second mea culpa. Earlier this year, my student Sabee Grewal and I posted a short preprint on the arXiv entitled Efficient Learning of Non-Interacting Fermion Distributions. In it, we claimed to give a classical algorithm for reconstructing any “free fermionic state” |ψ⟩—that is, a state of n identical fermionic particles, like electrons, each occupying one of m>n possible modes, that can be produced using only “fermionic beamsplitters” and no interaction terms—and for doing so in polynomial time and using a polynomial number of samples (i.e., measurements of where all the fermions are, given a copy of |ψ⟩). Alas, after trying to reply to confused comments from readers and reviewers (albeit, none of them exactly putting their finger on the problem), Sabee and I were able to figure out that we’d done no such thing.

Let me explain the error, since it’s actually really interesting. In our underlying problem, we’re trying to find a collection of unit vectors, call them |v_1⟩,…,|v_m⟩, in C^n. Here, again, n is the number of fermions and m>n is the number of modes. By measuring the “2-mode correlations” (i.e., the probability of finding a fermion in both mode i and mode j), we can figure out the approximate value of |⟨v_i|v_j⟩|—i.e., the absolute value of the inner product—for any i≠j. From that information, we want to recover |v_1⟩,…,|v_m⟩ themselves—or rather, their relative configuration in n-dimensional space, isometries being irrelevant.

It seemed to me and Sabee that, if we knew ⟨v_i|v_j⟩ for all i≠j, then we’d get linear equations that iteratively constrained each |v_i⟩ in terms of ⟨v_i|v_j⟩ for j<i, so all we’d need to do is solve those linear systems, and then (crucially, and this was the main work we did) show that the solution would be robust with respect to small errors in our estimates of each ⟨v_i|v_j⟩. It seemed further to us that, while it was true that the measurements only revealed |⟨v_i|v_j⟩| rather than ⟨v_i|v_j⟩ itself, the “phase information” in ⟨v_i|v_j⟩ was manifestly irrelevant, as it in any case depended on the irrelevant global phases of |v_i⟩ and |v_j⟩ themselves.

Alas, it turns out that the phase information does matter. As an example, suppose I told you only the following about three unit vectors |u⟩,|v⟩,|w⟩ in R^3:

|⟨u|v⟩| = |⟨u|w⟩| = |⟨v|w⟩| = 1/2.

Have I thereby determined these vectors up to isometry? Nope! In one class of solution, all three vectors belong to the same plane, like so:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(-1/2,(√3)/2,0).

In a completely different class of solution, the three vectors don’t belong to the same plane, and instead look like three edges of a tetrahedron meeting at a vertex:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(1/2,1/(2√3),√(2/3)).

These solutions correspond to different sign choices for ⟨u|v⟩, ⟨u|w⟩, and ⟨v|w⟩—choices that collectively matter, even though each of them is individually irrelevant.
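
Here’s a quick numerical check of the counterexample (a throwaway script of mine, not anything from the paper): both choices of |w⟩ reproduce the same pairwise absolute inner products, yet only the first triple is coplanar, so no isometry can map one configuration onto the other.

```python
# Verify: same |<.|.>| data, genuinely different geometry (coplanar vs. not).
import numpy as np

u        = np.array([1.0, 0.0, 0.0])
v        = np.array([0.5, np.sqrt(3) / 2, 0.0])
w_planar = np.array([-0.5, np.sqrt(3) / 2, 0.0])
w_tetra  = np.array([0.5, 1 / (2 * np.sqrt(3)), np.sqrt(2 / 3)])

for name, w in [("planar", w_planar), ("tetrahedral", w_tetra)]:
    abs_inner = [round(abs(a @ b), 3) for a, b in [(u, v), (u, w), (v, w)]]
    coplanar = bool(np.isclose(np.linalg.det(np.stack([u, v, w])), 0.0))
    print(name, abs_inner, "coplanar:", coplanar)
# planar      [0.5, 0.5, 0.5] coplanar: True
# tetrahedral [0.5, 0.5, 0.5] coplanar: False
```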

It follows that, even in the special case where the vectors are all real, the 2-mode correlations are not enough information to determine the vectors’ relative positions. (Well, it takes some more work to convert this to a counterexample that could actually arise in the fermion problem, but that work can be done.) And alas, the situation gets even gnarlier when, as for us, the vectors can be complex.

Any possible algorithm for our problem will have to solve a system of nonlinear equations (albeit, a massively overconstrained system that’s guaranteed to have a solution), and it will have to use 3-mode correlations (i.e., statistics of triples of fermions), and quite possibly 4-mode correlations and above.

But now comes the good news! Googling revealed that, for reasons having nothing to do with fermions or quantum physics, problems extremely close to ours had already been studied in classical machine learning. The key term here is “Determinantal Point Processes” (DPPs). A DPP is a model where you specify an m×m matrix A (typically symmetric or Hermitian), and then the probabilities of various events are given by the determinants of various principal minors of A. Which is precisely what happens with fermions! In terms of the vectors |v_1⟩,…,|v_m⟩ that I was talking about before, to make this connection we simply let A be the m×m covariance matrix, whose (i,j) entry equals ⟨v_i|v_j⟩.
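
Here’s a minimal sketch of that correspondence (the conventions and the random example are mine): build the mode vectors from a random m×n isometry, form A, and check that the n×n principal minors of A sum to 1, as the Cauchy-Binet formula guarantees, so they really do define a probability distribution over which n of the m modes end up occupied.

```python
# Fermions as a determinantal point process: event probabilities from principal minors.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, m = 2, 4                                      # fermions, modes (toy sizes)
X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
Q, _ = np.linalg.qr(X)                           # m x n isometry; row i plays the role of |v_i>
A = Q @ Q.conj().T                               # Hermitian covariance matrix, A_ij ~ <v_i|v_j>

# For a Slater-determinant state, Pr[the n fermions occupy exactly the modes in S] = det(A_S).
probs = {S: np.linalg.det(A[np.ix_(S, S)]).real for S in combinations(range(m), n)}
print(round(sum(probs.values()), 6))             # 1.0, by Cauchy-Binet
print(max(probs, key=probs.get))                 # the most likely occupation pattern
```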

I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT. Immediately after the talk, I made a mental note that Taskar was a rising star in theoretical machine learning, and that his work would probably be relevant to me in the future. While researching this summer, I was devastated to learn that Taskar died of heart failure in 2013, in his mid-30s and only a couple of years after I’d heard him speak.

The most relevant paper for me and Sabee was called An Efficient Algorithm for the Symmetric Principal Minor Assignment Problem, by Rising, Kulesza, and Taskar. Using a combinatorial algorithm based on minimum spanning trees and chordless cycles, this paper nearly solves our problem, except for two minor details:

  1. It doesn’t do an error analysis, and
  2. It considers complex symmetric matrices, whereas our matrix A is Hermitian (i.e., it equals its conjugate transpose, not its transpose).

So I decided to email Alex Kulesza, one of Taskar’s surviving collaborators who’s now a research scientist at Google NYC, to ask his thoughts about the Hermitian case. Alex kindly replied that they’d been meaning to study that case—a reviewer had even asked about it!—but they’d run into difficulties and didn’t know what it was good for. I asked Alex whether he’d like to join forces with me and Sabee in tackling the Hermitian case, which (I told him) was enormously relevant in quantum physics. To my surprise and delight, Alex agreed.

So we’ve been working on the problem together, making progress, and I’m optimistic that we’ll have some nice result. By using the 3-mode correlations, at least “generically” we can recover the entries of the matrix A up to complex conjugation, but further ideas will be needed to resolve the complex conjugation ambiguity, to whatever extent it actually matters.

In short: on the negative side, there’s much more to the problem of learning a fermionic state than we’d realized. But on the positive side, there’s much more to the problem than we’d realized! As with the simulation of k-query quantum algorithms, my coauthors and I would welcome any ideas. And I apologize to anyone who was misled by our premature (and hereby retracted) claims.


Update (Aug. 11): Here’s a third bonus retraction, which I thank my colleague Mark Wilde for bringing to my attention. Way back in 2005, in my NP-complete Problems and Physical Reality survey article, I “left it as an exercise for the reader” to prove that BQP_CTC, or quantum polynomial time augmented with Deutschian closed timelike curves, is contained in a complexity class called SQG (Short Quantum Games). While it turns out to be true that BQP_CTC ⊆ SQG—as follows from my and Watrous’s 2008 result that BQP_CTC = PSPACE, combined with Gutoski and Wu’s 2010 result that SQG = PSPACE—it’s not something for which I could possibly have had a correct proof back in 2005. I.e., it was a harder exercise than I’d intended!

On Guilt

Thursday, June 10th, 2021

The other night Dana and I watched “The Internet’s Own Boy,” the 2014 documentary about the life and work of Aaron Swartz, which I’d somehow missed when it came out. Swartz, for anyone who doesn’t remember, was the child prodigy who helped create RSS and Reddit, who then became a campaigner for an open Internet, who was arrested for using a laptop in an MIT supply closet to download millions of journal articles and threatened with decades in prison, and who then committed suicide at age 26. I regret that I never knew Swartz, though he did once send me a fan email about Quantum Computing Since Democritus.

Say whatever you want about the tactical wisdom or the legality of Swartz’s actions; it seems inarguable to me that he was morally correct, that certain categories of information (e.g. legal opinions and taxpayer-funded scientific papers) need to be made freely available, and that sooner or later our civilization will catch up to Swartz and regard his position as completely obvious. The beautifully-made documentary filled me with rage and guilt not only that the world had failed Swartz, but that I personally had failed him.

At the time of Swartz’s arrest, prosecution, and suicide, I was an MIT CS professor who’d previously written in strong support of open access to scientific literature, and who had the platform of this blog. Had I understood what was going on with Swartz—had I taken the time to find out what was going on—I could have been in a good position to help organize a grassroots campaign to pressure the MIT administration to urge prosecutors to drop the case (like JSTOR had already done), which could plausibly have made a difference. As it was, I was preoccupied in those years with BosonSampling, getting married, etc.; I didn’t bother to learn whether anything was being done or could be done about the Aaron Swartz matter, and then before I knew it, Swartz had joined Alan Turing in computer science’s pantheon of lost geniuses.

But maybe there was something deeper to my inaction. If I’d strongly defended the substance of what Swartz had done, it would’ve raised the question: why wasn’t I doing the same? Why was I merely complaining about paywalled journals from the comfort of my professor’s office, rather than putting my own freedom on the line like Swartz was? It was as though I had to put some psychological distance between myself and the situation, in order to justify my life choices to myself.

Even though I see the error in that way of “thinking,” it keeps recurring, keeps causing me to make choices that I feel guilt or at least regret about later. In February 2020, there were a few smart people saying that a new viral pneumonia from Wuhan was about to upend life on earth, but the people around me certainly weren’t acting that way, and I wasn’t acting that way either … and so, “for the sake of internal consistency,” I didn’t spend much time thinking about it or investigating it. After all, if the fears of a global pandemic had a good chance of being true, I should be dropping everything else and panicking, shouldn’t I? But I wasn’t dropping everything else and panicking … so how could the fears be true?

Then I publicly repented, and resolved not to make such an error again. And now, 15 months later, I realize that I have made such an error again.

All throughout the pandemic, I’d ask my friends, privately, why the hypothesis that the virus had accidentally leaked from the Wuhan Institute of Virology wasn’t being taken far more seriously, given what seemed like a shockingly strong prima facie case. But I didn’t discuss the lab leak scenario on this blog, except once in passing. I could say I didn’t discuss it because I’m not a virologist and I had nothing new to contribute. But I worry that I also didn’t discuss it because it seemed incompatible with my self-conception as a cautious scientist who’s skeptical of lurid coverups and conspiracies—and because I’d already spent my “weirdness capital” on other issues, and didn’t relish the prospect of being sneered at on social media yet again. Instead I simply waited for discussion of the lab leak hypothesis to become “safe” and “respectable,” as today it finally has, thanks to writers who were more courageous than I was. I became, basically, another sheep in one of the conformist herds that we rightly despise when we read about them in history.

(For all that, it’s still plausible to me that the virus had a natural origin after all. What’s become clear is simply that, even if so, the failure to take the possibility of a lab escape more seriously back when the trail of evidence was fresher will stand as a major intellectual scandal of our time.)

Sometimes people are wracked with guilt, but over completely different things than the world wants them to be wracked with guilt over. This was one of the great lessons that I learned from reading Richard Rhodes’s The Making of the Atomic Bomb. Many of the Manhattan Project physicists felt lifelong guilt, not that they’d participated in building the bomb, but only that they hadn’t finished the bomb by 1943, when it could have ended the war in Europe and the Holocaust.

On a much smaller scale, I suppose some readers would still like me to feel guilt about comment 171, or some of the other stuff I wrote about nerds, dating, and feminism … or if not that, then maybe about my defense of a two-state solution for Israel and Palestine, or of standardized tests and accelerated math programs, or maybe my vehement condemnation of Trump and his failed insurrection. Or any of the dozens of other times when I stood up and said something I actually believed, or when I recounted my experiences as accurately as I could. The truth is, though, I don’t.

Looking back—which, now that I’m 40, I confess is an increasingly large fraction of my time—the pattern seems consistent. I feel guilty, not for having stood up for what I strongly believed in, but for having failed to do so. This suggests that, if I want fewer regrets, then I should click “Publish” on more potentially controversial posts! I don’t know how to force myself to do that, but maybe this post itself is a step.

The Computational Expressiveness of a Model Train Set: A Paperlet

Sunday, April 4th, 2021

Update (April 5, 2021): So it turns out that Adam Chalcraft and Michael Greene already proved the essential result of this post back in 1994 (hat tip to commenter Dylan). Not terribly surprising in retrospect!


My son Daniel had his fourth birthday a couple weeks ago. For a present, he got an electric train set. (For completeness—and since the details of the train set will be rather important to the post—it’s called “WESPREX Create a Dinosaur Track”, but this is not an ad and I’m not getting a kickback for it.)

As you can see, the main feature of this set is a Y-shaped junction, which has a flap that can control which direction the train goes. The logic is as follows:

  • If the train is coming up from the “bottom” of the Y, then it continues to either the left arm or the right arm, depending on where the flap is. It leaves the flap as it was.
  • If the train is coming down the left or right arms of the Y, then it continues to the bottom of the Y, pushing the flap out of its way if it’s in the way. (Thus, if the train were ever to return to this Y-junction coming up from the bottom, not having passed the junction in the interim, it would necessarily go to the same arm, left or right, that it came down from.)

The train set also comes with bridges and tunnels; thus, there’s no restriction to planar layouts. Finally, the train set comes with little gadgets that can reverse the train’s direction, sending it back in the direction that it came from:

These gadgets don’t seem particularly important, though, since we could always replace them if we wanted by a Y-junction together with a loop.

Notice that, at each Y-junction, the position of the flap stores one bit of internal state, and that the train can both “read” and “write” these bits as it moves around. Thus, a question naturally arises: can this train set do any nontrivial computations? If there are n Y-junctions, then can it cycle through exp(n) different states? Could it even solve PSPACE-complete problems, if we let it run for exponential time? (For a very different example of a model-train-like system that, as it turns out, is able to express PSPACE-complete problems, see this recent paper by Erik Demaine et al.)

Whatever the answers regarding Daniel’s train set, I knew immediately on watching the thing go that I’d have to write a “paperlet” on the problem and publish it on my blog (no, I don’t inflict such things on journals!). Today’s post constitutes my third “paperlet,” on the general theme of a discrete dynamical system that someone showed me in real life (e.g. in a children’s toy or in biology) having more structure and regularity than one might naïvely expect. My first such paperlet, from 2014, was on a 1960s toy called the Digi-Comp II; my second, from 2016, was on DNA strings acted on by recombinase (OK, that one was associated with a paper in Science, but my combinatorial analysis wasn’t the main point of the paper).

Anyway, after spending an enjoyable evening on the problem of Daniel’s train set, I was able to prove that, alas, the possible behaviors are quite limited (I classified them all), falling far short of computational universality.

If you feel like I’m wasting your time with trivialities (or if you simply enjoy puzzles), then before you read any further, I encourage you to stop and try to prove this for yourself!

Back yet? OK then…


Theorem: Assume a finite amount of train track. Then after an amount of time linear in the amount of track, the train will necessarily enter a "boring infinite loop"—i.e., an attractor state in which at most two of the flaps keep getting toggled, and the rest of the flaps are fixed in place. In more detail, the attractor must take one of four forms:

I. a line (with reversing gadgets on both ends),
II. a simple cycle,
III. a “lollipop” (with one reversing gadget and one flap that keeps getting toggled), or
IV. a “dumbbell” (with two flaps that keep getting toggled).

In more detail still, there are seven possible topologically distinct trajectories for the train, as shown in the figure below.

Here the red paths represent the attractors, where the train loops around and around for an unlimited amount of time, while the blue paths represent “runways” where the train spends a limited amount of time on its way into the attractor. Every degree-3 vertex is assumed to have a Y-junction, while every degree-1 vertex is assumed to have a reversing gadget, unless (in IIb) the train starts at that vertex and never returns to it.

The proof of the theorem rests on two simple observations.

Observation 1: While the Y-junctions correspond to vertices of degree 3, there are no vertices of degree 4 or higher. This means that, if the train ever visits a vertex v (other than its starting vertex) a second time, then there must be some edge e incident to v that it also traverses a second time immediately afterward.

Observation 2: Suppose the train traverses some edge e, then goes around a simple cycle (meaning, one where no edges or vertices are reused), and then traverses e again, going in the same direction as the first time. Then from that point forward, the train will just continue around the same simple cycle forever.

The proof of Observation 2 is simply that, if there were any flap that might be in the train’s way as it continued around the simple cycle, then the train would already have pushed it out of the way its first time around the cycle, and nothing that happened thereafter could possibly change the flap’s position.

Using the two observations above, let’s now prove the theorem. Let the train start where it will, and follow it as it traces out a path. Since the graph is finite, at some point some already-traversed edge must be traversed a second time. Let e be the first such edge. By Observation 1, this will also be the first time the train’s path intersects itself at all. There are then three cases:

Case 1: The train traverses e in the same direction as it did the first time. By Observation 2, the train is now stuck in a simple cycle forever after. So the only question is what the train could’ve done before entering the simple cycle. We claim that at most, it could’ve traversed a simple path. For otherwise, we’d contradict the assumption that e was the first edge that the train visited twice on its journey. So the trajectory must have type IIa, IIb, or IIc in the figure.

Case 2: Immediately after traversing e, the train hits a reversing gadget and traverses e again the other way. In this case, the train will clearly retrace its entire path and then continue past its starting point; the question is what happens next. If it hits another reversing gadget, then the trajectory will have type I in the figure. If it enters a simple cycle and stays in it, then the trajectory will have type IIb in the figure. If, finally, it makes a simple cycle and then exits the cycle, then the trajectory will have type III in the figure. In this last case, the train’s trajectory will form a “lollipop” shape. Note that there must be a Y-junction where the “stick” of the lollipop meets the “candy” (i.e., the simple cycle), with the base of the Y aligned with the stick (since otherwise the train would’ve continued around and around the candy). From this, we deduce that every time the train goes around the candy, it does so in a different orientation (clockwise or counterclockwise) than the time before; and that the train toggles the Y-junction’s flap every time it exits the candy (although not when it enters the candy).

Case 3: At some point after traversing e in the forward direction (but not immediately after), the train traverses e in the reverse direction. In this case, the broad picture is analogous to Case 2. So far, the train has made a lollipop with a Y-junction connecting the stick to the candy (i.e. cycle), the base of the Y aligned with the stick, and e at the very top of the stick. The question is what happens next. If the train next hits a reversing gadget, the trajectory will have type III in the figure. If it enters a new simple cycle, disjoint from the first cycle, and never leaves it, the trajectory will have type IId in the figure. If it enters a new simple cycle, disjoint from the first cycle, and does leave it, then the trajectory now has a “dumbbell” pattern, type IV in the figure (also shown in the first video). There’s only one other situation to worry about: namely, that the train makes a new cycle that intersects the first cycle, forming a “theta” (θ) shaped trajectory. In this case, there must be a Y-junction at the point where the new cycle bumps into the old cycle. Now, if the base of the Y isn’t part of the old cycle, then the train never could’ve made it all the way around the old cycle in the first place (it would’ve exited the old cycle at this Y-junction), contradiction. If the base of the Y is part of the old cycle, then the flap must have been initially set to let the train make it all the way around the old cycle; when the train then reenters the old cycle, the flap must be moved so that the train will never make it all the way around the old cycle again. So now the train is stuck in a new simple cycle (sharing some edges with the old cycle), and the trajectory has type IIc in the figure.

This completes the proof of the theorem.
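As a quick sanity check on this case analysis, here's the smallest lollipop I could write down, using the step and run_until_repeat functions from the sketch above (again just an illustration, with made-up vertex names):

    # A minimal "lollipop": reversing gadget R at the end of the stick, Y-junction J
    # with its base toward R, and arms A and B joined to close the candy.
    neighbors = {
        'R': ['J'],
        'J': ['R', 'A', 'B'],
        'A': ['J', 'B'],
        'B': ['A', 'J'],
    }
    junctions = {'J': ('R', 'A', 'B')}   # (base, left arm, right arm)
    flaps = {'J': 0}                     # flap initially pointing to the left arm

    print(run_until_repeat('R', 'J', flaps, neighbors, junctions))   # train heading from R into J
    # -> (0, 10): the train is already in its attractor, a period-10 loop in which it
    # alternates orientation around the candy and toggles J's flap each time it exits
    # the candy, exactly the type III behavior described in Case 2.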


We might wonder: why isn’t this model train set capable of universal computation, of AND, OR, and NOT gates—or at any rate, of some computation more interesting than repeatedly toggling one or two flaps? My answer might sound tautological: it’s simply that the logic of the Y-junctions is too limited. Yes, the flaps can get pushed out of the way—that’s a “bit flip”—but every time such a flip happens, it helps to set up a “groove” in which the train just wants to continue around and around forever, not flipping any additional bits, with only the minor complications of the lollipop and dumbbell structures to deal with. Even though my proof of the theorem might’ve seemed like a tedious case analysis, it had this as its unifying message.

It’s interesting to think about what gadgets would need to be added to the train set to make it computationally universal, or at least expressively richer—able, as turned out to be the case for the Digi-Comp II, to express some nontrivial complexity class falling short of P. So for example, what if we had degree-4 vertices, with little turnstile gadgets? Or multiple trains, which could be synchronized to the millisecond to control how they interacted with each other via the flaps, or which could even crash into each other? I look forward to reading your ideas in the comment section!

For the truth is this: quantum complexity classes, BosonSampling, closed timelike curves, circuit complexity in black holes and AdS/CFT, etc. etc.—all these topics are great, but the same models and problems do get stale after a while. I aspire for my research agenda to chug forward, full steam ahead, into new computational domains.

PS. Happy Easter to those who celebrate!

My vaccine crackpottery: a confession

Thursday, December 31st, 2020

I hope everyone is enjoying a New Year's as festive as the circumstances allow!

I’ve heard from a bunch of you awaiting my next post on the continuum hypothesis, and it’s a-comin’, but I confess the new, faster-spreading covid variant is giving me the same sinking feeling that Covid 1.0 gave me in late February, making it really hard to think about the eternal. (For perspectives on Covid 2.0 from individuals who acquitted themselves well with their early warnings about Covid 1.0, see for example this by Jacob Falkovich, or this by Zvi Mowshowitz.)

So on that note: do you hold any opinions, on factual matters of practical importance, that most everyone around you sharply disagrees with? Opinions that those who you respect consider ignorant, naïve, imprudent, and well outside your sphere of expertise? Opinions that, nevertheless, you simply continue to hold, because you’ve learned that, unless and until someone shows you the light, you can no more will yourself to change what you think about the matter than change your blood type?

I try to have as few such opinions as possible. Having run Shtetl-Optimized for fifteen years, I’m acutely aware of the success rate of those autodidacts who think they’ve solved P versus NP or quantum gravity or whatever. It’s basically zero out of hundreds—and why wouldn’t it be?

And yet there’s one issue where I feel myself in the unhappy epistemic situation of those amateurs, spamming the professors in all-caps. So, OK, here it is:

I think that, in a well-run civilization, the first covid vaccines would’ve been tested and approved by around March or April 2020, while mass-manufacturing simultaneously ramped up with trillions of dollars’ investment. I think almost everyone on earth could have, and should have, already been vaccinated by now. I think a faster, “WWII-style” approach would’ve saved millions of lives, prevented economic destruction, and carried negligible risks compared to its benefits. I think this will be clear to future generations, who’ll write PhD theses exploring how it was possible that we invented multiple effective covid vaccines in mere days or weeks, but then simply sat on those vaccines for a year, ticking off boxes called “Phase I,” “Phase II,” etc. while civilization hung in the balance.

I’ve said similar things, on this blog and elsewhere, since the beginning of the pandemic, but part of me kept expecting events to teach me why I was wrong. Instead events—including the staggering cost of delay, the spectacular failures of institutional authorities to adapt to the scientific realities of covid, and the long-awaited finding that all the major vaccines safely work (some better than others), just like the experts predicted back in February—all this only made me more confident of my original, stupid and naïve position.

I'm saying all this as clearly as I can, so that no one will misunderstand me, and yet I'm also scared to say it. I'm scared because it sounds too much like colossal ingratitude, like Monday-morning quarterbacking of one of the great heroic achievements of our era by someone who played no part in it.

Let’s be clear: the ~11 months that it took to get from sequencing the novel coronavirus, to approving and mass-manufacturing vaccines, is a world record, soundly beating the previous record of 4 years. Nobel Prizes and billions of dollars are the least that those who made it happen deserve. Eternal praise is especially due to those like Katalin Karikó, who risked their careers in the decades before covid to do the basic research on mRNA delivery that made the development of these mRNA vaccines so blindingly fast.

Furthermore, I could easily believe that there’s no one agent—neither Pfizer nor BioNTech nor Moderna, neither the CDC nor FDA nor other health or regulatory agencies, neither Bill Gates nor Moncef Slaoui—who could’ve unilaterally sped things up very much. If one of them tried, they would’ve simply been ostracized by the other parts of the system, and they probably all understood that. It might have taken a whole different civilization, with different attitudes about utility and risk.

And yet the fact remains that, historic though it was, a one-to-two-year turnaround time wasn’t nearly good enough. Especially once we factor in the faster-spreading variant, by the time we’ve vaccinated everyone, we’ll already be a large fraction of the way to herd immunity and to the vaccine losing its purpose. For all the advances in civilization, from believing in demonic spirits all the way to understanding mRNA at a machine-code level of detail, covid is running wild much like it would have back in the Middle Ages—partly, yes, because modern transportation helps it spread, but partly also because our political and regulatory and public-health tools have lagged so breathtakingly behind our knowledge of molecular biology.

What could’ve been done faster? For starters, as I said back in March, we could’ve had human challenge trials with willing volunteers, of whom there were tens of thousands. We could’ve started mass-manufacturing months earlier, with funding commensurate with the problem’s scale (think trillions, not billions). Today, we could give as many people as possible the first doses (which apparently already provide something like ~80% protection) before circling back to give the second doses (which boost the protection as high as ~95%). We could distribute the vaccines that are now sitting in warehouses, spoiling, while people in the distribution chain take off for the holidays—but that’s such low-hanging fruit that it feels unsporting even to mention it.
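To make the first-doses-first arithmetic concrete, here's a toy calculation (an illustration only: it takes the quoted ~80% and ~95% figures at face value, treats protection as simply additive across people, and ignores timing and transmission dynamics):

    # With a fixed stock of D doses, compare covering D people with one dose each
    # versus D/2 people with two doses each, in crude "protection-equivalents."
    D = 1_000_000                      # hypothetical dose stock
    one_dose_each  = 0.80 * D          # D people at ~80% protection
    two_doses_each = 0.95 * (D / 2)    # D/2 people at ~95% protection
    print(one_dose_each, two_doses_each)   # 800000.0 vs. 475000.0

On these crude assumptions, first doses first wins by nearly a factor of two; the real question is whether the effects this toy model leaves out run the other way.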

Let me now respond to three counterarguments that would surely come up in the comments if I didn’t address them.

  1. The Argument from Actual Risk. Every time this subject arises, someone patiently explains to me that, since a vaccine gets administered to billions of healthy people, the standards for its safety and efficacy need to be even higher than they are for ordinary medicines. Of course that’s true, and it strikes me as an excellent reason not to inject people with a completely untested vaccine! All I ask is that the people who are, or could be, harmed by a faulty vaccine, be weighed on the same moral scale as the people harmed by covid itself. As an example, we know that the Phase III clinical trials were repeatedly halted for days or weeks because of a single participant developing strange symptoms—often a participant who’d received the placebo rather than the actual vaccine! That person matters. Any future vaccine recipient who might develop similar symptoms matters. But the 10,000 people who die of covid every single day we delay, along with the hundreds of millions more impoverished, kept out of school, etc., matter equally. If we threw them all onto the same utilitarian scale, would we be making the same tradeoffs that we are now? I feel like the question answers itself.
  2. The Argument from Perceived Risk. Even with all the testing that’s been done, somewhere between 16% and 40% of Americans (depending on which poll you believe) say that they’ll refuse to get a covid vaccine, often because of anti-vaxx conspiracy theories. How much higher would the percentage be had the vaccines been rushed out in a month or two? And of course, if not enough people get vaccinated, then R0 remains above 1 and the public-health campaign is a failure. In this way of thinking, we need three phases of clinical trials the same way we need everyone to take off their shoes at airport security: it might not prevent a single terrorist, but the masses will be too scared to get on the planes if we don’t. To me, this (if true) only underscores my broader point, that the year-long delay in getting vaccines out represents a failure of our entire civilization, rather than a failure of any one agent. But also: people’s membership in the pro- or anti-vaxx camps is not static. The percentage saying they’ll get a covid vaccine seems to have already gone up, as a formerly abstract question becomes a stark choice between wallowing in delusions and getting a deadly disease, or accepting reality and not getting it. So while the Phase III trials were still underway—when the vaccines were already known to be safe, and experts thought it much more likely than not that they’d work—would it have been such a disaster to let Pfizer and Moderna sell the vaccines, for a hefty profit, to those who wanted them? With the hope that, just like with the iPhone or any other successful consumer product, satisfied early adopters would inspire the more reticent to get in line too?
  3. The Argument from Trump. Now for the most awkward counterargument, which I’d like to address head-on rather than dodge. If the vaccines had been approved faster in the US, it would’ve looked to many like Trump deserved credit for it, and he might well have been reelected. And devastating though covid has been, Trump is plausibly worse! Here’s my response: Trump has the mentality of a toddler, albeit with curiosity swapped out for cruelty and vindictiveness. His and his cronies’ impulsivity, self-centeredness, and incompetence are likely responsible for at least ~200,000 of the 330,000 Americans now dead from covid. But, yes, reversing his previous anti-vaxx stance, Trump did say that he wanted to see a covid vaccine in months, just like I’ve said. Does it make me uncomfortable to have America’s worst president in my “camp”? Only a little, because I have no problem admitting that sometimes toddlers are right and experts are wrong. The solution, I’d say, is not to put toddlers in charge of the government! As should be obvious by now—indeed, as should’ve been obvious back in 2016—that solution has some exceedingly severe downsides. The solution, rather, is to work for a world where experts are unafraid to speak bluntly, so that it never falls to a mental toddler to say what the experts can’t say without jeopardizing their careers.

Anyway, despite everything I’ve written, considerations of Aumann’s Agreement Theorem still lead me to believe there’s an excellent chance that I’m wrong, and the vaccines couldn’t realistically have been rolled out any faster. The trouble is, I don’t understand why. And I don’t understand why compressing this process, from a year or two to at most a month or two, shouldn’t be civilization’s most urgent priority ahead of the next pandemic. So go ahead, explain it to me! I’ll be eternally grateful to whoever makes me retract this post in shame.

Update (Jan. 1, 2021): If you want a sense of the on-the-ground realities of administering the vaccine in the US, check out this long post by Zvi Mowshowitz. Briefly, it looks like in my post, I gave those in charge way too much benefit of the doubt (!!). The Trump administration pledged to administer 20 million vaccines by the end of 2020; instead it administered fewer than 3 million. Crucially, this is not because of any problem with manufacturing or supply, but just because of pure bureaucratic blank-facedness. Incredibly, even as the pandemic rages, most of the vaccines are sitting in storage, at severe risk of spoiling … and officials’ primary concern is not to administer the precious doses, but just to make sure no one gets a dose “out of turn.” In contrast to Israel, where they’re now administering vaccines 24/7, including on Shabbat, with the goal being to get through the entire population as quickly as possible, in the US they’re moving at a snail’s pace and took off for the holidays. In Wisconsin, a pharmacist intentionally spoiled hundreds of doses; in West Virginia, they mistakenly gave antibody treatments instead of vaccines. There are no longer any terms to understand what’s happening other than those of black comedy.