A grand anticlimax: the New York Times on Scott Alexander

February 13th, 2021

Updates (Feb. 14, 2021): Scott Alexander Siskind responds here.

Last night, it occurred to me that despite how disjointed it feels, the New York Times piece does have a central thesis: namely, that rationalism is a “gateway drug” to dangerous beliefs. And that thesis is 100% correct—insofar as once you teach people that they can think for themselves about issues of consequence, some of them might think bad things. It’s just that many of us judge the benefit worth the risk!

Happy Valentine’s Day everyone!


Back in June, New York Times technology reporter Cade Metz, who I’d previously known from his reporting on quantum computing, told me that he was writing a story about Scott Alexander, Slate Star Codex, and the rationalist community. Given my position as someone who knew the rationalist community without ever really being part of it, Cade wondered whether I’d talk with him. I said I’d be delighted to.

I spent many hours with Cade, taking his calls and emails morning or night, at the playground with my kids or wherever else I was, answering his questions, giving context for his other interviews, suggesting people in the rationalist community for him to talk to, in exactly the same way I might suggest colleagues for a quantum computing story. And then I spent just as much time urging those people to talk to Cade. (“How could you possibly not want to talk? It’s the New York Times!”) Some of the people I suggested agreed to talk; others refused; a few were livid at me for giving a New York Times reporter their email addresses without asking them. (I apologized; lesson learned.)

What happened next is already the stuff of Internet history: the NYT’s threat to publish Scott’s real surname; Scott deleting his blog as a way to preempt that ‘doxing’; 8,000 people, including me, signing a petition urging the NYT to respect Scott’s wish to keep his professional and blog identities separate; Scott resigning from his psychiatry clinic and starting his own low-cost practice, Lorien Psychiatry; his moving his blog, like so many other writers this year, to Substack; then, a few weeks ago, his triumphant return to blogging under his real name of Scott Siskind. All this against the backdrop of an 8-month period that was world-changingly historic in so many other ways: the failed violent insurrection against the United States and the ouster, by democratic means, of the president who incited it; the tragedy of covid and the long-delayed start of the vaccination campaign; the BLM protests; the well-publicized upheavals at the NYT itself, including firings for ideological lapses that would’ve made little sense to our remote ancestors of ~2010.

And now, as an awkward coda, the New York Times article itself is finally out (non-paywalled version here).

It could’ve been worse. I doubt it will do lasting harm. Of the many choices I disagreed with, I don’t know which were Cade’s and which his editors’. But no, I was not happy with it. If you want a feature-length, pop condensation of the rationalist community and its ideas, I preferred this summer’s New Yorker article (but much better still is the book by Tom Chivers).

The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present. I repeatedly muttered to myself, as I read: “dude, you could make anything sound shady with this exact same rhetorical toolkit!”

Without further ado, here’s a partial list of my issues:

  1. The piece includes the following ominous sentence: “But in late June of last year, when I approached Siskind to discuss the blog, it vanished.”  This framing, it seems to me, would be appropriate for some conman trying to evade accountability without ever explaining himself. It doesn’t make much sense for a practicing psychiatrist who took the dramatic step of deleting his blog in order to preserve his relationship with his patients—thereby complying with an ethical code that’s universal among psychiatrists, even if slightly strange to the rest of us—and who immediately explained his reasoning to the entire world. In the latter framing, of course, Scott comes across less like a fugitive on the run and more like an innocent victim of a newspaper’s editorial obstinacy.
  2. As expected, the piece devotes enormous space to the idea of rationalism as an on-ramp to alt-right extremism.  The trouble is, it never presents the idea that rationalism also can be an off-ramp from extremism—i.e., that it can provide a model for how even after you realize that mainstream sources are confidently wrong on some issue, you don’t respond by embracing conspiracy theories and hatreds, you respond by simply thinking carefully about each individual question rather than buying a worldview wholesale from anyone.  Nor does the NYT piece mention how Scott, precisely because he gives right-wing views more charity than some of us might feel they deserve, actually succeeded in dissuading some of his readers from voting for Trump—which is more success than I can probably claim in that department! I had many conversations with Cade about these angles that are nowhere reflected in the piece.
  3. The piece gets off on a weird foot, by describing the rationalists as “a group that aimed to re-examine the world through cold and careful thought.”  Why “cold”?  Like, let’s back up a few steps: what is even the connection in the popular imagination between rationality and “coldness”? To me, as to many others, the humor, humanity, and warmth of Scott’s writing were always among its most notable features.
  4. The piece makes liberal use of scare quotes. Most amusingly, it puts scare quotes around the phrase “Bayesian reasoning”!
  5. The piece never mentions that many rationalists (Zvi Mowshowitz, Jacob Falkovich, Kelsey Piper…) were right about the risk of covid-19 in early 2020, and then again right about masks, aerosol transmission, faster-spreading variants, the need to get vaccines into arms faster, and many other subsidiary issues, even while public health authorities and the mainstream press struggled for months to reach the same obvious (at least in retrospect) conclusions.  This omission is significant because Cade told me, in June, that the rationalist community’s early rightness about covid was part of what led him to want to write the piece in the first place (!).  If readers knew about that clear success, would it put a different spin on the rationalists’ weird, cultlike obsession with “Bayesian reasoning” and “consequentialist ethics” (whatever those are), or their nerdy, idiosyncratic worries about the more remote future?
  6. The piece contains the following striking sentence: “On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.” Well, yes, except this framing makes it sound like this is a fringe belief of some radical Silicon Valley tribe, rather than just the standard expectation of most of the billions of people who’ve used the Internet for most of its half-century of existence.
  7. Despite thousands of words about the content of SSC, the piece never gives Scott a few uninterrupted sentences in his own voice, to convey his style. This is something the New Yorker piece did do, and which would help readers better understand the wit, humor, charity, and self-doubt that made SSC so popular.  To see what I mean, read the NYT’s radically-abridged quotations from Scott’s now-classic riff on the Red, Blue, and Gray Tribes and decide for yourself whether they capture the spirit of the original (alright, I’ll quote the relevant passage myself at the bottom of this post). Scott has the property, shared by many of my favorite writers, that if you just properly quote him, the words leap off the page, wriggling free from the grasp of any bracketing explanations and making a direct run for the reader’s brain. All the more reason to quote him!
  8. The piece describes SSC as “astoundingly verbose.”  A more neutral way to put it would be that Scott has produced a vast quantity of intellectual output.  When I finish a Scott Alexander piece, only in a minority of cases do I feel like he spent more words examining a problem than its complexities really warranted.  Just as often, I’m left wanting more.
  9. The piece says that Scott once “aligned himself” with Charles Murray, then goes on to note Murray’s explosive views about race and IQ. That might be fair enough, were it also mentioned that the positions ascribed to Murray that Scott endorses in the relevant post—namely, “hereditarian leftism” and universal basic income—are not only unrelated to race but are actually progressive positions.
  10. The piece says that Scott once had neoreactionary thinker Nick Land on his blogroll. Again, important context is missing: this was back when Land was mainly known for his strange writings on AI and philosophy, before his neoreactionary turn.
  11. The piece says that Scott compared “some feminists” to Voldemort.  It didn’t explain what it took for certain specific feminists (like Amanda Marcotte) to prompt that comparison, which might have changed the coloration. (Another thing that would’ve complicated the picture: the rationalist community’s legendary openness to alternative gender identities and sexualities, before such openness became mainstream.)
  12. Speaking of feminists—yeah, I’m a minor part of the article.  One of the few things mentioned about me is that I’ve stayed in a rationalist group house.  (If you must know: for like two nights, when I was in the Bay Area, with my wife and kids. We appreciated the hospitality!) The piece also says that I was “turned off by the more rigid and contrarian beliefs of the Rationalists.” It’s true that I’ve disagreed with many beliefs espoused by rationalists, but not because they were contrarian, or because I found them noticeably more “rigid” than most beliefs—only because I thought they were mistaken!
  13. The piece describes Eliezer Yudkowsky as a “polemicist and self-described AI researcher.”  It’s true that Eliezer opines about AI despite a lack of conventional credentials in that field, and it’s also true that the typical NYT reader might find him to be comically self-aggrandizing.  But had the piece mentioned the universally recognized AI experts, like Stuart Russell, who credit Yudkowsky for a central role in the AI safety movement, wouldn’t that have changed what readers perceived as the take-home message?
  14. The piece says the following about Shane Legg and Demis Hassabis, the founders of DeepMind: “Like the Rationalists, they believed that AI could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.”  This strikes me as a brilliant way to reframe a concern around AI safety as something vaguely sinister.  Imagine if the following framing had been chosen instead: “Amid Silicon Valley’s mad rush to invest in AI, here are the voices urging that it be done safely and in accord with human welfare…”

Reading this article, some will say that they told me so, or even that I was played for a fool.  And yet I confess that, even with hindsight, I have no idea what I should have done differently, how it would’ve improved the outcome, or what I will do differently the next time. Was there some better, savvier way for me to help out? For each of the 14 points listed above, were I ever tempted to bang my head and say, “dammit, I wish I’d told Cade X, so his story could’ve reflected that perspective”—well, the truth of the matter is that I did tell him X! It’s just that I don’t get to decide which X’s make the final cut, or which ideological filter they’re passed through first.

On reflection, then, I’ll continue to talk to journalists, whenever I have time, whenever I think I might know something that might improve their story. I’ll continue to rank bend-over-backwards openness and honesty among my most fundamental values. Hell, I’d even talk to Cade for a future story, assuming he’ll talk to me after all the disagreements I’ve aired here! [Update: commenters’ counterarguments caused me to change my stance on this; see here.]

For one thing that became apparent from this saga is that I do have a deep difference with the rationalists, one that will likely prevent me from ever truly joining them. Yes, there might be true and important things that one can’t say without risking one’s livelihood. At least, there were in every other time and culture, so it would be shocking if Western culture circa 2021 were the lone exception. But unlike the rationalists, I don’t feel the urge to form walled gardens in which to say those things anyway. I simply accept that, in the age of instantaneous communication, there are no walled gardens: anything you say to a dozen or more people, you might as well broadcast to the planet. Sure, we all have things we say only in the privacy of our homes or to a few friends—a privilege that I expect even the most orthodox would like to preserve, at any rate for themselves. Beyond that, though, my impulse has always been to look for non-obvious truths that can be shared openly, and that might light little candles of understanding in one or two minds—and then to shout those truths from the rooftops under my own name, and learn what I can from whatever sounds come in reply.

So I’m thrilled that Scott Alexander Siskind has now rearranged his life to have the same privilege. Whatever its intentions, I hope today’s New York Times article draws tens of thousands of curious new readers to Scott’s new-yet-old blog, Astral Codex Ten, so they can see for themselves what I and so many others saw in it. I hope Scott continues blogging for decades. And whatever obscene amount of money Substack is now paying Scott, I hope they’ll soon be paying him even more.


Alright, now for the promised quote, from I Can Tolerate Anything Except the Outgroup.

The Red Tribe is most classically typified by conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak, drinking Coca-Cola, driving SUVs, watching lots of TV, enjoying American football, getting conspicuously upset about terrorists and commies, marrying early, divorcing early, shouting “USA IS NUMBER ONE!!!”, and listening to country music.

The Blue Tribe is most classically typified by liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, reading lots of books, being highly educated, mocking American football, feeling vaguely like they should like soccer but never really being able to get into it, getting conspicuously upset about sexists and bigots, marrying later, constantly pointing out how much more civilized European countries are than America, and listening to “everything except country”.

(There is a partly-formed attempt to spin off a Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk – but for our current purposes this is a distraction and they can safely be considered part of the Blue Tribe most of the time)

… Even in something as seemingly politically uncharged as going to California Pizza Kitchen or Sushi House for dinner, I’m restricting myself to the set of people who like cute artisanal pizzas or sophisticated foreign foods, which are classically Blue Tribe characteristics.

Once we can see them, it’s too late

January 30th, 2021

[updates: here’s the paper, and here’s Robin’s brief response to some of the comments here]

This month Robin Hanson, the famous and controversy-prone George Mason University economics professor who I’ve known since 2004, was visiting economists here in Austin for a few weeks. So, while my fear of covid considerably exceeds Robin’s, I met with him a few times in the mild Texas winter in an outdoor, socially-distanced way. It took only a few minutes for me to remember why I enjoy talking to Robin so much.

See, while I’d been moping around depressed about covid, the vaccine rollout, the insurrection, my inability to focus on work, and a dozen other things, Robin was bubbling with excitement about a brand-new mathematical model he was working on to understand the growth of civilizations across the universe—a model that, Robin said, explained lots of cosmic mysteries in one fell swoop and also made striking predictions. My cloth facemask was, I confess, unable to protect me from Robin’s infectious enthusiasm.

As I listened, I went through the classic stages of reaction to a new Hansonian proposal: first, bemusement over the sheer weirdness of what I was being asked to entertain, as well as Robin’s failure to acknowledge that weirdness in any way whatsoever; then, confusion about the unstated steps in his radically-condensed logic; next, the raising by me of numerous objections (each of which, it turned out, Robin had already thought through at length); finally, the feeling that I must have seen it this way all along, because isn’t it kind of obvious?

Robin has been explaining his model in a sequence of Overcoming Bias posts, and will apparently have a paper out about the model soon (update: the paper is here!). In this post, I’d like to offer my own take on what Robin taught me. Blame for anything I mangle lies with me alone.

To cut to the chase, Robin is trying to explain the famous Fermi Paradox: why, after 60+ years of looking, and despite the periodic excitement around Tabby’s star and ‘Oumuamua and the like, have we not seen a single undisputed sign of an extraterrestrial civilization? Why all this nothing, even though the observable universe is vast, even though (as we now know) organic molecules and planets in Goldilocks zones are everywhere, and even though there have been billions of years for aliens someplace to get a technological head start on us, expanding across a galaxy to the point where they’re easily seen?

Traditional answers to this mystery include: maybe the extraterrestrials quickly annihilate themselves in nuclear wars or environmental cataclysms, just like we soon will; maybe the extraterrestrials don’t want to be found (whether out of self-defense or a cosmic Prime Directive); maybe they spend all their time playing video games. Crucially, though, all answers of that sort founder against the realization that, given a million alien civilizations, each perhaps more different from the others than kangaroos are from squid, it would only take one, spreading across a billion light-years and transforming everything to its liking, for us to have noticed it.

Robin’s answer to the puzzle is as simple as it is terrifying. Such civilizations might well exist, he says, but if so, by the time we noticed one, it would already be nearly too late. Robin proposes, plausibly I think, that if you give a technological civilization 10 million or so years—i.e., an eyeblink on cosmological timescales—then either

  1. the civilization wipes itself out, or else
  2. it reaches some relatively quiet steady state, or else
  3. if it’s serious about spreading widely, then it “maxes out” the technology with which to do so, approaching the limits set by physical law.

In cases 1 or 2, the civilization will of course be hard for us to detect, unless it happens to be close by. But what about case 3? There, Robin says, the “civilization” should look from the outside like a sphere expanding at nearly the speed of light, transforming everything in its path.

Now think about it: when could we, on earth, detect such a sphere with our telescopes? Only when the sphere’s thin outer shell had reached the earth—perhaps carrying radio signals from the extraterrestrials’ early history, before their rapid expansion started. By that point, though, the expanding sphere itself would be nearly upon us!

What would happen to us once we were inside the sphere? Who knows? The expanding civilization might obliterate us, it might preserve us as zoo animals, it might merge us into its hive-mind, it might do something else that we can’t imagine, but in any case, detecting the civilization would presumably no longer be the relevant concern!

(Of course, one could also wonder what happens when two of these spheres collide: do they fight it out? do they reach some agreement? do they merge? Whatever the answer, though, it doesn’t matter for Robin’s argument.)

On the view described, there’s only a tiny cosmic window in which a SETI program could be expected to succeed: namely, when the thin surface of the first of these expanding bubbles has just hit us, and when that surface hasn’t yet passed us by. So, given our “selection bias”—meaning, the fact that we apparently haven’t yet been swallowed up by one of the bubbles—it’s no surprise if we don’t right now happen to find ourselves in the tiny detection window!
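To get a feel for just how thin that detection window is, here’s a back-of-envelope sketch (my own illustrative numbers, not anything from Robin’s model): light from a bubble’s pre-expansion history travels at c, while the front itself travels at some fraction v of c, so signals from a bubble at distance d lead the front by only d·(1/v − 1) years.

```python
# Back-of-envelope sketch (hypothetical numbers): how long after first
# detecting a bubble's early radio signals would the front itself arrive?
def detection_window_years(distance_ly: float, v_frac_of_c: float) -> float:
    """Light from distance d (light-years) arrives after d years; a front
    moving at v*c arrives after d/v years. The gap is the window in which
    a SETI program could notice the bubble before being engulfed by it."""
    return distance_ly * (1.0 / v_frac_of_c - 1.0)

for v in (0.5, 0.9, 0.99):
    w = detection_window_years(1e9, v)  # a bubble 1 billion light-years away
    print(f"v = {v:.2f}c -> window ~ {w:.3e} years")
```

Even at a leisurely v = 0.5c the window for a bubble a billion light-years away is about a billion years, but at v = 0.99c it shrinks to roughly ten million years—under a tenth of a percent of cosmic history so far, which is the sense in which the window is “tiny.”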

This basic proposal, it turns out, is not original to Robin. Indeed, an Overcoming Bias reader named Daniel X. Varga pointed out to Robin that he (Daniel) shared the same idea right here—in a Shtetl-Optimized comment thread—back in 2008! I must have read Daniel Varga’s comment then, but (embarrassingly) it didn’t make enough of an impression for me to have remembered it. I probably thought the same as you probably thought while reading this post:

“Sure, whatever. This is an amusing speculation that could make for a fun science-fiction story. Alas, like with virtually every story about extraterrestrials, there’s no good reason to favor this over a hundred other stories that a fertile imagination could just as easily spin. Who the hell knows?”

This is where Robin claims to take things further. Robin would say that he takes them further by developing a mathematical model, and fitting the parameters of the model to the known facts of cosmic history. Read Overcoming Bias, or Robin’s forthcoming paper, if you want to know the details of his model. Personally, I confess I’m less interested in those details than I am in the qualitative points, which (unless I’m mistaken) are easy enough to explain in words.

The key realization is this: when we contemplate the Fermi Paradox, we know more than the mere fact that we look and look and we don’t see any aliens. There are other relevant data points to fit, having to do with the one sample of a technological civilization that we do have.

For starters, there’s the fact that life on earth has been evolving for at least ~3.5 billion years—for most of the time the earth has existed—but life has a mere billion more years to go, until the expanding sun boils away the oceans and makes the earth barely habitable. In other words, at least on this planet, we’re already relatively close to the end. Why should that be?

It’s an excellent fit, Robin says, to a model wherein there are a few incredibly difficult, improbable steps along the way to a technological civilization like ours—steps that might include the origin of life, of multicellular life, of consciousness, of language, of something else—and wherein, having achieved some step, evolution basically just does a random search until it either stumbles onto the next step or else runs out of time.

Of course, given that we’re here to talk about it, we necessarily find ourselves on a planet where all the steps necessary for blog-capable life happen to have succeeded. There might be vastly more planets where evolution got stuck on some earlier step.

But here’s the interesting part: conditioned on all the steps having succeeded, we should find ourselves near the end of the useful lifetime of our planet’s star—simply because the more time is available on a given planet, the better the odds there. I.e., look around the universe and you should find that, on most of the planets where evolution achieves all the steps, it nearly runs out the planet’s clock in doing so. Also, as we look back, we should find the hard steps roughly evenly spaced out, with each one having taken a good fraction of the whole available time. All this is an excellent match for what we see.

OK, but it leads to a second puzzle. Life on earth is at least ~3.5 billion years old, while the observable universe is ~13.7 billion years old. Forget for a moment about the oft-stressed enormity of these two timescales and concentrate on their ratio, which is merely ~4. Life on earth stretches a full quarter of the way back in time to the Big Bang. Even as an adolescent, I remember finding that striking, and not at all what I would’ve guessed a priori. It seemed like obviously a clue to something, if I could only figure out what.

The puzzle is compounded once you realize that, even though the sun will boil the oceans in a billion years (and then die in a few billion more), other stars, primarily dwarf stars, will continue shining brightly for trillions more years. Granted, the dwarf stars don’t seem quite as hospitable to life as sun-like stars, but they do seem somewhat hospitable, and there will be lots of them—indeed, more than of sun-like stars. And they’ll last orders of magnitude longer.

To sum up, our temporal position relative to the lifetime of the sun makes it look as though life on earth was just a lucky draw from a gigantic cosmic Poisson process. By contrast, our position relative to the lifetime of all the stars makes it look as though we arrived crazily, freakishly early—not at all what you’d expect under a random model. So what gives?

Robin contends that all of these facts are explained under his bubble scenario. If we’re to have an experience remotely like the human one, he says, then we have to be relatively close to the beginning of time—since hundreds of billions of years from now, the universe will likely be dominated by near-light-speed expanding spheres of intelligence, and a little upstart civilization like ours would no longer stand a chance. I.e., even though our existence is down to some lucky accidents, and even though those same accidents probably recur throughout the cosmos, we shouldn’t yet see any of the other accidents, since if we did see them, it would already be nearly too late for us.

Robin admits that his account leaves a huge question open: namely, why should our experience have been a “merely human,” “pre-bubble” experience at all? If you buy that these expanding bubbles are coming, it seems likely that there will be trillions of times more sentient experiences inside them than outside. So experiences like ours would be rare and anomalous—like finding yourself at the dawn of human history, with Hammurabi et al., and realizing that almost every interesting thing that will ever happen is still to the future. So Robin simply takes as a brute fact that our experience is “earth-like” or “human-like”; he then tries to explain the other observations from that starting point.

Notice that, in Robin’s scenario, the present epoch of the universe is extremely special: it’s when civilizations are just forming, when perhaps a few of them will achieve technological liftoff, but before one or more of the civilizations has remade the whole of creation for its own purposes. Now is the time when the early intelligent beings like us can still look out and see quadrillions of stars shining to no apparent purpose, just wasting all that nuclear fuel in a near-empty cosmos, waiting for someone to come along and put the energy to good use. In that respect, we’re sort of like the Maoris having just landed in New Zealand, or Bill Gates surveying the microcomputer software industry in 1975. We’re ridiculously lucky. The situation is way out of equilibrium. The golden opportunity in front of us can’t possibly last forever.

If we accept the above, then a major question I had was the role of cosmology. In 1998, astronomers discovered that the present cosmological epoch is special for a completely different reason than the one Robin talks about. Namely, right now is when matter and dark energy contribute roughly similarly to the universe’s energy budget, with ~30% the former and ~70% the latter. Billions of years hence, the universe will become more and more dominated by dark energy. Our observable region will get sparser and sparser, as the dark energy pushes the galaxies further and further away from each other and from us, with more and more galaxies receding past the horizon where we could receive signals from them at the speed of light. (Which means, in particular, that if you want to visit a galaxy a few billion light-years from here, you’d better start out while you still can!)

So here’s my question: is it just a coincidence that the time—right now—when the universe is “there for the taking,” potentially poised between competing spacefaring civilizations, is also the time when it’s poised between matter and dark energy? Note that, in 2007, Bousso et al. tried to give a sophisticated anthropic argument for the value of the cosmological constant Λ, which measures the density of dark energy, and hence the eventual size of the observable universe. See here for my blog post on what they did (“The array size of the universe”). Long story short, for reasons that I explain in the post, it turns out to be essential to their anthropic explanation for Λ that civilizations flourish only (or mainly) in the present epoch, rather than trillions of years in the future. If we had to count civilizations that far into the future, then the calculations would favor values of Λ much smaller than what we actually observe. This, of course, seems to dovetail nicely with Robin’s account.

Let me end with some “practical” consequences of Robin’s scenario, supposing as usual that we take it seriously. The most immediate consequence is that the prospects for SETI are dimmer than you might’ve thought before you’d internalized all this. (Even after having internalized it, I’d still like at least an order of magnitude more resources devoted to SETI than what our civilization currently spares. Robin’s assumptions might be wrong!)

But a second consequence is that, if we want human-originated sentience to spread across the universe, then the sooner we get started the better! Just like Bill Gates in 1975, we should expect that there will soon be competitors out there. Indeed, there are likely competitors out there “already” (where “already” means, let’s say, in the rest frame of the cosmic microwave background)—it’s just that the light from them hasn’t yet reached us. So if we want to determine our own cosmic destiny, rather than having post-singularity extraterrestrials determine it for us, then it’s way past time to get our act together as a species. We might have only a few hundred million more years to do so.

Update: For more discussion of this post, see the SSC Reddit thread. I especially liked a beautiful comment by “Njordsier,” which fills in some important context for the arguments in this post:

Suppose you’re an alien anthropologist that sent a probe to Earth a million years ago, and that probe can send back one high-resolution image of the Earth every hundred years. You’d barely notice humans at first, though they’re there. Then, circa 10,000 years ago (99% of the way into the stream) you begin to see plots of land turned into farms. Houses, then cities, first in a few isolated places in river valleys, then exploding across five or six continents. Walls, roads, aqueducts, castles, fortresses. Four frames before the end of the stream, the collapse of the population on two of the continents as invaders from another continent bring disease. At T-minus three frames, a sudden appearance of farmland and cities on the coasts of those continents. At T-minus two frames, half the continent. At the second to last frame, a roaring interconnected network of roads, cities, farms, including skyscrapers in the cities that were just tiny villages three frames ago. And in the last frame, nearly 80 percent of all wilderness converted to some kind of artifice, and the sky is streaked with the trails of flying machines all over the world.

Civilizations rose and fell, cultures evolved and clashed, and great and terrible men and women performed awesome deeds. But what the alien anthropologist sees is a consistent, rapid, exponential explosion of a species bulldozing everything in its path.

That’s what we’re doing when we talk about the far future, or about hypothetical expansionist aliens, on long time scales. We’re zooming out past the level where you can reason about individuals or cultures, but see the strokes of much longer patterns that emerge from that messy, beautiful chaos that is civilization.

Update (Jan. 31): Reading the reactions here, on Hacker News, and elsewhere underscored for me that a lot of people get off Robin’s train well before it’s even left the station. Such people think of extraterrestrial civilizations as things that you either find or, if you haven’t found one, you just speculate or invent stories about. They’re not even in the category of things that you have any serious hope to reason about. For myself, I’d simply observe that trying to reason about matters far beyond current human experience, based on the microscopic shreds of fact available to us (e.g., about the earth’s spatial and temporal position within the universe), has led to some of our species’ embarrassing failures but also to some of its greatest triumphs. Since even the failures tend to be relatively cheap, I feel like we ought to be “venture capitalists” about such efforts to reason beyond our station, encouraging them collegially and mocking them only gently.

Research (by others) proceeds apace

January 27th, 2021

At age 39, I already feel more often than not like a washed-up has-been in complexity theory and quantum computing research. It’s not intelligence that I feel like I’ve lost, so much as two other necessary ingredients: burning motivation and time. But all is not lost: I still have students and postdocs to guide and inspire! I still have the people who email me every day—journalists, high-school kids, colleagues—asking this and that! Finally, I still have this blog, with which to talk about all the exciting research that others are doing!

Speaking of blogging about research: I know I ought to do more of it, so let me start right now.

  • Last night, Renou et al. posted a striking paper on the arXiv entitled Quantum physics needs complex numbers. One’s immediate reaction to the title might be “well duh … who ever thought it didn’t?” (See this post of mine for a survey of explanations for why quantum mechanics “should have” involved complex numbers.) Renou et al., however, are interested in ruling out a subtler possibility: namely, that our universe is secretly based on a version of quantum mechanics with real amplitudes only, and that it uses extra Hilbert space dimensions that we don’t see in order to simulate complex quantum mechanics. Strictly speaking, such a possibility can never be ruled out, any more than one can rule out the possibility that the universe is a classical computer that simulates quantum mechanics. In the latter case, though, the whole point of Bell’s Theorem is to show that if the universe is secretly classical, then it also needs to be radically nonlocal (relying on faster-than-light communication to coordinate measurement outcomes). Renou et al. claim to show something analogous about real quantum mechanics: there’s an experiment—as it happens, one involving three players and two entangled pairs—for which conventional QM predicts an outcome that can’t be explained using any variant of QM that’s both local and secretly based on real amplitudes. Their experiment seems eminently doable, and I imagine it will be done in short order.
  • A bunch of people from PsiQuantum posted a paper on the arXiv introducing “fusion-based quantum computation” (FBQC), a variant of measurement-based quantum computation (MBQC) and apparently a new approach to fault-tolerance, which the authors say can handle a ~10% rate of lost photons. PsiQuantum is the large, Palo-Alto-based startup trying to build scalable quantum computers based on photonics. They’ve been notoriously secretive, to the point of not having a website. I’m delighted that they’re sharing details of the sort of thing they hope to build; I hope and expect that the FBQC proposal will be evaluated by people more qualified than me.
  • Since this is already on social media: apparently, Marc Lackenby from Oxford will be giving a Zoom talk at UC Davis next week, about a quasipolynomial-time algorithm to decide whether a given knot is the unknot. A preprint doesn’t seem to be available yet, but this is a big deal if correct, on par with Babai’s quasipolynomial-time algorithm for graph isomorphism from four years ago (see this post). I can’t wait to see details! (Not that I’ll understand them well.)
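To make the “secretly real” hypothesis in the first item concrete: a complex amplitude a+bi can be encoded as the 2×2 real matrix [[a, -b], [b, a]], so any complex quantum evolution on d dimensions can be simulated by a real one on 2d dimensions. Here’s a minimal numpy sketch of that standard encoding (my illustration, not code from the paper); the content of Renou et al.’s result is that no such encoding can reproduce the correlations of their three-player network experiment while also staying local.

```python
import numpy as np

def realify_state(psi):
    """Encode a complex state vector as a real vector of twice the dimension."""
    return np.concatenate([psi.real, psi.imag])

def realify_unitary(U):
    """Encode a complex unitary as a real orthogonal matrix of twice the dimension."""
    return np.block([[U.real, -U.imag], [U.imag, U.real]])

# Evolve a qubit by the Hadamard gate in both pictures.
psi = np.array([1, 1j]) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# The 4-dimensional real simulation tracks the 2-dimensional complex evolution exactly.
assert np.allclose(realify_unitary(H) @ realify_state(psi), realify_state(H @ psi))
```

For a single system this simulation is perfect, which is why ruling out real QM requires the subtler multi-party setup.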

Sufficiently amusing that I had no choice

January 21st, 2021

A day to celebrate

January 20th, 2021

The reason I’m celebrating is presumably obvious to all: today is my daughter Lily’s 8th birthday! (She had a tiny Star Wars-themed party, dressed in her Rey costume.)

A second reason to celebrate: yesterday I began teaching (via Zoom, of course) the latest iteration of my graduate course on Quantum Complexity Theory!

A third reason: I’m now scheduled to get my first covid vaccine shot on Monday! (Texas is working through its “Phase 1b,” which includes both the over-65s and those with underlying conditions—in my case, mild type-2 diabetes.) I’d encourage everyone to do as I did: don’t lie to jump the line, but don’t sacrifice your place either. Just follow the stated rules and get vaccinated the first microsecond you can, and urge all your friends and loved ones to do the same. A crush of demand is actually good if it encourages the providers to expand their hours (they’re taking off weekends! they took off MLK Day!) and not to waste a single dose.

Anyway, people can use this thread to talk about whatever they like, but one thing that would interest me especially is readers’ experiences with vaccination: if you’ve gotten one by now, how hard did you have to look for an appointment, how orderly or chaotic was the process where you live, and what advice can you offer?

Incidentally, to the several commenters on this blog who expressed absolute certainty (as recently as yesterday) that Trump would reverse the election result and be inaugurated instead of Biden, and who confidently accused the rest of us of living in a manufactured media bubble that prevented us from seeing that: I respect that, whatever else is said about you, no one can ever again accuse you of being fair-weather friends!

Congratulations to the new President! There are difficult months ahead, but today the arc of the universe bent slightly toward sanity and goodness.

Update (Jan 21): WOOHOO! Yet another reason to celebrate: Scott Alexander is finally back in business, now blogging at Astral Codex Ten on Substack.

To all Trumpists who comment on this blog

January 6th, 2021

The violent insurrection now unfolding in Washington DC is precisely the thing you called me nuts for warning about since 2016, the thing you dismissed as “Trump Derangement Syndrome.” Crazy me, huh, always seeing brownshirts around the corner? And you called the other side violent anarchists? This is all your doing. So own it. Wallow in it. May you live the rest of your lives in shame.

Update (Jan. 7): As someone who hasn’t always agreed with BLM’s slogans and tactics, I viewed the stunning passivity of the police yesterday against white insurrectionists in the Capitol as one of the strongest arguments imaginable for BLM’s main contentions.

Distribute the vaccines NOW!

January 2nd, 2021

My last post about covid vaccines felt like shouting uselessly into the void … at least until Patrick Collison, the cofounder of Stripe and a wonderful friend, massively signal-boosted the post by tweeting it. This business is of such life-and-death urgency right now, and a shift in attitude or a hardening of resolve by just a few people reading could have such an outsized effect, that with apologies to anyone wanting me to return to my math/CS/physics lane, I feel like a second post on the same topic is called for.

Here’s my main point for today (as you might have noticed, I’ve changed the tagline of this entire blog accordingly):

Reasonable people can disagree about whether vaccination could have, or should have, started much earlier. But now that we in the US have painstakingly approved two vaccines, we should all agree about the urgent need to get millions of doses into people’s arms before they spoil! Sure, better the elderly than the young, better essential than inessential workers—but much more importantly, better today than tomorrow, and better anyone than no one!

Israel, which didn’t do especially well in earlier stages of the pandemic, is now putting the rest of the planet to shame with vaccinations. What Dana and I hear from our friends and relatives there confirms what you can read here, here, and elsewhere. Rabin Square in Tel Aviv is now a huge vaccination field site. Vaccinations are now proceeding 24/7, even on Shabbat—something the ultra-Orthodox rabbis are grudgingly tolerating under the doctrine of “pikuach nefesh” (i.e., saving a life overrides almost every other religious obligation). Israelis are receiving texts at all hours telling them when it’s their turn and where to go. Apparently, after the nurses are finished with everyone who had appointments, rather than waste whatever already-thawed supply is left, they simply go into the street and offer the extra doses to anyone passing by.

Contrast that with the historic fiasco—yes, another historic fiasco—now unfolding in the US. The Trump administration had pledged to administer 20 million vaccines (well, Trump originally said 100 million) by the end of 2020. Instead, fewer than three million were administered, with the already-glacial pace slowing even further over the holidays. Unbelievably, millions of doses are on track to spoil this month, before they can be administered. The bottleneck is now not manufacturing, it’s not supply, it’s just pure bureaucratic dysfunction and chaos, lack of funding and staff, and a stone-faced unwillingness by governors to deviate from harebrained “plans” and “guidelines” even with their populations’ survival at stake.

Famously, the CDC urged that essential workers get vaccinated before the elderly, since even though their own modeling predicted that many more people from all ethnic groups would die that way, at least the deaths would be more equitably distributed. While there are some good arguments to prioritize essential workers, an outcry then led to the CDC partially backtracking, and to many states just making up their own guidelines. But we’re now, for real, headed for a scenario where none of these moral-philosophy debates turn out to matter, since the vaccines will simply spoil in freezers (!!!) while the medical system struggles to comply with the Byzantine rules about who gets them first.

While I’d obviously never advocate such a thing, one wonders whether there’s an idealistic medical worker, somewhere in the US, who’s willing to risk jail for vaccinating people without approval, using supply that would otherwise be wasted. If anything could galvanize this sad and declining nation to move faster, maybe it’s that.


In my last post, I invited people to explain to me where I went wrong in my naïve, simplistic, doofus belief that, were our civilization still capable of “WWII” levels of competence, flexibility, and calculated risk-tolerance, most of the world could have already been vaccinated by now. In the rest of this post, I’d like to list the eight most important counterarguments to that position that commenters offered (at least, those that I hadn’t already anticipated in the post itself), together with my brief responses to them.

  1. Faster approval wouldn’t have helped, since the limiting factor was just the time needed to ramp up the supply. As the first part of this post discussed, ironically supply is not now the limiting factor, and approval even a month or two earlier could’ve provided precious time to iron out the massive problems in distribution. More broadly, though, what’s becoming obvious is that we needed faster everything: testing, approval, manufacturing, and distribution.
  2. The real risk, with vaccines, is long-term side effects, ones that might manifest only after years. What I don’t get is, if people genuinely believe this, then why are they OK with having approved the vaccines last month? Why shouldn’t we have waited until 2024, or maybe 2040? By that point, those of us who were still alive could take the covid vaccine with real confidence, at least that the dreaded side effects would be unlikely to manifest before 2060.
  3. Much like with Amdahl’s Law, there are limits to how much more money could’ve sped up vaccine manufacturing. My problem is that, while this is undoubtedly true, I see no indication that we were anywhere close to those limits—or indeed, that the paltry ~$9 billion the US spent on covid vaccines was the output of any rational cost/benefit calculation. It’s like: suppose an enemy army had invaded the US mainland, slaughtered 330,000 people, and shut down much of the economy. Can you imagine Congress responding by giving the Pentagon a 1.3% budget increase to fight back, reasoning that any more would run up against Amdahl’s Law? That’s how much $9 billion is.
  4. The old, inactivated-virus vaccines often took years to develop, so spending years to test them as well made a lot more sense. This is undoubtedly true, but is not a counterargument. It’s time to rethink the whole vaccine approval process for the era of programmable mRNA, which is also the era of pandemics that can spread around the world in months.
  5. Human challenge trials wouldn’t have provided much information, because you can’t do challenge trials with old or sick people, and because covid spread so widely that normal Phase III trials were perfectly informative. Actually, 1DaySooner had plenty of elderly volunteers and volunteers with preexisting conditions. It bothers me how the impossibility of using those volunteers is treated like a law of physics, rather than what it is: another non-obvious moral tradeoff. Also, compared to Phase III trials, it looks like challenge trials would’ve bought us at least a couple months and maybe a half-million lives.
  6. Doctors can’t think like utilitarians—e.g., risking hundreds of lives in challenge trials in order to save millions of lives with a vaccine—because it’s a slippery slope from there to cutting up one person in order to save ten with their organs. Well, I think the informed consent of the challenge trial participants is a pretty important factor here! As is their >99% chance of survival. Look, anyone who works in public health makes utilitarian tradeoffs; the question is whether they’re good or bad ones. As someone who lost most of his extended family in the Holocaust, my rule of thumb is that, if you’re worrying every second about whether you might become Dr. Mengele, that’s a pretty good sign that you won’t become Dr. Mengele.
  7. If a hastily-approved vaccine turned out to be ineffective or dangerous, it could diminish the public’s trust in all future vaccines. Yes, of course there’s such a tradeoff, but I want you to notice the immense irony: this argument effectively says we can condemn millions to die right now, out of concern for hypothetical other millions in the future. And yet some of the people making this argument will then turn around and call me a callous utilitarian!
  8. I’m suffering from hindsight bias: it might be clear now that vaccine approval and distribution should’ve happened a lot faster, but experts had no way of knowing that in the spring. Here’s my post from May 1, entitled “Vaccine challenge trials NOW!” I was encouraged by the many others who said similar things still earlier. Was it just a lucky gamble? Had we been allowed to get vaccinated then, at least we could’ve put our bloodstreams where our mouths were, and profited from the gamble! More seriously, I sympathize with the decision-makers who’d be on the hook had an early vaccine rollout proved disastrous. But if we don’t learn a lesson from this, and ready ourselves for the next pandemic with an mRNA platform that can be customized, tested, and injected into people’s arms within at most 2-3 months, we’ll really have no excuse.
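On point 3: Amdahl’s Law says that if only a fraction p of a process can be accelerated, by a factor s, the overall speedup is 1/((1-p) + p/s), which is capped at 1/(1-p) no matter how large s gets. A toy sketch with made-up numbers, just to fix the concept:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is accelerated by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even unbounded acceleration of 90% of a pipeline caps the total speedup at 10x...
print(amdahl_speedup(0.9, 1e9))  # ~10.0
# ...but a mere 2x acceleration of that 90% is nowhere near the cap:
print(amdahl_speedup(0.9, 2.0))  # ~1.8
```

The argument above is precisely that vaccine funding was so far below any such saturation point that the analogy never had a chance to bite.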

My vaccine crackpottery: a confession

December 31st, 2020

I hope everyone is enjoying a New Year’s as festive as the circumstances allow!

I’ve heard from a bunch of you awaiting my next post on the continuum hypothesis, and it’s a-comin’, but I confess the new, faster-spreading covid variant is giving me the same sinking feeling that Covid 1.0 gave me in late February, making it really hard to think about the eternal. (For perspectives on Covid 2.0 from individuals who acquitted themselves well with their early warnings about Covid 1.0, see for example this by Jacob Falkovich, or this by Zvi Mowshowitz.)

So on that note: do you hold any opinions, on factual matters of practical importance, that most everyone around you sharply disagrees with? Opinions that those who you respect consider ignorant, naïve, imprudent, and well outside your sphere of expertise? Opinions that, nevertheless, you simply continue to hold, because you’ve learned that, unless and until someone shows you the light, you can no more will yourself to change what you think about the matter than change your blood type?

I try to have as few such opinions as possible. Having run Shtetl-Optimized for fifteen years, I’m acutely aware of the success rate of those autodidacts who think they’ve solved P versus NP or quantum gravity or whatever. It’s basically zero out of hundreds—and why wouldn’t it be?

And yet there’s one issue where I feel myself in the unhappy epistemic situation of those amateurs, spamming the professors in all-caps. So, OK, here it is:

I think that, in a well-run civilization, the first covid vaccines would’ve been tested and approved by around March or April 2020, while mass-manufacturing simultaneously ramped up with trillions of dollars’ investment. I think almost everyone on earth could have, and should have, already been vaccinated by now. I think a faster, “WWII-style” approach would’ve saved millions of lives, prevented economic destruction, and carried negligible risks compared to its benefits. I think this will be clear to future generations, who’ll write PhD theses exploring how it was possible that we invented multiple effective covid vaccines in mere days or weeks, but then simply sat on those vaccines for a year, ticking off boxes called “Phase I,” “Phase II,” etc. while civilization hung in the balance.

I’ve said similar things, on this blog and elsewhere, since the beginning of the pandemic, but part of me kept expecting events to teach me why I was wrong. Instead events—including the staggering cost of delay, the spectacular failures of institutional authorities to adapt to the scientific realities of covid, and the long-awaited finding that all the major vaccines safely work (some better than others), just like the experts predicted back in February—all this only made me more confident of my original, stupid and naïve position.

I’m saying all this—clearly enough that no one will misunderstand—but I’m also scared to say it. I’m scared because it sounds too much like colossal ingratitude, like Monday-morning quarterbacking of one of the great heroic achievements of our era by someone who played no part in it.

Let’s be clear: the ~11 months that it took to get from sequencing the novel coronavirus, to approving and mass-manufacturing vaccines, is a world record, soundly beating the previous record of 4 years. Nobel Prizes and billions of dollars are the least that those who made it happen deserve. Eternal praise is especially due to those like Katalin Karikó, who risked their careers in the decades before covid to do the basic research on mRNA delivery that made the development of these mRNA vaccines so blindingly fast.

Furthermore, I could easily believe that there’s no one agent—neither Pfizer nor BioNTech nor Moderna, neither the CDC nor FDA nor other health or regulatory agencies, neither Bill Gates nor Moncef Slaoui—who could’ve unilaterally sped things up very much. If one of them tried, they would’ve simply been ostracized by the other parts of the system, and they probably all understood that. It might have taken a whole different civilization, with different attitudes about utility and risk.

And yet the fact remains that, historic though it was, a one-to-two-year turnaround time wasn’t nearly good enough. Especially once we factor in the faster-spreading variant, by the time we’ve vaccinated everyone, we’ll already be a large fraction of the way to herd immunity and to the vaccine losing its purpose. For all the advances in civilization, from believing in demonic spirits all the way to understanding mRNA at a machine-code level of detail, covid is running wild much like it would have back in the Middle Ages—partly, yes, because modern transportation helps it spread, but partly also because our political and regulatory and public-health tools have lagged so breathtakingly behind our knowledge of molecular biology.

What could’ve been done faster? For starters, as I said back in March, we could’ve had human challenge trials with willing volunteers, of whom there were tens of thousands. We could’ve started mass-manufacturing months earlier, with funding commensurate with the problem’s scale (think trillions, not billions). Today, we could give as many people as possible the first doses (which apparently already provide something like ~80% protection) before circling back to give the second doses (which boost the protection as high as ~95%). We could distribute the vaccines that are now sitting in warehouses, spoiling, while people in the distribution chain take off for the holidays—but that’s such low-hanging fruit that it feels unsporting even to mention it.
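The first-doses-first proposal above reduces to one line of arithmetic. Here’s a sketch using the rough ~80% and ~95% efficacy figures (illustrative numbers only, ignoring second-order effects like waning protection):

```python
def expected_protected(doses, efficacy_one=0.80, efficacy_two=0.95):
    """Expected people protected: two doses each for half as many people,
    versus one dose each for twice as many."""
    return (doses / 2) * efficacy_two, doses * efficacy_one

two_dose, one_dose = expected_protected(1_000_000)
# A million doses: ~475,000 protected under two-doses-first,
# ~800,000 under first-doses-first (before anyone even circles back).
```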

Let me now respond to three counterarguments that would surely come up in the comments if I didn’t address them.

  1. The Argument from Actual Risk. Every time this subject arises, someone patiently explains to me that, since a vaccine gets administered to billions of healthy people, the standards for its safety and efficacy need to be even higher than they are for ordinary medicines. Of course that’s true, and it strikes me as an excellent reason not to inject people with a completely untested vaccine! All I ask is that the people who are, or could be, harmed by a faulty vaccine, be weighed on the same moral scale as the people harmed by covid itself. As an example, we know that the Phase III clinical trials were repeatedly halted for days or weeks because of a single participant developing strange symptoms—often a participant who’d received the placebo rather than the actual vaccine! That person matters. Any future vaccine recipient who might develop similar symptoms matters. But the 10,000 people who die of covid every single day we delay, along with the hundreds of millions more impoverished, kept out of school, etc., matter equally. If we threw them all onto the same utilitarian scale, would we be making the same tradeoffs that we are now? I feel like the question answers itself.
  2. The Argument from Perceived Risk. Even with all the testing that’s been done, somewhere between 16% and 40% of Americans (depending on which poll you believe) say that they’ll refuse to get a covid vaccine, often because of anti-vaxx conspiracy theories. How much higher would the percentage be had the vaccines been rushed out in a month or two? And of course, if not enough people get vaccinated, then R0 remains above 1 and the public-health campaign is a failure. In this way of thinking, we need three phases of clinical trials the same way we need everyone to take off their shoes at airport security: it might not prevent a single terrorist, but the masses will be too scared to get on the planes if we don’t. To me, this (if true) only underscores my broader point, that the year-long delay in getting vaccines out represents a failure of our entire civilization, rather than a failure of any one agent. But also: people’s membership in the pro- or anti-vaxx camps is not static. The percentage saying they’ll get a covid vaccine seems to have already gone up, as a formerly abstract question becomes a stark choice between wallowing in delusions and getting a deadly disease, or accepting reality and not getting it. So while the Phase III trials were still underway—when the vaccines were already known to be safe, and experts thought it much more likely than not that they’d work—would it have been such a disaster to let Pfizer and Moderna sell the vaccines, for a hefty profit, to those who wanted them? With the hope that, just like with the iPhone or any other successful consumer product, satisfied early adopters would inspire the more reticent to get in line too?
  3. The Argument from Trump. Now for the most awkward counterargument, which I’d like to address head-on rather than dodge. If the vaccines had been approved faster in the US, it would’ve looked to many like Trump deserved credit for it, and he might well have been reelected. And devastating though covid has been, Trump is plausibly worse! Here’s my response: Trump has the mentality of a toddler, albeit with curiosity swapped out for cruelty and vindictiveness. His and his cronies’ impulsivity, self-centeredness, and incompetence are likely responsible for at least ~200,000 of the 330,000 Americans now dead from covid. But, yes, reversing his previous anti-vaxx stance, Trump did say that he wanted to see a covid vaccine in months, just like I’ve said. Does it make me uncomfortable to have America’s worst president in my “camp”? Only a little, because I have no problem admitting that sometimes toddlers are right and experts are wrong. The solution, I’d say, is not to put toddlers in charge of the government! As should be obvious by now—indeed, as should’ve been obvious back in 2016—that solution has some exceedingly severe downsides. The solution, rather, is to work for a world where experts are unafraid to speak bluntly, so that it never falls to a mental toddler to say what the experts can’t say without jeopardizing their careers.
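On point 2’s “R0 remains above 1”: in the standard toy model, vaccinating a fraction u of the population with a vaccine of efficacy e multiplies the reproduction number by (1 - u·e), so the critical coverage is (1 - 1/R0)/e. A quick sketch with illustrative numbers (not estimates for covid specifically):

```python
def effective_R(R0, uptake, efficacy):
    """Reproduction number after a fraction `uptake` is vaccinated."""
    return R0 * (1 - uptake * efficacy)

def critical_uptake(R0, efficacy):
    """Coverage needed to push the effective reproduction number below 1."""
    return (1 - 1 / R0) / efficacy

# With R0 = 3 and a 95%-effective vaccine, ~70% coverage is the threshold:
print(critical_uptake(3, 0.95))    # ~0.70
# At 60% uptake the epidemic still grows:
print(effective_R(3, 0.60, 0.95))  # ~1.29
```

This is why the polling numbers on vaccine refusal carry real epidemiological weight, not just rhetorical weight.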

Anyway, despite everything I’ve written, considerations of Aumann’s Agreement Theorem still lead me to believe there’s an excellent chance that I’m wrong, and the vaccines couldn’t realistically have been rolled out any faster. The trouble is, I don’t understand why. And I don’t understand why compressing this process, from a year or two to at most a month or two, shouldn’t be civilization’s most urgent priority ahead of the next pandemic. So go ahead, explain it to me! I’ll be eternally grateful to whoever makes me retract this post in shame.

Update (Jan. 1, 2021): If you want a sense of the on-the-ground realities of administering the vaccine in the US, check out this long post by Zvi Mowshowitz. Briefly, it looks like in my post, I gave those in charge way too much benefit of the doubt (!!). The Trump administration pledged to administer 20 million vaccines by the end of 2020; instead it administered fewer than 3 million. Crucially, this is not because of any problem with manufacturing or supply, but just because of pure bureaucratic blank-facedness. Incredibly, even as the pandemic rages, most of the vaccines are sitting in storage, at severe risk of spoiling … and officials’ primary concern is not to administer the precious doses, but just to make sure no one gets a dose “out of turn.” In contrast to Israel, where they’re now administering vaccines 24/7, including on Shabbat, with the goal being to get through the entire population as quickly as possible, in the US they’re moving at a snail’s pace and took off for the holidays. In Wisconsin, a pharmacist intentionally spoiled hundreds of doses; in West Virginia, they mistakenly gave antibody treatments instead of vaccines. There are no longer any terms to understand what’s happening other than those of black comedy.

The case for moving to a red state

December 22nd, 2020

Update (Dec. 23): This post quickly attracted many of the most … colorful comments in this blog’s 15-year history. My moderation queue is overflowing right now with “gas the kikes,” “[f-word] [n-words],” “race war now,” “kikes deserve to burn in hell,” “a world without [n-words],” “the day of the rope approaches,” and countless similar contributions. One commenter focused on how hilarious he found my romantic difficulties earlier in life.

The puzzle, for me, is that I’d spent years denouncing Trump’s gleeful destruction of the country that I grew up believing in, using the strongest language I could muster. So why am I only now getting all the hate-spam?

Then a possible explanation hit me: namely, the sort of person who’d leave such comments is utterly impervious to moral condemnation. The only thing such a person cares about—indeed, as it turns out, feels a volcanic need to shout down—is someone articulating an actual plausible path to removing his resentment-fueled minority from power. If this is right, then I’m proud to have hit a nerve. –SA


  1. The US is now a failed democracy, with a president who’s considering declaring martial law to avoid conceding a lost election, and with the majority of his party eager to follow him arbitrarily far into the abyss. Even assuming, as I do, that the immediate putsch will fail, the Republic will not magically return to normal.
  2. The survival of Enlightenment values on Earth now depends, in large part, on the total electoral humiliation and defeat of the forces that enabled Trump—something that the last election failed to deliver.
  3. Alas, ever since it absorbed the Southern racists in the 1960s, the Republican Party has maintained a grip on power wholly out of proportion to its numbers through anti-democratic means. The most durable of these means are built into the Constitution itself: the Electoral College, the overrepresentation of sparsely-populated rural states in the Senate, and the gerrymandering of Congressional districts. Every effort to fix these anachronisms, whether by legislation or Constitutional amendment, has been blocked for generations. It’s fantasy to imagine the beneficiaries of these unjust advantages ever voluntarily giving them up.
  4. Accordingly, the survival of the nation might come down to whether enough Americans, in deep-blue areas like California and New York and Massachusetts, are willing to pick up and move to where their votes actually count.
  5. The pandemic has awoken tens of millions of people to the actual practical feasibility of working from home or in a different time zone from their employer. The culture has finally caught up to the abridgment of distance that the Internet, smartphones, and videoconferencing achieved well over a decade ago.
  6. Still, one doesn’t expect Brooklynites to settle by the thousands on remote mountaintops. And even if they did, there are many remote mountaintops, so the transplants’ power could be diluted to near nothing. Better for the transplants to concentrate themselves in a few Schelling points: ideally, cities where they could both swing the national electoral calculus and actually want to live.
  7. There’s been a spate of recent articles about the possible exodus of tech companies and professionals from the Bay Area, because of whatever combination of sky-high rents, NIMBYism, taxes, mismanagement, wildfires, blackouts, and the pandemic having removed the once-overwhelming reasons to be in the Bay. Oft-mentioned alternatives include Miami, Denver, and of course my own adopted hometown of Austin, TX, where Elon Musk and Oracle just announced they’re moving.
  8. If you were trying to optimize your environment for urban Blue-Tribeyness—indie music, craft beer, ironic tattoos, Bernie Sanders yard signs, etc. etc.—but subject to living in an important red or purple state, where your vote could plausibly contribute to a historic political realignment of the US—then you couldn’t do much better than Austin. Where else is in the running? Atlanta, Houston, San Antonio, Pittsburgh?
  9. It’s true that Texas is the state of Ken Paxton, the corrupt and unhinged Attorney General who unsuccessfully petitioned the US Supreme Court to overturn Trump’s election loss. But it’s also the state of MD Anderson, often considered the best oncology center on earth, and of Steven Weinberg, possibly the greatest living physicist. It’s where the spike proteins of both the Pfizer and Moderna covid vaccines were developed. It’s where Sheldon Cooper grew up—alright, he’s fictional, but I’ve worked with undergrads at UT Austin who almost could’ve been Sheldon. Like the US as a whole, the state has potential.
  10. Accelerating the mass migration of blue Americans to cities like Austin isn’t only good for the country and the world. The New Yorkers and San Franciscans left behind will thank the migrants for lower rents!
  11. But won’t climate change make Texas a living hell? Alas, as recent wildfires and hurricanes remind us, there aren’t many places on earth that climate change won’t soon make various shades of hell. At least Austin, like many red locales, is far inland. For the summers, there are lots of swimming pools and lakes.
  12. If Austin gets overrun by Silicon Valley refugees, won’t they recreate whatever dysfunctional conditions caused them to flee Silicon Valley in the first place? Maybe, eventually, but it would take quite a while. One problem at a time! And the “problems of Silicon Valley” are problems most places should desperately want.
  13. Is Texas winnable—or is a blue Texas like controlled nuclear fusion, forever a decade or two in the future? Well, Trump’s 6-point margin in Texas this November, 3 points less than his margin in 2016, amounted to 630,000 votes out of 11.3 million cast. Meanwhile, net migration to Texas over the past decade included 356,000 to Austin (growing its population by 20%), 687,000 to Dallas, 603,000 to Houston, 260,000 to San Antonio. Let’s say we want two million more transplants. The question is not whether they’re going to arrive but at what rate.
  14. Can the cities of Texas accommodate two million more people? Well, traffic will get worse, rents will get higher … but the answer is an unequivocal yes. Land, Texas has.
  15. Do the tech workers who I’d like to relocate even vote blue? Given the unremitting scorn that the woke press now heaps on “racist, sexist, greedy Silicon Valley techbros,” it can be easy to forget this, but the answer to the question is: yes, overwhelmingly, they do. Mountain View, CA, for example, went 83% Biden and only 15% Trump in November.
  16. Even if everything I’ve said is obvious, in order for the Great Red-State Tech-Worker Migration to happen at the rate I want, it needs to become common knowledge that it’s happening—not merely known but known to be known, and so forth. Closely related, it needs to become a serious status symbol for any blue-triber to relocate to a contested state. (“You’re moving to Georgia to help save the Republic? And you’ll be able to afford a four-bedroom house? I’m so jealous!”)
  17. This has been the real purpose of this post: to make it clear that, if you help settle the wild frontier like my family did, then a tiny bit of the unattainable coolness of a stuttering quantum complexity theory blogger/professor could rub off on you.
  18. Think about it this way. Many of our grandparents gave their lives to save the world from fascism. Would you have done the same in their place? OK now, what if you didn’t have to lose your life: you only had to live in Austin or Miami?
  19. If this post plays a role in any like-minded reader’s decision to move to Austin, then once covid is over, they should let me know, to redeem a personal welcome celebration from me and Dana. We’ll throw some extra brisket on the barbie.

Chinese BosonSampling experiment: the gloves are off

December 16th, 2020

Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn’t google, from writing to tell me how disappointed they were by my silence!

The reality, though, is that a lot has happened since the original announcement, so it’s way past time for an update.

I. The Quest to Spoof

Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It’s important to understand that this is exactly how the process is supposed to work: as I’ve often stressed, a quantum supremacy claim is credible only if it’s open to the community to refute and if no one can. It’s also important to understand that, for reasons we’ll go into, there’s a decent chance that people will succeed in simulating the new experiment classically, although they haven’t yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it’s already there.

Part of me feels guilty that, as one of the reviewers on the Science paper—albeit one stressed and harried by kids and covid—it’s now clear that I didn’t exercise the amount of diligence that I could have, in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can’t be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn’t need to go through a journal’s editorial system.

Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google’s result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I’ve learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema.

Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. At least with “Fock state” BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron.

Incidentally, if anything’s become clear over the last two weeks, it’s that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject’s literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn’t yet been analyzed in the relevant regime, though there’s been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC’s case, they’re ~50 and ~100 respectively). It wouldn’t do to reject USTC’s claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like n^k. As theoretical computer scientists, Kalai and Kindler didn’t care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does.
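To make the lowest rung of that hierarchy concrete, here's a minimal Python sketch of a k=1 spoofer (the 4-mode toy interferometer and all names here are my own illustration, not the USTC setup): treating the photons as classical distinguishable particles means routing each one through the network independently, with no interference at all.

```python
import numpy as np

def sample_distinguishable(U, input_modes, shots=5, seed=0):
    """k=1 level of the Kalai-Kindler hierarchy: treat the n photons as
    classical distinguishable particles.  A photon entering mode i lands
    in output mode j independently, with probability |U[j, i]|^2."""
    rng = np.random.default_rng(seed)
    m = U.shape[0]
    probs = np.abs(U) ** 2          # column i = output distribution for a photon in mode i
    samples = []
    for _ in range(shots):
        counts = np.zeros(m, dtype=int)
        for i in input_modes:       # route each photon independently
            col = probs[:, i] / probs[:, i].sum()   # renormalize against float error
            counts[rng.choice(m, p=col)] += 1
        samples.append(tuple(counts))
    return samples

# Toy example: a Haar-ish random 4-mode interferometer, photons in modes 0 and 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)              # unitary via QR decomposition
print(sample_distinguishable(U, input_modes=[0, 1]))
```

Each sample is a tuple of photon counts per output mode, always summing to the number of input photons; what's missing, by construction, is every interference term that makes real BosonSampling hard.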

In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they’ve ruled out the possibility that the k=2 level can explain their results either. We’re now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil’s 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I’ve always implored the skeptics to do.

II. BosonSampling vs. Random Circuit Sampling: A Tale of HOG and CHOG and LXEB

There’s a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn’t that be solely the experimenters’ job?

To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called “Linear Cross-Entropy” (LXEB), closely related to what Lijie Chen and I called “Heavy Output Generation” (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that’s even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google’s chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn’t going to work against RCS and Linear XEB, with no computational assumption needed.)

So then why couldn’t the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called “HOG,” although it’s different from my and Lijie Chen’s HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang’s, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you’d get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results.

The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler’s work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled “BosonSampling Is Far From Uniform.” Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don’t know exactly where that ceiling is, we can’t tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one.

This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might’ve given the issue some more attention!

III. Fock vs. Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.
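For concreteness, here's a tiny Python illustration of that permanent connection (the brute-force permanent and the 4-mode unitary are my own toy example): for the input |1,1,0,0⟩, the amplitude of a collision-free output with one photon each in modes 2 and 3 is the permanent of the 2×2 submatrix of U on those rows and the two input columns.

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Brute-force permanent: like the determinant, but with no signs.
    Fine for the tiny matrices here (the permanent is #P-hard in general)."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

# A random 4-mode interferometer; input photons in modes 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)

# Amplitude for one photon each in output modes 2 and 3:
amp = permanent(U[np.ix_([2, 3], [0, 1])])
print(abs(amp) ** 2)    # the probability of that output configuration
```

Scaling up, each amplitude of an n-photon experiment is the permanent of an n×n submatrix of U, which is why sampling from this distribution is tied to the hardness of the permanent.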

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available “Spontaneous Parametric Down-Conversion” (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle, $$ where α_n decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there’s also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.

OK, so what’s Gaussian BosonSampling? It’s simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary “Gaussian state”: for those in the know, a state that’s exponential in some quadratic polynomial in the creation operators. If there are m modes, then such a state requires ~m^2 independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the “Torontonian,” which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have “threshold detectors” (which decide whether one or more photons are present in a given mode), rather than “number-resolving detectors” (which count how many photons are present).
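To see the "perfect matchings" picture in code, here's a brute-force Python sketch of the Hafnian (purely illustrative; real GBS work uses far faster algorithms). It also checks the standard fact that the permanent is a special case of the Hafnian, via a bipartite block matrix.

```python
import numpy as np

def hafnian(A):
    """Brute-force Hafnian via the recursion
    haf(A) = sum_j A[0, j] * haf(A with rows/cols 0 and j deleted).
    For a 0/1 adjacency matrix, this counts the perfect matchings."""
    n = A.shape[0]
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += A[0, j] * hafnian(A[np.ix_(rest, rest)])
    return total

# The 4-cycle has exactly 2 perfect matchings.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(hafnian(C4))    # -> 2

# Permanent as a special case: per(B) = haf([[0, B], [B^T, 0]]),
# i.e. matchings of the bipartite graph with biadjacency matrix B.
B = np.arange(1, 5).reshape(2, 2)           # [[1, 2], [3, 4]], per(B) = 1*4 + 2*3 = 10
block = np.block([[np.zeros((2, 2)), B],
                  [B.T, np.zeros((2, 2))]])
print(hafnian(block))   # -> 10.0
```

The bipartite embedding in the second example is also why GBS probabilities generalize, rather than merely rename, the permanents of FBS.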

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn’t follow. Why not? Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians, doesn’t mean that a random Hafnian is as hard as a random permanent.

Nevertheless, I think it’s very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or “Torontonians”), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alan Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of “molecular vibronic spectra,” also known as “Franck-Condon profiles,” whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these “vibronic spectra” seemed to be an example. But while it’s often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that’s not the only way to estimate physical quantities! And worryingly, in all the other examples we’d seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception.

In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted sum like $$ w_1 s_1 + \cdots + w_m s_m, $$ where w_1,…,w_m are fixed real numbers, and (s_1,…,s_m) is a possible outcome of the BosonSampling experiment, s_i representing the number of photons observed in mode i. Alas, while it takes some work, it turns out that Gurvits’s classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$ w_1 s_1 + \cdots + w_m s_m, $$ rather than merely additive approximations to the values—but since we’d still need to sort those sampled values into coarse “bins” in order to compute a histogram, it’s not clear why that additional precision would ever be of chemical interest.
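For readers who want to see the shape of the task, here's a minimal Python sketch of the binning step; to keep it self-contained I use made-up Poisson "samples" as a placeholder for real GBS outcomes, and made-up weights standing in for the vibrational-mode data, so nothing here is actual chemistry.

```python
import numpy as np

def vibronic_histogram(samples, w, bins):
    """Bin the weighted sums w·s over GBS-style samples s.  This coarse
    histogram is (roughly) the object the vibronic-spectra proposal computes."""
    values = [np.dot(w, s) for s in samples]
    hist, edges = np.histogram(values, bins=bins)
    return hist / len(samples), edges

rng = np.random.default_rng(0)
w = np.array([0.5, 1.0, 2.0])                     # stand-ins for mode weights
samples = rng.poisson(lam=0.7, size=(1000, 3))    # placeholder, NOT real GBS output
hist, edges = vibronic_histogram(samples, w, bins=8)
print(hist)
```

The point of the classical argument above is that these binned frequencies can be estimated to additive precision without ever drawing the samples, which is exactly what Gurvits-style algorithms provide.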

This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would’ve provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended.

V. Application to Finding Dense Subgraphs?

A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.)

This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph’s density! Alas, Xanadu’s Juan Miguel Arrazola himself recently told me that there’s a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues!
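To make the Xanadu observation concrete, here's a brute-force classical sketch of the distribution in question (the toy graph and all names are my own; this is neither a GBS device nor the cubic-time classical algorithm Arrazola mentioned): pick a size-k vertex subset with probability proportional to the square of its number of perfect matchings.

```python
import numpy as np
from itertools import combinations

def perfect_matchings(A, verts):
    """Count perfect matchings of the subgraph induced on `verts`
    (brute-force recursion; fine for toy sizes)."""
    if not verts:
        return 1
    v, rest = verts[0], verts[1:]
    return sum(A[v, u] * perfect_matchings(A, [x for x in rest if x != u])
               for u in rest)

def sample_dense_subgraph(A, k, rng):
    """Sample a size-k subgraph with probability proportional to the SQUARE
    of its number of perfect matchings -- the distribution the Xanadu
    proposal associates with a GBS device, here realized classically."""
    n = A.shape[0]
    subsets = list(combinations(range(n), k))
    weights = np.array([perfect_matchings(A, list(S)) ** 2 for S in subsets],
                       dtype=float)
    return subsets[rng.choice(len(subsets), p=weights / weights.sum())]

# Toy graph: a K4 on vertices 0..3, plus an isolated vertex 4.
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]:
    A[i, j] = A[j, i] = 1
rng = np.random.default_rng(0)
print(sample_dense_subgraph(A, 4, rng))   # -> (0, 1, 2, 3): the only size-4
                                          #    subset with any perfect matchings
```

Since matching counts correlate (imperfectly) with density, the sampled subsets can seed a classical local search; the catch, as noted above, is that the sampling itself turns out to be classically cheap.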


And that’s all for now! I’m grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I’ll post updates as I learn more!