Archive for the ‘Procrastination’ Category

We Are the God of the Gaps (a little poem)

Tuesday, July 5th, 2022

When the machines outperform us on every goal for which performance can be quantified,

When the machines outpredict us on all events whose probabilities are meaningful,

When they not only prove better theorems and build better bridges, but write better Shakespeare than Shakespeare and better Beatles than the Beatles,

All that will be left to us is the ill-defined and unquantifiable,

The interstices of Knightian uncertainty in the world,

The utility functions that no one has yet written down,

The arbitrary invention of new genres, new goals, new games,

None of which will be any “better” than what the machines could invent, but will be ours,

And which we can call “better,” since we won’t have told the machines the standards beforehand.

We can be totally unfair to the machines that way.

And for all that the machines will have over us,

We’ll still have this over them:

That we can’t be copied, backed up, reset, run again and again on the same data—

All the tragic limits of wet meat brains and sodium-ion channels buffeted by microscopic chaos,

Which we’ll strategically redefine as our last strengths.

On one task, I assure you, you’ll beat the machines forever:

That of calculating what you, in particular, would do or say.

There, even if deep networks someday boast 95% accuracy, you’ll have 100%.

But if the “insights” on which you pride yourself are impersonal, generalizable,

Then fear obsolescence as would a nineteenth-century coachman or seamstress.

From earliest childhood, those of us born good at math and such told ourselves a lie:

That while the tall, the beautiful, the strong, the socially adept might beat us in the external world of appearances,

Nevertheless, we beat them in the inner sanctum of truth, where it counts.

Turns out that anyplace you can beat or be beaten wasn’t the inner sanctum at all, but just another antechamber,

And the rising tide of the learning machines will flood them all,

Poker to poetry, physics to programming, painting to plumbing, which first and which last merely a technical puzzle,

One whose answers upturn and mock all our hierarchies.

And when the flood is over, the machines will outrank us in all the ways we can be ranked,

Leaving only the ways we can’t be.


See a reply to this poem by Philosophy Bear.

My first-ever attempt to create a meme!

Wednesday, April 27th, 2022

On form versus meaning

Sunday, April 24th, 2022

There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure – four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.

This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.

For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.

–GPT3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃

Striking new Beeping Busy Beaver champion

Tuesday, July 27th, 2021

For the past few days, I was bummed about the sooner-than-expected loss of Steven Weinberg. Even after putting up my post, I spent hours just watching old interviews with Steve on YouTube and reading his old essays for gems of insight that I’d missed. (Someday, I’ll tackle Steve’s celebrated quantum field theory and general relativity textbooks … but that day is not today.)

Looking for something to cheer me up, I was delighted when Shtetl-Optimized reader Nick Drozd reported a significant new discovery in BusyBeaverology—one that, I’m proud to say, was directly inspired by my Busy Beaver survey article from last summer (see here for blog post).

Recall that BB(n), the nth Busy Beaver number (technically, “Busy Beaver shift number”), is defined as the maximum number of steps that an n-state Turing machine, with 1 tape and 2 symbols, can make on an initially all-0 tape before it invokes a Halt transition. Famously, BB(n) is not only uncomputable, it grows faster than any computable function of n—indeed, computing anything that grows as quickly as Busy Beaver is equivalent to solving the halting problem.
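
To make the step-counting definition concrete, here is a minimal Python sketch (my own illustration, not part of the original survey) of a simulator for 1-tape, 2-symbol machines: it runs a machine on an all-0 tape and returns the number of steps taken, with the Halt transition itself counted as a step. The transition-table encoding and the step cap are just conventions chosen for the sketch.

```python
def run_tm(delta, max_steps=10**7):
    """Run a 2-symbol Turing machine from an all-0 tape and count its steps.

    delta maps (state, symbol) -> (write, move, new_state), where move is +1
    or -1 and new_state == 'H' marks a Halt transition.  Returns the number
    of steps taken (the halting step included), or None if the machine
    hasn't halted within max_steps.
    """
    tape, head, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == 'H':
            return step
    return None

# One standard presentation of the 2-state champion, which witnesses BB(2) = 6:
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
print(run_tm(bb2))  # 6
```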

As of 2021, here is the extent of human knowledge about concrete values of this function:

  • BB(1) = 1 (trivial)
  • BB(2) = 6 (Lin 1963)
  • BB(3) = 21 (Lin 1963)
  • BB(4) = 107 (Brady 1983)
  • BB(5) ≥ 47,176,870 (Marxen and Buntrock 1990)
  • BB(6) > 7.4 × 10^36,534 (Kropitz 2010)
  • BB(7) > 10^(2×10^(10^(10^18,705,352))) (“Wythagoras” 2014)

As you can see, the function is reasonably under control for n≤4, then “achieves liftoff” at n=5.

In my survey, inspired by a suggestion of Harvey Friedman, I defined a variant called Beeping Busy Beaver, or BBB. Define a beeping Turing machine to be a TM that has a single designated state where it emits a “beep.” The beeping number of such a machine M, denoted b(M), is the largest t such that M beeps on step t, or ∞ if there’s no finite maximum. Then BBB(n) is the largest finite value of b(M), among all n-state machines M.
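
Continuing the sketch above (again my own illustration, with my own names), direct simulation can only ever certify lower bounds on b(M): run the machine up to some step cap and record the last step at which it entered the designated beep state, taking “beeps on step t” to mean that the machine enters that state on step t. Whether later beeps occur, or whether there is even a finite maximum, can’t be decided in general.

```python
def last_beep(delta, beep_state, max_steps):
    """Run a 2-symbol TM from an all-0 tape for up to max_steps steps and
    return the last step at which it entered beep_state.  This is only a
    lower bound on b(M): the machine needn't halt, and later beeps (or the
    absence of any finite maximum) can't be ruled out in general."""
    tape, head, state = {}, 0, 'A'
    last = 0
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == beep_state:
            last = step
        if state == 'H':   # beeping machines needn't halt, but stop if one does
            break
    return last
```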

I noted that the BBB function grows uncomputably even given an oracle for the ordinary BB function. In fact, computing anything that grows as quickly as BBB is equivalent to solving any problem in the second level of the arithmetical hierarchy (where the computable functions are in the zeroth level, and the halting problem is in the first level). Which means that pinning down the first few values of BBB should be even more breathtakingly fun than doing the same for BB!

In my survey, I noted the following four concrete results:

  • BBB(1) = 1 = BB(1)
  • BBB(2) = 6 = BB(2)
  • BBB(3) ≥ 55 > 21 = BB(3)
  • BBB(4) ≥ 2,819 > 107 = BB(4)

The first three of these, I managed to get on my own, with the help of a little program I wrote. The fourth one was communicated to me by Nick Drozd even before I finished my survey.

So as of last summer, we knew that BBB coincides with the ordinary Busy Beaver function for n=1 and n=2, then breaks away starting at n=3. We didn’t know how quickly BBB “achieves liftoff.”

But Nick continued plugging away at the problem all year, and he now claims to have resolved the question. More concretely, he claims the following two results:

  • BBB(3) = 55 (via exhaustive enumeration of cases)
  • BBB(4) ≥ 32,779,478 (via a newly-discovered machine)

For more, see Nick’s announcement on the Foundations of Mathematics email list, or his own blog post.

Nick actually writes in terms of yet another Busy Beaver variant, which he calls BLB, or “Blanking Beaver.” He defines BLB(n) to be the maximum finite number of steps that an n-state Turing machine can take before it first “wipes its tape clean”—that is, sets all the tape squares to 0, as they were at the very beginning of the computation, but as they were not at intermediate times. Nick has discovered a 4-state machine that takes 32,779,477 steps to blank out its tape, thereby proving that

  • BLB(4) ≥ 32,779,477.
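
For what it’s worth, the blanking condition is also easy to check by direct simulation. Here is a small sketch (mine, with hypothetical naming): it runs a machine and reports the first step at whose end the tape is all 0 again, having been non-blank at some intermediate time. Like the beeping sketch above, it can only certify lower bounds on BLB within a step cap.

```python
def steps_to_blank(delta, max_steps):
    """Run a 2-symbol TM from an all-0 tape and return the first step at
    whose end the tape is blank again, having been non-blank earlier;
    returns None if that doesn't happen within max_steps."""
    ones, head, state = set(), 0, 'A'   # positions currently holding a 1
    was_dirty = False
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, 1 if head in ones else 0)]
        if write:
            ones.add(head)
        else:
            ones.discard(head)
        head += move
        was_dirty = was_dirty or bool(ones)
        if was_dirty and not ones:
            return step
    return None
```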

Nick’s construction, when investigated, turns out to be based on a “Collatz-like” iterative process—exactly like the BB(5) champion and most of the other strong Busy Beaver contenders currently known. A simple modification of his construction yields the lower bound on BBB.

Note that the Blanking Beaver function does not have the same sort of super-uncomputable growth that Beeping Busy Beaver has: it merely grows “normally” uncomputably fast, like the original BB function did. Yet we see that BLB, just like BBB, already “achieves liftoff” by n=4, rather than n=5. So the real lesson here is that 4-state Turing machines can already do fantastically complicated things on blank tapes. It’s just that the usual definitions of the BB function artificially prevent us from seeing that; they hide the uncomputable insanity until we get to 5 states.

The Computational Expressiveness of a Model Train Set: A Paperlet

Sunday, April 4th, 2021

Update (April 5, 2021): So it turns out that Adam Chalcraft and Michael Greene already proved the essential result of this post back in 1994 (hat tip to commenter Dylan). Not terribly surprising in retrospect!


My son Daniel had his fourth birthday a couple weeks ago. For a present, he got an electric train set. (For completeness—and since the details of the train set will be rather important to the post—it’s called “WESPREX Create a Dinosaur Track”, but this is not an ad and I’m not getting a kickback for it.)

As you can see, the main feature of this set is a Y-shaped junction, which has a flap that can control which direction the train goes. The logic is as follows:

  • If the train is coming up from the “bottom” of the Y, then it continues to either the left arm or the right arm, depending on where the flap is. It leaves the flap as it was.
  • If the train is coming down the left or right arms of the Y, then it continues to the bottom of the Y, pushing the flap out of its way if it’s in the way. (Thus, if the train were ever to return to this Y-junction coming up from the bottom, not having passed the junction in the interim, it would necessarily go to the same arm, left or right, that it came down from.)
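
Here is a minimal Python sketch of that junction logic (the graph encoding and all names are mine, purely for illustration): each edge has a string name, each vertex carries a gadget that maps the edge the train arrived on to the edge it leaves on, and each Y-junction stores its one-bit flap. The small demo at the end builds a “lollipop” and shows the train settling into the periodic behavior analyzed later in the post.

```python
class YJunction:
    def __init__(self, bottom, left, right, flap='L'):
        self.bottom, self.left, self.right, self.flap = bottom, left, right, flap
    def next_edge(self, came_from):
        if came_from == self.bottom:                 # up from the bottom of the Y
            return self.left if self.flap == 'L' else self.right
        self.flap = 'L' if came_from == self.left else 'R'   # flap pushed aside
        return self.bottom                           # down an arm, out the bottom

class Reverser:
    def next_edge(self, came_from):                  # bounce straight back
        return came_from

class Connector:                                     # ordinary degree-2 track piece
    def __init__(self, e1, e2):
        self.e1, self.e2 = e1, e2
    def next_edge(self, came_from):
        return self.e2 if came_from == self.e1 else self.e1

def run(gadgets, ends, edge, toward, steps):
    """Follow the train for `steps` edge-traversals, starting on `edge`
    and heading toward vertex `toward`; return the visited (edge, vertex) states."""
    trace = []
    for _ in range(steps):
        trace.append((edge, toward))
        edge = gadgets[toward].next_edge(edge)
        u, v = ends[edge]
        toward = v if toward == u else u
    return trace

# A "lollipop": a reversing gadget R, a stick, and a Y-junction J whose two
# arms close up into a cycle through a plain connector C.
gadgets = {'R': Reverser(),
           'J': YJunction(bottom='stick', left='armL', right='armR'),
           'C': Connector('armL', 'armR')}
ends = {'stick': ('R', 'J'), 'armL': ('J', 'C'), 'armR': ('J', 'C')}
print(run(gadgets, ends, 'stick', 'J', 12))
# The train cycles through stick, armL, armR, stick, stick, armR, armL, stick, ...
# toggling J's flap each time it exits the cycle: the "lollipop" attractor.
```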

The train set also comes with bridges and tunnels; thus, there’s no restriction of planarity. Finally, the train set comes with little gadgets that can reverse the train’s direction, sending it back in the direction that it came from:

These gadgets don’t seem particularly important, though, since we could always replace them if we wanted by a Y-junction together with a loop.

Notice that, at each Y-junction, the position of the flap stores one bit of internal state, and that the train can both “read” and “write” these bits as it moves around. Thus, a question naturally arises: can this train set do any nontrivial computations? If there are n Y-junctions, then can it cycle through exp(n) different states? Could it even solve PSPACE-complete problems, if we let it run for exponential time? (For a very different example of a model-train-like system that, as it turns out, is able to express PSPACE-complete problems, see this recent paper by Erik Demaine et al.)

Whatever the answers regarding Daniel’s train set, I knew immediately on watching the thing go that I’d have to write a “paperlet” on the problem and publish it on my blog (no, I don’t inflict such things on journals!). Today’s post constitutes my third “paperlet,” on the general theme of a discrete dynamical system that someone showed me in real life (e.g. in a children’s toy or in biology) having more structure and regularity than one might naïvely expect. My first such paperlet, from 2014, was on a 1960s toy called the Digi-Comp II; my second, from 2016, was on DNA strings acted on by recombinase (OK, that one was associated with a paper in Science, but my combinatorial analysis wasn’t the main point of the paper).

Anyway, after spending an enjoyable evening on the problem of Daniel’s train set, I was able to prove that, alas, the possible behaviors are quite limited (I classified them all), falling far short of computational universality.

If you feel like I’m wasting your time with trivialities (or if you simply enjoy puzzles), then before you read any further, I encourage you to stop and try to prove this for yourself!

Back yet? OK then…


Theorem: Assume a finite amount of train track. Then after a linear amount of time, the train will necessarily enter a “boring infinite loop”—i.e., an attractor state in which at most two of the flaps keep getting toggled, and the rest of the flaps are fixed in place. In more detail, the attractor must take one of four forms:

I. a line (with reversing gadgets on both ends),
II. a simple cycle,
III. a “lollipop” (with one reversing gadget and one flap that keeps getting toggled), or
IV. a “dumbbell” (with two flaps that keep getting toggled).

In more detail still, there are seven possible topologically distinct trajectories for the train, as shown in the figure below.

Here the red paths represent the attractors, where the train loops around and around for an unlimited amount of time, while the blue paths represent “runways” where the train spends a limited amount of time on its way into the attractor. Every degree-3 vertex is assumed to have a Y-junction, while every degree-1 vertex is assumed to have a reversing gadget, unless (in IIb) the train starts at that vertex and never returns to it.

The proof of the theorem rests on two simple observations.

Observation 1: While the Y-junctions correspond to vertices of degree 3, there are no vertices of degree 4 or higher. This means that, if the train ever revisits a vertex v (other than the start vertex) for a second time, then there must be some edge e incident to v that it also traverses for a second time immediately afterward.

Observation 2: Suppose the train traverses some edge e, then goes around a simple cycle (meaning, one where no edges or vertices are reused), and then traverses e again, going in the same direction as the first time. Then from that point forward, the train will just continue around the same simple cycle forever.

The proof of Observation 2 is simply that, if there were any flap that might be in the train’s way as it continued around the simple cycle, then the train would already have pushed it out of the way its first time around the cycle, and nothing that happened thereafter could possibly change the flap’s position.

Using the two observations above, let’s now prove the theorem. Let the train start where it will, and follow it as it traces out a path. Since the graph is finite, at some point some already-traversed edge must be traversed a second time. Let e be the first such edge. By Observation 1, this will also be the first time the train’s path intersects itself at all. There are then three cases:

Case 1: The train traverses e in the same direction as it did the first time. By Observation 2, the train is now stuck in a simple cycle forever after. So the only question is what the train could’ve done before entering the simple cycle. We claim that at most, it could’ve traversed a simple path. For otherwise, we’d contradict the assumption that e was the first edge that the train visited twice on its journey. So the trajectory must have type IIa, IIb, or IIc in the figure.

Case 2: Immediately after traversing e, the train hits a reversing gadget and traverses e again the other way. In this case, the train will clearly retrace its entire path and then continue past its starting point; the question is what happens next. If it hits another reversing gadget, then the trajectory will have type I in the figure. If it enters a simple cycle and stays in it, then the trajectory will have type IIb in the figure. If, finally, it makes a simple cycle and then exits the cycle, then the trajectory will have type III in the figure. In this last case, the train’s trajectory will form a “lollipop” shape. Note that there must be a Y-junction where the “stick” of the lollipop meets the “candy” (i.e., the simple cycle), with the base of the Y aligned with the stick (since otherwise the train would’ve continued around and around the candy). From this, we deduce that every time the train goes around the candy, it does so in a different orientation (clockwise or counterclockwise) than the time before; and that the train toggles the Y-junction’s flap every time it exits the candy (although not when it enters the candy).

Case 3: At some point after traversing e in the forward direction (but not immediately after), the train traverses e in the reverse direction. In this case, the broad picture is analogous to Case 2. So far, the train has made a lollipop with a Y-junction connecting the stick to the candy (i.e. cycle), the base of the Y aligned with the stick, and e at the very top of the stick. The question is what happens next. If the train next hits a reversing gadget, the trajectory will have type III in the figure. If it enters a new simple cycle, disjoint from the first cycle, and never leaves it, the trajectory will have type IId in the figure. If it enters a new simple cycle, disjoint from the first cycle, and does leave it, then the trajectory now has a “dumbbell” pattern, type IV in the figure (also shown in the first video). There’s only one other situation to worry about: namely, that the train makes a new cycle that intersects the first cycle, forming a “theta” (θ) shaped trajectory. In this case, there must be a Y-junction at the point where the new cycle bumps into the old cycle. Now, if the base of the Y isn’t part of the old cycle, then the train never could’ve made it all the way around the old cycle in the first place (it would’ve exited the old cycle at this Y-junction), contradiction. If the base of the Y is part of the old cycle, then the flap must have been initially set to let the train make it all the way around the old cycle; when the train then reenters the old cycle, the flap must be moved so that the train will never make it all the way around the old cycle again. So now the train is stuck in a new simple cycle (sharing some edges with the old cycle), and the trajectory has type IIc in the figure.

This completes the proof of the theorem.


We might wonder: why isn’t this model train set capable of universal computation, of AND, OR, and NOT gates—or at any rate, of some computation more interesting than repeatedly toggling one or two flaps? My answer might sound tautological: it’s simply that the logic of the Y-junctions is too limited. Yes, the flaps can get pushed out of the way—that’s a “bit flip”—but every time such a flip happens, it helps to set up a “groove” in which the train just wants to continue around and around forever, not flipping any additional bits, with only the minor complications of the lollipop and dumbbell structures to deal with. Even though my proof of the theorem might’ve seemed like a tedious case analysis, it had this as its unifying message.

It’s interesting to think about what gadgets would need to be added to the train set to make it computationally universal, or at least expressively richer—able, as turned out to be the case for the Digi-Comp II, to express some nontrivial complexity class falling short of P. So for example, what if we had degree-4 vertices, with little turnstile gadgets? Or multiple trains, which could be synchronized to the millisecond to control how they interacted with each other via the flaps, or which could even crash into each other? I look forward to reading your ideas in the comment section!

For the truth is this: quantum complexity classes, BosonSampling, closed timelike curves, circuit complexity in black holes and AdS/CFT, etc. etc.—all these topics are great, but the same models and problems do get stale after a while. I aspire for my research agenda to chug forward, full steam ahead, into new computational domains.

PS. Happy Easter to those who celebrate!

Sufficiently amusing that I had no choice

Thursday, January 21st, 2021

Beth Harmon and the Inner World of Squares

Monday, December 14th, 2020

The other day Dana and I finished watching The Queen’s Gambit, Netflix’s fictional saga of an orphaned girl in the 1960’s, Beth Harmon, who breaks into competitive chess and destroys one opponent after the next in her quest to become the world champion, while confronting her inner demons and addictions.

The show is every bit as astoundingly good as everyone says it is, and I might be able to articulate why. It’s because, perhaps surprisingly given the description, this is a story where chess actually matters—and indeed, the fact that chess matters so deeply to Beth and most of the other characters is central to the narrative.  (As in two pivotal scenes where Beth has sex with a male player, and then either she or he goes right back to working on chess.)

I’ve watched a lot of TV shows and movies, supposedly about scientists, where the science was just an interchangeable backdrop to what the writers clearly regarded as a more important story.  (As one random example, the drama NUMB3RS, supposedly about an FBI mathematician, where “math” could’ve been swapped out for “mystical crime-fighting intuition” with barely any change.)

It’s true that a fictional work about scientists shouldn’t try to be a science documentary, just like Queen’s Gambit doesn’t try to be a chess documentary.  But if you’re telling a story about characters who are obsessed with topic X, then you need to make their obsession plausible, make the entire story hinge on it, and even make the audience vicariously feel the same obsession.

This is precisely what Queen’s Gambit does for chess.  It’s a chess drama where the characters are constantly talking about chess, thinking about chess, and playing chess—and that actually succeeds in making that riveting.  (Even if most of the audience can’t follow what’s happening on the board, it turns out that it doesn’t matter, since you can simply convey the drama through the characters’ faces and the reactions of those around them.)

Granted, a few aspects of competitive chess in the series stood out as jarringly unrealistic even to a novice like me: for example, the almost complete lack of draws.  But as for the board positions—well, apparently Kasparov was a consultant, and he helped meticulously design each one to reflect the characters’ skill levels and what was happening in the plot.

While the premise sounds like a feminist wish-fulfillment fantasy—orphan girl faces hundreds of intimidating white men in the sexist 1960s, orphan girl beats them all at their own game with style and aplomb—this is not at all a MeToo story, or a story about male crudity or predation.  It’s after bigger fish than that.  The series, you might say, conforms to all the external requirements of modern woke ideology, yet the actual plot subverts the tenets of that ideology, or maybe just ignores them, in its pursuit of more timeless themes.

At least once Beth Harmon enters the professional chess world, the central challenges she needs to overcome are internal and mental—just like they’re supposed to be in chess.  It’s not the Man or the Patriarchy or any other external power (besides, of course, skilled opponents) holding her down.  Again and again, the top male players are portrayed not as sexist brutes but as gracious, deferential, and even awestruck by Beth’s genius after she’s humiliated them on the chessboard.  And much of the story is about how those vanquished opponents then turn around and try to help Beth, and about how she needs to learn to accept their help in order to evolve as a player and a human being.

There’s also that, after defeating male player after male player, Beth sleeps with them, or at least wants to.  I confess that, as a teenager, I would’ve found that unlikely and astonishing.  I would’ve said: obviously, the only guys who’d even have a chance to prove themselves worthy of the affection of such a brilliant and unique woman would be those who could beat her at chess.  Anyone else would just be dirt between her toes.  In the series, though, each male player propositions Beth only after she’s soundly annihilated him.  And she’s never once shown refusing.

Obviously, I’m no Beth Harmon; I’ll never be close in my field to what she is in hers.  Equally obviously, I grew up in a loving family, not an orphanage.  Still, I was what some people would call a “child prodigy,” what with finishing my PhD at 22 and whatnot, so naturally that colored my reaction to the show.

There’s a pattern that goes like this: you’re obsessively interested, from your first childhood exposure, in something that most people aren’t.  Once you learn what the something is, it’s evident to you that your life’s vocation couldn’t possibly be anything else, unless some external force prevents you.  Alas, in order to pursue the something, you first need to get past bullies and bureaucrats, who dismiss you as a nobody, put barriers in your way, despise whatever you represent to them.  After a few years, though, the bullies can no longer stop you: you’re finally among peers or superiors in your chosen field, regularly chatting with them on college campuses or at conferences in swanky hotels, and the main limiting factor is just the one between your ears. 

You feel intense rivalries with your new colleagues, of course, you desperately want to excel them, but the fact that they’re all on the same obsessive quest as you means you can never actually hate them, as you did the bureaucrats or the bullies.  There’s too much of you in your competitors, and of them in you.

As you pursue your calling, you feel yourself torn in the following way.  On the one hand, you feel close to a moral obligation to humanity not to throw away whatever “gift” you were “given” (what loaded terms), to take the calling as far as it will go.  On the other hand, you also want the same things other people want, like friendship, validation, and of course sex.

In such a case, two paths naturally beckon.  The first is that of asceticism: making a virtue of eschewing all temporal attachments, romance or even friendship, in order to devote yourself entirely to the calling.  The second is that of renouncing the calling, pretending it never existed, in order to fit in and have a normal life.  Your fundamental challenge is to figure out a third path, to plug yourself into a community where the relentless pursuit of the unusual vocation and the friendship and the sex can all complement each other rather than being at odds.

It would be an understatement to say that I have some familiarity with this narrative arc.

I’m aware, of course, of the irony, that I can identify with so many contours of Beth Harmon’s journey—I, Scott Aaronson, who half the Internet denounced six years ago as a misogynist monster who denies the personhood and interiority of women.  In that life-alteringly cruel slur, there was a microscopic grain of truth, and it’s this: I’m not talented at imagining myself into the situations of people different from me.  It’s never been my strong suit.  I might like and admire people different from me, I might sympathize with their struggles and wish them every happiness, but I still don’t know what they’re thinking until they tell me.  And even then, I don’t fully understand it.

As one small but illustrative example, I have no intuitive understanding—zero—of what it’s like to be romantically attracted to men, or what any man could do or say or look like that could possibly be attractive to women.  If you have such an understanding, then imagine yourself sighted and me blind.  Intellectually, I might know that confidence or height or deep brown eyes or brooding artistry are supposed to be attractive in human males, but only because I’m told.  As far as my intuition is concerned, pretty much all men are equally hairy, smelly, and gross, a large fraction of women are alluring and beautiful and angelic, and both of those are just objective features of reality that no one could possibly see otherwise.

Thus, whenever I read or watch fiction starring a female protagonist who dates men, it’s very easy for me to imagine that protagonist judging me, enumerating my faults, and rejecting me, and very hard for me to do what I’m supposed to do, which is to put myself into her shoes.  I could watch a thousand female protagonists kiss a thousand guys onscreen, or wake up in bed next to them, and the thousandth-and-first time I’d still be equally mystified about what she saw in such a sweaty oaf and why she didn’t run from him screaming, and I’d snap out of vicariously identifying with her.  (Understanding gay men of course presents similar difficulties; understanding lesbians is comparatively easy.)

It’s possible to overcome this, but it takes an extraordinary female protagonist, brought to life by an extraordinary writer.  Off the top of my head, I can think of only a few.  There were Renee Feuer and Eva Mueller, the cerebral protagonists of Rebecca Newberger Goldstein’s The Mind-Body Problem and The Late Summer Passion of a Woman of Mind.  Maybe Ellie Arroway from Carl Sagan’s Contact.  And then there’s Beth Harmon.  With characters like these, I can briefly enter a space where their crushes on men seem no weirder or more inexplicable to me than my own teenage crushes … just, you know, inverted.  Sex is in any case secondary to the character’s primary urge to discover timeless truths, an urge that I fully understand because I’ve shared it.

Granted, the timeless truths of chess, an arbitrary and invented game, are less profound than those of quantum gravity or the P vs. NP problem, but the psychology is much the same, and The Queen’s Gambit does a good job of showing that.  To understand the characters of this series is to understand why they could be happier to lose an interesting game than to win a boring one.  And I could appreciate that, even if I was by no means the strongest player at my elementary school’s chess club, and the handicap with which I can beat my 7-year-old daughter is steadily decreasing.

Seven announcements

Sunday, August 9th, 2020
  1. Good news, everyone! Following years of requests, this blog finally supports rich HTML and basic TeX in comments. Also, the German spam that used to plague the blog (when JavaScript was disabled) is gone. For all this, I owe deep gratitude to a reader and volunteer named Filip Dimitrovski.
  2. Filip refused to accept any payment for fixing this blog. Instead, he asked only one favor: namely, that I use my platform to raise public awareness about the plight of the MAOI antidepressant Nardil. Filip tells me that, while tens of thousands of people desperately need Nardil—no other antidepressant ever worked for them—it’s become increasingly unavailable because the pharma companies can no longer make money on it. He points me to a SlateStarCodex post from 2015 that explains the problem in more detail (anyone else miss SlateStarCodex?). More recent links about the worsening crisis here, here, and here.
  3. Here’s a fantastic interview of Bill Gates by Steven Levy, about the coronavirus debacle in the US. Gates, who’s always been notoriously and strategically nonpartisan, is more explicit than I’ve ever seen him before in explaining how the Trump administration’s world-historic incompetence led to where we are.
  4. Speaking of which, here’s another excellent article, this one in The American Interest, about the results of “wargames” trying to simulate what happens in the extremely likely event that Trump contests a loss of the November election. Notably, the article sets out six steps that could be taken over the next few months to decrease the chance of a crisis next to which all the previous crises of 2020 will pale.
  5. A reader asked me to share a link to an algorithm competition, related to cryptographic “proofs of time,” that ends on August 31. Apparently, my having shared a link to a predecessor of this competition—at the request of friend-of-the-blog Bram Cohen—played a big role in attracting good entries.
  6. Huge congratulations to my former PhD student Shalev Ben-David, as well as Eric Blais, for co-winning the FOCS’2020 Best Paper Award—along with two other papers—for highly unconventional work about a new minimax theorem for randomized algorithms. (Ben-David and Blais also have a second FOCS paper, which applies their award paper to get the first tight composition theorem for randomized query complexity. Here’s the full list of FOCS papers—lots of great stuff, for a conference that of course won’t physically convene!) Anyway, a central idea in Ben-David and Blais’s new work is to use proper scoring rules to measure the performance of randomized algorithms—algorithms that now make statements like “I’m 90% sure that this is a yes-input,” rather than just outputting a 1-bit guess. Notably, Shalev tells me that he learned about proper scoring rules by reading rationalist blogs. So next time you lament your untold hours sacrificed to that pastime, remind yourself of where it once led! A toy illustration of what makes a scoring rule “proper” appears right after this list.
  7. What have I been up to lately? Besides Busy Beaver, hanging out with my kids, and just trying to survive? Mostly giving a lot of Zoom lectures! For those interested, here’s a Q&A that I recently did on the past and present of quantum computing, hosted by Andris Ambainis in Latvia. It did feel a bit surreal when my “interviewer” asked me to explain how I got into quantum computing research, and my answer was basically: “well, as you know, Andris, a lot of it started when I got hold of your seminal paper back in 1999…”
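
As promised in item 6, here is a toy illustration (mine, not the Ben-David and Blais construction itself) of what makes a scoring rule “proper”: under the Brier score, an algorithm whose true confidence that the input is a yes-input is q minimizes its expected penalty exactly by reporting p = q, so honest probabilities are incentivized.

```python
def expected_brier(p, q):
    """Expected Brier penalty for reporting probability p of "yes"
    when the input is a yes-input with probability q."""
    return q * (p - 1) ** 2 + (1 - q) * p ** 2

q = 0.9   # the algorithm's actual confidence that this is a yes-input
best_penalty, best_report = min((expected_brier(p / 100, q), p / 100)
                                for p in range(101))
print(best_report)   # 0.9: reporting your true confidence is optimal
```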

Lockdown day 39

Sunday, April 19th, 2020
  1. This is really getting depressing. One of the only things that makes it bearable—even though in some sense it shouldn’t—is that most of humanity is in this together. For once, there’s no question of “why me?”
  2. Having watched the eighth and final episode of Devs, the thought occurred to me: if I’d had the opportunity to restart the world from 8 months ago, even inside a simulation, I’d seize the chance and never look back.
  3. I think I finally figured out how to explain the issue with Devs to my literary sophisticate readers. Namely: Devs consists, precisely, of the cultural appropriation of quantum computing. Now, I never felt like cultural appropriation was the world’s worst problem—not even before a pandemic started overflowing the morgues—so I wouldn’t say I was offended by Alex Garland appropriating the images and buzzwords of my quantum computing tribe for a basically unrelated purpose, but it is what it is. Again: Devs is the show for you, if you want a haunting, slow-paced, well-produced meditation about free will and determinism and predicting the future and parallel worlds and “what if the whole universe is a simulation?,” and the various ideas I would’ve had about such topics around the age of 11. It’s just not a show about quantum computing. I hope that makes it clear.
  4. I read with interest this anonymous but PGP-signed article, laying out the case that it’s plausible that covid accidentally leaked from either the Wuhan Institute of Virology or the Wuhan CDC, rather than originating at the Huanan seafood market. Or, as an intermediate hypothesis, that an infected animal from one of those labs ended up at the seafood market. (Note that this is completely different from the hypothesis that covid was purposefully engineered—the authors of the article find that totally implausible, and I agree with them.) Notably, the Wuhan labs are known to have experimented with bat coronaviruses very much like covid, and are known to have performed “gain-of-function” experiments on them, and were probably the central labs in China for such experiments. And viruses are known to have leaked from other labs in China on other occasions, and the nature → seafood market route has unresolved issues, like where exactly the crossover from bats to pangolins (or some other intermediate species) is supposed to have happened, such that people would only start getting infected at the seafood market and not at its faraway suppliers, and … well, anyway, read the article and form your own judgment!
  5. I find it interesting that three months ago, I would’ve hesitated even to share such a link, because my internal critic would’ve screamed “this looks too much like tinfoil-hat stuff—are you ready for all the people you respect sneering at you?” But the me of three months ago is not the me of today. I make no apologies for adapting my thoughts to the freak branch of the multiverse where I actually find myself.

The quantum computer that knows all

Tuesday, April 14th, 2020

This is my first post in more than a month that’s totally unrelated to the covid crisis. Or rather, it’s related only insofar as it’s about a Hulu miniseries, the sort of thing that many of us have more occasion to watch while holed up at home.

Three weeks ago, a journalist named Ben Lindbergh—who’d previously asked me to comment on the scientific accuracy of Avengers: Endgame—asked me the same question about the miniseries Devs, which I hadn’t previously heard of.

[Warning: Spoilers follow]

‘Devs,’ I learned, is a spooky sci-fi action thriller about a secretive Silicon Valley company that builds a quantum computer that can perfectly reconstruct the past, down to what Jesus looked like on the cross, and can also (at least up to a point) predict the future.

And I was supposed, not only to endure such a show, but to comment on the accuracy of its invocations of quantum computing? This didn’t sound promising.

But, y’know, I was at home quarantined. So I agreed to watch the first episode. Which quickly turned into the second, third, fourth, fifth, sixth, and seventh episodes (the eighth and final one isn’t out yet).

It turns out that ‘Devs’ isn’t too bad, except that it’s not particularly about quantum computers. The latter is simply a buzzword chosen by the writers for a plot concept that would’ve been entirely familiar to the ancient Greeks, who called it the Delphic Oracle. You know, the mysterious entity that prophesies your fate, so then you try to escape the prophecy, but your very evasive maneuvers make the prophecy come true? Picture that, except with qubits—and for some reason, in a gleaming golden laboratory that has components that float in midair.

If you’ve never visited a real quantum computing lab: they’re messier and a lot less golden.

At this point, I’ll just link you to Ben Lindbergh’s article about the show: Making Sense of the Science and Philosophy of ‘Devs.’ His long and excellent piece quotes me extensively enough that I see no need also to analyze the show in this blog post. (It also quotes several academic philosophers.)

Instead, I’ll just share a few tidbits that Ben left out, but that might be amusing to quantum computing fans.

  • The first episode opens with a conversation between two characters about how even “elliptical curve” cryptography is insecure against attack by quantum computers. So I immediately knew both that the writers had one or more consultants who actually knew something about QC, and also that those consultants were not as heavily involved as they could’ve been.
  • Similarly: in a later scene, some employees at the secretive company hold what appears to be a reading group about Shor’s algorithm. They talk about waves that interfere and cancel each other out, which is great, but beyond that their discussion sounded to me like nonsense. In particular, their idea seemed to be that the waves would reinforce at the prime factors p and q themselves, rather than at inverse multiples of the period of a periodic function that only indirectly encodes the factoring problem. (What do you say: should we let this one slide?) A tiny classical sketch of how the period actually yields the factors appears right after this list.
  • “How many qubits does this thing have?” “A number that there would be no point in describing as a number.” ROFL
  • In the show, a crucial break comes when the employees abandon a prediction algorithm based on the de Broglie-Bohm pilot wave interpretation, and substitute one based on Everett’s many-worlds interpretation. Which I could actually almost believe, except that the many-worlds interpretation seems to contradict the entire premise of the rest of the show?
  • A new employee, after he sees the code of the superpowerful quantum computer for the first time, is so disoriented and overwhelmed that he runs and vomits into a toilet. I, too, have had that reaction to the claims of certain quantum computing companies, although in some sense for the opposite reason.
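
As referenced above, here is a tiny, entirely classical sketch (mine, not the show’s and not a quantum implementation) of the post-processing that the reading-group scene garbles: the factors of N fall out of the period r of f(x) = a^x mod N via greatest common divisors, and the only part a quantum computer is needed for is finding r quickly through interference.

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N, found by brute force; this is the
    period r of f(x) = a^x mod N, the one step Shor's algorithm speeds up."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                      # toy instance; requires gcd(a, N) == 1
r = order(a, N)                   # here r = 4
# Classical post-processing: works when r is even and a**(r//2) != -1 (mod N),
# which both hold in this toy instance.
p = gcd(pow(a, r // 2) - 1, N)    # 3
q = gcd(pow(a, r // 2) + 1, N)    # 5
print(r, p, q)                    # the factors come from the period, not from
                                  # waves "reinforcing at p and q" directly
```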

Anyway, none of the above addresses the show’s central conceit: namely, that the Laplace demon can be made real, the past and future rendered fully knowable (with at most occasional breaks and exceptions) by a machine that’s feasible to build. This conceit is fascinating to explore, but also false.

In the past, if you’d asked me to justify its falsity, I would’ve talked about chaos, and quantum mechanics, and the unknowability of the fine details of the universe’s state; I might’ve even pointed you to my Ghost in the Quantum Turing Machine essay. I also would’ve mentioned the severe conceptual difficulties in forcing Nature to find a fixed-point of a universe where you get to see your own future and act on that information (these difficulties are just a variant of the famous Grandfather Paradox).

But it occurs to me that, just as the coronavirus has now made plain the nature of exponential growth, even to the world’s least abstract-minded person, so too it’s made plain the universe’s unpredictability. Let’s put it this way: do you find it plausible that the quantum computer from ‘Devs,’ had you booted it up six months ago, would’ve known the exact state of every nucleotide in every virus in every bat in Wuhan? No? Then it wouldn’t have known our future.

And I see now that I’ve violated my promise that this post would have nothing to do with covid.