Archive for March, 2014

This review of Max Tegmark’s book also occurs infinitely often in the decimal expansion of π

Saturday, March 22nd, 2014

Two months ago, commenter rrtucci asked me what I thought about Max Tegmark and his “Mathematical Universe Hypothesis”: the idea, which Tegmark defends in his recent book Our Mathematical Universe, that physical and mathematical existence are the same thing, and that what we call “the physical world” is simply one more mathematical structure, alongside the dodecahedron and so forth.  I replied as follows:

…I find Max a fascinating person, a wonderful conference organizer, someone who’s always been extremely nice to me personally, and an absolute master at finding common ground with his intellectual opponents—I’m trying to learn from him, and hope someday to become 10^(-122) as good.  I can also say that, like various other commentators (e.g., Peter Woit), I personally find the “Mathematical Universe Hypothesis” to be devoid of content.

After Peter Woit found that comment and highlighted it on his own blog, my comments section was graced by none other than Tegmark himself, who wrote:

Thanks Scott for your all to [sic] kind words!  I very much look forward to hearing what you think about what I actually say in the book once you’ve had a chance to read it!  I’m happy to give you a hardcopy (which can double as door-stop) – just let me know.

With this reply, Max illustrated perfectly why I’ve been trying to learn from him, and how far I fall short.  Where I would’ve said “yo dumbass, why don’t you read my book before spouting off?,” Tegmark gracefully, diplomatically shamed me into reading his book.

So, now that I’ve done so, what do I think?  Briefly, I think it’s a superb piece of popular science writing—stuffed to the gills with thought-provoking arguments, entertaining anecdotes, and fascinating facts.  I think everyone interested in math, science, or philosophy should buy the book and read it.  And I still think the MUH is basically devoid of content, as it stands.

Let me start with what makes the book so good.  First and foremost, the personal touch.  Tegmark deftly conveys the excitement of being involved in the analysis of the cosmic microwave background fluctuations—of actually getting detailed numerical data about the origin of the universe.  (The book came out just a few months before last week’s bombshell announcement of B-modes in the CMB data; presumably the next edition will have an update about that.)  And Tegmark doesn’t just give you arguments for the Many-Worlds Interpretation of quantum mechanics; he tells you how he came to believe it.  He writes of being a beginning PhD student at Berkeley, living at International House (and dating an Australian exchange student who he met his first day at IHouse), who became obsessed with solving the quantum measurement problem, and who therefore headed to the physics library, where he was awestruck by reading the original Many-Worlds articles of Hugh Everett and Bryce deWitt.  As it happens, every single part of the last sentence also describes me (!!!)—except that the Australian exchange student who I met my first day at IHouse lost interest in me when she decided that I was too nerdy.  And also, I eventually decided that the MWI left me pretty much as confused about the measurement problem as before, whereas Tegmark remains a wholehearted Many-Worlder.

The other thing I loved about Tegmark’s book was its almost comical concreteness.  He doesn’t just metaphorically write about “knobs” for adjusting the constants of physics: he shows you a picture of a box with the knobs on it.  He also shows a “letter” that lists not only his street address, zip code, town, state, and country, but also his planet, Hubble volume, post-inflationary bubble, quantum branch, and mathematical structure.  Probably my favorite figure was the one labeled “What Dark Matter Looks Like / What Dark Energy Looks Like,” which showed two blank boxes.

Sometimes Tegmark seems to subtly subvert the conventions of popular-science writing.  For example, in the first chapter, he includes a table that categorizes each of the book’s remaining chapters as “Mainstream,” “Controversial,” or “Extremely Controversial.”  And whenever you’re reading the text and cringing at a crucial factual point that was left out, chances are good you’ll find a footnote at the bottom of the page explaining that point.  I hope both of these conventions become de rigueur for all future pop-science books, but I’m not counting on it.

The book has what Tegmark himself describes as a “Dr. Jekyll / Mr. Hyde” structure, with the first (“Dr. Jekyll”) half of the book relaying more-or-less accepted discoveries in physics and cosmology, and the second (“Mr. Hyde”) half focusing on Tegmark’s own Mathematical Universe Hypothesis (MUH).  Let’s accept that both halves are enjoyable reads, and that the first half contains lots of wonderful science.  Is there anything worth saying about the truth or falsehood of the MUH?

In my view, the MUH gestures toward two points that are both correct and important—neither of them new, but both well worth repeating in a pop-science book.  The first is that the laws of physics aren’t “suggestions,” which the particles can obey when they feel like it but ignore when Uri Geller picks up a spoon.  In that respect, they’re completely unlike human laws, and the fact that we use the same word for both is unfortunate.  Nor are the laws merely observed correlations, as in “scientists find link between yogurt and weight loss.”  The links of fundamental physics are ironclad: the world “obeys” them in much the same sense that a computer obeys its code, or the positive integers obey the rules of arithmetic.  Of course we don’t yet know the complete program describing the state evolution of the universe, but everything learned since Galileo leads one to expect that such a program exists.  (According to quantum mechanics, the program describing our observed reality is a probabilistic one, but for me, that fact by itself does nothing to change its lawlike character.  After all, if you know the initial state, Hamiltonian, and measurement basis, then quantum mechanics gives you a perfect algorithm to calculate the probabilities.)
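
To make that last parenthetical concrete, here’s a minimal numerical sketch (Python with NumPy/SciPy; the single-qubit Hamiltonian, evolution time, and measurement basis are arbitrary toy choices of mine, not anything from Tegmark’s book) of the “perfect algorithm” in question: once the initial state, the Hamiltonian, and the measurement basis are fixed, the Born rule determines the outcome probabilities completely.

```python
import numpy as np
from scipy.linalg import expm

# Toy setup: one qubit evolving under a fixed Hamiltonian, then measured
# in the computational basis.  All numerical choices are arbitrary.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # Hamiltonian (Pauli-X, with hbar = 1)
psi0 = np.array([1.0, 0.0], dtype=complex)    # initial state |0>
t = 0.7                                       # evolution time

U = expm(-1j * H * t)                         # unitary evolution e^{-iHt}
psi_t = U @ psi0                              # state at time t

basis = np.eye(2, dtype=complex)              # measurement basis: columns |0>, |1>
probs = np.abs(basis.conj().T @ psi_t) ** 2   # Born rule: |<b|psi(t)>|^2

print(probs)                                  # [cos(t)^2, sin(t)^2] for this Hamiltonian
assert np.isclose(probs.sum(), 1.0)
```

The probabilities themselves are as lawlike as anything in classical physics; only the individual outcomes are random.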

The second true and important nugget in the MUH is that the laws are “mathematical.”  By itself, I’d say that’s a vacuous statement, since anything that can be described at all can be described mathematically.  (As a degenerate case, a “mathematical description of reality” could simply be a gargantuan string of bits, listing everything that will ever happen at every point in spacetime.)  The nontrivial part is that, at least if we ignore boundary conditions and the details of our local environment (which maybe we shouldn’t!), the laws of nature are expressible as simple, elegant math—and moreover, the same structures (complex numbers, group representations, Riemannian manifolds…) that mathematicians find important for internal reasons, again and again turn out to play a crucial role in physics.  It didn’t have to be that way, but it is.

Putting the two points together, it seems fair to say that the physical world is “isomorphic to” a mathematical structure—and moreover, a structure whose time evolution obeys simple, elegant laws.   All of this I find unobjectionable: if you believe it, it doesn’t make you a Tegmarkian; it makes you ready for freshman science class.

But Tegmark goes further.  He doesn’t say that the universe is “isomorphic” to a mathematical structure; he says that it is that structure, that its physical and mathematical existence are the same thing.  Furthermore, he says that every mathematical structure “exists” in the same sense that “ours” does; we simply find ourselves in one of the structures capable of intelligent life (which shouldn’t surprise us).  Thus, for Tegmark, the answer to Stephen Hawking’s famous question—“What is it that breathes fire into the equations and gives them a universe to describe?”—is that every consistent set of equations has fire breathed into it.  Or rather, every mathematical structure of at most countable cardinality whose relations are definable by some computer program.  (Tegmark allows that structures that aren’t computably definable, like the set of real numbers, might not have fire breathed into them.)

Anyway, the ensemble of all (computable?) mathematical structures, constituting the totality of existence, is what Tegmark calls the “Level IV multiverse.”  In his nomenclature, our universe consists of anything from which we can receive signals; anything that exists but that we can’t receive signals from is part of a “multiverse” rather than our universe.  The “Level I multiverse” is just the entirety of our spacetime, including faraway regions from which we can never receive a signal due to the dark energy.  The Level II multiverse consists of the infinitely many other “bubbles” (i.e., “local Big Bangs”), with different values of the constants of physics, that would, in eternal inflation cosmologies, have generically formed out of the same inflating substance that gave rise to our Big Bang.  The Level III multiverse is Everett’s many worlds.  Thus, for Tegmark, the Level IV multiverse is a sort of natural culmination of earlier multiverse theorizing.  (Some people might call it a reductio ad absurdum, but Tegmark is nothing if not a bullet-swallower.)

Now, why should you believe in any of these multiverses?  Or better: what does it buy you to believe in them?

As Tegmark correctly points out, none of the multiverses are “theories,” but they might be implications of theories that we have other good reasons to accept.  In particular, it seems crazy to believe that the Big Bang created space only up to the furthest point from which light can reach the earth, and no further.  So, do you believe that space extends further than our cosmological horizon?  Then boom! you believe in the Level I multiverse, according to Tegmark’s definition of it.

Likewise, do you believe there was a period of inflation in the first ~10^(-32) seconds after the Big Bang?  Inflation has made several confirmed predictions (e.g., about the “fractal” nature of the CMB perturbations), and if last week’s announcement of B-modes in the CMB is independently verified, that will pretty much clinch the case for inflation.  But Alan Guth, Andrei Linde, and others have argued that, if you accept inflation, then it seems hard to prevent patches of the inflating substance from continuing to inflate forever, and thereby giving rise to infinitely many “other” Big Bangs.  Furthermore, if you accept string theory, then the six extra dimensions should generically curl up differently in each of those Big Bangs, giving rise to different apparent values of the constants of physics.  So then boom! with those assumptions, you’re sold on the Level II multiverse as well.  Finally, of course, there are people (like David Deutsch, Eliezer Yudkowsky, and Tegmark himself) who think that quantum mechanics forces you to accept the Level III multiverse of Everett.  Better yet, Tegmark claims that these multiverses are “falsifiable.”  For example, if inflation turns out to be wrong, then the Level II multiverse is dead, while if quantum mechanics is wrong, then the Level III one is dead.

Admittedly, the Level IV multiverse is a tougher sell, even by the standards of the last two paragraphs.  If you believe physical existence to be the same thing as mathematical existence, what puzzles does that help to explain?  What novel predictions does it make?  Forging fearlessly ahead, Tegmark argues that the MUH helps to “explain” why our universe has so many mathematical regularities in the first place.  And it “predicts” that more mathematical regularities will be discovered, and that everything discovered by science will be mathematically describable.  But what about the existence of other mathematical universes?  If, Tegmark says (on page 354), our qualitative laws of physics turn out to allow a narrow range of numerical constants that permit life, whereas other possible qualitative laws have no range of numerical constants that permit life, then that would be evidence for the existence of a mathematical multiverse.  For if our qualitative laws were the only ones into which fire had been breathed, then why would they just so happen to have a narrow but nonempty range of life-permitting constants?

I suppose I’m not alone in finding this totally unpersuasive.  When most scientists say they want “predictions,” they have in mind something meatier than “predict the universe will continue to be describable by mathematics.”  (How would we know if we found something that wasn’t mathematically describable?  Could we even describe such a thing with English words, in order to write papers about it?)  They also have in mind something meatier than “predict that the laws of physics will be compatible with the existence of intelligent observers, but if you changed them a little, then they’d stop being compatible.”  (The first part of that prediction is solid enough, but the second part might depend entirely on what we mean by a “little change” or even an “intelligent observer.”)

What’s worse is that Tegmark’s rules appear to let him have it both ways.  To whatever extent the laws of physics turn out to be “as simple and elegant as anyone could hope for,” Tegmark can say: “you see?  that’s evidence for the mathematical character of our universe, and hence for the MUH!”  But to whatever extent the laws turn out not to be so elegant, to be weird or arbitrary, he can say: “see?  that’s evidence that our laws were selected more-or-less randomly among all possible laws compatible with the existence of intelligent life—just as the MUH predicted!”

Still, maybe the MUH could be sharpened to the point where it did make definite predictions?  As Tegmark acknowledges, the central difficulty with doing so is that no one has any idea what measure to use over the space of mathematical objects (or even computably-describable objects).  This becomes clear if we ask a simple question like: what fraction of the mathematical multiverse consists of worlds that contain nothing but a single three-dimensional cube?

We could try to answer such a question using the universal prior: that is, we could make a list of all self-delimiting computer programs, then count the total weight of programs that generate a single cube and then halt, where each n-bit program gets assigned 1/2^n weight.  Sure, the resulting fraction would be uncomputable, but at least we’d have defined it.  Except wait … which programming language should we use?  (The constant factors could actually matter here!)  Worse yet, what exactly counts as a “cube”?  Does it have to have faces, or are vertices and edges enough?  How should we interpret the string of 1’s and 0’s output by the program, in order to know whether it describes a cube or not?  (Also, how do we decide whether two programs describe the “same” cube?  And if they do, does that mean they’re describing the same universe, or two different universes that happen to be identical?)
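
To see how many arbitrary choices are lurking in even that “definition,” here’s a toy sketch of the universal-prior counting scheme (Python; the prefix-free “programming language” and the cube predicate below are placeholders I made up purely for illustration, which is exactly the problem):

```python
from itertools import product

def run_program(bits):
    """Stand-in for a universal prefix-free language: here a 'valid program'
    is k ones followed by a terminating zero, and its 'output' is k ones.
    A real universal prior needs a genuine universal language -- and which
    one you pick changes the answer."""
    if bits and bits[-1] == 0 and all(b == 1 for b in bits[:-1]):
        return '1' * (len(bits) - 1)
    return None  # everything else: treated as invalid / non-halting

def describes_a_cube(output):
    """Arbitrary stand-in for 'this output encodes a universe containing
    nothing but a single three-dimensional cube.'"""
    return output == '111'

def cube_fraction(max_len=16):
    total = cube_weight = 0.0
    for n in range(1, max_len + 1):
        for bits in product([0, 1], repeat=n):
            out = run_program(list(bits))
            if out is None:
                continue                 # non-halting programs get no weight
            w = 2.0 ** (-n)              # each halting n-bit program weighs 2^-n
            total += w
            if describes_a_cube(out):
                cube_weight += w
    return cube_weight / total           # truncated estimate of the fraction

print(cube_fraction())   # ~1/16 for this toy language -- and meaningless
```

Even in this toy version, the answer is hostage to the choice of language and encoding; change either, and the “fraction of the multiverse that’s a lone cube” changes with it.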

These problems are simply more-dramatic versions of the “standard” measure problem in inflationary cosmology, which asks how to make statistical predictions in a multiverse where everything that can happen will happen, and will happen an infinite number of times.  The measure problem is sometimes discussed as if it were a technical issue: something to acknowledge but then set to the side, in the hope that someone will eventually come along with some clever counting rule that solves it.  To my mind, however, the problem goes deeper: it’s a sign that, although we might have started out in physics, we’ve now stumbled into metaphysics.

Some cosmologists would strongly protest that view.  Most of them would agree with me that Tegmark’s Level IV multiverse is metaphysics, but they’d insist that the Level I, Level II, and perhaps Level III multiverses were perfectly within the scope of scientific inquiry: they either exist or don’t exist, and the fact that we get confused about the measure problem is our issue, not nature’s.

My response can be summed up in a question: why not ride this slippery slope all the way to the bottom?  Thinkers like Nick Bostrom and Robin Hanson have pointed out that, in the far future, we might expect that computer-simulated worlds (as in The Matrix) will vastly outnumber the “real” world.  So then, why shouldn’t we predict that we’re much more likely to live in a computer simulation than we are in one of the “original” worlds doing the simulating?  And as a logical next step, why shouldn’t we do physics by trying to calculate a probability measure over different kinds of simulated worlds: for example, those run by benevolent simulators versus evil ones?  (For our world, my own money’s on “evil.”)

But why stop there?  As Tegmark points out, what does it matter if a computer simulation is actually run or not?  Indeed, why shouldn’t you say something like the following: assuming that π is a normal number, your entire life history must be encoded infinitely many times in π’s decimal expansion.  Therefore, you’re infinitely more likely to be one of your infinitely many doppelgängers “living in the digits of π” than you are to be the “real” you, of whom there’s only one!  (Of course, you might also be living in the digits of e or √2, possibilities that also merit reflection.)
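
At laptop scale, “living in the digits of π” looks something like the following sketch (Python with mpmath; the digit budget and the strings searched for are arbitrary).  Short strings turn up almost immediately, but each extra digit you demand multiplies the expected search depth by ten—a small foretaste of the measure troubles below.

```python
from mpmath import mp

def find_in_pi(pattern, num_digits=100_000):
    """Position (0-indexed, after the decimal point) of the first occurrence
    of `pattern` among roughly the first `num_digits` decimal digits of pi,
    or -1 if it doesn't appear that early."""
    mp.dps = num_digits + 10                    # working precision, with slack
    digits = mp.nstr(mp.pi, num_digits)[2:]     # "3.1415..." -> "1415..."
    return digits.find(pattern)

print(find_in_pi("1999"))     # a 4-digit string: essentially always found early
print(find_in_pi("271828"))   # 6 digits: may already need millions of digits
```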

At this point, of course, you’re all the way at the bottom of the slope, in Mathematical Universe Land, where Tegmark is eagerly waiting for you.  But you still have no idea how to calculate a measure over mathematical objects: for example, how to say whether you’re more likely to be living in the first 10^(10^120) digits of π, or the first 10^(10^120) digits of e.  And as a consequence, you still don’t know how to use the MUH to constrain your expectations for what you’re going to see next.

Now, notice that these different ways down the slippery slope all have a common structure:

  1. We borrow an idea from science that’s real and important and profound: for example, the possible infinite size and duration of our universe, or inflationary cosmology, or the linearity of quantum mechanics, or the likelihood of π being a normal number, or the possibility of computer-simulated universes.
  2. We then run with that idea until we smack right into a measure problem, and lose the ability to make useful predictions.

Many people want to frame the multiverse debates as “science versus pseudoscience,” or “science versus science fiction,” or (as I did before) “physics versus metaphysics.”  But actually, I don’t think any of those dichotomies get to the nub of the matter.  All of the multiverses I’ve mentioned—certainly the inflationary and Everett multiverses, but even the computer-simuverse and the π-verse—have their origins in legitimate scientific questions and in genuinely-great achievements of science.  However, they then extrapolate those achievements in a direction that hasn’t yet led to anything impressive.  Or at least, not to anything that we couldn’t have gotten without the ontological commitments that led to the multiverse and its measure problem.

What is it, in general, that makes a scientific theory impressive?  I’d say that the answer is simple: connecting elegant math to actual facts of experience.

When Einstein said, the perihelion of Mercury precesses at 43 seconds of arc per century because gravity is the curvature of spacetime—that was impressive.

When Dirac said, you should see a positron because this equation in quantum field theory is a quadratic with both positive and negative solutions (and then the positron was found)—that was impressive.

When Darwin said, there must be equal numbers of males and females in all these different animal species because any other ratio would fail to be an equilibrium—that was impressive.

When people say that multiverse theorizing “isn’t science,” I think what they mean is that it’s failed, so far, to be impressive science in the above sense.  It hasn’t yet produced any satisfying clicks of understanding, much less dramatically-confirmed predictions.  Yes, Steven Weinberg kind-of, sort-of used “multiverse” reasoning to predict—correctly—that the cosmological constant should be nonzero.  But as far as I can tell, he could just as well have dispensed with the “multiverse” part, and said: “I see no physical reason why the cosmological constant should be zero, rather than having some small nonzero value still consistent with the formation of stars and galaxies.”

At this, many multiverse proponents would protest: “look, Einstein, Dirac, and Darwin are setting a pretty high bar!  Those guys were smart but also lucky, and it’s unrealistic to expect that scientists will always be so lucky.  For many aspects of the world, there might not be an elegant theoretical explanation—or any explanation at all better than, ‘well, if it were much different, then we probably wouldn’t be here talking about it.’  So, are you saying we should ignore where the evidence leads us, just because of some a-priori prejudice in favor of mathematical elegance?”

In a sense, yes, I am saying that.  Here’s an analogy: suppose an aspiring filmmaker said, “I want my films to capture the reality of human experience, not some Hollywood myth.  So, in most of my movies nothing much will happen at all.  If something does happen—say, a major character dies—it won’t be after some interesting, character-forming struggle, but meaninglessly, in a way totally unrelated to the rest of the film.  Like maybe they get hit by a bus.  Then some other random stuff will happen, and then the movie will end.”

Such a filmmaker, I’d say, would have a perfect plan for creating boring, arthouse movies that nobody wants to watch.  Dramatic, character-forming struggles against the odds might not be the norm of human experience, but they are the central ingredient of entertaining cinema—so if you want to create an entertaining movie, then you have to postselect on those parts of human experience that do involve dramatic struggles.  In the same way, I claim that elegant mathematical explanations for observed facts are the central ingredient of great science.  Not everything in the universe might have such an explanation, but if one wants to create great science, one has to postselect on the things that do.

(Note that there’s an irony here: the same unsatisfyingness, the same lack of explanatory oomph, that make something a “lousy movie” to those with a scientific mindset, can easily make it a great movie to those without such a mindset.  The hunger for nontrivial mathematical explanations is a hunger one has to acquire!)

Some readers might argue: “but weren’t quantum mechanics, chaos theory, and Gödel’s theorem scientifically important precisely because they said that certain phenomena—the exact timing of a radioactive decay, next month’s weather, the bits of Chaitin’s Ω—were unpredictable and unexplainable in fundamental ways?”  To me, these are the exceptions that prove the rule.  Quantum mechanics, chaos, and Gödel’s theorem were great science not because they declared certain facts unexplainable, but because they explained why those facts (and not other facts) had no explanations of certain kinds.  Even more to the point, they gave definite rules to help figure out what would and wouldn’t be explainable in their respective domains: is this state an eigenstate of the operator you’re measuring?  is the Lyapunov exponent positive?  is there a proof of independence from PA or ZFC?

So, what would be the analogue of the above for the multiverse?  Is there any Level II or IV multiverse hypothesis that says: sure, the mass of the electron might be a cosmic accident, with at best an anthropic explanation, but the mass of the Higgs boson is almost certainly not such an accident?  Or that the sum or difference of the two masses is not an accident?  (And no, it doesn’t count to affirm as “non-accidental” things that we already have non-anthropic explanations for.)  If such a hypothesis exists, tell me in the comments!  As far as I know, all Level II and IV multiverse hypotheses are still at the stage where basically anything that isn’t already explained might vary across universes and be anthropically selected.  And that, to my mind, makes them very different in character from quantum mechanics, chaos, or Gödel’s theorem.

In summary, here’s what I feel is a reasonable position to take right now, regarding all four of Tegmark’s multiverse levels (not to mention the computer-simuverse, which I humbly propose as Level 3.5):

Yes, these multiverses are a perfectly fine thing to speculate about: sure they’re unobservable, but so are plenty of other entities that science has forced us to accept.  There are even natural reasons, within physics and cosmology, that could lead a person to speculate about each of these multiverse levels.  So if you want to speculate, knock yourself out!  If, however, you want me to accept the results as more than speculation—if you want me to put them on the bookshelf next to Darwin and Einstein—then you’ll need to do more than argue that other stuff I already believe logically entails a multiverse (which I’ve never been sure about), or point to facts that are currently unexplained as evidence that we need a multiverse to explain their unexplainability, or claim as triumphs for your hypothesis things that don’t really need the hypothesis at all, or describe implausible hypothetical scenarios that could confirm or falsify the hypothesis.  Rather, you’ll need to use your multiverse hypothesis—and your proposed solution to the resulting measure problem—to do something new that impresses me.

The Scientific Case for P≠NP

Friday, March 7th, 2014

Out there in the wider world—OK, OK, among Luboš Motl, and a few others who comment on this blog—there appears to be a widespread opinion that P≠NP is just “a fashionable dogma of the so-called experts,” something that’s no more likely to be true than false.  The doubters can even point to at least one accomplished complexity theorist, Dick Lipton, who publicly advocates agnosticism about whether P=NP.

Of course, not all the doubters reach their doubts the same way.  For Lipton, the thinking is probably something like: as scientists, we should be rigorously open-minded, and constantly question even the most fundamental hypotheses of our field.  For the outsiders, the thinking is more like: computer scientists are just not very smart—certainly not as smart as real scientists—so the fact that they consider something a “fundamental hypothesis” provides no information of value.

Consider, for example, this comment of Ignacio Mosqueira:

If there is no proof that means that there is no reason a-priori to prefer your arguments over those [of] Lubos. Expertise is not enough.  And the fact that Lubos is difficult to deal with doesn’t change that.

In my response, I wondered how broadly Ignacio would apply the principle “if there’s no proof, then there’s no reason to prefer any argument over any other one.”  For example, would he agree with the guy interviewed on Jon Stewart who earnestly explained that, since there’s no proof that turning on the LHC will destroy the world, but also no proof that it won’t destroy the world, the only rational inference is that there’s a 50% chance it will destroy the world?  (John Oliver’s deadpan response was classic: “I’m … not sure that’s how probability works…”)

In a lengthy reply, Luboš bites this bullet with relish and mustard.  In physics, he agrees, or even in “continuous mathematics that is more physics-wise,” it’s possible to have justified beliefs even without proof.  For example, he admits to a 99.9% probability that the Riemann hypothesis is true.  But, he goes on, “partial evidence in discrete mathematics just cannot exist.”  Discrete math and computer science, you see, are so arbitrary, manmade, and haphazard that every question is independent of every other; no amount of experience can give anyone any idea which way the next question will go.

No, I’m not kidding.  That’s his argument.

I couldn’t help wondering: what about number theory?  Aren’t the positive integers a “discrete” structure?  And isn’t the Riemann Hypothesis fundamentally about the distribution of primes?  Or does the Riemann Hypothesis get counted as an “honorary physics-wise continuous problem” because it can also be stated analytically?  But then what about Goldbach’s Conjecture?  Is Luboš 50/50 on that one too?  Better yet, what about continuous, analytic problems that are closely related to P vs. NP?  For example, Valiant’s Conjecture says you can’t linearly embed the permanent of an n×n matrix as the determinant of an m×m matrix, unless m≥exp(n).  Mulmuley and others have connected this “continuous cousin” of P≠NP to issues in algebraic geometry, representation theory, and even quantum groups and Langlands duality.  So, does that make it kosher?  The more I thought about the proposed distinction, the less sense it made to me.
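
For readers who haven’t met the permanent before, here’s a small sketch of the contrast that Valiant’s Conjecture is about (Python/NumPy; the matrix size is an arbitrary toy choice): the determinant of an n×n matrix falls out of Gaussian elimination in polynomial time, while the best general-purpose algorithms known for the permanent—such as Ryser’s formula below, a standard method not mentioned in the post—take time exponential in n.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Permanent via Ryser's formula:
    perm(A) = (-1)^n * sum_{S nonempty} (-1)^|S| * prod_i sum_{j in S} A[i, j].
    Runs in time exponential in n (roughly 2^n subsets of the columns)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            row_sums = A[:, list(S)].sum(axis=1)   # per-row sum over chosen columns
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.random.rand(8, 8)
print(np.linalg.det(A))       # polynomial time, via LU decomposition
print(permanent_ryser(A))     # exponential time, even at this toy size
```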

But enough of this.  In the rest of this post, I want to explain why the odds that you should assign to P≠NP are more like 99% than they are like 50%.  This post supersedes my 2006 post on the same topic, which I hereby retire.  While that post was mostly OK as far as it went, I now feel like I can do a much better job articulating the central point.  (And also, I made the serious mistake in 2006 of striving for literary eloquence and tongue-in-cheek humor.  That works great for readers who already know the issues inside-and-out, and just want to be amused.  Alas, it doesn’t work so well for readers who don’t know the issues, are extremely literal-minded, and just want ammunition to prove their starting assumption that I’m a doofus who doesn’t understand the basics of his own field.)

So, OK, why should you believe P≠NP?  Here’s why:

Because, like any other successful scientific hypothesis, the P≠NP hypothesis has passed severe tests that it had no good reason to pass were it false.

What kind of tests am I talking about?

By now, tens of thousands of problems have been proved to be NP-complete.  They range in character from theorem proving to graph coloring to airline scheduling to bin packing to protein folding to auction pricing to VLSI design to minimizing soap films to winning at Super Mario Bros.  Meanwhile, another cluster of tens of thousands of problems has been proved to lie in P (or BPP).  Those range from primality to matching to linear and semidefinite programming to edit distance to polynomial factoring to hundreds of approximation tasks.  Like the NP-complete problems, many of the P and BPP problems are also related to each other by a rich network of reductions.  (For example, countless other problems are in P “because” linear and semidefinite programming are.)
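
To give a flavor of what those reductions look like, here’s a minimal sketch of one of the most textbook examples (Python; the graphs below are toy instances of my choosing): encoding graph 3-coloring as Boolean satisfiability, so that an efficient SAT algorithm would immediately yield an efficient 3-coloring algorithm.

```python
from itertools import product

def three_coloring_to_cnf(n_vertices, edges):
    """Textbook reduction from graph 3-coloring to CNF satisfiability.
    Variable (v, c) is true iff vertex v gets color c; returns a list of
    clauses, each a list of (variable, polarity) literals."""
    var = {(v, c): 3 * v + c + 1 for v in range(n_vertices) for c in range(3)}
    clauses = []
    for v in range(n_vertices):
        clauses.append([(var[(v, c)], True) for c in range(3)])            # some color
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([(var[(v, c1)], False), (var[(v, c2)], False)])  # at most one
    for (u, v) in edges:
        for c in range(3):
            clauses.append([(var[(u, c)], False), (var[(v, c)], False)])   # endpoints differ
    return clauses

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability check (exponential, of course -- that's the point)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v - 1] == pol for v, pol in clause) for clause in clauses):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
print(satisfiable(three_coloring_to_cnf(3, triangle), 9))     # True: a triangle is 3-colorable
k4 = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(satisfiable(three_coloring_to_cnf(4, k4), 12))          # False: K4 needs 4 colors
```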

So, if we were to draw a map of the complexity class NP  according to current knowledge, what would it look like?  There’d be a huge, growing component of NP-complete problems, all connected to each other by an intricate network of reductions.  There’d be a second huge component of P problems, many of them again connected by reductions.  Then, much like with the map of the continental US, there’d be a sparser population in the middle: stuff like factoring, graph isomorphism, and Unique Games that for various reasons has thus far resisted assimilation onto either of the coasts.

Of course, to prove P=NP, it would suffice to find a single link—that is, a single polynomial-time equivalence—between any of the tens of thousands of problems on the P coast, and any of the tens of thousands on the NP-complete one.  In half a century, this hasn’t happened: even as they’ve both ballooned exponentially, the two giant regions have remained defiantly separate from each other.  But that’s not even the main point.  The main point is that, as people explore these two regions, again and again there are “close calls”: places where, if a single parameter had worked out differently, the two regions would have come together in a cataclysmic collision.  Yet every single time, it’s just a fake-out.  Again and again the two regions “touch,” and their border even traces out weird and jagged shapes.  But even in those border zones, not a single problem ever crosses from one region to the other.  It’s as if they’re kept on their respective sides by an invisible electric fence.

As an example, consider the Set Cover problem: i.e., the problem, given a collection of subsets S1,…,Sm⊆{1,…,n}, of finding as few subsets as possible whose union equals the whole set.  Chvatal showed in 1979 that a greedy algorithm can produce, in polynomial time, a collection of sets whose size is at most ln(n) times larger than the optimum size.  This raises an obvious question: can you do better?  What about 0.9ln(n)?  Alas, building on a long sequence of prior works in PCP theory, it was recently shown that, if you could find a covering set at most (1-ε)ln(n) times larger than the optimum one, then you’d be solving an NP-complete problem, and P would equal NP.  Notice that, conversely, if the hardness result worked for ln(n) or anything above, then we’d also get P=NP.  So, why do the algorithm and the hardness result “happen to meet” at exactly ln(n), with neither one venturing the tiniest bit beyond?  Well, we might say, ln(n) is where the invisible electric fence is for this problem.
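
For concreteness, here’s a minimal sketch of the algorithmic side of that fence (Python; the instance below is an arbitrary toy example): the greedy algorithm from Chvátal’s analysis, which just keeps grabbing whichever set covers the most still-uncovered elements.

```python
def greedy_set_cover(universe, subsets):
    """Chvatal's greedy set-cover algorithm: repeatedly take whichever subset
    covers the most still-uncovered elements.  Runs in polynomial time and
    uses at most H(n) <= ln(n) + 1 times the optimal number of sets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the given subsets don't cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

universe = range(1, 11)
subsets = [{1, 2, 3, 8}, {1, 2, 3, 4, 5}, {4, 5, 7},
           {5, 6, 7}, {6, 7, 8, 9, 10}]
print(greedy_set_cover(universe, subsets))   # e.g. [1, 4]: two sets suffice here
```

Everything even slightly below that ln(n) guarantee is, per the hardness result just quoted, NP-hard territory.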

Want another example?  OK then, consider the “Boolean Max-k-CSP” problem: that is, the problem of setting n bits so as to satisfy the maximum number of constraints, where each constraint can involve an arbitrary Boolean function on any k of the bits.  The best known approximation algorithm, based on semidefinite programming, is guaranteed to satisfy at least a 2k/2^k fraction of the constraints.  Can you guess where this is going?  Recently, Siu On Chan showed that it’s NP-hard to satisfy even slightly more than a 2k/2^k fraction of constraints: if you can, then P=NP.  In this case the invisible electric fence sends off its shocks at 2k/2^k.

I could multiply such examples endlessly—or at least, Dana (my source for such matters) could do so.  But there are also dozens of “weird coincidences” that involve running times rather than approximation ratios; and that strongly suggest, not only that P≠NP, but that problems like 3SAT should require c^n time for some constant c > 1.  For a recent example—not even a particularly important one, but one that’s fresh in my memory—consider this paper by myself, Dana, and Russell Impagliazzo.  A first thing we do in that paper is to give an approximation algorithm for a family of two-prover games called “free games.”  Our algorithm runs in quasipolynomial time:  specifically, n^{O(log n)}.  A second thing we do is show how to reduce the NP-complete 3SAT problem to free games of size ~2^{O(√n)}.

Composing those two results, you get an algorithm for 3SAT whose overall running time is roughly

$$ 2^{O( \sqrt{n} \log 2^{\sqrt{n}}) } = 2^{O(n)}. $$

Of course, this doesn’t improve on the trivial “try all possible solutions” algorithm.  But notice that, if our approximation algorithm for free games had been slightly faster—say, n^{O(log log n)}—then we could’ve used it to solve 3SAT in $$ 2^{O(\sqrt{n} \log n)} $$ time.  Conversely, if our reduction from 3SAT had produced free games of size (say) $$ 2^{O(n^{1/3})} $$ rather than 2^{O(√n)}, then we could’ve used that to solve 3SAT in $$ 2^{O(n^{2/3})} $$ time.
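
Spelling out the arithmetic behind those two counterfactuals (a short worked calculation, with N denoting the size of the free-game instance that the reduction outputs):

$$ N^{O(\log\log N)}\Big|_{N=2^{O(\sqrt{n})}} = 2^{O(\sqrt{n}\,\log n)}, \qquad N^{O(\log N)}\Big|_{N=2^{O(n^{1/3})}} = 2^{O(n^{2/3})}. $$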

I should stress that these two results have completely different proofs: the approximation algorithm for free games “doesn’t know or care” about the existence of the reduction, nor does the reduction know or care about the algorithm.  Yet somehow, their respective parameters “conspire” so that 3SAT still needs c^n time.  And you see the same sort of thing over and over, no matter which problem domain you’re interested in.  These ubiquitous “coincidences” would be immediately explained if 3SAT actually did require c^n time—i.e., if it had a “hard core” for which brute-force search was unavoidable, no matter which way you sliced things up.  If that’s not true—i.e., if 3SAT has a subexponential algorithm—then we’re left with unexplained “spooky action at a distance.”  How do the algorithms and the reductions manage to coordinate with each other, every single time, to avoid spilling the subexponential secret?

Notice that, contrary to Luboš’s loud claims, there’s no “symmetry” between P=NP and P≠NP in these arguments.  Lower bound proofs are much harder to come across than either algorithms or reductions, and there’s not really a mystery about why: it’s hard to prove a negative!  (Especially when you’re up against known mathematical barriers, including relativization, algebrization, and natural proofs.)  In other words, even under the assumption that lower bound proofs exist, we now understand a lot about why the existing mathematical tools can’t deliver them, or can only do so for much easier problems.  Nor can I think of any example of a “spooky numerical coincidence” between two unrelated-seeming results, which would’ve yielded a proof of P≠NP had some parameters worked out differently.  P=NP and P≠NP can look like “symmetric” possibilities only if your symmetry is unbroken by knowledge.

Imagine a pond with small yellow frogs on one end, and large green frogs on the other.  After observing the frogs for decades, herpetologists conjecture that the populations represent two distinct species with different evolutionary histories, and are not interfertile.  Everyone realizes that to disprove this hypothesis, all it would take would be a single example of a green/yellow hybrid.  Since (for some reason) the herpetologists really care about this question, they undertake a huge program of breeding experiments, putting thousands of yellow female frogs next to green male frogs (and vice versa) during mating season, with candlelight, soft music, etc.  Nothing.

As this green vs. yellow frog conundrum grows in fame, other communities start investigating it as well: geneticists, ecologists, amateur nature-lovers, commercial animal breeders, ambitious teenagers on the science-fair circuit, and even some extralusionary physicists hoping to show up their dimwitted friends in biology.  These other communities try out hundreds of exotic breeding strategies that the herpetologists hadn’t considered, and contribute many useful insights.  They also manage to breed a larger, greener, but still yellow frog—something that, while it’s not a “true” hybrid, does have important practical applications for the frog-leg industry.  But in the end, no one has any success getting green and yellow frogs to mate.

Then one day, someone exclaims: “aha!  I just found a huge, previously-unexplored part of the pond where green and yellow frogs live together!  And what’s more, in this part, the small yellow frogs are bigger and greener than normal, and the large green frogs are smaller and yellower!”

This is exciting: the previously-sharp boundary separating green from yellow has been blurred!  Maybe the chasm can be crossed after all!

Alas, further investigation reveals that, even in the new part of the pond, the two frog populations still stay completely separate.  The smaller, yellower frogs there will mate with other small yellow frogs (even from faraway parts of the pond that they’d never ordinarily visit), but never, ever with the larger, greener frogs even from their own part.  And vice versa.  The result?  A discovery that could have falsified the original hypothesis has instead strengthened it—and precisely because it could’ve falsified it but didn’t.

Now imagine the above story repeated a few dozen more times—with more parts of the pond, a neighboring pond, sexually-precocious tadpoles, etc.  Oh, and I forgot to say this before, but imagine that doing a DNA analysis, to prove once and for all that the green and yellow frogs had separate lineages, is extraordinarily difficult.  But the geneticists know why it’s so difficult, and the reasons have more to do with the limits of their sequencing machines and with certain peculiarities of frog DNA, than with anything about these specific frogs.  In fact, the geneticists did get the sequencing machines to work for the easier cases of turtles and snakes—and in those cases, their results usually dovetailed well with earlier guesses based on behavior.  So for example, where reddish turtles and bluish turtles had never been observed interbreeding, the reason really did turn out to be that they came from separate species.  There were some surprises, of course, but nothing even remotely as shocking as seeing the green and yellow frogs suddenly getting it on.

Now, even after all this, someone could saunter over to the pond and say: “ha, what a bunch of morons!  I’ve never even seen a frog or heard one croak, but I know that you haven’t proved anything!  For all you know, the green and yellow frogs will start going at it tomorrow.  And don’t even tell me about ‘the weight of evidence,’ blah blah blah.  Biology is a scummy mud-discipline.  It has no ideas or principles; it’s just a random assortment of unrelated facts.  If the frogs started mating tomorrow, that would just be another brute, arbitrary fact, no more surprising or unsurprising than if they didn’t start mating tomorrow.  You jokers promote the ideology that green and yellow frogs are separate species, not because the evidence warrants it, but just because it’s a convenient way to cover up your own embarrassing failure to get them to mate.  I could probably breed them myself in ten minutes, but I have better things to do.”

At this, a few onlookers might nod appreciatively and say: “y’know, that guy might be an asshole, but let’s give him credit: he’s unafraid to speak truth to competence.”

Even among the herpetologists, a few might beat their breasts and announce: “Who’s to say he isn’t right?  I mean, what do we really know?  How do we know there even is a pond, or that these so-called ‘frogs’ aren’t secretly giraffes?  I, at least, have some small measure of wisdom, in that I know that I know nothing.”

What I want you to notice is how scientifically worthless all of these comments are.  If you wanted to do actual research on the frogs, then regardless of which sympathies you started with, you’d have no choice but to ignore the naysayers, and proceed as if the yellow and green frogs were different species.  Sure, you’d have in the back of your mind that they might be the same; you’d be ready to adjust your views if new evidence came in.  But for now, the theory that there’s just one species, divided into two subgroups that happen never to mate despite living in the same habitat, fails miserably at making contact with any of the facts that have been learned.  It leaves too much unexplained; in fact it explains nothing.

For all that, you might ask, don’t the naysayers occasionally turn out to be right?  Of course they do!  But if they were right more than occasionally, then science wouldn’t be possible.  We would still be in caves, beating our breasts and asking how we can know that frogs aren’t secretly giraffes.

So, that’s what I think about P and NP.  Do I expect this post to convince everyone?  No—but to tell you the truth, I don’t want it to.  I want it to convince most people, but I also want a few to continue speculating that P=NP.

Why, despite everything I’ve said, do I want maybe-P=NP-ism not to die out entirely?  Because alongside the P=NP carpers, I also often hear from a second group of carpers.  This second group says that P and NP are so obviously, self-evidently unequal that the quest to separate them with mathematical rigor is quixotic and absurd.  Theoretical computer scientists should quit wasting their time struggling to understand truths that don’t need to be understood, but only accepted, and do something useful for the world.  (A natural generalization of this view, I guess, is that all basic science should end.)  So, what I really want is for the two opposing groups of naysayers to keep each other in check, so that those who feel impelled to do so can get on with the fascinating quest to understand the ultimate limits of computation.


Update (March 8): At least eight readers have by now emailed me, or left comments, asking why I’m wasting so much time and energy arguing with Luboš Motl.  Isn’t it obvious that, ever since he stopped doing research around 2006 (if not earlier), this guy has completely lost his marbles?  That he’ll never, ever change his mind about anything?

Yes.  In fact, I’ve noticed repeatedly that, even when Luboš is wrong about a straightforward factual matter, he never really admits error: he just switches, without skipping a beat, to some other way to attack his interlocutor.  (To give a small example: watch how he reacts to being told that graph isomorphism is neither known nor believed to be NP-complete.  Caught making a freshman-level error about the field he’s attacking, he simply rants about how graph isomorphism is just as “representative” and “important” as NP-complete problems anyway, since no discrete math question is ever more or less “important” than any other; they’re all equally contrived and arbitrary.  At the Luboš casino, you lose even when you win!  The only thing you can do is stop playing and walk away.)

Anyway, my goal here was never to convince Luboš.  I was writing, not for him, but for my other readers: especially for those genuinely unfamiliar with these interesting issues, or intimidated by Luboš’s air of certainty.  I felt like I owed it to them to set out, clearly and forcefully, certain facts that all complexity theorists have encountered in their research, but that we hardly ever bother to articulate.  If you’ve never studied physics, then yes, it sounds crazy that there would be quadrillions of invisible neutrinos coursing through your body.  And if you’ve never studied computer science, it sounds crazy that there would be an “invisible electric fence,” again and again just barely separating what the state-of-the-art approximation algorithms can handle from what the state-of-the-art PCP tools can prove is NP-complete.  But there it is, and I wanted everyone else at least to see what the experts see, so that their personal judgments about the likelihood of P=NP could be informed by seeing it.

Luboš’s response to my post disappointed me (yes, really!).  I expected it to be nasty and unhinged, and so it was.  What I didn’t expect was that it would be so intellectually lightweight.  Confronted with the total untenability of his foot-stomping distinction between “continuous math” (where you can have justified beliefs without proof) and “discrete math” (where you can’t), and with exactly the sorts of “detailed, confirmed predictions” of the P≠NP hypothesis that he’d declared impossible, Luboš’s response was simply to repeat his original misconceptions, but louder.

And that brings me, I confess, to a second reason for my engagement with Luboš.  Several times, I’ve heard people express sentiments like:

Yes, of course Luboš is a raging jerk and a social retard.  But if you can just get past that, he’s so sharp and intellectually honest!  No matter how many people he needlessly offends, he always tells it like it is.

I want the nerd world to see—in as stark a situation as possible—that the above is not correct.  Luboš is wrong much of the time, and he’s intellectually dishonest.

At one point in his post, Luboš actually compares computer scientists who find P≠NP a plausible working hypothesis to his even greater nemesis: the “climate cataclysmic crackpots.”  (Strangely, he forgot to compare us to feminists, Communists, Muslim terrorists, or loop quantum gravity theorists.)  Even though the P versus NP and global warming issues might not seem closely linked, part of me is thrilled that Luboš has connected them as he has.  If, after seeing this ex-physicist’s “thought process” laid bare on the P versus NP problem—how his arrogance and incuriosity lead him to stake out a laughably-absurd position; how his vanity then causes him to double down after his errors are exposed—if, after seeing this, a single person is led to question Lubošian epistemology more generally, then my efforts will not have been in vain.

Anyway, now that I’ve finally unmasked Luboš—certainly to my own satisfaction, and I hope to that of most scientifically-literate readers—I’m done with this.  The physicist John Baez is rumored to have said: “It’s not easy to ignore Luboš, but it’s ALWAYS worth the effort.”  It took me eight years, but I finally see the multiple layers of profundity hidden in that snark.

And thus I make the following announcement:

For the next three years, I, Scott Aaronson, will not respond to anything Luboš says, nor will I allow him to comment on this blog.

In March 2017, I’ll reassess my Luboš policy.  Whether I relent will depend on a variety of factors—including whether Luboš has gotten the professional help he needs (from a winged pig, perhaps?) and changed his behavior; but also, how much my own quality of life has improved in the meantime.


Another Update (3/11): There’s some further thoughtful discussion of this post over on Reddit.


Another Update (3/13): Check out my MathOverflow question directly inspired by the comments on this post.


Yet Another Update (3/17): Dick Lipton and Ken Regan now have a response up to this post. My own response is coming soon in their comment section. For now, check out an excellent comment by Timothy Gowers, which begins “I firmly believe that P≠NP,” then plays devil’s-advocate by exploring the possibility of what, in this comment thread, I called P being ‘severed in two,’ and finally returns to reasons for believing that P≠NP after all.

Recent papers by Susskind and Tao illustrate the long reach of computation

Sunday, March 2nd, 2014

Most of the time, I’m a crabby, cantankerous ogre, whose only real passion in life is using this blog to shoot down the wrong ideas of others.  But alas, try as I might to maintain my reputation as a pure bundle of seething negativity, sometimes events transpire that pierce my crusty exterior.  Maybe it’s because I’m in Berkeley now, visiting the new Simons Institute for Theory of Computing during its special semester on Hamiltonian complexity.  And it’s tough to keep up my acerbic East Coast skepticism of everything new in the face of all this friggin’ sunshine.  (Speaking of which, if you’re in the Bay Area and wanted to meet me, this week’s the week!  Email me.)  Or maybe it’s watching Lily running around, her eyes wide with wonder.  If she’s so excited by her discovery of (say) a toilet plunger or some lint on the floor, what right do I have not to be excited by actual scientific progress?

Which brings me to the third reason for my relatively-sunny disposition: two long and fascinating recent papers on the arXiv.  What these papers have in common is that they use concepts from theoretical computer science in unexpected ways, while trying to address open problems at the heart of “traditional, continuous” physics and math.  One paper uses quantum circuit complexity to help understand black holes; the other uses fault-tolerant universal computation to help understand the Navier-Stokes equations.

Recently, our always-pleasant string-theorist friend Luboš Motl described computational complexity theorists as “extraordinarily naïve” (earlier, he also called us “deluded” and “bigoted”).  Why?  Because we’re obsessed with “arbitrary, manmade” concepts like the set of problems solvable in polynomial time, and especially because we assume things we haven’t yet proved such as P≠NP.  (Jokes about throwing stones from a glass house—or a stringy house—are left as exercises for the reader.)  The two papers that I want to discuss today reflect a different perspective: one that regards computation as no more “arbitrary” than other central concepts of mathematics, and indeed, as something that shows up even in contexts that seem incredibly remote from it, from the AdS/CFT correspondence to turbulent fluid flow.


Our first paper is Computational Complexity and Black Hole Horizons, by Lenny Susskind.  As readers of this blog might recall, last year Daniel Harlow and Patrick Hayden made a striking connection between computational complexity and the black-hole “firewall” question, by giving complexity-theoretic evidence that performing the measurement of Hawking radiation required for the AMPS experiment would require an exponentially-long quantum computation.  In his new work, Susskind makes a different, and in some ways even stranger, connection between complexity and firewalls.  Specifically, given an n-qubit pure state |ψ〉, recall that the quantum circuit complexity of |ψ〉 is the minimum number of 2-qubit gates needed to prepare |ψ〉 starting from the all-|0〉 state.  Then for reasons related to black holes and firewalls, Susskind wants to use the quantum circuit complexity of |ψ〉 as an intrinsic clock, to measure how long |ψ〉 has been evolving for.  Last week, I had the pleasure of visiting Stanford, where Lenny spent several hours explaining this stuff to me.  I still don’t fully understand it, but since it’s arguable that no one (including Lenny himself) does, let me give it a shot.

My approach will be to divide into two questions.  The first question is: why, in general (i.e., forgetting about black holes), might one want to use quantum circuit complexity as a clock?  Here the answer is: because unlike most other clocks, this one should continue to tick for an exponentially long time!

Consider some standard, classical thermodynamic system, like a box filled with gas, with the gas all initially concentrated in one corner.  Over time, the gas will diffuse across the box, in accord with the Second Law, until it completely equilibrates.  Furthermore, if we know the laws of physics, then we can calculate exactly how fast this diffusion will happen.  But this implies that we can use the box as a clock!  To do so, we’d simply have to measure how diffused the gas was, then work backwards to determine how much time had elapsed since the gas started diffusing.

But notice that this “clock” only works until the gas reaches equilibrium—i.e., is equally spread across the box.  Once the gas gets to equilibrium, which it does in a reasonably short time, it just stays there (at least until the next Poincaré recurrence time).  So, if you see the box in equilibrium, there’s no measurement you could make—or certainly no “practical” measurement—that would tell you how long it’s been there.  Indeed, if we model the collisions between gas particles (and between gas particles and the walls of the box) as random events, then something even stronger is true.  Namely, the probability distribution over all possible configurations of the gas particles will quickly converge to an equilibrium distribution.  And if all you knew was that the particles were in the equilibrium distribution, then there’s no property of their distribution that you could point to—not even an abstract, unmeasurable property—such that knowing that property would tell you how long the gas had been in equilibrium.

Interestingly, something very different happens if we consider a quantum pure state, in complete isolation from its environment.  If you have some quantum particles in a perfectly-isolating box, and you start them out in a “simple” state (say, with all particles unentangled and in a corner), then they too will appear to diffuse, with their wavefunctions spreading out and getting entangled with each other, until the system reaches “equilibrium.”  After that, there will once again be no “simple” measurement you can make—say, of the density of particles in some particular location—that will give you any idea of how long the box has been in equilibrium.  On the other hand, the laws of unitary evolution assure us that the quantum state is still evolving, rotating serenely through Hilbert space, just like it was before equilibration!  Indeed, in principle you could even measure that the evolution was still happening, but to do so, you’d need to perform an absurdly precise and complicated measurement—one that basically inverted the entire unitary transformation that had been applied since the particles started diffusing.

Lenny now asks the question: if the quantum state of the particles continues to evolve even after “equilibration,” then what physical quantity can we point to as continuing to increase?  By the argument above, it can’t be anything simple that physicists are used to talking about, like coarse-grained entropy.  Indeed, the most obvious candidate that springs to mind, for a quantity that should keep increasing even after equilibration, is just the quantum circuit complexity of the state!  If there’s no “magic shortcut” to simulating this system—that is, if the fastest way to learn the quantum state at time T is just to run the evolution equations forward for T time steps—then the quantum circuit complexity will continue to increase linearly with T, long after equilibration.  Eventually, the complexity will “max out” at ~c^n, where n is the number of particles, simply because (neglecting small multiplicative terms) the dimension of the Hilbert space is always an upper bound on the circuit complexity.  After even longer amounts of time—like ~c^(c^n)—the circuit complexity will dip back down (sometimes even to 0), as the quantum state undergoes recurrences.  But both of those effects only occur on timescales ridiculously longer than anything normally relevant to physics or everyday life.

Admittedly, given the current status of complexity theory, there’s little hope of proving unconditionally that the quantum circuit complexity continues to rise until it becomes exponential, when some time-independent Hamiltonian is continuously applied to the all-|0〉 state.  (If we could prove such a statement, then presumably we could also prove PSPACE⊄BQP/poly.)  But maybe we could prove such a statement modulo a reasonable conjecture.  And we do have suggestive weaker results.  In particular (and as I just learned this Friday), in 2012 Brandão, Harrow, and Horodecki, building on earlier work due to Low, showed that, if you apply S >> n random two-qubit gates to n qubits initially in the all-|0〉 state, then with high probability, not only do you get a state with large circuit complexity, you get a state that can’t even be distinguished from the maximally mixed state by any quantum circuit with at most ~S^(1/6) gates.
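
For readers who like to see such things numerically, here’s a small NumPy sketch (my own toy, not the Brandão-Harrow-Horodecki construction, and with Haar-random rather than gate-set gates): apply random two-qubit unitaries to n qubits in the all-|0〉 state, and watch a single qubit’s reduced state rush toward the maximally mixed state I/2, illustrating the “equilibration” discussed above, even though the global state stays exactly pure the whole time.

```python
# Random two-qubit circuits on n qubits: locally the state quickly looks
# maximally mixed, even though globally it remains a pure state that keeps
# evolving under the applied gates.

import numpy as np

def haar_unitary(dim, rng):
    """Haar-random dim x dim unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_two_qubit_gate(state, gate, i, j, n):
    """Apply a 4x4 gate to qubits i, j of an n-qubit state of shape (2,)*n."""
    state = np.moveaxis(state, (i, j), (0, 1)).reshape(4, -1)
    state = gate @ state
    state = state.reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(state, (0, 1), (i, j))

def single_qubit_marginal(state, k, n):
    """Reduced density matrix of qubit k."""
    psi = np.moveaxis(state, k, 0).reshape(2, -1)
    return psi @ psi.conj().T

n = 10
rng = np.random.default_rng(0)
state = np.zeros((2,) * n, dtype=complex)
state[(0,) * n] = 1.0                              # the all-|0> state

for gate_count in range(1, 201):
    i, j = rng.choice(n, size=2, replace=False)
    state = apply_two_qubit_gate(state, haar_unitary(4, rng), i, j, n)
    if gate_count in (5, 20, 50, 200):
        rho = single_qubit_marginal(state, 0, n)
        dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - np.eye(2) / 2)))
        print(f"{gate_count:4d} gates: trace distance of qubit 0 from I/2 = {dist:.4f}")
```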

OK, now on to the second question: what does any of this have to do with black holes?  The connection Lenny wants to make involves the AdS/CFT correspondence, the “duality” between two completely different-looking theories that’s been all the rage in string theory since the late 1990s.  On one side of the ring is AdS (Anti-de Sitter), a quantum-gravitational theory in D spacetime dimensions—one where black holes can form and evaporate, etc., but on the other hand, the entire universe is surrounded by a reflecting boundary a finite distance away, to help keep everything nice and unitary.  On the other side is CFT (Conformal Field Theory): an “ordinary” quantum field theory, with no gravity, that lives only on the (D-1)-dimensional “boundary” of the AdS space, and not in its interior “bulk.”  The claim of AdS/CFT is that despite how different they look, these two theories are “equivalent,” in the sense that any calculation in one theory can be transformed to a calculation in the other theory that yields the same answer.  Moreover, we get mileage this way, since a calculation that’s hard on the AdS side is often easy on the CFT side and vice versa.

As an example, suppose we’re interested in what happens inside a black hole—say, because we want to investigate the AMPS firewall paradox.  Now, figuring out what happens inside a black hole (or even on or near the event horizon) is a notoriously hard problem in quantum gravity; that’s why people have been arguing about firewalls for the past two years, and about the black hole information problem for the past forty!  But what if we could put our black hole in an AdS box?  Then using AdS/CFT, couldn’t we translate questions about the black-hole interior to questions about the CFT on the boundary, which don’t involve gravity and which would therefore hopefully be easier to answer?

In fact people have tried to do that—but frustratingly, they haven’t been able to use the CFT calculations to answer even the grossest, most basic questions about what someone falling into the black hole would actually experience.  (For example, would that person hit a “firewall” and die immediately at the horizon, or would she continue smoothly through, only dying close to the singularity?)  Lenny’s paper explores a possible reason for this failure.  It turns out that the way AdS/CFT works, the closer to the black hole’s event horizon you want to know what happens, the longer you need to time-evolve the quantum state of the CFT to find out.  In particular, if you want to know what’s going on at distance ε from the event horizon, then you need to run the CFT for an amount of time that grows like log(1/ε).  And what if you want to know what’s going on inside the black hole?  In line with the holographic principle, it turns out that you can express an observable inside the horizon by an integral over the entire AdS space outside the horizon.  Now, that integral will include a part where the distance ε from the event horizon goes to 0—so log(1/ε), and hence the complexity of the CFT calculation that you have to do, diverges to infinity.  For some kinds of calculations, the ε→0 part of the integral isn’t very important, and can be neglected at the cost of only a small error.  For other kinds of calculations, unfortunately—and in particular, for the kind that would tell you whether or not there’s a firewall—the ε→0 part is extremely important, and it makes the CFT calculation hopelessly intractable.

Note that yes, we even need to continue the integration for ε much smaller than the Planck length—i.e., for so-called “transplanckian” distances!  As Lenny puts it, however:

For most of this vast sub-planckian range of scales we don’t expect that the operational meaning has anything to do with meter sticks … It has more to do with large times than small distances.

One could give this transplanckian blowup in computational complexity a pessimistic spin: darn, so it’s probably hopeless to use AdS/CFT to prove once and for all that there are no firewalls!  But there’s also a more positive interpretation: the interior of a black hole is “protected from meddling” by a thick armor of computational complexity.  To explain this requires a digression about firewalls.

The original firewall paradox of AMPS could be phrased as follows: if you performed a certain weird, complicated measurement on the Hawking radiation emitted from a “sufficiently old” black hole, then the expected results of that measurement would be incompatible with also seeing a smooth, Einsteinian spacetime if you later jumped into the black hole to see what was there.  (Technically, because you’d violate the monogamy of entanglement.)  If what awaited you behind the event horizon wasn’t a “classical” black hole interior with a singularity in the middle, but an immediate breakdown of spacetime, then one says you would’ve “hit a firewall.”

Yes, it seems preposterous that “firewalls” would exist: at the least, it would fly in the face of everything people thought they understood for decades about general relativity and quantum field theory.  But crucially—and here I have to disagree with Stephen Hawking—one can’t “solve” this problem simply by reiterating how physically absurd firewalls would be, or by constructing scenarios where firewalls “self-evidently” don’t arise.  Instead, as I see it, solving the problem means giving an account of what actually happens when you do the AMPS experiment, or of what goes wrong when you try to do it.

On this last question, it seems to me that Susskind and Juan Maldacena achieved a major advance in their much-discussed ER=EPR paper last year.  Namely, they presented a picture where, sure, a firewall arises (at least temporarily) if you do the AMPS experiment—but no firewall arises if you don’t do the experiment!  In other words, doing the experiment sends a nonlocal signal to the interior of the black hole (though you do have to jump into the black hole to receive the signal, so causality outside the black hole is still preserved).  Now, how is it possible for your measurement of the Hawking radiation to send an instantaneous signal to the black hole interior, which might be light-years away from you when you measure?  On Susskind and Maldacena’s account, it’s possible because the entanglement between the Hawking radiation and the degrees of freedom still in the black hole can be interpreted as creating wormholes between the two.  Under ordinary conditions, these wormholes (like most wormholes in general relativity) are “non-traversable”: they “pinch off” if you try to send signals through them, so they can’t be used for faster-than-light communication.  However, if you did the AMPS experiment, then the wormholes would become traversable, and could carry a firewall (or an innocuous happy-birthday message, or whatever) from the Hawking radiation to the black hole interior.  (Incidentally, ER stands for Einstein and Rosen, who wrote a famous paper on wormholes, while EPR stands for Einstein, Podolsky, and Rosen, who wrote a famous paper on entanglement.  “ER=EPR” is Susskind and Maldacena’s shorthand for their proposed connection between wormholes and entanglement.)

Anyway, these heady ideas raise an obvious question: how hard would it be to do the AMPS experiment?  Is sending a nonlocal signal to the interior of a black hole via that experiment actually a realistic possibility?  In their work a year ago on computational complexity and firewalls, Harlow and Hayden already addressed that question, though from a different perspective than Susskind.  In particular, Harlow and Hayden gave strong evidence that carrying out the AMPS experiment would require solving a problem believed to be exponentially hard even for a quantum computer: specifically, a complete problem for QSZK (Quantum Statistical Zero Knowledge).  In followup work (not yet written up, though see my talk at KITP and my PowerPoint slides), I showed that the Harlow-Hayden problem is actually at least as hard as inverting one-way functions, which is even stronger evidence for hardness.

All of this suggests that, even supposing we could surround an astrophysical black hole with a giant array of perfect photodetectors, wait ~10^69 years for the black hole to (mostly) evaporate, then route the Hawking photons into a super-powerful, fault-tolerant quantum computer, doing the AMPS experiment (and hence, creating traversable wormholes to the black hole interior) still wouldn’t be a realistic prospect, even if the equations formally allow it!  There’s no way to sugarcoat this: computational complexity limitations seem to be the only thing protecting the geometry of spacetime from nefarious experimenters.

Anyway, Susskind takes that amazing observation of Harlow and Hayden as a starting point, but then goes off on a different tack.  For one thing, he isn’t focused on the AMPS experiment (the one involving monogamy of entanglement) specifically: he just wants to know how hard it is to do any experiment (possibly a different one) that would send nonlocal signals to the black hole interior.  For another, unlike Harlow and Hayden, Susskind isn’t trying to show that such an experiment would be exponentially hard.  Instead, he’s content if the experiment is “merely” polynomially hard—but in the same sense that (say) unscrambling an egg, or recovering a burned book from the smoke and ash, are polynomially hard.  In other words, Susskind only wants to argue that creating a traversable wormhole would be “thermodynamics-complete.”  A third, related, difference is that Susskind considers an extremely special model scenario: namely, the AdS/CFT description of something called the “thermofield double state.”  (This state involves two entangled black holes in otherwise-separated spacetimes; according to ER=EPR, we can think of those black holes as being connected by a wormhole.)  While I don’t yet understand this point, apparently the thermofield double state is much more favorable for firewall production than a “realistic” spacetime—and in particular, the Harlow-Hayden argument doesn’t apply to it.  Susskind wants to show that even so (i.e., despite how “easy” we’ve made it), sending a signal through the wormhole connecting the two black holes of the thermofield double state would still require solving a thermodynamics-complete problem.

So that’s the setup.  What new insights does Lenny get?  This, finally, is where we circle back to the view of quantum circuit complexity as a clock.  Briefly, Lenny finds that the quantum state getting more and more complicated in the CFT description—i.e., its quantum circuit complexity going up and up—directly corresponds to the wormhole getting longer and longer in the AdS description.  (Indeed, the length of the wormhole increases linearly with time, growing like the circuit complexity divided by the total number of qubits.)  And the wormhole getting longer and longer is what makes it non-traversable—i.e., what makes it impossible to send a signal through.
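
In symbols, and purely as a restatement of the parenthetical claim above (with $\mathcal{C}(t)$ the circuit complexity of the boundary state and $n$ the number of qubits):

$$
\ell_{\mathrm{wormhole}}(t)\;\propto\;\frac{\mathcal{C}(t)}{n}\;\propto\;t,
$$

for times long after equilibration but well before $\mathcal{C}$ saturates at its exponential maximum.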

Why has quantum circuit complexity made a sudden appearance here?  Because in the CFT description, the circuit complexity continuing to increase is the only thing that’s obviously “happening”!  From a conventional physics standpoint, the quantum state of the CFT very quickly reaches equilibrium and then just stays there.  If you measured some “conventional” physical observable—say, the energy density at a particular point—then it wouldn’t look like the CFT state was continuing to evolve at all.  And yet we know that the CFT state is evolving, for two extremely different reasons.  Firstly, because (as we discussed early on in this post) unitary evolution is still happening, so presumably the state’s quantum circuit complexity is continuing to increase.  And secondly, because in the dual AdS description, the wormhole is continuing to get longer!

From this connection, at least three striking conclusions follow:

  1. That even when nothing else seems to be happening in a physical system (i.e., it seems to have equilibrated), the fact that the system’s quantum circuit complexity keeps increasing can be “physically relevant” all by itself.  We know that it’s physically relevant, because in the AdS dual description, it corresponds to the wormhole getting longer!
  2. That even in the special case of the thermofield double state, the geometry of spacetime continues to be protected by an “armor” of computational complexity.  Suppose that Alice, in one half of the thermofield double state, wants to send a message to Bob in the other half (which Bob can retrieve by jumping into his black hole).  In order to get her message through, Alice needs to prevent the wormhole connecting her black hole to Bob’s from stretching uncontrollably—since as long as it stretches, the wormhole remains non-traversable.  But in the CFT picture, stopping the wormhole from stretching corresponds to stopping the quantum circuit complexity from increasing!  And that, in turn, suggests that Alice would need to act on the radiation outside her black hole in an incredibly complicated and finely-tuned way.  For “generically,” the circuit complexity of an n-qubit state should just continue to increase, the longer you run unitary evolution for, until it hits its exp(n) maximum.  To prevent that from happening would essentially require “freezing” or “inverting” the unitary evolution applied by nature—but that’s the sort of thing that we expect to be thermodynamics-complete.  (How exactly do Alice’s actions in the “bulk” affect the evolution of the CFT state?  That’s an excellent question that I don’t understand AdS/CFT well enough to answer.  All I know is that the answer involves something that Lenny calls “precursor operators.”)
  3. The third and final conclusion is that there can be a physically-relevant difference between pseudorandom n-qubit pure states and “truly” random states—even though, by the definition of pseudorandom, such a difference can’t be detected by any small quantum circuit!  Once again, the way to see the difference is using AdS/CFT.  It’s easy to show, by a counting argument (sketched just after this list), that almost all n-qubit pure states have nearly-maximal quantum circuit complexity.  But if the circuit complexity is already maximal, that means in particular that it’s not increasing!  Lenny argues that this corresponds to the wormhole between the two black holes no longer stretching.  But if the wormhole is no longer stretching, then it’s “vulnerable to firewalls” (i.e., to messages going through!).  It had previously been argued that random CFT states almost always correspond to black holes with firewalls—and since the CFT states formed by realistic physical processes will look “indistinguishable from random states,” black holes that form under realistic conditions should generically have firewalls as well.  But Lenny rejects this argument, on the grounds that the CFT states that arise in realistic situations are not random pure states.  And what distinguishes them from random states?  Simply that they have non-maximal (and increasing) quantum circuit complexity!
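
For the curious, here’s the standard counting argument alluded to in point 3 (a sketch only, assuming a fixed finite universal gate set of size g whose gates act on at most two qubits at a time; the constants aren’t meant to be tight).  The number of states reachable by circuits with at most T gates is at most about $(g\,n^{2})^{T}$, since each gate involves a choice of gate type and of which qubits it acts on; meanwhile, any $\epsilon$-net for the set of n-qubit pure states needs at least $(1/\epsilon)^{\Omega(2^{n})}$ points, since that set has about $2^{n+1}$ real dimensions.  So reaching all but a vanishing fraction of states to accuracy $\epsilon$ requires

$$
\bigl(g\,n^{2}\bigr)^{T}\;\ge\;(1/\epsilon)^{\Omega(2^{n})}
\quad\Longrightarrow\quad
T\;\ge\;\Omega\!\left(\frac{2^{n}\,\log(1/\epsilon)}{\log(g\,n^{2})}\right),
$$

which is exponential in n, consistent with the claim that a random pure state has near-maximal circuit complexity.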

I’ll leave you with a question of my own about this complexity / black hole connection: one that I’m unsure how to think about, but that perhaps interests me more than any other here.  My question is: could you ever learn the answer to an otherwise-intractable computational problem by jumping into a black hole?  Of course, you’d have to really want the answer—so much so that you wouldn’t mind dying moments after learning it, or not being able to share it with anyone else!  But never mind that.  What I have in mind is first applying some polynomial-size quantum circuit to the Hawking radiation, then jumping into the black hole to see what nonlocal effect (if any) the circuit had on the interior.  The fact that the mapping between interior and exterior states is so complicated suggests that there might be complexity-theoretic mileage to be had this way, but I don’t know what.  (It’s also possible that you can get a computational speedup in special cases like the thermofield double state, but that a Harlow-Hayden-like obstruction prevents you from getting one with real astrophysical black holes.  I.e., that for real black holes, you’ll just see a smooth, boring, Einsteinian black hole interior no matter what polynomial-size quantum circuit you applied to the Hawking radiation.)


If you’re still here, the second paper I want to discuss today is Finite-time blowup for an averaged three-dimensional Navier-Stokes equation by Terry Tao.  (See also the excellent Quanta article by Erica Klarreich.)  I’ll have much, much less to say about this paper than I did about Susskind’s, but that’s not because it’s less interesting: it’s only because I understand the issues even less well.

Navier-Stokes existence and smoothness is one of the seven Clay Millennium Problems (alongside P vs. NP, the Riemann Hypothesis, etc.).  The problem asks whether the standard, classical differential equations for three-dimensional fluid flow are well-behaved, in the sense of not “blowing up” (e.g., concentrating infinite energy on a single point) after a finite amount of time.

Expanding on ideas from his earlier blog posts and papers about Navier-Stokes (see here for the gentlest of them), Tao argues that the Navier-Stokes problem is closely related to the question of whether or not it’s possible to “build a fault-tolerant universal computer out of water.”  Why?  Well, it’s not the computational universality per se that matters, but if you could use fluid flow to construct general enough computing elements—resistors, capacitors, transistors, etc.—then you could use those elements to recursively shift the energy in a given region into a region half the size, and from there to a region a quarter the size, and so on, faster and faster, until you got infinite energy density after a finite amount of time.
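
To make the “faster and faster” part concrete, here’s the schematic arithmetic of such a self-similar construction (the factor of two per step is purely illustrative, not Tao’s actual estimates).  If the k-th transfer step squeezes the energy E into a region of half the previous linear size and takes half as long as the previous step, then

$$
t_{\mathrm{total}}=\sum_{k=1}^{\infty}\frac{\tau}{2^{k}}=\tau\;<\;\infty,
\qquad
\text{energy density after }k\text{ steps}\;\sim\;\frac{E}{(L/2^{k})^{3}}=\frac{8^{k}\,E}{L^{3}}\;\longrightarrow\;\infty,
$$

so the energy density diverges at the finite time $\tau$ after the first step begins.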

Strikingly, building on an earlier construction by Katz and Pavlovic, Tao shows that this is actually possible for an “averaged” version of the Navier-Stokes equations!  So at the least, any proof of existence and smoothness for the real Navier-Stokes equations will need to “notice” the difference between the real and averaged versions.  In his paper, though, Tao hints at the possibility (or dare one say likelihood?) that the truth might go the other way.  That is, maybe the “universal computer” construction can be ported from the averaged Navier-Stokes equations to the real ones.  In that case, we’d have blowup in finite time for the real equations, and a negative solution to the Navier-Stokes existence and smoothness problem.  Of course, such a result wouldn’t imply that real, physical water was in any danger of “blowing up”!  It would simply mean that the discrete nature of water (i.e., the fact that it’s made of H2O molecules, rather than being infinitely divisible) was essential to understanding its stability given arbitrary initial conditions.

So, what are the prospects for such a blowup result?  Let me quote from Tao’s paper:

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program [to prove a blowup result for the “real” Navier-Stokes equations] are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice.  The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties.  In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful.

One minor point that I’d love to understand is, what happens in two dimensions?  Existence and smoothness are known to hold for the 2-dimensional analogues of the Navier-Stokes equations.  If they also held for the averaged 2-dimensional equations, then it would follow that Tao’s “universal computer” must be making essential use of the third dimension. How?  If I knew the answer to that, then I’d feel for the first time like I had some visual crutch for understanding why 3-dimensional fluid flow is so complicated, even though 2-dimensional fluid flow isn’t.

I see that, in blog comments here and here, Tao says that the crucial difference between the 2- and 3-dimensional Navier-Stokes equations arises from the different scaling behavior of the dissipation term: basically, you can ignore it in 3 or more dimensions, but you can’t ignore it in 2.  But maybe there’s a more doofus-friendly explanation, which would start with some 3-dimensional fluid logic gate, and then explain why the gate has no natural 2-dimensional analogue, or why dissipation causes its analogue to fail.
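
For what it’s worth, here is one textbook-style way to phrase that dimension-dependence (a sketch in my own words, not Tao’s exact formulation).  The basic a priori bound for Navier-Stokes is the energy inequality

$$
\frac{1}{2}\int|u(x,t)|^{2}\,dx\;+\;\nu\int_{0}^{t}\!\!\int|\nabla u(x,s)|^{2}\,dx\,ds\;\le\;\frac{1}{2}\int|u(x,0)|^{2}\,dx,
$$

and under the equations’ natural scaling $u_{\lambda}(x,t)=\lambda\,u(\lambda x,\lambda^{2}t)$, every term here scales like $\lambda^{2-d}$.  In d=2 the bound is scale-invariant, so the dissipation term keeps constraining the solution no matter how far you zoom in; in d=3 it shrinks like $1/\lambda$ at fine scales ($\lambda\to\infty$), which is one sense in which dissipation becomes negligible exactly where a hypothetical blowup would live.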


Obviously, there’s much more to say about both papers (especially the second…) than I said in this post, and many people more knowledgeable than I am to say those things.  But that’s what the comments section is for.  Right now I’m going outside to enjoy the California sunshine.