Archive for the ‘Embarrassing Myself’ Category

I’m liveblogging from the Taj Mahal

Saturday, December 22nd, 2007

No particular news to report — it’s about the same as it was 400 years ago, I guess. I just wanted to liveblog from the Taj Mahal, is all. (Jonathan Walgate is the one who suggested it.) Now I’ll go back to looking at it.

The most trivial theorem I’ve ever written up

Wednesday, May 2nd, 2007

Theorem: Suppose NP-complete problems are efficiently solvable by quantum computers. Then either the polynomial hierarchy collapses, or else BQP ⊄ AM (that is, quantum computations can’t be efficiently simulated by Arthur-Merlin protocols).

Proof: Suppose NP ⊆ BQP and BQP ⊆ AM. Since BQP is closed under complement, coNP ⊆ BQP ⊆ AM as well, and hence the polynomial hierarchy collapses to the second level by a result of Boppana, Håstad, and Zachos.

Note: If only we could delete the weasel phrase “or else BQP ⊄ AM” from my Most Trivial Theorem, we would’ve achieved a long-sought breakthrough in quantum computing theory. In particular, we would’ve shown that any fast quantum algorithm to solve NP-complete problems would imply an unlikely collapse of classical complexity classes. But while the weasel phrase is weaselly enough to make the Most Trivial Theorem a triviality, I don’t think it’s infinitely weaselly. The reason is my growing suspicion that BQP ⊆ AM in the unrelativized world.

Second Note: When I call this my “Most Trivial Theorem,” obviously I’m excluding homework exercises.

Physics for Doofuses: Understanding Electricity

Sunday, April 15th, 2007

Welcome to an occasional new Shtetl-Optimized series, where physicists get to amuse themselves by watching me struggle to understand the most basic concepts of their discipline. I’ll consider my post on black hole singularities to be retroactively part of this series.

Official motto: “Because if I talked about complexity, you wouldn’t understand it.”

Unofficial motto: “Because if I talked about climate change, I’d start another flamewar — and as much as I want to save civilization, I want even more for everyone to like me.”

Today’s topic is Understanding Electricity. First of all, what makes electricity confusing? Well, besides electricity’s evil twin magnetism (which we’ll get to another time), what makes it confusing is that there are six things to keep track of: charge, current, energy, power, voltage, and resistance, which are measured respectively in coulombs, amps, joules, watts, volts, and ohms. And I mean, sure you can memorize formulas for these things, but what are they, in terms of actual electrons flowing through a wire?

Alright, let’s take ’em one by one.

Charge is the q in kqq/r². Twice as many electrons, twice as much charge. ‘Nuff said.

Current is charge per unit time. It’s how many electrons are flowing through a cross-section of the wire every second. If you’ve got 100 amps coming out, you can send 50 this way and 50 that way, or π this way and 100-π that way, etc.

Energy … Alright, even I know this one. Energy is what we fight wars to liberate. In our case, if you have a bunch of electrons going through a wire, then the energy scales like the number of electrons times the speed of the electrons squared.

Power is energy per unit time: how much energy does your appliance consume every second? Duh, that’s why a 60-watt light bulb is environmentally friendlier than a 100-watt bulb.

Voltage is the first one I had trouble with back in freshman physics. It’s energy per charge, or power per current. Intuitively, voltage measures how much energy gets imparted to each individual electron. Thus, if you have a 110-volt hairdryer and you plug it into a 220-volt outlet, then the trouble is that the electrons have twice as much energy as the hairdryer expects. This is what transformers are for: to ramp voltages up and down.

Incidentally, the ability to transform voltages is related to why what comes out of your socket is alternating current (AC) instead of direct current (DC). AC, of course, is the kind where the current reverses direction many times per second (60 full cycles per second in the US), while DC is the kind where the electrons always flow in the same direction. For computers and other electronics, you clearly want DC, since logic gates are unidirectional. And indeed, the earliest power plants did transmit DC. In the 1890s, Thomas Edison fought vigorously against the adoption of AC, going so far as to electrocute dogs, horses, and even an elephant using AC in order to “prove” that it was unsafe. (These demonstrations proved about as much as D-Wave’s quantum computer — since needless to say, one can also electrocute elephants using DC. To draw any conclusions, a comparative study is needed.)

So why did AC win? Because it turns out that it’s not practical to transmit DC over distances of more than about a mile. The reason is this: the longer the wire, the more power gets lost along the way. On the other hand, the higher the voltage, the less power gets lost along the way. This means that if you want to send power over a long wire and have a reasonable amount of it reach its destination, then you want to transmit at high voltages. But high voltages are no good for household appliances, for safety and other reasons. So once the power gets close to its destination, you want to convert back down to lower voltages.
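The arithmetic behind that paragraph is worth seeing once. Here’s a minimal sketch (mine, not the post’s) using a purely resistive line model: the current drawn to deliver a fixed power P at voltage V is I = P/V, and the line itself dissipates I²R. The 1 MW load and 10 Ω line resistance are made-up figures for illustration.

```python
# Toy model of transmission loss: deliver a fixed power P over a line of
# resistance R, at transmission voltage V. Current I = P/V, loss = I^2 * R.
# All numbers below are invented for illustration.

def line_loss(power_watts, line_voltage, line_resistance_ohms):
    """Power dissipated in the line itself, assuming a purely resistive line."""
    current = power_watts / line_voltage
    return current ** 2 * line_resistance_ohms

P = 1_000_000.0   # 1 MW to deliver
R = 10.0          # line resistance in ohms (made-up)

low = line_loss(P, 1_000.0, R)     # transmit at 1 kV
high = line_loss(P, 100_000.0, R)  # transmit at 100 kV

print(low)   # 10 MW lost -- more than we sent, so 1 kV is hopeless here
print(high)  # 1 kW lost -- raising the voltage 100x cut the loss 10,000x
```

Since loss scales like 1/V², doubling the voltage quarters the loss, which is the whole case for high-voltage transmission.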

Now, the simplest way to convert high voltages to low ones was discovered by Michael Faraday, and relies on the principle of electromagnetic induction. This is the principle according to which a changing electric current creates a changing magnetic field, which can in turn be used to drive another current. (Damn, I knew we wouldn’t get far without bumping into electricity’s evil and confusing magnetwin.) And that gives us a simple way to convert one voltage to another — analogous to using a small, quickly-rotating gear to drive a big, slowly-rotating gear.

So to make a long story short: while in principle it’s possible to convert voltages with DC, it’s more practical to do it with AC. And if you don’t convert voltages, then you can only transmit power for about a mile — meaning that you’d have to build millions of tiny power plants, unless you only cared about urban centers like New York.

Resistance is the trickiest of the six concepts. Basically, resistance is the thing you need to cut in half, if you want to send twice as much current through a wire at the same voltage. If you have two appliances hooked up serially, the total resistance is the sum of the individual resistances: Rtot = R1 + R2. On the other hand, if you have two appliances hooked up in parallel, the reciprocal of the total resistance is the sum of the reciprocals of the individual resistances: 1/Rtot = 1/R1 + 1/R2. If you’re like me, you’ll immediately ask: why should resistance obey these identities? Or to put it differently, why should the thing that obeys one or both of these identities be resistance, defined as voltage divided by current?

Well, as it turns out, the identities don’t always hold. That they do in most cases of interest is just an empirical fact, called Ohm’s Law. I suspect that much confusion could be eliminated in freshman physics classes, were it made clear that there’s nothing obvious about this “Law”: a new physical assumption is being introduced. (Challenge for commenters: can you give me a handwaving argument for why Ohm’s Law should hold? The rule is that your argument has to be grounded in terms of what the actual electrons in a wire are doing.)
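The two identities from the previous paragraph take only a few lines to state in code. A minimal sketch (my illustration, not the post’s), valid wherever Ohm’s Law holds:

```python
# Series/parallel resistance identities, assuming Ohm's Law holds for
# each component.

def series(*rs):
    """Total resistance in series: R_tot = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Total resistance in parallel: 1/R_tot = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in rs)

# Two identical 100-ohm appliances:
print(series(100.0, 100.0))    # 200 ohms: half the current at a given voltage
print(parallel(100.0, 100.0))  # 50 ohms: twice the current at a given voltage
```

The parallel case is the intuitive one: two side-by-side paths pass twice the current, so the effective resistance halves.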

Here are some useful formulas that follow from the above discussion:

Power = Voltage²/Resistance = Current² x Resistance = Voltage x Current
Voltage = Power/Current = Current x Resistance = √(Power x Resistance)
Resistance = Voltage/Current = Power/Current² = Voltage²/Power
Current = Power/Voltage = Voltage/Resistance = √(Power/Resistance)
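All of those formulas are just V = IR and P = VI rearranged, which a quick numeric check makes concrete. This sketch (mine) starts from an arbitrary made-up current and resistance, derives the rest, and verifies every entry in the table:

```python
# Numeric check of the formula table, assuming Ohm's Law (V = I*R) and the
# definition of power (P = V*I). The starting numbers are arbitrary.
import math

current = 0.5        # amps
resistance = 240.0   # ohms

voltage = current * resistance   # V = I*R  -> 120 volts
power = voltage * current        # P = V*I  -> 60 watts

assert math.isclose(power, voltage**2 / resistance)
assert math.isclose(power, current**2 * resistance)
assert math.isclose(voltage, power / current)
assert math.isclose(voltage, math.sqrt(power * resistance))
assert math.isclose(resistance, voltage / current)
assert math.isclose(resistance, power / current**2)
assert math.isclose(resistance, voltage**2 / power)
assert math.isclose(current, power / voltage)
assert math.isclose(current, math.sqrt(power / resistance))
print(voltage, power)  # a 60-watt bulb running at 120 volts
```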

Understand? Really? Take the test!

Update (4/16): Chad Orzel answers my question about Ohm’s Law.

Quantum gravity computation: you, too, can be an expert in this field

Friday, March 30th, 2007


I am, I’m slightly embarrassed to admit, quoted pretty extensively in the cover story of this week’s New Scientist magazine (alas, only available to subscribers or those willing to shell out $4.95). The story, by Michael Brooks, is about an interesting recent paper by Lucien Hardy of Perimeter Institute, on the power of “quantum gravity computers.” Lucien’s paper considers the following question: by exploiting quantum fluctuations in the causal structure of spacetime, can one efficiently solve problems that are not efficiently solvable with a garden-variety quantum computer?

As I told Brooks, I really do think this is a hell of a question, one that’s intimately related to the challenge of understanding quantum gravity itself. The trouble is that, until an actual quantum theory of gravity chooses to make itself known to us, almost everything we can say about the question is pure speculation.

But of course, pure speculation is what New Scientist gobbles up with french fries and coleslaw. And so, knowing what kind of story they were going to run, I did my best to advocate giving reality at least a few column inches. Fortunately, the end result isn’t quite as bad as I’d feared.

(Full disclosure: recently New Scientist asked me to write an article for them on theoretical computer science breakthroughs of the last 30 years. Remembering some of the steamers NS has unloaded in the recent past, I faced a moral dilemma for approximately five minutes. I then wrote back to them and said I’d be delighted to do it.)

Anyway, here are a few relevant excerpts from the article. If New Scientist wants me to take these down, then of course I’ll have to comply — though I imagine that being linked to from the 25,000th most popularest blog on the entire Internet could only boost their sales.

A NEW computer is always welcome, isn’t it? It’s always faster than your old one, and it always does more stuff. An upgrade, the latest model with all the bells and whistles is an exciting prospect.

And when it comes to the kind of machine physicists are hoping for, you really are looking at something special. No ordinary upgrade for them: this will be the ultimate computer, and radically different from anything we have ever seen. Not only might it be supremely powerful, defying the logic of cause and effect to give instantaneous answers, it might also tell us exactly how the universe works. It might even tell us how our minds produce the phenomenon we call consciousness. Clear a space on your desk, then, for the quantum gravity computer.

Of course, there’s a chance it may not fit on your desktop because we don’t yet know what the machine will look like. Neither do we know how to build it, or even whether it will do all that its proponents hope. Nevertheless, just thinking about how this processor works could improve our understanding of the universe. “The power of quantum gravity computers is one of the deepest problems in physics,” says Scott Aaronson, a mathematician based at the University of Waterloo in Ontario, Canada.

Put [quantum theory and general relativity] together to make a quantum theory of gravity and it is almost inevitable that we are going to have trouble with notions of cause and effect: the logic of tock following tick or output following input just won’t apply in the quantum-gravity universe.

Aaronson agrees with Hardy. “General relativity says that the causal structure can vary, and quantum mechanics says that anything that can vary can be in superposition,” he says. “So to me, an indefinite causal structure seems like the main new conceptual feature.”

The big question is how powerful [a quantum gravity computer] could be: will it be the ultimate processor?

It turns out this is a hard question to answer. Traditionally, a computer’s power is rated by the number of computations it can do in a given time. IBM’s Blue Gene computer currently tops the world rankings for classical computers: it can do 280 trillion calculations per second. In theory, a quantum computer can do even better. It will be able to crack the world’s toughest codes in the blink of an eye.

The quantum gravity computer, on the other hand, can’t compete under these rules because “quickly” doesn’t mean anything in a scheme where space and time can’t be separated. Or, as Aaronson puts it: “It would be nice if the quantum gravity theorists could at least tell us what they mean by ‘time’.”

Nevertheless, Hardy thinks there is good reason to suppose the quantum gravity computer would indeed be a more powerful machine than anything we have so far envisioned. The fact that it might glimpse its results without running a computation hints at this, he says — though he admits this is just speculation.

What’s more convincing, he says, is the difficulty of simulating a quantum gravity computer on a quantum computer. The fact that we have no algorithm for simulating quantum systems on classical computers highlights the gulf between a classical computer and a quantum computer. If a quantum computer cannot simulate a quantum gravity computer, then that implies there might be another huge leap in computing power waiting to be exploited.

It is a controversial conclusion, though. Seth Lloyd of the Massachusetts Institute of Technology thinks there is no reason to invoke a discontinuity that separates quantum gravity from more familiar processes … Aaronson’s money is on the Lloyd camp: quantum gravity computers can’t be more powerful than quantum computers, he says. In his view, it is a short step from ultra-powerful quantum gravity computers to total cosmic anarchy. If, as Hardy suggests, a quantum gravity computer might be able to see its result without having to run its algorithms, it is essentially no different to having a quantum computer strapped to a time machine. As we all know, time machines don’t make sense: they would enable us to do things like travel back in history to kill our grandparents and thereby cease to exist. “It’s hard to come up with any plausible way to make quantum computers more powerful that wouldn’t make them absurdly more powerful,” he says.

Whatever the truth, this is why investigating the characteristics of the quantum gravity computer is so valuable. It ties theories to the real world, Aaronson says, and stops the important issues, such as a link with observable facts or staying within the bounds of what’s physically possible, from being swept under the carpet. After all, a computer has to produce an observable, measurable output based on an input and a known set of rules. “The connection to observation is no longer a minor detail,” Aaronson says. “It’s the entire problem.”

Two obvious corrections:

  1. I certainly don’t think that quantum gravity computers “can’t” be more powerful than ordinary quantum computers. What I think is that, at the moment, there’s no good evidence that they would be.
  2. I am not a mathematician.

Update: Six months ago, New Scientist ran a credulous, uncomprehending story about a rocket propulsion system that flagrantly violates conservation of momentum (!). This led to interesting discussions here, here, and here about what can be done to improve the magazine’s standards. If you enjoyed the D-Wave fiasco, you’ll also like the spectacle of commenters rushing to defend the article against those elitist, ivory-tower academics with their oh-so-sacred conservation laws. In a world of Homer Simpsons, it’s not easy being a Lisa.

The event horizon’s involved, but the singularity is committed

Thursday, March 22nd, 2007

Lenny Susskind — the Stanford string theorist who Shtetl-Optimized readers will remember from this entry — is currently visiting Perimeter Institute to give a fascinating series of lectures on “Black Holes and Holography.”

After this morning’s lecture (yes, I’m actually getting up at 10am for them), the following question occurred to me: what’s the connection between a black hole having an event horizon and its having a singularity? In other words, once you’ve clumped enough stuff together that light can’t escape, why have you also clumped enough together to create a singularity? I know there’s a physics answer; what I’m looking for is a conceptual answer.

Of course, one direction of the correspondence — that you can’t have a singularity without also having an event horizon — is Penrose’s famous Cosmic Censorship Hypothesis. But what about the other direction?

When I posed this question at lunch, Daniel Gottesman patiently explained to me that singularities and event horizons just sort of go together, “like bacon and eggs.” However, this answer was unsatisfying to me for several reasons — one of them being that, with my limited bacon experience, I don’t know why bacon and eggs go together. (I have eaten eggs with turkey bacon, but I wouldn’t describe their combined goodness as greater than the sum of their individual goodnesses.)

So then Daniel gave me a second answer, which, by the time it lodged in my brain, had morphed itself into the following. By definition, an event horizon is a surface that twists the causal structure in its interior, so that none of the geodesics (paths taken by light rays) lead outside the horizon. But geodesics can’t just stop: assuming there are no closed timelike curves, they have to either keep going forever or else terminate at a singularity. In particular, if you take a causal structure that “wants” to send geodesics off to infinity, and shoehorn it into a finite box (as you do when creating a black hole), the causal structure gets very, very angry — so much so that it has to “vent its anger” somewhere by forming a singularity!

Of course this can’t be the full explanation, since why can’t the geodesics just circle around forever? But if it’s even slightly correct, then it makes me much happier. The reason is that it reminds me of things I already know, like the hairy ball theorem (there must be a spot on the Earth’s surface where the wind isn’t blowing), or Cauchy’s integral theorem (if the integral around a closed curve in the complex plane is nonzero, then there must be a singularity in the middle), or even the Nash equilibrium theorem. In each of these cases, you take a geometric structure with some global property, and then deduce that having that property makes the structure “angry,” so that it needs a special point (a singularity, an equilibrium, or whatever) to blow off some steam.

So, question for the relativistas: is there a theorem in GR anything like my beautiful story, or am I just talking out of my ass as usual?

Update (3/22): Well, it turns out that I was ignorantly groping toward the famous Penrose-Hawking singularity theorems. Thanks to Dave Bacon, Sean Carroll, and ambitwistor for immediately pointing this out.

Why I’m not a physicist: reason #4329

Saturday, August 26th, 2006

I botched the calculation. While I got the answer I wanted (a quadratic improvement in energy), and while I more-or-less correctly identified the reason for that answer (unintuitive properties of the relativistic velocity addition formula), I did the calculation in the rest frame of one of the particles instead of the zero-momentum frame, and thereby obtained a scaling of 1/sqrt(ε) versus 1/ε instead of 1/ε^(1/4) versus 1/sqrt(ε). As a result, my answer flagrantly violates conservation of energy.

Thanks to rrtucci and perseph0ne. In my defense, I did call it a doofus discovery.

Why I’m not a physicist: reason #4328

Saturday, August 26th, 2006

There’s a trivial question about particle accelerators that bugged me for a while. Today I finally figured out the answer, and I’m so excited by my doofus “discovery” that I want to tell the world.

In Ye Olde Times, accelerators used to smash particles against a fixed target. But today’s accelerators smash one particle moving at almost the speed of light against another particle moving at almost the speed of light — that’s why they’re called particle colliders (duhhh). Now, you’d think this trick would increase the collision energy by a constant factor, but according to the physicists, it does asymptotically better than that: it squares the energy!

My question was, how could that be? Even if both particles are moving, we can clearly imagine that one of them is stationary, since the particles’ motion with respect to the Earth is irrelevant. So then what’s the physical difference between a particle hitting a fixed target and two moving particles hitting each other, that could possibly produce a quadratic improvement in energy?

[Warning: Spoiler Ahead]

The answer pops out if we consider the rule for adding velocities in special relativity. If in our reference frame, particle 1 is headed left at a v fraction of the speed of light, while particle 2 is headed right at a w fraction of the speed of light, then in particle 1’s reference frame, particle 2 is headed right at a (v+w)/(1+vw) fraction of the speed of light. Here 1+vw is the relativistic correction, “the thing you put in to keep the fraction less than 1.” If v and w are both close to 0, then of course we get v+w, the Newtonian answer.

Now set v=w=1-ε. Then (v+w)/(1+vw) = 1 - ε²/(2-2ε+ε²), which scales like 1-ε². Aha!

To finish the argument, remember that relativistic energy increases with speed like 1/sqrt(1-v²). If we plug in v=1-ε, then we get 1/sqrt(2ε-ε²), while if we plug in v=1-ε², then we get 1/sqrt(2ε²-ε⁴). So in the case of a fixed target the energy scales like 1/sqrt(ε), while in the case of two colliding particles it scales like 1/ε.
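The scaling is easy to check numerically. Here’s a sketch (mine, not the post’s) in the post’s conventions: speeds as fractions of c, and “energy” meaning the Lorentz factor γ = 1/sqrt(1-v²), dropping the rest-mass constant.

```python
# Numeric check of the collider-vs-fixed-target scaling, with speeds as
# fractions of c and "energy" standing in for the Lorentz factor gamma.
import math

def add_velocities(v, w):
    """Relativistic velocity addition: (v+w)/(1+vw)."""
    return (v + w) / (1 + v * w)

def gamma(v):
    """Lorentz factor 1/sqrt(1-v^2)."""
    return 1.0 / math.sqrt(1.0 - v * v)

for eps in [1e-2, 1e-3, 1e-4]:
    v = 1 - eps
    rel = add_velocities(v, v)  # speed of one particle in the other's frame
    print(eps, gamma(rel), gamma(v))
    # gamma(rel) grows like 1/eps, gamma(v) only like 1/sqrt(eps):
    # the collider's quadratic advantage.
```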

In summary, nothing’s going on here except relativistic addition of velocities. As with Grover’s algorithm, as with the quantum Zeno effect, it’s our intuition about linear versus quadratic that once again leads us astray.

Alright, alright, back to complexity

Wednesday, April 26th, 2006

I’ve learned my lesson, at least for the next day or two.

And speaking of learning — in computational learning theory, there’s an “obvious” algorithm for learning a function from random samples. Here’s the algorithm: output any hypothesis that minimizes the error on those samples.

I’m being intentionally vague about what the learning model is — since as soon as you specify a model, it seems like some version of that algorithm is what you want to do, if you want the best tradeoff between the number of samples and the error of your hypothesis. For example, if you’re trying to learn a Boolean function from a class C, then you want to pick any hypothesis from C that’s consistent with all your observations. If you’re trying to learn a Boolean function based on noisy observations, then you want to pick any hypothesis that minimizes the total number of disagreements. If you’re trying to learn a degree-d real polynomial based on observations subject to Gaussian noise, then you want to pick any degree-d polynomial that minimizes the least-squared error, and so on.
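For the noisy Boolean case, the “obvious” algorithm above can be sketched in a few lines. This is my illustration, not from the post: the toy class of integer threshold functions and the sample data are both invented.

```python
# The "obvious" learning algorithm for a finite hypothesis class: return
# any hypothesis minimizing the number of disagreements with the samples.
# The threshold class and samples below are hypothetical toy data.

def erm(hypotheses, samples):
    """hypotheses: callables x -> bool; samples: list of (x, label) pairs.
    Returns a hypothesis with the fewest disagreements on the samples."""
    return min(hypotheses,
               key=lambda h: sum(1 for x, y in samples if h(x) != y))

# Toy class: threshold functions h_t(x) = (x >= t) for t = 0..10.
hypotheses = [lambda x, t=t: x >= t for t in range(11)]

# Noisy samples from the true threshold t = 5; the labels for x=4 and x=5
# are flipped, so no hypothesis fits perfectly.
samples = [(1, False), (3, False), (4, True), (5, False), (6, True), (8, True)]

best = erm(hypotheses, samples)
print([best(x) for x in range(10)])  # the learned threshold function
```

Note that `min` silently breaks ties; here both t=4 and t=6 achieve one disagreement, and either is an acceptable output of the “obvious” rule.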

Here’s my question: is the “obvious” algorithm always the best one, or is there a case where a different algorithm needs asymptotically fewer samples? That is, do you ever want to pick a hypothesis that disagrees with more of your observations over one that disagrees with fewer?

While I’m on the subject, have you ever wished you could help Scott Aaronson do his actual research, and even be thanked — by name — in the acknowledgments of one of his papers? Well then, don’t miss this chance! All you have to do is read this seminal paper by Alon, Ben-David, Cesa-Bianchi, and Haussler, and then tell me what upper bound on the sample complexity of p-concept learning follows from their results. (Perversely, all they prove in the paper is that some finite number of samples suffices — must be a mathematician thing.)

The mouth that cannot bite

Friday, March 24th, 2006

Warning: Today’s post has not been approved by the Family Research Council.

There’s a puzzle about evolution that’s been bothering me for years. The most vivid way to state it is as follows: why don’t vaginas have retractable teeth?

Think about it. If vaginas had teeth, rape would be difficult if not impossible. Females would have much greater control over which males could impregnate them. Wouldn’t a biting vagina be a useful Darwinian adaptation?

Of course, the question applies not only to humans, but to any species where the females can be impregnated against their will. (I guess seahorses and black widow spiders don’t count.)

I realize that feminists, psychoanalysts, and comedians could all have a field day with my puzzle, but let’s set that aside and see if we can actually answer it. I can think of five hypotheses, but none of them completely satisfy me.

The first is the boring “spandrels” hypothesis: that putting teeth in vaginas would be too difficult embryologically to be worth the Darwinian payoff. This hypothesis would only convince me if accompanied by an explanation of why a biting vagina would be so much harder to build than a bee stinger, or an elephant tusk, or any of evolution’s other strange inventions.

The second hypothesis is that, if vaginas had teeth, then rapists would just threaten their victims with injury or death if they resisted (as, alas, they often do anyway). But this hypothesis can be made irrelevant by changing the thought experiment a little. Instead of a biting vagina, imagine a flap between the vagina and uterus that could be open or closed at will. If a woman had such a flap, then she could consciously decide whether to let a sex partner impregnate her, without the partner knowing her decision until possibly months later. In other words, she would have built-in birth control.

The third hypothesis is that, even without the teeth or flap, women already have lots of control over which sex partners can impregnate them. As we all know, women in developed countries gained such control in the 20th century — and despite the best efforts of the Republicans, they’ve fortunately retained it, more or less, in every US state except South Dakota. But I’m asking whether women had such control for most of evolutionary history, and also whether females elsewhere in the animal kingdom have it.

In particular, you might have heard the controversial theory that a woman can “choose” to retain more of her partner’s sperm (thereby increasing the chance of conception) by having an orgasm — and indeed, that that’s why the female orgasm evolved in the first place. This theory, if true, would be one example of what I’m talking about, but not the only possible example. Do any of you know how far back in human history abortions were performed — and also, whether any non-human animals perform abortions?

The fourth hypothesis is what I’ll call “genetic paternalism.” This is the idea that, while giving birth to a rapist’s child is an unimaginable trauma from the woman’s perspective, her genes’ perspective might differ from hers. From the genes’ standpoint, maybe the child will grow up to become a rapist himself, thereby spreading his mother’s genes to yet more victims.

(Here I should state an obvious ground rule: when engaging in Darwinian speculation, you have to wear the distinction between “is” and “ought” like a radiation suit. There’s no scientific discovery that could possibly justify violence against women, since the wrongness of such violence isn’t based on science to begin with.)

Of course, the genetic paternalism hypothesis raises the question of why a woman’s genes would build a brain so opposed to the genes’ own interests. But that question shows up all over the place in human evolution.

The fifth hypothesis is that vaginas lack teeth for the same reason many women wear high heels and the Chinese used to mutilate girls’ feet. As Carl Sagan and Ann Druyan point out in their superb book Shadows of Forgotten Ancestors, men have always fetishized female helplessness. For most of human history, marriage wasn’t a union of soulmates; it was a deal between the groom and the bride’s parents. If a man “invested” in a wife, he’d want to be sure she would bear him children, just like if he invested in a cow, he’d want to be sure it would give him milk. (In Fiddler on the Roof, there’s a hilarious exchange between Tevye the dairyman and Lazar Wolf the butcher playing on that similarity.) So, if most women had teeth in their vaginas, then a woman who was known not to have such teeth might be a hot commodity on the marriage market. Of course, that leaves open the question of how she would advertise her toothlessness to prospective suitors (“Hi, I’m Alice, and my vagina doesn’t bite!”).

Surprisingly, I’ve never seen my “biting vagina puzzle” discussed in any book or article on evolutionary biology. (I’d be grateful for a reference.) I have seen plenty of other sex-related puzzles. For example, why are there homosexuals? Why don’t women just clone themselves, instead of “diluting” their genetic contribution by 50% by mixing their genes with a man’s? For that matter, why is there sex in the first place? To me, all these questions are so perplexing that it’s a wonder the creationists never harp on them. I guess that to harp on them, they’d first have to understand them.

And the CMB spoke unto WMAP

Saturday, March 18th, 2006

On Thursday afternoon, the WMAP team released its latest data about the origin and fate of the universe. For readers with social lives, WMAP is the Wilkinson Microwave Anisotropy Probe, which was launched in 2001 and cost $150 million. While that’s less than a third the cost of a single Space Shuttle launch, keep in mind that WMAP has taught us next to nothing about the effects of weightlessness on snails, toads, or even fish. Its sole mission is to study nerdy, technical things like what the universe is made of and whether it’s finite or infinite.

I was at Perimeter Institute on Thursday morning, and people there were awaiting the data as if (har, har) the fate of the universe depended on it. I especially enjoyed chatting with Justin Khoury, a cosmologist who studies the “ekpyrotic scenario.” What is the ekpyrotic scenario? Well, three things I know about it are that

  1. it posits that our universe is a 4-dimensional “brane” embedded in a 5-dimensional manifold, and that the Big Bang was caused by a different brane slamming into ours 13.6 billion years ago,
  2. it doesn’t say where the branes or the manifold came from originally, and
  3. it was co-invented by the father of my former MathCamp roommate.

Like its chief rival — Alan Guth’s inflationary cosmology — the ekpyrotic scenario predicts the fluctuations in the cosmic microwave background that WMAP (as well as its predecessor COBE) observed. But inflation also predicts long-wavelength gravity waves, while the ekpyrotic scenario doesn’t. There was a tiny chance that Thursday’s WMAP release would show evidence of such waves — in which case the ekpyrotic scenario would be killed (or in technical terms, “braned”).

As it turns out, though, the latest results mostly confirm what we already thought, albeit with better precision. The observable universe looks to be 4% “normal stuff” (mostly intergalactic baryons, but also free AOL trial CD’s), 22% cold dark matter, and 74% dark energy. There’s no doubt at all that the dark energy is there, and that it will continue pulling the universe apart (so if you want to visit a different galactic supercluster, leave now). The “scalar spectral index” seems to be slightly less than 1, which is apparently what you’d expect if inflation were true. Also, space continues to look pretty flat — but then again, the Earth also looks pretty flat, even from the window of a commercial airliner. At least we can say that, if space has a nontrivial curvature, then the radius is a lot bigger than the 14 billion light-years we can see.

(Note that it’s logically possible for space to be finite — that is, to “loop back on itself” — despite having zero curvature. In that case, the universe would be like one of those arcade games where, when your spaceship goes off the edge of the screen, it reappears on the other edge. The questions of the geometry and topology of space are related but different.)

What general conclusions can we draw from all this?

First, that we theoretical computer scientists really ought to get ourselves one of these space probes — one that can peer directly into the face of God and report back to us on whether P=BPP, whether BQP is in AM, and so on. What the physicists do feels like cheating to me, like peeking at the answers in the back of the book. (When I griped about this to Lee Smolin, he offered the following consolation: “At least when you guys answer a question, it stays answered.”)

Second, that space is where the excitement is in fundamental physics these days. If you don’t believe me, look at these awesome slides by John Baez (as well as this from Baez and this from Lee Smolin). Baez points out that, of the three big discoveries of the past 25 years — dark matter, dark energy, and neutrino mass — all three came from astronomy (not from particle accelerators), and not one was predicted by theorists (who’ve been busily trying to explain them post hoc). From my outsider perspective, it seems clear that the astrophysicists have some sort of unfair advantage here, and that the only way to rectify the situation is to cut NASA’s space science budget. Fortunately, that’s exactly what W. has done.

The third conclusion is that it’s time for a new religion: one that would celebrate the release of new CMB data as an event roughly analogous to Moses descending from Sinai with new tablets in hand, and that would regard the Space Shuttle as a blasphemy, an orbiting golden calf. Seriously — am I the only person who sees measuring the CMB fluctuations as a religious obligation?