The array size of the universe

I’ve been increasingly tempted to make this blog into a forum solely for responding to the posts at Overcoming Bias. (Possible new name: “Wallowing in Bias.”)

Two days ago, Robin Hanson pointed to a fascinating paper by Bousso, Harnik, Kribs, and Perez, on predicting the cosmological constant from an “entropic” version of the anthropic principle. Say what you like about whether anthropicology is science or not, for me there’s something delightfully non-intimidating about any physics paper with “anthropic” in the abstract. Sure, you know it’s going to have metric tensors, etc. (after all, it’s a physics paper) — but you also know that in the end, it’s going to turn on some core set of assumptions about the number of sentient observers, the prior probability of the universe being one way rather than another, etc., which will be comprehensible (if not necessarily plausible) to anyone familiar with Bayes’ Theorem and how to think formally.

So in this post, I’m going to try to extract an “anthropic core” of Bousso et al.’s argument — one that doesn’t depend on detailed calculations of entropy production (or anything else) — trusting my expert readers to correct me where I’m mistaken. In defense of this task, I can hardly do better than to quote the authors themselves. In explaining why they make what will seem to many like a decidedly dubious assumption — namely, that the “number of observations” in a given universe should be proportional to the increase in non-gravitational entropy, which is dominated (or so the authors calculate) by starlight hitting dust — they write:

We could have … continued to estimate the number of observers by more explicit anthropic criteria. This would not have changed our final result significantly. But why make a strong assumption if a more conservative one suffices? [p. 14]

In this post I’ll freely make strong assumptions, since my goal is to understand and explain the argument rather than to defend it.

The basic question the authors want to answer is this: why does our causally-connected patch of the universe have the size it does? Or more accurately: taking everything else we know about physics and cosmology as given, why shouldn’t we be surprised that it has the size it does?

From the standpoint of post-1998 cosmology, this is more-or-less equivalent to asking why the cosmological constant Λ ~ 10^-122 should have the value it has. For the radius of our causal patch scales like

1/√Λ ~ 10^61 Planck lengths ~ 10^10 light-years,

while (if you believe the holographic principle) its maximum information content scales like 1/Λ ~ 10^122 qubits. To put it differently, there might be stars and galaxies and computers that are more than ~10^10 light-years away from us, and they might require more than ~10^122 qubits to describe. But if so, they’re receding from us so quickly that we’ll never be able to observe or interact with them.
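
(If you want to check those conversions yourself, here’s a quick back-of-the-envelope sketch in Python, with round values for the Planck length and the light-year; treat the outputs as orders of magnitude only.)

```python
# Back-of-the-envelope check of the causal-patch numbers above.
# lam is the cosmological constant in Planck units (round value).
lam = 1e-122
radius_planck = lam ** -0.5   # ~1/sqrt(Lambda), in Planck lengths
planck_length_m = 1.6e-35     # meters (rounded)
light_year_m = 9.5e15         # meters (rounded)
radius_ly = radius_planck * planck_length_m / light_year_m
max_qubits = 1 / lam          # holographic bound on information content
print(f"radius ~ {radius_planck:.0e} Planck lengths ~ {radius_ly:.0e} light-years")
print(f"max information content ~ {max_qubits:.0e} qubits")
# -> radius ~ 1e+61 Planck lengths ~ 2e+10 light-years
# -> max information content ~ 1e+122 qubits
```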

Of course, to ask why Λ has the value it does is really to ask two questions:

1. Why isn’t Λ smaller than it is, or even zero? (In this post, I’ll ignore the possibility of its being negative.)
2. Why isn’t Λ bigger than it is?

Presumably, any story that answers both questions simultaneously will have to bring in some actual facts about the universe. Let’s face it: 10^-122 is just not the sort of answer you expect to get from armchair philosophizing (not that it wouldn’t be great if you did). It’s a number.

As a first remark, it’s easy to understand why Λ isn’t much bigger than it is. If it were really big, then matter in the early universe would’ve flown apart so quickly that stars and galaxies wouldn’t have formed, and hence we wouldn’t be here to blog about it. But this upper bound is far from tight. Bousso et al. write that, based on current estimates, Λ could be about 2000 times bigger than it is without preventing galaxy formation.

As for why Λ isn’t smaller, there’s a “naturalness” argument due originally (I think) to Weinberg, before the astronomers even discovered that Λ>0. One can think of Λ as the energy of empty space; as such, it’s a sum of positive and negative contributions from all possible “scalar fields” (or whatever else) that contribute to that energy. That all of these admittedly-unknown contributions would happen to cancel out exactly, yielding Λ=0, seems fantastically “unnatural” if you choose to think of the contributions as more-or-less random. (Attempts to calculate the likely values of Λ, with no “anthropic correction,” notoriously give values that are off by 120 orders of magnitude!) From this perspective, the smaller you want Λ to be, the higher the price you have to pay in the unlikelihood of your hypothesis.
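
(To see the “unnaturalness” intuition numerically, here’s a minimal Monte Carlo sketch; it’s a toy model of my own, not Weinberg’s actual calculation. Treat Λ as the sum of many random O(1) contributions and ask how often the sum happens to land within ε of zero: the hit rate falls off linearly in ε, so every extra order of magnitude of cancellation costs another order of magnitude in prior probability.)

```python
import random

# Toy naturalness test: Lambda as a sum of many O(1) contributions
# of either sign. How often does the sum cancel to within eps of zero?
N_FIELDS, TRIALS = 50, 100_000
samples = [sum(random.uniform(-1, 1) for _ in range(N_FIELDS))
           for _ in range(TRIALS)]
for eps in (1.0, 0.1, 0.01):
    frac = sum(abs(s) < eps for s in samples) / TRIALS
    print(f"P(|Lambda| < {eps}) ~ {frac:.4f}")
# The hit rate scales like eps itself: a cancellation to 120 decimal
# places would cost ~120 orders of magnitude in prior probability.
```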

Based on the above reasoning, Weinberg predicted that Λ would have close to the largest possible value it could have, consistent with the formation of galaxies. As mentioned before, this gives a prediction that’s too big by a factor of 2000 — a vast improvement over the other approaches, which gave predictions that were off by factors of 10^120 or infinity!

Still, can’t we do better? One obvious approach to pushing Λ down would be to extend the relatively-uncontroversial argument explaining why Λ can’t be enormous. After all, the tinier we make Λ, the bigger the universe (or at least our causal patch of it) will be. And hence, one might argue, the more observers there will be, hence the more likely we’ll be to exist in the first place! This form of anthropicizing — that we’re twice as likely to exist in a universe with twice as many observers — is what philosopher Nick Bostrom calls the Self-Indication Assumption.

However, two problems with this idea are evident. First, why should it be our causal patch of the universe that matters, rather than the universe as a whole? For anthropic purposes, who cares if the various civilizations that arise in some universe are in causal contact with each other or not, provided they exist? Bousso et al.’s response is basically just to stress that, from what we know about quantum gravity (in particular, black-hole complementarity), it probably doesn’t even make sense to assign a Hilbert space to the entire universe, as opposed to some causal patch of it. Their “Causal-Patch Self-Indication Assumption” still strikes me as profoundly questionable — but let’s be good sports, assume it, and see what the consequences are.

If we do this, we immediately encounter a second problem with the anthropic argument for a low value of Λ: namely, it seems to work too well! On its face, the Self-Indication Assumption wants the number of observers in our causal patch to be infinite, hence the patch itself to be infinite in size, hence Λ=0, in direct conflict with observation.

But wait: what exactly is our prior over the possible values of Λ? Well, it appears Landscapeologists typically just assume a uniform prior over Λ within some range. (Can someone enlighten me on the reasons for this, if there are any? E.g., is it just that the middle part of a Gaussian is roughly uniform?) In that case, the probability that Λ is between ε and 2ε will be of order ε — and such an event, we might guess, would lead to a universe of “size” 1/ε, with order 1/ε observers. In other words, it seems like the tiny prior probability of a small cosmological constant should precisely cancel out the huge number of observers that such a constant leads to — Λ(1/Λ)=1 — leaving us with no prediction whatsoever about the value of Λ. (When I tried to think about this issue years ago, that’s about as far as I got.)
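
(Here’s that cancellation as a few lines of toy Python, with made-up units: bin Λ by powers of ten, give each bin its uniform prior mass, weight by a naive ~1/Λ observer count, and watch every bin come out the same.)

```python
# Uniform prior on Lambda times a naive ~1/Lambda observer count.
# The bin [eps, 2*eps] has prior mass ~eps and ~1/eps observers,
# so its posterior weight is eps * (1/eps) = 1 for every eps.
for k in range(1, 7):
    eps = 10.0 ** -k
    prior_mass = (2 * eps) - eps   # width of the bin [eps, 2*eps]
    observers = 1.0 / eps          # naive observer count
    print(f"eps = 1e-{k}: posterior weight ~ {prior_mass * observers:.2f}")
# Every bin gets weight 1.00: the posterior has no peak, and no
# value of Lambda is preferred. That's the puzzle.
```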

So to summarize: Bousso et al. need to explain to us on the one hand why Λ isn’t 2000 times bigger than it is, and on the other hand why it’s not arbitrarily smaller or 0. Alright, so are you ready for the argument?

The key, which maybe isn’t so surprising in retrospect, turns out to be other stuff that’s known about physics and astronomy (independent of Λ), together with the assumption that that other stuff stays the same (i.e., that all we’re varying is Λ). Sure, say Bousso et al.: in principle a universe with positive cosmological constant Λ could contain up to ~1/Λ bits of information, which corresponds — or so a computer scientist might estimate! — to ~1/Λ observers, like maybe ~1/√Λ observers in each of ~1/√Λ time periods. (The 1/√Λ comes from the Schwarzschild bound on the amount of matter and energy within a given radius, which is linear in the radius and therefore scales like 1/√Λ.)
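
(In case the bookkeeping helps, here’s the same accounting in a few lines of Python; this is my paraphrase of the scaling, not the paper’s derivation.)

```python
# Scaling bookkeeping for the observer ceiling, in Planck units.
lam = 1e-122
radius = lam ** -0.5    # causal-patch radius ~ 1/sqrt(Lambda)
max_bits = radius ** 2  # holographic bound ~ area ~ 1/Lambda
max_mass = radius       # Schwarzschild bound: mass ~ radius
epochs = radius         # ~1/sqrt(Lambda) light-crossing times
print(f"{max_bits:.0e} bits ~ {max_mass:.0e} observers x {epochs:.0e} epochs")
# -> 1e+122 bits ~ 1e+61 observers x 1e+61 epochs
```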

But in reality, that 1/Λ upper bound on the number of observers won’t be anywhere close to saturated. In reality, what will happen is that after a billion or so years stars will begin to form, radiating light and quickly increasing the universe’s entropy, and then after a couple tens of billions more years, those stars will fizzle out and the universe will return to darkness. And this means that, even though you pay a Λ price in prior probability for a universe with 1/Λ information content, as Λ goes to zero what you get for your money is not ~1/√Λ observers in each of ~1/√Λ time periods (hence ~1/Λ observers in total), but rather just ~1/√Λ observers over a length of time independent of Λ (hence ~1/√Λ observers in total). In other words, you get diminishing returns for postulating a bigger and bigger causal patch, once your causal patch exceeds a few tens of billions of light-years in radius.

So that’s one direction. In the other direction, why shouldn’t we expect Λ to be 2000 times bigger than it is (i.e. the radius of our causal patch to be ~45 times smaller)? Well, Λ could be that big, say the authors, but in that case the galaxies would fly apart from each other before starlight really started to heat things up. So once again you lose out: during the very period when the stars are shining the brightest, entropy production is at its peak, civilizations are presumably arising and killing each other off, etc., the number of galaxies per causal patch is minuscule, and that more than cancels out the larger prior probability that comes with a larger value of Λ.
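
(Here’s a cartoon of the resulting posterior. It’s my toy model, not the authors’ entropy integral; in particular the exponential cutoff below is chosen ad hoc to mimic the galaxy-dispersal suppression, with its scale deliberately set near the observed value. In the paper, it’s the entropy calculation that actually fixes that scale.)

```python
import math

# Lambda measured in multiples of its observed value.
def observers_per_patch(lam):
    """Toy observer count: ~1/sqrt(Lambda) from a starlight era of
    fixed duration, crushed once Lambda is large enough that galaxies
    recede before starlight peaks (ad hoc exponential cutoff)."""
    return lam ** -0.5 * math.exp(-lam)

# Posterior weight per logarithmic bin: uniform prior mass ~ Lambda
# times observers ~ sqrt(Lambda) * exp(-Lambda).
for k in range(-6, 3):
    lam = 10.0 ** k
    weight = lam * observers_per_patch(lam)
    print(f"Lambda = 1e{k:+d} x observed: weight ~ {weight:.2e}")
# The weight grows like sqrt(Lambda) from below and is killed
# exponentially from above, so it peaks at an intermediate value:
# diminishing returns on one side, no galaxies on the other.
```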

Putting it all together, then, what you get is a posterior distribution for Λ that’s peaked right around 10^-122 or so, corresponding to a causal patch a couple tens of billions of light-years across. This, of course, is exactly what’s observed. You also get the prediction that we should be living in the era when Λ is “just taking over” from gravity, which again is borne out by observation. According to another paper, which I haven’t yet read, several other predictions of cosmological parameters come out right as well.

On the other hand, it seems to me that there are still few enough data points that physicists’ ability to cook up some anthropic explanation to fit them all isn’t sufficiently surprising to compel belief. (In learning theory terms, the measurable cosmological parameters still seem shattered by the concept class of possible anthropic stories.) For those of us who, unlike Eliezer Yudkowsky, still hew to the plodding, non-Bayesian, laughably human norms of traditional science, it seems like what’s needed is a successful prediction of a not-yet-observed cosmological parameter.

Until then, I’m happy to adopt a bullet-dodging attitude toward this and all other proposed anthropic explanations. I assent to none, but wish to understand them all — the more so if they have a novel conceptual twist that I personally failed to think of.

48 Responses to “The array size of the universe”

  1. Aaron Bergman Says:

    For what it’s worth, I wrote a bunch of stuff on similar reasoning starting here and in the subsequent posts linked at the bottom.

  2. Eliezer Yudkowsky Says:

    Hey! I adhere to the plodding human norms of science whenever I can. It’s just… sometimes you gotta be Bayesian, man, sometimes you got no choice.

    Does it help if I say that I fear anthropic reasoning, “Bayesian” or not, because it involves quantities like “number of ‘observers'” that are still mysterious unto me?

    I also fear that you may be confusing the Self-Sampling Assumption and the Self-Indication Assumption – though I’m not sure because I didn’t read the original paper.

  3. Scott Says:

    Does it help if I say that I fear anthropic reasoning, “Bayesian” or not, because it involves quantities like “number of ‘observers’” that are still mysterious unto me?

    Yes, it does! In that vein, does it help if I say terms like “number of parallel-universe copies of oneself” are still mysterious unto me?

    I also fear that you may be confusing the Self-Sampling Assumption and the Self-Indication Assumption – though I’m not sure because I didn’t read the original paper.

    You’re right; thanks! Corrected.

  4. Scott Says:

    Incidentally, here’s a challenge that I issue to any and all readers:

    Why is the age of the Earth — and of life on Earth — of the same order as the age of the entire universe? Why isn’t one measured in billions of years and the other in quadrillions? Can you tell me an anthropic story that will make me a little less surprised by this?

    (The Bousso et al. story seems to be silent about this particular coincidence, since it takes everything other than Λ as given.)

  5. Mark Says:

    Scott, I have a question about anthropic reasoning in general. We have a fairly large portfolio of anthropic explanations for things (e.g. fine-tuning of physical constants, size of universe, etc) at this point. But if you look at a large number of these things, it seems unlikely that we are always sitting right on the expected value, so to speak, in every case. However, that is the assumption. In other words, if you sample enough variables, a small number of them *ought* to display large deviations from the EV of their respective distributions. (Unless the standard deviations of the distributions are very small, of course). So perhaps we should expect that in a small number of areas, our universe is the way it is against all the odds?

    I don’t really know where I’m going with this, but do you have anything to say about it? If you won the lottery, would you decide, as any good Bayesian would, that you must certainly have misread the winning number – actually having won being far too unlikely?

    As to your challenge: It’s really hard, because I suspect what you are actually asking is “why is our universe such that M-class planets form early in the life of the universe?” Perhaps there is no answer. If, given a random universe, the time it takes for planet formation to commence is a uniformly distributed variable over a very wide range, then we shouldn’t be surprised by anything, right? Or, would you be less surprised if the universe were quadrillions of years old instead of billions?

    If you’re only asking the question in terms of universes that look basically like ours, it seems easier: Life has to form before the stars burn out, which only takes a few dozen billion years, right? And the stars have to form before matter becomes too widely dispersed (if the universe is expanding quickly) or recollapses (if it is recollapsing). So you’re left with a fairly small window of time in which observers can exist.

  6. UnHoly Says:

    “that’s peaked right around 10^-122 or so, corresponding to a causal patch a couple tens of billions of light-years across.”

    Unless I’m vastly mistaken, and the universe really is 6000 years old….

  7. Eliezer Yudkowsky Says:

    Scott: Yes, it does! In that vein, does it help if I say terms like “number of parallel-universe copies of oneself” are still mysterious unto me?

    Er… I’ve painted gigantic neon signs on my whole quantum physics series reading, “I DO NOT CLAIM TO KNOW WHERE THE BORN PROBABILITIES COME FROM”. I’ve explicitly pointed out problems with trying to get them by counting – while also stating explicitly that this is not a problem for many-worlds alone, but a problem with which any physical theory must wrestle, if it allows for the theoretical possibility of a brain splitting in half.

    So I will certainly accept that you have problems with counting observers in parallel universes, provided that (1) you admit you have problems counting observers in general, not just in parallel universes and (2) you admit that the parallel universes exist, because a single global world would violate relativity.

    I really cannot see any legitimate reason for trying to have a single world when the quantum equations explicitly describe something splitting, and a single global world is explicitly ruled out by the combination of EPR and Bell’s Theorem.

    The Born Probabilities are mysterious as hell, and I don’t know if we’re supposed to get them by counting – counting is also mysterious as hell, which is actually kinda promising when you think about it – but one thing is for certain, there is not only one Earth.

    Anyway, back to the main topic:

    I will be tremendously happy if someone makes a successful, surprising advance prediction using an anthropic argument that uses the Self-Indication Assumption. The reason being that the SIA exactly cancels out the Doomsday Argument.

    Also it would be the first successful advance prediction made by any anthropic theory, ever, which would indicate that we were starting to get somewhere.

  8. steven Says:

    Scott, this paper suggests that past a certain point high metallicity creates too many hot jupiters that destroy potential earths.

  9. Scott Says:

    In other words, if you sample enough variables, a small number of them *ought* to display large deviations from the EV of their respective distributions.

    Mark: That’s entirely true, and is discussed at length in the Bousso et al. paper (in the context of why we shouldn’t expect the observed Λ to be exactly at the peak of their distribution, which it isn’t). Have no fear, too-accurate predictions are usually not the problem with these sorts of explanations… 🙂

    Personally, I’m happy to accept that there are aspects of the world that we (rightly or wrongly) consider vastly improbable, but that are actually brute facts with no deep explanation. The fact that the sun and moon cover almost exactly the same portion of the sky as viewed from earth might be a good example. As another example, if Darwinism were false, it seems the right response would be to throw up our hands and accept complex life as an improbable brute fact, rather than to believe some particular origin story that lacked explanatory power (this was basically David Hume’s point). On the other hand, in those cases where we do luck out and find a theory like Darwinism, it’s a lot more satisfying than when we don’t.

    In your example: if it seems like you won the lottery, yes, it might be a brute fact that you’ll have to swallow hard and accept (sorry about that). But if evidence emerges that you did misread the ticket, or that mischievous nephews altered it, or that your friend at the lottery rigged it in your favor without telling you, those could indeed be preferable as explanations.

  10. Ian Durham Says:

    Somewhat ironically, the idea that spawned the anthropic principle, originally proposed by the ubiquitous Bob Dicke, was supposed to be a counter-argument to the introduction of mysticism, “intelligent design,” or whatever else you want to essentially call unscientific reasoning. But, like virtually everything else, it got distorted and warped into almost the exact opposite.

  11. Scott Says:

    Eliezer:

    So I will certainly accept that you have problems with counting observers in parallel universes, provided that (1) you admit you have problems counting observers in general, not just in parallel universes and (2) you admit that the parallel universes exist, because a single global world would violate relativity.

    I not only admit (1); I proclaim it from the rooftops!

    As for (2), note that if you think of observers as “standing outside the wavefunction” and collapsing it in various times and places, then there’s not actually any conflict with relativity. This is because a measurement, while non-unitary, is still an admissible quantum map (i.e. a superoperator) and is therefore subject to the no-signaling theorem. So I don’t think the main issue is relativity; rather, it’s just the desire to model observers as subject to the same physical laws as everything else.

    I agree that QM clearly predicts that the wavefunction of any large enough part of the universe is constantly splitting into a huge number of decoherent branches. But whether to think of all those branches as “equally real”, or whether to think of one as real and the others as part of a “guiding field”, or whether it doesn’t even make sense to assign a wavefunction to sufficiently large parts of the universe, is a question that I confess I don’t know the right way to think about. Foot-stomping arguments that only one of these answers obeys Occam’s Razor and all the others are clearly insane — and I’ve read many such arguments, from all sides! — are not of much help to me. They all seem insane. 🙂

    So the best I can say is that I hope we’ll learn something new during my lifetime that will clarify the situation — and given the various relevant things we learned in the first 80 years since QM was discovered, I don’t think it’s unreasonable to expect we will. The building of a large-scale quantum computer, or the discovery of something fundamental that prevents it, might qualify.

  12. Len Ornstein Says:

    Scott:

    “Until then, I’m happy to adopt a bullet-dodging attitude toward this and all other proposed anthropic explanations. I assent to none, but wish to understand them all — the more so if they have a novel conceptual twist that I personally failed to think of.”

    Right on 😉

    Vis-à-vis the issue of Bayesian vs Scientific approaches:

    From a biological perspective, compared to winning a lottery, the probability of the origin of life, as well as the evolution of possible observers, appears to be infinitesimal, as reviewed in my critique of Weinberg’s Anthropic Landscape.

    http://www.pipeline.com/~lenornst/Anthropic.html

    Len Ornstein

  13. John Sidles Says:

    This is yet another really fun topic.

    However, the discussion is somewhat more non-specific than usual … in fact no one has suggested *any* specific anthropic predictions that are not already in the literature.

    So let’s see if we can sharpen things up … and in particular come up with some specific anthropic predictions that are *not* in the literature!

    To begin, howzabout we drop those pesky concepts of “consciousness” and “observer” and instead deduce some anthropic predictions along strictly informatic lines?

    Specifically, suppose we hypothesize as the fundamental law of nature “The physical state-space of the universe is the informatically minimal state-space that is required to produce observed physics.”

    And by observed physics, we will conservatively embrace those predictions of the Standard Model of field theory plus gravity that have already been verified … and no other predictions. 🙂

    This leads to three predictions that (at first sight) seem deeply pessimistic for fundamental physics. I will call them the three “Dystopian Anthropic Predictions” (DAPs).

    DAP-I: We shouldn’t expect to find much new physics from the LHC … the Higgs maybe and that’s it. Nothing that the Standard Model can’t accommodate with a few minor adjustments.

    DAP-II: M-theory (when we finish working it out) will be kind of boring … it will be some minimal accommodation of Einstein gravity with the Standard Model, achieved via some ingenious but not fundamentally insightful mathematical tricks.

    DAP-III: Experimentally speaking, quantum computers won’t work, for the simple reason that theorists like Ashtekar and Schilling are right about Hilbert space being (economically) low-dimension and curved, rather than (prolifically) large-dimension and linear.

    The point being that the question “Hey, why should Mother Nature provide us with any more informatic goodies than she has already provided?” has the informatically respectable answer “She’s not going to.”

    Now, these three dystopian anthropic predictions are IMHO pretty solid … albeit discomfiting … unless one or more utopian anthropic principles (UAPs) can be found …

    … and suggesting where to look for UAPs is of course the main point of this post!

    History suggests that the best place to look for a UAP is the realm of the ordinary … or at least the seeming ordinary … like the ordinary observation that all objects fall with the same acceleration.

    Because what UAPs tend to have in common is this: they awaken in us a realization that the ordinary world is far richer than we thought it was.

    Everyone will have their own personal favorite UAP … mine is the everyday experience that objects in the world (both classical and quantum) are ubiquitously compressible. Why is that?

    … and what is your favorite UAP?

  14. Robin Hanson Says:

    I’m quite happy with them using entropy production as a proxy for the more elusive concept of “observer.” What I have trouble with is this causal patch stuff – it greatly penalizes very early and very late entropy production.

  15. Scott Says:

    Robin: Yeah, I agree — especially since their integral over time already implicitly acknowledges that “there’s more to the universe than our causal patch.” (Unless I’m mistaken, galaxies can contribute to the entropy count at early times, but then not after they’ve receded past our horizon.)

    The trouble, of course, is that we don’t know the size of the universe (or even whether it’s finite), so we can’t attempt an entropy count or an observer count for the whole thing. All we know is the size of our causal patch.

    As far as I can tell, the argument for restricting to a causal patch basically boils down to “faith in holography”: if only we understood quantum gravity, we’d know why this was a reasonable thing to do. Of course it’s hard to evaluate that argument without a quantum theory of gravity.

  16. Jack in Danville Says:

    Trivial anthropic story: the age of the Earth — and of life on Earth — is of the same order as the age of the entire universe because the universe evolves along the most efficient path for sentient beings to arise.

    I’m partial to John Sidles’ DAP I & II, but don’t know enough to comment on III. Is it due to I & II that physics and philosophy are colliding to produce anthropicology (i.e. the physics frontier is vanishing)? If this is science (and I’m dodging that bullet) let us go all in and ask why my (or your) particular consciousness is in a sentient being right here right now.

  17. Ben Says:

    Why is the age of the Earth — and of life on Earth — of the same order as the age of the entire universe? Why isn’t one measured in billions of years and the other in quadrillions? Can you tell me an anthropic story that will make me a little less surprised by this?

    The star formation rate is higher earlier in the universe’s history when galaxies have more gas, and it has been declining for the last 7 billion years or so. So if you look over the ensemble of stars, most of them were formed sometime from, say, 3-10 billion years ago. The star formation rate will continue to decline, because much baryonic mass stays locked up in low mass stars and stellar remnants. So many of the stars that ever will be formed already have been formed.

    This argument has nothing to do with bogus anthropic principles one way or the other. It’s got to do with actual empirical observations of the universe.

  18. Scott Says:

    Ben: My question was, why should the laws of physics be such that that 3-10 billion year star formation timescale is of the same order as the evolution timescale? You’ll probably consider that a meaningless question, and there’s an excellent chance it is — I’m just not yet convinced there’s nothing to say about it.

  19. John Sidles Says:

    Since so few are posting on this topic, perhaps folks won’t mind if the thread gets hijacked for one hour, ten minutes, and twenty-two seconds … which is how long it is before the University of Arizona Phoenix spacecraft lands on Mars.

    From an anthropic point of view, what are the odds that a recently evolved chimpanzee could seriously discuss questions like “What time is it on Mars?” with competent attention to the general relativistic details?

    I’m sure there are folks for whom this fact is strong anthropic evidence that (1) we humans are as smart as we’re ever going to get, or else (2) quite soon, we humans are going to get much smarter, or else (3) quite soon, we humans are going to be departing the stage. The question is, which one is it?

    Seriously … good luck & “happy landings” to everyone on the Phoenix team. 🙂

  20. James S Graber Says:

    “DAP-III: Experimentally speaking, quantum computers won’t work, for the simple reason that theorists like Ashtekar and Schilling are right about Hilbert space being (economically) low-dimension and curved, rather than (prolifically) large-dimension and linear. ”
    I’ve asked about this before, but I still don’t get it.
    Wouldn’t a genuinely curved nonlinear dimension be a more powerful resource than any number of linear dimensions?
    If not, why not? I’m thinking of Abrams and Lloyd, of course.
    Jim Graber

  21. Jonathan Vos Post Says:

    Scott, Ben, I answered your question in an email.

    There is a first generation of stars, formed from primordial hydrogen (with a little helium and a tiny amount of lithium).

    Some of them blow up and expel the material that makes a second generation of stars, that nucleosynthesize the hydrogen and the little helium into what astronomers alone call metals (anything heavier than helium, especially carbon, nitrogen, oxygen, silicon, phosphorus, sulfur, and the like). Those elements are also what makes terrestrial planets, people, and computers.

    Some of those second generation stars go supernova and eventually yield new planets and third generation stars.

    There is not likely to be any significant fraction of fourth-generation stars. But after quadrillions of years, per your question, there would be.

    The composition of ourselves, our planet, and our instruments suggests an anthropic argument for why our sun’s age is of the same order of magnitude as the age of the universe, but roughly a third that amount, rather than a thousandth or a millionth.

  22. Scott Says:

    I’m sure there are folks for whom this fact is strong anthropic evidence that (1) we humans are as smart as we’re ever going to get, or else (2) quite soon, we humans are going to get much smarter, or else (3) quite soon, we humans are going to be departing the stage. The question is, which one is it?

    I note that (1) and (3) are perfectly consistent with each other. 🙂

  23. John Sidles Says:

    I’ll answer Jim’s question in three segments … bearing in mind that he has asked a mighty hard question!

    ————

    Part I: The University of Arizona’s Phoenix spacecraft has successfully landed on Mars. What a thrill … it’s great that NASA lets us watch mission control in real-time.

    This is the kind of collaborative & peaceful achievement that makes me really proud of our species.

    The quantum connection arises because the Phoenix Mars Lander depends utterly on high-powered information theory. You see, this isn’t the first time that Phoenix has landed on Mars … it’s the eleventh time!

    What makes this landing special is that this final time was for real … the previous ten (or more) Phoenix landings were all simulated at the Phoenix Program’s (incredibly realistic) Payload Interoperability Testbed (PIT).

    The Phoenix PIT facility plays at least two vital roles … it ensures that the Phoenix hardware systems work together, and—at least as important—these high-fidelity simulations ensure that the Phoenix people work well together as a team.

    ————

    Part II: In our UW QSE Laboratory, and in many other modern laboratories, it is not classical systems that have to work well together, but rather quantum systems. In practice, operating our MRFM experiment is astoundingly similar to operating a Mars Lander … which is why we urgently need effective simulation methods for the quantum state-spaces upon which our “MRFM spacecraft” operates. Because without this quantum simulation capability, it is very difficult to ensure that both the hardware and people are working well together.

    Constructing the needed quantum simulation capability is not too hard—although there are a lot of details that require attention—because we can simply project all of the standard “Ike and Mike” quantum measurement postulates onto low-dimension algebraic varieties … this technique is widely used in chemistry, condensed matter physics, etc.

    The resulting simulated quantum systems are geometrically nonlinear, by virtue of their low-dimension curved state-space, rather than dynamically nonlinear (as in Abrams and Lloyd arXiv:quant-ph/9801041).

    —————-

    Part III. Now we’re a little bit worried … we know from Abrams and Lloyd … and from Zurek’s no-cloning theorems too … that weird and/or bad things can happen when we mess with orthodox quantum mechanics! Things like NP-hard computation, causality paradoxes, and faster-than-light communication.

    Of course, we’re talking about simulations rather than reality. But from an engineering point of view, non-physical simulations are just as bad as non-physical reality, because we engineers rely on simulations to tell us reliably how well our hardware is going to work. That is why we engineers end up being very interested in all the usual issues associated with the foundations of quantum mechanics.

    Although the last word definitely has not been said — these questions are both subtle and tough — there is at least one reassuring result. Namely, the fundamental informatic invariance that is indexed by Nielsen and Chuang under Theorem: unitary freedom in the operator-sum representation (it is Theorem 8.2 of Section 8.2) is respected by the projective quantum simulations. The derivation is given in eq. 53 of our Practical Recipes manuscript (arXiv:0805.1844).

    That is why, according to our present understanding, projective geometric quantum mechanics—whether viewed as reality or as a simulation of reality—seems to have pretty much the same informatic robustness as orthodox quantum theory.

    There are numerous questions still to be investigated that are important whether one is seeking to simulate quantum systems for pragmatic engineering reasons, or seeking to describe their fundamental reality.

    Thus, the intent of the preceding is *not* to claim that the last word on this subject has been said, but rather, to point out that there are solid reasons for engineers, physicists, and information theorists to share a keen interest in the foundations of quantum mechanics.

  24. Job Says:

    Isn’t anthropic reasoning an example of the logical fallacy of “begging the question”?

    In addition, doesn’t the possibility of living in a virtual environment (i.e. living inside a computer simulation) refute most anthropic explanations? Yet isn’t there an anthropic argument for “living inside a computer”?

    Aren’t we more likely to exist in our current state given that we’re living inside a simulation?

    It’s a self-destructing form of reasoning.

  25. Scott Says:

    Job, my view is that there are some anthropic explanations that are self-evident and unobjectionable (e.g. the “Goldilocks explanation” for why the earth is 93 million miles from the sun and not closer or further), and others that are transparently vacuous and absurd (“we must have ten fingers for, if we had twelve, then it wouldn’t be us but some twelve-fingered variant of us that was asking the question”). The hard part is figuring out the boundary between the two.

  26. Sean Carroll Says:

    What I want to know (as a simple working-class physicist) is why it should be the non-gravitational entropy that counts. That seems highly artificial and unmotivated. Most of the entropy production is in the process of black hole formation, growth, and subsequent evaporation. So I would claim (and I’ve told Bousso, but he obviously isn’t convinced) that this theory makes a strong prediction: we (typical observers) should live in the “atmosphere” of Hawking radiation around supermassive black holes in the far future. But that theory doesn’t stack up so well against the data.

  27. Scott Says:

    Sean, I completely agree that using entropy as a proxy for the number of observers leads to all sorts of bizarre predictions, including the one you mentioned! That’s why in my presentation, I tried to give a version of the Bousso et al. argument that could stand or fall independently of their “Causal Entropic Principle.” Ironically, the biggest idea I took from their paper was that, given what we know about stellar evolution and so forth, the number of observers per causal diamond should scale differently from the holographic entropy bound — the former like ~1/√Λ and the latter like ~1/Λ as Λ→0. (For large Λ, in their telling, the number of observers should also fail to saturate the holographic bound, but for different reasons.) Unless I’m mistaken, this failure of the number of observers to saturate the entropy bound is really the key to the whole argument!

  28. Robin Hanson Says:

    Does entropy production at black holes create a large causally connected region of such production? If not, then one could argue that the evolution of observers from non-observers requires such a large connected region, and then think of this as predicting where the first observers will be found.

  29. Sean Carroll Says:

    I will confess to not understanding what “causally connected” means in this particular instance. If it means that light cones intersect in the past, then yes — there are 100 billion galaxies with supermassive black holes, and each of them has overlapping past light cones. But they might not have overlapping future light cones, by the time they swallow up most of their host galaxies and settle down to evaporating. I’m not sure which counts in this game, or why.

  30. Sean Carroll Says:

    Sorry, I should have finished that thought: so it becomes a quantitative question, comparing the entropy generated by black hole creation/evaporation vs. that of starlight. However, at first glance it certainly appears as if black holes will win. A single black hole of about the mass of our galaxy, 10^11 solar masses, has an entropy of about 10^100, larger by far than all of the entropy in starlight in the current universe (which is maybe 10^90).
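
    For concreteness, that estimate is just the standard Bekenstein-Hawking formula S = 4*pi*(M/M_Planck)^2, in units of Boltzmann’s constant. A rough check in Python, with round numbers, so read the output as an order of magnitude:

    ```python
    import math

    M_SUN_IN_PLANCK_MASSES = 9.2e37   # solar mass / Planck mass (rounded)

    def bh_entropy(m_solar):
        """Bekenstein-Hawking entropy of a Schwarzschild black hole,
        S = 4*pi*(M/M_Planck)^2, in units of Boltzmann's constant."""
        return 4 * math.pi * (m_solar * M_SUN_IN_PLANCK_MASSES) ** 2

    print(f"S(1e11 solar masses) ~ {bh_entropy(1e11):.0e}")
    # -> ~1e+99, i.e. about 10^99-10^100, versus ~10^90 for starlight:
    # a single galaxy-mass black hole wins by many orders of magnitude.
    ```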

  31. John Sidles Says:

    Scott says: using entropy as a proxy for the number of observers leads to all sorts of bizarre predictions …

    In favor of Bousso, Harnik, Kribs, and Perez’s reasoning is the fact that when you apply their entropic/anthropic principle not to the entire universe, but solely to the planet earth, it yields some remarkably sensible predictions.

    Specifically, their arguments predict that earthly observers will evolve in those regions of the earthly biosphere where entropy production is maximal. Off the top of my head, these are (1) regions where sunlight is absorbed by water (oceans and jungles), and (2) regions where magma is cooled by water (volcanoes and undersea vents). And these are in fact the regions where life is most abundant on our planet.

    So it seems clear that the entropic/anthropic principle works pretty well in our little earthly region of the universe — with the caveat that field ecologists and anthropologists will regard these entropic arguments as being mostly self-evident.

    So perhaps someone will write a book titled Guns, Germs, and Entropy, seeking to explain not physics, but history from an entropic point of view?

    Of course, we all know that very recently, a third earthly habitat of large entropy production has been created … VLSI chips … so maybe we should all invest in Cyberdyne Systems? 🙂

    Slightly more seriously … with awareness of the strange conclusions that result from anthropic reasoning … if the Phoenix Mars Lander doesn’t find life in the next few weeks, won’t that count as strong evidence against the anthropic/entropic arguments of Bousso et al?

    It seems to me that forms of reasoning that yield good results … except when they don’t! … are properly called “heuristics”. So we can and should accept anthropic reasoning as a useful (and fun) heuristic … but I agree with most posters that it is far from clear that these methods will ever graduate to the big leagues of mathematical rigor and predictive laws of nature.

  32. Scott Says:

    It seems to me that forms of reasoning that yield good results … except when they don’t! … are properly called “heuristics”. So we can and should accept anthropic reasoning as a useful (and fun) heuristic … but I agree with most posters that it is far from clear that these methods will ever graduate to the big leagues of mathematical rigor and predictive laws of nature.

    Well said. 🙂

  33. Len Ornstein Says:

    John Sidles:

    Very few types of living organisms pass muster as “observers”. So your heuristic about where observers will likely evolve is quite weak ;-) See my link in comment #12.

  34. John Sidles Says:

    Len Ornstein says: Very few types of living organisms pass muster as “observers”

    Len, can’t a pretty solid QIT case be made for the *opposite* conclusion?

    Here I have in mind a principle along the lines of “If a system needs information-extracting POVMs to describe its dynamics, then that system is (a) an observer, and (b) that system is alive … and these are the same thing.”

    This principle surely is true on a common-sense level … because I’m an observer … you’re an observer … my aged cat definitely is an observer … (’cuz she’s watching me type this) … the mama robin sitting on her nest outside is an observer … so doesn’t the physicality of observation extend to pretty much any object that has DNA?

  35. Len Ornstein Says:

    John Sidles:

    Even if the “very few” “observers” includes all organisms with nervous systems, the arguments in the link still should be compelling.

  36. ScentOfViolets Says:

    A lot of people seem to object to ‘anthropic after the fact’, and I’m one of them. My question is: is there any sort of hard criterion that would specifically disallow an anthropic argument from being presented? Consider in string theory, for example, the necessity for certain dimensional sizes. One can present an anthropic argument, or one can attempt to show that for a number of reasons three large spatial dimensions is ‘just right’ for adjusting certain other parameters. iirc, the latter argument was found later, and thought to be the natural one, obviating any necessity for the former, i.e., by far the most likely number of large spatial dimensions is three, not two or four or eight. Oops! Similarly, you say it’s ‘obvious’ that facts like the Sun and the Moon having the same apparent angular size is just a coincidence.

    So how does one know that an anthropic explanation is what is called for, as opposed to it being facile theorizing in the absence of strong reasons for anything else?

  37. Raoul Ohio Says:

    ScentOfViolets:

    Interesting observation that the sun and moon have the same angular size. It is supposedly well known that the lunar tide is three times the size of the solar tide. For a brief “classical” relapse, what can you say about the relative density of the sun and moon?

  38. ScentOfViolets Says:

    I can’t think of anything off of the top of my head. Let me restate my point, which I don’t think I made very well. It is my personal belief that these sorts of anthropic arguments are very easily made in the absence of any other compelling theorization. While I have nothing against the principle itself, it is all too easy to descend into what I guess for want of better terminology would be Panglossian anthropic arguments. The question is if there is any way to tell, generally, when anthropic arguments are not applicable, such as in the case of the coincidental angular sizes of the Sun and Moon. It is all very well to say, for example, that life – or observers – as we know them thrive best in three large dimensions. It is quite another to say that for purely physical considerations, three large dimensions are preferred in the Landscape (did I get the capitalization right?)

  39. Torbjörn Larsson, OM Says:

    Nice discussion of the Bousso et al. paper. IIRC I saw Tegmark et al. using it to make some more predictions.

    I note that one reason to use the causal patch (besides black hole complementarity) hasn’t been mentioned – it works. AFAIU other principles have problems with volume counting.

    I’m not sure why the question of non-gravitational entropy elimination is raised. In the first paper Bousso mentions the idea that the likelihood of observers is taken to be connected to dust (and so star) production. A principle leading to radiation “atmospheres” instead of dust as a likely environment for observers would be out.

    That might be a disappointing limitation, but at least the principle isn’t connected to human type observers.

    My question is: is there any sort of hard criterion that would specifically disallow an anthropic argument from being presented?

    I don’t think that is a valid question, because one can turn that around and ask if there is any sort of criteria that would disallow a “fully constrained” theory (meaning for example the standard model) from being presented. The same exceptions would apply, for example isn’t there a theory prediction the size of Earth.

  40. Torbjörn Larsson, OM Says:

    Oops – “for example why isn’t there a theory”

  41. Torbjörn Larsson, OM Says:

    @ Len Ornstein:

    Even if the “very few” “observers” includes all organisms with nervous systems, the arguments in the link still should be compelling.

    Ooh, abiogenesis and evolution, irresistible.

    First, on John Sidles argument: I think it is wrong, as it is entropy gradients that favor life as noted earlier up the thread. Local habitability isn’t extractable out of the causal entropic argument.

    Second, on your argument: I don’t think the specific density of observers counts at all. It would be enough to have us.

    Btw, with all respect for you as a biologist, I also note some specific problems with your argument.

    1. You use the large number argument on biology, instead of evolution theory. There are a lot of things we don’t know, such as useful functional space for traits in different environments. Essentially all large number arguments are arguments from ignorance, and this seemed to be another one.

    2. The causal entropic argument is purposefully arguing for loose constraints, while your large number argument purposefully constrains to roughly similar solutions.

    3. You use evolution to argue observed constraining of functional space, when this is due to selection and accumulated interlocking complexity, not the unconstrained search over functional space that is the basis for a large number argument such as you started with.

    4. I dunno about Bayesian bottleneck arguments, as they look completely ad hoc to me.

    But in any case, the short window of time for abiogenesis nevertheless doesn’t tell us if abiogenesis happened once or many times in parallel or series before one prebiotic system succeeded to pass the darwinian threshold. Short time means it was likely, so the conservative guess would be that it is easy.

    Likewise the genetic code is uniquely decided in the very same type of process. Evolution encourages a LUCA as soon as the hereditary mechanism passes the threshold in some population. So it doesn’t tell us it was unlikely, it tells us it was interlocking and successful.

    Photosensitivity was early re the eye, and an eye is easy to develop by adaptation. Et cetera. AFAIU today biologists ask the questions – is multicellularity likely and is intelligence likely (as you do). Aye, it is the nub.

  42. John Sidles Says:

    There are quite a few logical arguments on this thread. Their worth is hard to evaluate, which illustrates that anthropic reasoning effectively is still a heuristic method.

    But there *is* a new datum about to be added … it appears that the Phoenix Mars Lander is sitting on ice … which is fun! Because if Mars has ever had life, traces of it plausibly might be found in that water.

    Like Fermi asked, “Where is everyone?” This question is still very far from receiving a satisfactory answer.

  43. Torbjörn Larsson, OM Says:

    John, the impression I get from the underlying paper is that it isn’t a heuristic, it is a model which gives predictions.

  44. John Sidles Says:

    Torbjörn, when we engineers say “model” we traditionally mean that (1) the underlying laws of nature are reasonably well-known, and (2) we have a reasonably good understanding of the approximations in our modeling calculations, (3) we have a reasonable intention of testing our model’s predictions experimentally, and using them to guide our designs, and very importantly (4) however good our model is, there exists a reasonably clear path to making the model even better.

    Don’t anthropic calculations—in their present form—tend to “skate around” all four of these traditional criteria?

    IMHO, this doesn’t mean anthropic calculations are bad … obviously they potentially are a new way of doing science … such innovations are always exciting and interesting … especially when traditional rules and customs are bent or even broken. 🙂

  45. Jonathan Vos Post Says:

    As to non-gravitational entropy, we do not know for sure if there is a “fifth force” related to inflation, in which case there would be a fifth-force-based source of entropy. If there is not now, we do not know if continued adiabatic cooling of the cosmos will “freeze out” such a force in the future.

    Also, if there are, say, five-dimensional black holes, as some theorists have opined, then black hole entropy is not precisely quantified by conventional models.

    Sean Carroll’s puzzle is still a hard puzzle. I merely suggest that there may be other pieces missing.

  46. Torbjörn Larsson, OM Says:

    John, that was really a good, reasonable list of what a model does.

    Well, even if it isn’t a model it wouldn’t be clear to me that it is a heuristic; the method gives (wide ranging) predictions, it doesn’t solve a (narrow) set of problems.

    But moreover it is based on physical ideas. For example possibly varying parameters in cosmologies, or sets of possible solutions in string theory. Without being an expert here, the remaining criteria still seem pretty well considered to me. Especially the observational part, it is the reasonable and improving fit that was the basis for this discussion.

    Personally I’m more concerned with falsifiability. What happens if a prediction is wrong?

    We wouldn’t immediately know if it turns out some parameters are decided by fundamental theories anyway. Is it reasonable to claim that it worked up to that point, but either other methods must address the remaining questions or the whole edifice is possibly wrong (which we will discover later)?

    I think it should be, as this is how we handle other theories. But I’m not sure.

  47. John Sidles Says:

    Torbjörn, your post was sensible and excellent!

    But who wants to read a blog where every post is sensible, and everyone’s manners are polite?

    Just to say something transgressive about modeling (and to pass the time while Scott prepares a new and hopefully transgressive topic) there is an emerging view that models serve a fifth purpose … a social purpose … that purpose being to provide a shared and quasi-objective set of ideas about which federative enterprises can be formed.

    Modeling-based enterprises can be as engineering-oriented as a 787 jet, or as research-oriented as SynBERC. These enterprises are transgressive in the sense that they assign to modeling an explicitly social purpose.

    Of course, models have always served as nuclei for federative social enterprises … but until recently it has been considered rude to say so … and even ruder to hire social anthropologists to study and optimize this usage! 🙂

  48. John Sidles Says:

    Heck, Scott’s whole site is down to one post per day! Scott, my hope is that you’ve got another interesting article or lecture in the pipeline that explains the paucity of posts.

    Just to kick this discussion of modeling down the road—and perhaps stimulate some more posts—it is interesting that biologists have been far more transgressive in their modeling research objectives than the QIT community (as witness the Registry of Biological Parts and the Office of Biological Disenchantment).

    What is most transgressive is their goal: “To make biology into an engineering discipline.”

    To transplant these transgressive ideas and objectives into QIT is to ask questions like: (1) Could there ever be a “Registry of Quantum Parts”? (2) Could there ever be an “Office of Quantum Disenchantment”? (3) Is it feasible to “make quantum studies into an engineering discipline”?

    These kinds of transgressive questions are now being asked in pretty much every domain of mathematics, science, and engineering — not just systems biology. My own opinion is that the challenges and opportunities inherent in these questions are nowhere greater than in QIT … which is why QIT forums should be a central nexus where these questions are debated, not only in narrowly defined and “academically safe” contexts like anthropicology, but in broader practical contexts too.

    And so it kind of amazes me that everyone working in QIT isn’t posting about these broadly transgressive ideas. Is the QIT community really that much more conservative than the biology community?