The ultimate physical limits of privacy

Somewhat along the lines of my last post, the other day a reader sent me an amusing list of questions about privacy and fundamental physics.  The questions, and my answers, are below.

1. Does the universe provide us with a minimum level of information security?

I’m not sure what the question means. Yes, there are various types of information security that are rooted in the known laws of physics—some of them (like quantum key distribution) even relying on specific aspects of quantum physics—whose security one can argue for by appealing to the known properties of the physical world. Crucially, however, any information security protocol is only as good as the assumptions it rests on: for example, that the attacker can’t violate the attack model by, say, breaking into your house with an ax!

2. For example, is my information safe from entities outside the light-cone I project?

Yes, I think it’s safe to assume that your information is safe from any entities outside your future light-cone. Indeed, if information is not in your future light-cone, then almost by definition, you had no role in creating it, so in what sense should it be called “yours”?

3. Assume that there are distant alien cultures with infinite life spans – would they always be able to wait long enough for my light cone to spread to them, and then have a chance of detecting my “private” information?

First of all, the aliens would need to be in your future light-cone (see my answer to 2). In 1998, it was discovered that there’s a ‘dark energy’ pushing the galaxies apart at an exponentially-increasing rate. Assuming the dark energy remains there at its current density, galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone, and signals from us can never reach them. So, at least you’re safe from those aliens!

For the aliens in your future light-cone, the question is subtler. Suppose you took the only piece of paper on which your secrets were written, and burned it to ash—nothing high-tech, just burned it. Then there’s no technology that we know today, or could even seriously envision, that would piece the secrets together. It would be like unscrambling an egg, or bringing back the dead from decomposing corpses, or undoing a quantum measurement. It would mean, effectively, reversing the Arrow of Time in the relevant part of the universe. This is formally allowed by the Second Law of Thermodynamics, since the decrease in entropy within that region could be balanced by an increase in entropy elsewhere, but it would require a staggering level of control over the region’s degrees of freedom.

On the other hand, it’s also true that the microscopic laws of physics are reversible: they never destroy information. And for that reason, as a matter of principle, we can’t rule out the possibility that some civilization of the very far future, whether human or alien, could piece together what was written on your paper even after you’d burned it to a crisp. Indeed, with such godlike knowledge and control, maybe they could even reconstruct the past states of your brain, and thereby piece together private thoughts that you’d never written anywhere!

4. Does living in a black hole provide privacy? Couldn’t they follow you into the hole?

No, I would not recommend jumping into a black hole as a way to ensure your privacy. For one thing, you won’t get to enjoy the privacy for long (a couple hours, maybe, for a supermassive black hole at the center of a galaxy?) before getting spaghettified on your way to the singularity. For another, as you correctly pointed out, other people could still snoop on you by jumping into the black hole themselves—although they’d have to want badly enough to learn your secrets that they wouldn’t mind dying themselves along with you, and also not being able to share whatever they learned with anyone outside the hole.

But a third problem is that even inside a black hole, your secrets might not be safe forever! Since the 1970s, it’s been thought that all information dropped into a black hole eventually comes out, in extremely-scrambled form, in the Hawking radiation that black holes produce as they slowly shrink and evaporate. What do I mean by “slowly”? Well, the evaporation would take about 10^70 years for a black hole the mass of the sun, or about 10^100 years for the black holes at the centers of galaxies. Furthermore, even after the black hole had evaporated, piecing together the infalling secrets from the Hawking radiation would probably make reconstructing what was on the burned paper from the smoke and ash seem trivial by comparison! But just like in the case of the burned paper, the information is still formally present (if current ideas about quantum gravity are correct), so one can’t rule out that it could be reconstructed by some civilization of the extremely remote future.

77 Responses to “The ultimate physical limits of privacy”

  1. Evan Says:

    Charlie Bennett gave a really interesting talk on a similar topic several years back on the question “is it possible to destroy information?”, and he concluded that some information necessarily has to leave our own future light cone. Basically, the premise is that the earth is emitting a large quantity of entropy into deep space in the form of photons, due to black-body radiation. Because of the accelerating expansion, once those photons get a certain distance away from us, they leave our future light cone. If those photons are entangled with states you want destroyed, they aren’t recoverable by someone on earth. In addition, the sun is providing a vast quantity of new information in the form of sunlight. By an information-counting argument, it seems unlikely that a complete record of all information (at the microscopic level) can be retained on earth: some of it has to be lost forever.

    In particular, this means that your “burning a note” idea is not necessarily reversible (by someone here on earth): while the chemical reaction of burning is a reversible process, you need to reverse the light emitted as well, and that eventually becomes inaccessible.

    Of course, as a practical matter, the more “macroscopic” a piece of information you want destroyed, the more microscopic copies that exist, and the less likely it is that they would all be lost due to these mechanisms.

  2. Scott Says:

    Evan #1: Yes, thanks for adding that—I heard some version of Charlie’s talk 8 or 9 years ago and enjoyed it a lot (and I guess the information from that talk isn’t yet completely destroyed in my brain! 😉 ). I’ve also been extremely interested in the issue of the photons that fly out to our de Sitter horizon, and their possible relevance to the measurement problem (see my Could A Quantum Computer Have Subjective Experience? talk). Of course, one thing our far-future aliens could do to make the information recoverable, would be to surround the earth (or the solar system?) with perfectly-reflecting mirrors. In thought experiments that involve piecing together the information in the Hawking radiation emitted from a black hole, one also imagines surrounding the black hole with those perfectly-reflecting mirrors, then waiting ~10^70 years for them to gather it all up.

  3. Peter Morgan Says:

    Re point 2, the Reeh-Schlieder theorem would say the opposite, although that requires us to examine details of exponential tails, akin to predicting the future at space-like separation by looking at details of a heat signature. Less dauntingly, by looking carefully at the past of a target, an observer can infer with good probability what the target must be doing now, at space-like separation (if we see POTUS get on Air Force 1, and we have hacked the flight plan as it existed 10 minutes ago, we know with good probability where POTUS will be for the next few hours; good probability is enough to cause trouble, and deterministic definiteness is hardly to be expected).

  4. Peter Morgan Says:

    Perhaps needless to say, my comment above largely replicates comment #1.

  5. Douglas Knight Says:

    Of course you are correct, but your use of the word “control” seems to me potentially confusing. You don’t want to physically unburn the paper. You just want to simulate the burning to do inference. Either way, you need to know the boundary conditions and you could describe putting in place the relevant detectors as a form of control, except that you contrast it with “godlike knowledge.”

    Also, under the assumption that the aliens have infinite lifespans, they can use reversible computation to do PSPACE calculations, so they are not limited to detecting the boundary conditions and running physics backwards; they can just try all initial conditions and compare the results to the ending conditions. I don’t know if this helps much, though.

  6. Scott Says:

    Douglas #5: Yes, that’s exactly what I was thinking—that the detectors you would need to learn the boundary conditions to sufficient accuracy would, in practice, amount to complete “control” over the region.

    Also, if you knew the final boundary conditions completely, you could do an exhaustive search for initial conditions that led to them, but why would you? Far more efficient just to run the equations backwards.

    Where your approach makes much more sense, is in the realistic situation where you know the final boundary conditions only partly, not completely. Unfortunately, in that case chaos will often prevent you from learning the aspects of the initial conditions that you care about (i.e., cause them not to be decodable from the parts of the final conditions that you can measure), regardless of how much computing power you have.

  7. Jay Says:

    Let’s suppose we’re indeed surrounded by a sphere of perfectly-reflecting mirrors, but for one small patch that escapes the aliens’ control.
    By Bennett’s argument, we’re sure they can’t recover everything at will. But could we go further and put some limit on how “ashy” a piece of macroscopic information must have become before it can’t be recovered from the remaining information?

  8. Vadim Says:

    Jay #7,

    I think that’s the same situation as if a wireless transmission dropped some bits. Can they be recovered? Depends on how strongly they’re correlated with the bits that made it through. Hey, maybe if we want to preserve human history for alien posterity, we should start embedding error-correcting codes into everything we radiate.
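
    Vadim’s point — that dropped bits are recoverable exactly to the extent they’re correlated with the bits that made it through — is what an erasure code makes precise. Here is a minimal sketch of my own (not from the thread), using a single parity bit:

```python
# A minimal erasure-coding sketch (my own illustration, not from the thread):
# a single parity bit lets the receiver recover any one dropped bit, because
# the dropped bit is perfectly correlated with the bits that made it through.
data = [1, 0, 1, 1]
parity = sum(data) % 2           # XOR of the data bits
transmitted = data + [parity]    # XOR of all five bits is 0

lost_index = 2                   # suppose this bit is dropped in transit
received = [b for i, b in enumerate(transmitted) if i != lost_index]

# The XOR of the surviving bits equals the lost bit:
recovered = sum(received) % 2
print(recovered == transmitted[lost_index])  # → True
```

    With no parity bit, the dropped bit would be uncorrelated with the rest, and no amount of computation could recover it — which is the wireless-transmission situation Vadim describes.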

  9. CJ Nackel Says:

    You said “galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone.”

    I am wondering if this is true, because of the Ant On A Rubber Rope principle. Even if a galaxy were “moving away” from us at greater than the speed of light, our light would still eventually reach that galaxy, because it isn’t the galaxy moving away from us, it is the space between the galaxies that is growing. The fraction of the distance our light has already covered never decreases, because the space behind the light expands at the same rate as the space in front of it. However, the expansion of the universe is accelerating, supposedly, so the Ant On A Rubber Rope idea doesn’t really apply. So, my question is:

    Assuming that the Big Bang really did happen, there would have to be a maximum radius to the universe right now, because there is only so far that matter could have traveled since the Big Bang happened (taking expansion into account). Given that maximum radius (however big it is), is the acceleration of the expansion of the universe really enough to stop our light from eventually reaching the most distant planets?

    Google just doesn’t seem to be answering that question, and I refuse to go to wastelands that are page 2+ of a Google search, because I value my sanity.

  10. Scott Says:

    CJ #9: Yes, the upshot of the dark energy, assuming it comes from a cosmological constant, is that the accelerating expansion is enough to stop our light from reaching anyplace that’s sufficiently far away from us. If you like, for every meter that our light travels toward those places, more than one meter of new space will be “created” between us and them in the same time interval.
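
    The “more than one meter of new space per meter traveled” picture can be checked numerically in a toy model. The sketch below (my own, with units chosen so c = H = 1) integrates the comoving distance a photon covers under pure exponential expansion a(t) = exp(Ht); the integral converges to c/H, so anything beyond that comoving distance is never reached:

```python
# Toy numerical check (my own sketch): with scale factor a(t) = exp(H*t)
# (pure dark-energy expansion) and units c = H = 1, the comoving distance a
# photon covers is the integral of c*dt / a(t), which converges to c/H = 1.
import math

def comoving_reach(t, steps=200000):
    """Comoving distance covered by time t by a photon emitted at t = 0,
    via midpoint-rule integration of dt / exp(t)."""
    dt = t / steps
    return sum(dt / math.exp((i + 0.5) * dt) for i in range(steps))

for t in [1, 5, 20]:
    print(t, round(comoving_reach(t), 6))
# The reach climbs toward 1 (= c/H) but never passes it: a galaxy at
# comoving distance greater than 1 stays outside our future light-cone.
```

    This is also CJ’s Ant On A Rubber Rope in quantitative form: with constant-rate expansion the analogous integral diverges (the ant always arrives), but with exponential expansion it converges.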

  11. Vladimir N Says:

    When future decisions that generate private data can be predicted from the past, observing the past is enough to get the data; it’s not necessary to observe the data directly.

    So an entity’s inability to see what’s going on in your future light-cone doesn’t prevent it from predicting what’s going on there, and so doesn’t guarantee privacy. Of course in practice it does, but it’s not a physical law; it’s easy to set up a process where that fails (just start counting: the future state of the clock can be predicted without being in its future light-cone).

  12. Jay Says:

    No, I don’t think so. If the leak were purely classical, then the aliens would still be able to recover everything, wouldn’t they?

  13. Jay Says:

    CJ #9, another way to see it: if you can go halfway to some place, then you can reach that place; but there are places you can’t even go halfway to, because too many meters of new space would be created in front of you compared to those created behind you.

  14. Scott Says:

    Vladimir #11: That’s actually an extremely interesting philosophical question. Suppose a superintelligent alien studies your future parents before you’re conceived, and on that basis alone, issues predictions about the likely course of your life once you are conceived—predictions that turn out to be uncannily accurate, when they’re removed from a safe and made public at some point during your life. At no point do the aliens ever examine you or any of your choices directly. Should we then say that the aliens “violated your privacy”?

    On the one hand, the aliens’ very success at predicting some aspect of your life could be taken as evidence that you “had no choice” about that particular aspect, that it wasn’t freely willed—and therefore (one might argue), that you shouldn’t feel any guilt or shame over it. On the other hand, it’s perfectly plausible—even with today’s technology, forget about alien superpowers!—that they could predict some genetic defect you would probably have, and that that information might embarrass you if made public. And that seems like an obvious privacy violation, even though it’s about something you had no control over.

    So, OK, suppose the aliens were only able to collect information about your ancestors, a thousand years before you were born. Could they still generate predictions that would “violate your privacy”?

    My intuition says no—but on reflection, that intuition heavily relies on the thesis that aliens in that situation couldn’t generate reliable predictions about you, because of a combination of chaos, quantum indeterminacy, and the aliens’ inability to know all the relevant features of the state of the world circa 1000 AD. Indeed, they couldn’t even point to any specific future human as identifiably “you.” The best they could do, presumably, would be to run Monte Carlo simulations with the data they had, to generate a range of possible futures for the human species as a whole, and/or for the remote descendants of some particular human population. And even if their macro forecasts were uncannily accurate, I still wouldn’t feel inclined to say that the privacy of any specific individual alive in 2015 had been violated.

  15. Vadim Says:


    Aren’t we talking about classical information? I’m not familiar with Bennett’s ideas, so maybe I’m missing something. E.g., if we put out our run-of-the-mill EM radiation containing our radio transmissions, produced and reflected light, etc., in all directions, but in the direction of the missing reflector we transmitted a signal containing a random sequence determined by the decay of some amount of radioactive material, how could the aliens learn the sequence, having only received the unrelated photons containing our Seinfeld reruns?

  16. Anon Says:

    Aliens with infinite lifespans who are still limited by the laws of physics will not be able to find all initial states from the final states of the EM radiation they detect, because there is no one-to-one relationship between the final states they have access to and the initial states. I think ‘non-mortal’ aliens would still need to get close enough to the place where the EM radiation is new enough.
    Similarly, what about data in a perfectly opaque room radiating only random heat? What process could restore information radiated from a black hole? Do you just imagine it, or can you describe it with equations?

  17. Scott Says:

    Jay #7: The quantitative question is interesting. Let’s assume there are N bits x1…xN about the past (in total) that you’re uncertain about, that those bits are then subject to a large amount of chaotic mixing, and that finally you observe M bits about the world’s current state. (By which I really mean: M bits of mutual information with x1…xN. I don’t care how many new bits you observe that are uncorrelated with your uncertainty about the past.)

    In this case, it seems to me that the key question will simply be whether M is almost as large as N (say, N-O(1)). If it is, you’ll be able to learn something about a particular xi of interest to you, and if not, not. For the mapping from past to present will basically function as an error-correcting code, with each new bit you gain from the present cutting in half the number of possibilities for x1…xN, but in an “uncontrolled” way, where you learn almost nothing about a specific xi until only a few possibilities for x1…xN remain.
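
    This “mixing as an error-correcting code” picture can be played with in a toy simulation (my own sketch, with a random permutation standing in for the chaotic dynamics): each observed bit of the present halves the set of pasts consistent with the observations, yet a specific past bit typically stays undetermined until only a handful of pasts remain.

```python
# Toy simulation of the "mixing as an error-correcting code" picture
# (my own sketch): scramble N unknown bits with a fixed random permutation
# of all 2^N states, reveal M output bits, and watch (a) how fast the set
# of consistent pasts shrinks, and (b) when a specific past bit is pinned.
import random

random.seed(0)
N = 10
perm = list(range(2 ** N))
random.shuffle(perm)                 # publicly known "chaotic mixing" map

secret = random.randrange(2 ** N)    # the unknown past x1..xN
mixed = perm[secret]                 # the fully-mixed present state

for M in range(N + 1):
    mask = (1 << M) - 1              # observe the low M bits of the present
    consistent = [x for x in range(2 ** N) if perm[x] & mask == mixed & mask]
    bit1 = {x & 1 for x in consistent}   # what do we know about x1?
    print(M, len(consistent), "x1 determined" if len(bit1) == 1 else "x1 unknown")
# Since perm is a bijection, exactly 2^(N-M) pasts survive M observed bits,
# but x1 typically stays undetermined until only a few survivors remain.
```

    The permutation here is only a stand-in: real chaotic dynamics wouldn’t be uniformly random, but the counting behavior is the same.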

    Anyway, that’s the classical case. If we’re instead talking about qubits, then for well-understood reasons (which figure heavily in the discussion of the firewall paradox; see also this paper by Hayden and Preskill), you basically won’t be able to learn anything about any of the N past qubits, until you’ve measured at least half of all the qubits that participated in the mixing process. (Even if there are many, many more than 2N of those qubits.)

  18. wolfgang Says:

    It seems that the no-cloning theorem provides some privacy protection.
    The NSA will never be able to determine the exact internal state of all citizens without erasing them and thus the taxpayers who fund them.

  19. Job Says:

    …as a matter of principle, we can’t rule out the possibility that some civilization of the very far future, whether human or alien, could piece together what was written on your paper even after you’d burned it to a crisp.

    What if, instead of a letter, we had qubits in a known state encoding the message, then proceeded to evolve the qubits into a superposition and held that superposition indefinitely?

    If the qubits are still in a superposition when the future beings decide to eavesdrop, and if in their attempts they collapse the qubits, then won’t they have to reverse their own actions just to obtain the original message?

    Is it possible to reverse one’s own thread of causality?

  20. Jay Says:

    Scott #17: Thanks! Is there any measure or recipe to make sure the amount of chaotic mixing is large enough? What if it’s too small?

    Vadim #15: I’m no expert, so take this with a grain of salt. Bennett’s argument seems to rely deeply on the no-cloning theorem (e.g., if what escaped is entangled with the remaining qubits, we can’t reconstruct its impact without those escaped qubits). On the other hand, if what left was purely classical (say, a random sequence of binary information), then one possible strategy for PSPACE-able aliens would be (a variation of Douglas Knight’s strategy) to try every possible reconstruction while guessing the missing bits one by one.

  21. Alex Says:

    Unscrambling the egg could be a bad analogy: the current research mentioned here works on exactly that.

    It would be great to explain the difference between galaxies always getting away from us faster than the speed of light and the necessity to not own information outside of the light cone, but I’m afraid it’s outside the scope of this.

    Does reconstructing information require energy? Could it be that reconstructing the paper with the message from the ashes – or the information from an evaporated black hole – requires more energy than the universe has?

  22. a Says:

    “Assuming the dark energy remains there at its current density, galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone, and signals from us can never reach them. So, at least you’re safe from those aliens!”

    Interesting. Could there be a new theory of relativistic computational limits based on this?

  23. Peter Says:

    If one has partial knowledge of some future boundary, then of course there are many possible candidates for the present time, and chaos theory says we should expect there to be a lot of variation between the candidates. But perhaps one can still hope to identify a ‘most realistic’ cluster of similar candidates.

    I’m really thinking here of the compressed sensing results, that one typically can recover ‘normal’ data from quite limited information – of course, this is a very different situation where chaos theory isn’t relevant, but perhaps there is some connection? (I also think establishing any rigorous results of the type I’m suggesting is not something we can hope for with today’s mathematics)

  24. Scott Says:

    Job #19: It’s true that aliens who measured your private quantum data could presumably never undo that measurement. However, other, much more powerful aliens who controlled the complete quantum states of the first aliens could undo it! And if so, then we can’t make a categorical statement that once your private quantum data is measured, everything except the actual measurement result must remain a secret forever.

  25. Scott Says:

    Jay #20: For various simple models of dynamics, there are theorems that tell you that mixing happens after such-and-such amount of time (for example, n log(n), or even just log(n)). Note that if full mixing doesn’t happen, the reconstruction task will still be information-theoretically possible, but now might be computationally easier.

  26. Scott Says:

    Alex #21: Cool! So it sounds like they can “partly” unscramble egg whites (meaning: restore some of the proteins, not restore the exact spatial configuration the proteins were all in before the scrambling). But they don’t say anything about yolks. 🙂

    I don’t know of any result lower-bounding the energy needed to undo a quantum measurement. And from a fundamental physics standpoint, the main issue doesn’t seem to be energy but knowledge of the full quantum state, and control over all its degrees of freedom. On the other hand, if one assumes incomplete control, realistic rates of decoherence that then have to be counteracted, etc., then maybe one can get a lower bound on energy—I’m not sure.

  27. Scott Says:

    a #22:

      Interesting. Could there be a new theory of relativistic computational limits based on this?

    People like Bousso and Lloyd have already studied the computational implications of the dark energy; see here for example. The bottom line is that, if the dark energy maintains its current density forever, and the holographic principle holds, then any computation one can ever do in our universe is limited to about 10^122 qubits.

  28. fred Says:

    Hi Scott,

    sorry to be a bit off-topic, but you wrote:

    “In 1998, it was discovered that there’s a ‘dark energy’ pushing the galaxies apart at an exponentially-increasing rate.”

    It reminded me of your objection to using relativistic effects to “solve” NP-hard problems by making an exponentially long computation seem polynomial in another reference frame.
    You pointed out that achieving this would require an exponential amount of energy expenditure (e.g., the fuel tank of the accelerating space ship would be so big that it would take an exponential amount of time just to move the fuel around).

    How does this “dark-energy” driven exponential drift fit in that context?

  29. fred Says:

    Scott #2

    I’ve only studied “information” in the context of Shannon information theory in electronics, and entropy in the context of thermodynamics (Carnot engine).
    In those limited contexts entropy appears mainly as a mathematical convenience.
    But I’m always puzzled at the way current physics seems to define “information” as some sort of absolute concept.
    Isn’t information ultimately relative/subjective?
    I thought that Kolmogorov complexity shows the difficulty in deciding whether a stream of bits is either pure randomness (max information?), totally deterministic, or somewhere in between.
    Or are we talking about the maximum amount of information that can be stored in a system perhaps?

  30. Rahul Says:

    Scott #26:

    That “unboiling-an-egg” news article was a classic example of how science gets hyped these days. Very sad.

    As far as I can tell, those researchers have nowhere come even close to unboiling an egg. All they show is one single heat-denatured protein can be made to refold back. To some degree; not sure if it refolds identical to the pristine protein.

    And this was already possible before, they just have a new method (they say a faster one).

    I’m not denigrating their achievements but it’s a huge stretch to call that unboiling an egg. By these standards D-Wave is absolutely understating their QC achievements.

    University PR & research coverage by media is absolutely broken.

  31. Scott Says:

    fred #28: The two things have nothing to do with each other, as far as I can see.

    fred #29: Well, information has both subjective and objective aspects. The number of bits that you learn on measuring a system depends on who you are, and how much you already know. On the other hand, the maximum number of bits that can be stored in a system is an objective, physical property of that system: basically, it’s the log of the dimension of its Hilbert space. And when people talk about thermodynamics in fundamental contexts (like cosmology or black holes), it’s often the latter that they really mean. Sometimes there’s a further claim that not only does a system have an N-bit state space, but its state is “maximally scrambled” within that space and “completely lacking in further structure” (so that it really achieves the maximum of N bits). Such claims are often hard to make sense of in theory but easy to make sense of in practice. And with suitable additional assumptions (say, that by “lacking in further structure” we really mean “has close-to-maximal Kolmogorov complexity with some standard encoding as a bit string and standard reference Turing machine”), we can make sense of them in theory as well.

  32. Scott Says:

    Rahul #30: OK, thanks for clarifying!

  33. jacinabox Says:

    Well, if I analyse it, the ashes of the burned paper exist in a block of space. Entropy is entering that space through its surface. Also, the volume occupied by the information is expanding, and its surface area along with it, as the entropy dissipates. So you could regard the situation as analogous to a cipher which is constantly growing, each addition acting as a key to encipher what came before. The greater the surface area, the faster the cipher grows. Therefore the information should become unrecoverable at the point where the rate of entropy transfer exceeds what the fastest possible computing machine could keep up with.

  34. Aaron Sheldon Says:

    In jest…

    How to unscramble an egg? Feed it to a chicken!

  35. fred Says:

    Scott #10

    ” If you like, for every meter that our light travels toward those places, more than one meter of new space will be “created” between us and them in the same time interval.”

    This has always puzzled me to no end…

    I just can’t reconcile this concept of space “creation” with the notion of scale (i.e. all atoms in every part of the universe have the same size).

    In special relativity, when Einstein talks about “contraction of space”, the matter that is riding the space actually contracts (or expands) with the space. Here matter is solidly anchored to the space “underneath” it.
    The analogy would be that an atom (matter) is represented on a rubber balloon (space) by drawing a spot on the surface with a black marker. Then as we “expand” or “deflate” the balloon, the actual atom will expand or shrink by the same amount as the balloon. From the atom’s point of view, nothing’s really changing, but the atoms on another balloon (another reference frame) may seem to contract or expand.

    (I understand that particles are defined as mathematical singularities, but there’s still the concept of a field of influence extending to a finite non-zero dimension).

    But when we’re talking about expansion of the universe, it seems that balloon analogy suddenly changes: the atom is now represented by a “rigid” paper cutout that’s glued on the balloon. If the balloon expands/contracts, the relative size of the atom does change. It’s as if matter is now loosely coupled to the space underneath it.

    (I understand that the second concept was introduced to explain the apparent drift of galaxies in every possible direction, i.e. there is no “center” so you just can’t explain it with normal accelerating motion).

    It seems to me that the relation between matter and space is really ambiguous.

  36. Gian Mario Says:

    fred #35

    Matter just follows geodesics of spacetime when there are no other interactions. At the single-atom level, the electromagnetic force is huge compared to the gravitational “force” that is trying to separate the electron from the proton (due to the expansion of the universe), not to mention the strong force that keeps the nucleus together.
    It’s only at large distances, when objects can be considered almost isolated from each other, that the current dark-energy effects dominate. This of course depends on the rate of expansion. If you keep increasing the rate of expansion, at some point you will be able to separate clusters of galaxies, then galaxies, stars, planets, and in the end atoms, etc.

  37. Gian Mario Says:

    I will make an incorrect analogy just to make it clear. Let’s assume that atoms are represented by 2 balls connected by a spring that in absence of other forces keeps the 2 balls in equilibrium at a constant distance d, like an elastic dumbbell.

    Now put the dumbbell on the surface of an expanding rubber balloon (the universe), and assume that there is friction between the balls and the surface of the balloon that increases with the relative speed.

    The friction will try to separate the balls when the balloon is inflating, but the spring will keep them together finding a new equilibrium position.

    If the friction is very small it will have basically no effect: the dumbbell will be slightly stretched, but by a completely negligible amount.

    Nevertheless, if you start inflating the balloon faster and faster, the friction will become stronger and stronger; it will stretch the spring more and more until it rips the dumbbell apart.

    We also do not consider the gravitational attraction between electrons and protons when computing the ground state of the atoms, because it’s completely negligible. Both their mutual attraction and their tendency of flying apart due to the expansion of the universe are manifestation of exactly the same phenomenon: objects try to follow geodesics, and other interactions make them deviate from these “straightest” paths in spacetime.

  38. Darrell Burgan Says:

    Scott, I’m really confused about the following two items that I understand to be true:

    1. The net entropy of the universe is ever increasing.

    2. Information can never be created or destroyed.

    If both are true, then it would seem to me that there can be no fundamental relationship between information and entropy …

    But, if true, that is completely counterintuitive to me. Isn’t the very essence of increasing entropy the increase of disorder and decrease of order? I have always (mis-)understood increasing entropy to be the dilution of information within an increasing sea of randomness. Obviously I’m missing it …

  39. Scott Says:

    Darrell #38: Let’s consider the living room where I’m typing this. Lily has been energetically making a mess here, dumping out boxes, picking things up and throwing them, even taking my and Dana’s work notebooks and ripping or hiding them—in general, increasing the room’s entropy in every way she can. Even so, it remains beyond her power (for now) literally to destroy the information present in the room—for example, by setting our work notebooks on fire. “All” she can do is scramble the information up and make it less easily accessible. And actually, even if she did set our notebooks on fire, physics tells us that that would “merely” be a turbocharged version of what she’d already been doing: the information in the notebooks would still be present in the smoke and ash; it’s just that it would be much, much harder to access, so much so that no technology that we can realistically envision would do it.

    I hope this makes it clearer how the preservation of information can be consistent with increasing entropy. (I could try to give you a more technical answer, but it would ultimately amount to the same thing as the above.)

  40. Gian Mario Manca Says:

    fred #35

    OK one last message.

    To clarify, the analogy in #36 and #37 is “incorrect” because the friction is 1st order and the gravitational force is 2nd order, so the gravitational force allows you to freely go around at constant speed, while friction would not.
    Apart from that, the analogy is pretty good.

    Also you are mixing special and general relativity. Special relativity is purely “local”, and it deals with reconciling the different measurements of observers that look at the dumbbell while moving at different speeds. But everything is happening at the SAME LOCATION of the dumbbell. “At the same location” here means a region small enough that the effect of the curvature of the spacetime (gravity) is negligible compared to the effects of other interactions.

  41. Job Says:

    Doesn’t the process of decoherence lower the entropy of a system?

  42. Darrell Burgan Says:

    Scott #39: okay then there really is no relationship between entropy and information. Information can be increasingly disordered forever, obscured, dispersed, or transformed into increasingly less useful forms, but never destroyed.

    That brings me to another question. The maximum amount of information that can physically fit into a cubic centimeter is a fixed quantity, right? Does that suggest a relationship between the expansion of space with entropy, given the information in any given cubic centimeter is forever being diluted by the expansion?

  43. Scott Says:

    Job #41: Decoherence (being something that happens in the forward time direction) is an overall entropy-increasing process. Yes, it could decrease the entropy of the specific system being decohered—for example, if that system was in a quantum mixed state—but the overall entropy of the system plus its environment always goes up, as the two get entangled with each other.
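    Scott’s point, that entanglement with the environment is what drives the overall entropy increase, can be seen in a two-qubit toy calculation (a Python sketch; modeling the environment’s “measurement” of the system as a CNOT is my own illustrative choice, not anything from the thread). The system qubit starts in the pure state |+> with zero entropy; after it entangles with an environment qubit, each subsystem is maximally mixed.

    ```python
    import math

    def entropy_bits(rho):
        """Von Neumann entropy (bits) of a real symmetric 2x2 density matrix."""
        (a, b), (_, d) = rho
        half = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
        eigs = [(a + d) / 2 + half, (a + d) / 2 - half]
        return -sum(p * math.log2(p) for p in eigs if p > 1e-12)

    def reduced_system(psi):
        """Partial trace over the environment qubit of a real 2-qubit state.
        psi is [c00, c01, c10, c11]; the system is the first qubit."""
        c00, c01, c10, c11 = psi
        return [[c00 * c00 + c01 * c01, c00 * c10 + c01 * c11],
                [c10 * c00 + c11 * c01, c10 * c10 + c11 * c11]]

    r = 1 / math.sqrt(2)
    before = [r, 0.0, r, 0.0]   # |+> (x) |0>: product state, not entangled
    after = [r, 0.0, 0.0, r]    # after a CNOT, the environment has recorded
                                # the system: (|00> + |11>)/sqrt(2)

    s_before = entropy_bits(reduced_system(before))  # 0.0 bits (pure state)
    s_after = entropy_bits(reduced_system(after))    # 1.0 bit (maximally mixed)
    print(s_before, s_after)
    ```

    The global two-qubit state stays pure throughout; it is each subsystem, viewed on its own, whose entropy jumps from 0 to 1 bit once the two are entangled.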

  44. Yuval L. Says:

    (Off topic) I don’t know if you do guest posts, but I was thinking of writing something about how we never know as much as we think we do. This is based on a discovery I made a couple of years ago about my own life, one that is now emerging in neuroscience. Additionally, this relates to the question of whether we can consciously choose our actions or not.

    Do you take guest posts, and does this fit in with the theme of your blog?

  45. Scott Says:

    Yuval #44: You can post it here as a “guest comment.”

  46. Braithwaite Prendergast Says:

    Scientists have a difficult task in separating the subjective from the objective. This is not easy because so-called objectivity is ultimately based on some human observer’s personal frame of reference or projected light cone. (1) The measure of entropy is discretionary. One measure of entropy is to say that it is zero for water at 0° C. When water turns to ice, it can be arbitrarily said that it has no entropy. (2) “Order” is a subjective concept that is related to a spectator’s needs, viewpoint, and intelligence. Where Inspector Lestrade saw disorder, Sherlock Holmes saw that the pieces of the puzzle were falling into place. (3) “The unavailability of energy” is perspectival. Before hydraulic fracturing (fracking) was employed, much energy that is now available was unnoticed. (4) “Information” is not an objective concept. Knowledge that is derived from data depends on the unbiased astuteness of the individual spectator. The jurors in the O. J. Simpson trial found nothing informative in evidence that was presented to them.

  47. Raoul Ohio Says:

    Braithwaite Prendergast #46:

    Are you in Colorado or Washington?

  48. a Says:

    Scott #27

    If P=NP there is no secrecy. A natural converse question would be to ask philosophically: what is the implication of this for spacetime and dark energy (the non-existence of spacetime security)?

  49. Braithwaite Prendergast Says:

    Raoul Ohio #47: Instead of implying that my four comments on subjectivity are the result of consuming a mind-altering drug, I would appreciate an intelligent refutation. Since you seem to take other views for granted, as though they were obvious and self-evident, I would be grateful to learn from you. I would gladly change my mind and adopt your correct view, if only I knew what it is. Why is the consideration of the observer’s role unnecessary in the measurement of entropy? Why is the spectator’s frame of reference, light cone, perspective, or viewpoint of no consequence in relation to entropy?

  50. Jay Says:

    #49: The observer’s role is a tricky concept that could be interesting to discuss, but your previous post started from such wrong assumptions that it’s not inviting (e.g., that it makes any sense to define entropy as zero for water at 0° C, or that this discussion should have something to do with O.J. Simpson, fracking techniques, or Inspector Lestrade).

  51. Anonymous coward Says:

    “Indeed, if information is not in your future light-cone, then almost by definition, you had no role in creating it, so in what sense should it be called “yours”?”

    In what way is this “almost by definition” and not “precisely by definition”?

  52. Scott Says:

    Anonymous coward #51: Well, maybe there’s superluminal communication or wormholes or the like, but we still don’t want to revise our definition of “future light-cone” to take them into account. (Since after all, your causal future might no longer be even approximately cone-shaped in that case.)

  53. die ennomane » Links der Woche Says:

    […] The ultimate physical limits of privacy: […]

  54. Uzi Awret Says:

    Dear Scott. First, out of curiosity, I just saw the house of Sarah Aaronsohn (Nili hero) in Zichron Yaakov and wondered if there is any relationship.
    I also have a question. Using the Turing test to determine whether some putative population of computers is conscious is very problematic. Suppose we lower the bar and test for some operational ‘self-awareness’ instead, by asking the computers the following two related questions:
    a) What’s your position on the Sleeping Beauty Paradox? (Halfer or thirder? David Lewis was a halfer!) What gives the SBP its force is the way indexical concepts are based on internal actualization conditions.
    b) Are you willing to undergo destructive uploading into a vastly improved version of yourself?
    If all the computers end up as thirders that are willing to undergo destructive uploading, you could conclude that they are not self-aware like us. On the other hand, if you have embedded machines negotiating a real-time environment, it only seems natural that they would develop operational indexical concepts such as here and now. It’s easy to see when you consider brains. These usually contain a system (the default system) that allocates the computational resources of the brain to either a vigilant mode (online, direct interaction with the environment) or a more ‘introspective’ mode (offline, counter-factual emulations). If, say, a bee has a primitive default system and the ability to perform counter-factual emulations but can quickly distinguish them from interactions with a real environment, it probably has something that parallels our concepts of ‘here’ and ‘now’.
    Someone like Derek Parfit might hold that both questions do not have an answer and actually use that to deflate the self.
    Wonder what you think about all this and also whether testing quantum computers is more likely to yield halfers.
    Enjoy your blog, Uzi Awret.

  55. domenico Says:

    I am thinking that if a programmed computer is built from some elementary particles (a mixture) with short lifetimes, and it constructs an object (a memory) that includes some particles with short lifetimes, then no measurement by an observer can reconstruct the object, provided the decay products are unstable.

  56. mjgeddes Says:

    Scott, did you hear the latest news about the cosmological constant and the fate of the universe?

    A theoretical breakthrough now has everyone saying that the acceleration will soon end, the cosmological constant will flip over and the universe will collapse into a big crunch after all! lol

    Check all physics news sites, here’s a link to a summary:

  57. Nick M Says:

    Per Jan Kåhre (the Law of Diminishing Information), information, objectively defined, is irrevocably lost on an ongoing basis. There’s a sense in which nature is continually hitting the delete button. Where in the universe can the Incan knot-code be deciphered, for example? It would require intimate knowledge of a language-culture which exists on earth only in scattered remnants. Or take earlier versions of our own language … do we seriously entertain the possibility that Anglo-Saxon still exists somewhere in its entirety?

    So far I’ve been adducing classical information. But even on the quantum level information can be deleted, just not easily. Brukner and Zeilinger discussed information-deletion in connection with a Heisenberg microscope gedanken ten years ago in “Quantum Physics as a Science of Information”:

    “Indeed, the most interesting situations arise if the path information is present at some point in time, but deleted or erased in an irrevocable way later on. Then, as soon as that information is irrevocably deleted, the interference patterns can occur again. Here, it is important to note that the mere diffusion of the information into larger systems, maybe even as large as the whole universe, is not enough to destroy the information. As long as it is there, no matter how well hidden or how dispersed, the interference fringes cannot occur.”

  58. Nick M Says:

    “the interference patterns can occur again” should have read “the interference fringes can occur again”. I have the paper in photocopy and had to key the text in. Blame internal channel noise.

  59. Kathy Rose Says:

    What is the best place to get privacy, given that it is not possible to live even in a black hole?

  60. Serge Says:

    Might the event of our finding out about the existence of one-way functions lie outside our future light cone? If we’re not allowed to travel to any planet that lies thousands of light years away from us, doesn’t computational complexity play the role of an infinitely huge distance keeping us away from solving hard problems? In any case, it prevents us from “traveling” to their solution – be it with a rocket or with a computer. Indeed, what prevents the intelligent beings who live in places outside our future light cone from developing a mathematics that contradicts ours? The nonstandard models of arithmetic are potentially just standard ones which happen to be physically unreachable! Computational complexity, Kolmogorov complexity and mathematical consistency might thus turn out to be relative notions like time, mass, simultaneity, distance… Of course PCP allows you to convince the extraterrestrials that your proofs are correct by sending them only a few bits… relativity permitting!

  61. Brian Rom Says:

    Scott, I know old habits die hard, you’re back to your careless grammatical ways again. Remember: no hyphens after adverbs like ‘exponentially’.

  62. Jesse M. Says:

    CJ #9: As explained here, the ant on a rubber rope example assumes that space is expanding linearly, meaning the distance between two points increases by a fixed amount in each unit of time, and that implies the ant will always be able to get from one point on the rope to another given enough time. But in cosmology space grows exponentially or faster, meaning that if the distance between two points doubles in some time interval T, then over the next interval T the distance between those points will double again, or more than double. In this case an ant starting from one point may never be able to reach another point no matter how long it walks.
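    Jesse’s contrast between linear and exponential stretching can be verified with a few lines of Python (the ant’s speed and the expansion rates below are made-up illustrative values). Since the ant’s progress as a fraction of the rope obeys df/dt = u/L(t), the linear case grows like a logarithm and always passes 1, while the exponential case saturates at u/(L0*H) and the ant can be stranded forever:

    ```python
    import math

    def fraction_covered(rope_length, u=0.5, T=10.0, dt=1e-3):
        """Fraction of a stretching rope covered by time T.
        The ant walks at speed u relative to the rope, so the covered
        fraction grows as df/dt = u / L(t)."""
        f, t = 0.0, 0.0
        while t < T:
            f += u / rope_length(t) * dt
            t += dt
        return f

    linear = lambda t: 1.0 + 1.0 * t          # L(t) = L0 + v*t, with v = 1
    exponential = lambda t: math.exp(1.0 * t)  # L(t) = L0 * e^(H*t), with H = 1

    # Linear: f(T) = 0.5*ln(1 + T), which passes 1 -- the ant always arrives.
    print(fraction_covered(linear, T=10.0))       # ~0.5*ln(11) ~ 1.2
    # Exponential: f(T) approaches u/(L0*H) = 0.5 < 1 -- never arrives.
    print(fraction_covered(exponential, T=100.0)) # ~0.5, however long it walks
    ```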

  63. Jesse M. Says:

    Scott: You said, “Assuming the dark energy remains there at its current density, galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone, and signals from us can never reach them. So, at least you’re safe from those aliens!”

    Although it’s correct that the current model of an accelerating universe means that light from events in sufficiently distant galaxies will never reach us, this statement isn’t quite right. In the paper “Expanding Confusion: Common Misconceptions of Cosmological Horizons and the Superluminal Expansion of the Universe” by Davis and Lineweaver, available here (direct link to the PDF here) on p. 100-101 they discuss “Misconception #3: Galaxies with Recession Velocities Exceeding the Speed of Light Exist but We Cannot See Them”, and on p. 101 they note: “The most distant objects that we can see now were outside the Hubble sphere when their comoving coordinates intersected our past light cone. Thus, they were receding superluminally when they emitted the photons we see now. Since their worldlines have always been beyond the Hubble sphere these objects were, are, and always have been, receding from us faster than the speed of light.”

    The trick here is that when physicists talk about the expansion of the universe “accelerating”, they mean that if you track one particular galaxy’s distance from us over time (measured in terms of proper distance, the distance that would be measured by a series of short rulers laid end-to-end), the speed at which its distance grows is accelerating; but on the other hand, if you just look at different galaxies whizzing past a fixed proper distance from us, the velocity of successive galaxies is decreasing with time. This means that the “Hubble sphere”, the distance at which a galaxy needs to be in order to be receding at the speed of light, is growing with time. And a photon emitted in our direction will be moving at (expansion velocity at its distance) – (the speed of light), as discussed in the wiki article discussing proper distance I linked to; thus, as they say on p. 101 of the paper, “photons near the Hubble sphere that are receding slowly are overtaken by the more rapidly receding Hubble sphere.” Nevertheless, the acceleration does imply that there will be events that will be outside our past light cone even at a cosmic time T=infinity, see the cone labeled “event horizon” in the third diagram on p. 98 of the Davis/Lineweaver paper.
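    The Davis/Lineweaver point can also be seen numerically by integrating the proper distance D of a photon aimed at us, which obeys dD/dt = H(t)*D - c. The Python sketch below (units with c = 1; the two toy models and initial conditions are my own choices for illustration) compares a decelerating, matter-dominated model, where H falls as 2/(3t) and the growing Hubble sphere overtakes the initially receding photon, with a constant-H de Sitter model, where the same photon sits beyond the event horizon and never arrives:

    ```python
    def photon_distance(H, D0, t0=1.0, t_max=10.0, dt=1e-4):
        """Integrate dD/dt = H(t)*D - 1 (c = 1) for a photon aimed at us.
        Returns the arrival time, or None if it never gets here by t_max."""
        D, t = D0, t0
        while t < t_max:
            D += (H(t) * D - 1.0) * dt
            t += dt
            if D <= 0.0:
                return t
        return None

    matter = lambda t: 2.0 / (3.0 * t)  # decelerating: H falls as 2/(3t)
    de_sitter = lambda t: 1.0           # dark-energy dominated: H constant

    # D0 = 2 is beyond the Hubble sphere (1/H = 1.5 at t0), so the photon
    # is receding superluminally when emitted -- yet in the decelerating
    # model it still reaches us:
    print(photon_distance(matter, D0=2.0))     # arrives around t ~ 4.6
    print(photon_distance(de_sitter, D0=2.0))  # None: beyond the event horizon
    ```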

  64. Jesse M. Says:

    Braithwaite Prendergast #46: “The measure of entropy is discretionary. One measure of entropy is to say that it is zero for water at 0° C. When water turns to ice, it can be arbitrarily said that it has no entropy.”

    Can you elaborate on the justification for this? Are you thinking of a particular rigorous definition of entropy in physics? I would say the most fundamental definition of entropy is usually understood to be the statistical mechanics one, S = k log Omega, where S is the entropy associated with a “macrostate”, Omega is the number of possible “microstates” associated with that macrostate, and k is Boltzmann’s constant. The idea here is that you have some isolated system which you often divide up into subsystems (for example, your isolated system might be a box of gas with two chambers separated by a movable partition, and the subsystems could be the gas molecules in each chamber), then you pick some set of macroscopically-measurable parameters which you can use to characterize the state of each subsystem (like the total kinetic energy of the molecules in each subsystem, and the volume of each subsystem, both of which can change if the partition can move back and forth as molecules hit it from either side, causing the two halves to exchange energy and volume even if the total for each is fixed). A “macrostate” for the system is defined by the specific values of each parameter in each subsystem (a known total energy and volume for the gas on each side of the partition), and then you can analyze what macrostate would be associated with every possible microstate of the whole system (in this example, a microscopic specification of the position and momentum of every molecule, up to the limits of the uncertainty principle). Once you’ve figured that out, the entropy of a macrostate is k times the logarithm of the number of microstates associated with that macrostate.

    It’s true that there can be some arbitrariness in the choice of macro-parameters (and also in how the whole system is divided into subsystems), but relative to these choices there isn’t any ambiguity in the value of the entropy, and I don’t see any reasonable way you could formalize your notion that “ice” would have an entropy of zero (since that would imply “ice” is describing a macrostate that has only a single possible microstate associated with it, yet surely there are many different possible microstates of a collection of water molecules that we would ordinarily label as ‘ice’).

    There is also a somewhat different definition of entropy for non-isolated systems, like a small collection of molecules connected to a “heat bath” that they can exchange energy with; in that case if there are different microstates the molecules can be in, with probabilities p_1, p_2, p_3 etc., then the entropy of the molecules is given by -k times (sum over all values of i) p_i * logarithm (p_i). But it’s possible to derive this from the prior definition by imagining a larger isolated system consisting of both the molecules and the heat bath, and defining each “macrostate” of the larger system to correspond to a particular microstate of the smaller collection of molecules.
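    Jesse’s two definitions can be checked against each other in a few lines of Python (a sketch with Boltzmann’s constant set to 1 and a deliberately tiny “gas” of 20 distinguishable molecules, both my own simplifications): the macrostate is the number of molecules in the left chamber, Omega is a binomial coefficient, and the Gibbs formula over a uniform distribution reproduces log Omega.

    ```python
    import math

    N = 20  # distinguishable molecules, each free to be in either chamber

    def boltzmann_entropy(n_left):
        """S = k log Omega with k = 1: the macrostate 'n_left molecules on
        the left' has Omega = number of ways to pick which molecules those are."""
        return math.log(math.comb(N, n_left))

    def gibbs_entropy(probs):
        """S = -k sum p_i log p_i, with k = 1."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    # The even split is the highest-entropy macrostate...
    entropies = [boltzmann_entropy(n) for n in range(N + 1)]
    assert max(entropies) == boltzmann_entropy(N // 2)

    # ...while 'all molecules on the left' has Omega = 1, hence S = 0:
    # only for a macrostate with a single microstate does entropy vanish.
    assert boltzmann_entropy(0) == 0.0

    # For an isolated system the microstates are equally likely, so the
    # Gibbs formula over a uniform distribution recovers log Omega:
    omega = math.comb(N, 5)
    uniform = [1.0 / omega] * omega
    print(gibbs_entropy(uniform), math.log(omega))  # both ~9.65
    ```

    This also makes Jesse’s objection concrete: any macrostate we would call “ice” corresponds to many microstates, so its Omega is far larger than 1 and its entropy cannot be zero.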

    “‘Information’ is not an objective concept. Knowledge that is derived from data depends on the unbiased astuteness of the individual spectator.”

    When people speak of ‘information’ in the context of physics, they aren’t using information in some colloquial sense like ‘knowledge that is derived from data’, but rather some formal definition drawn from the mathematical field known as information theory. Two important definitions are “self-information” and “informational entropy”; these can only be defined in the context of a probability space where there are many possible events 1,2,3,… which each have some well-defined probability p_1, p_2, p_3, … (for example, if you flip a fair coin twice and note the number of heads and tails without paying attention to the order, the possible ‘events’ are [one head, one tail], which has probability 1/2, and [two tails] and [two heads], which each have probability 1/4). Then the self-information of any particular event drawn from that probability space is defined as -1 times the logarithm of the event’s probability, which turns out to be approximately equal to the number of bits that would be needed to communicate the outcome to someone under an optimal encoding scheme (Huffman coding) that used fewer bits to encode messages about more common events. Meanwhile the informational entropy is the average value (or expectation value) of the self-information over a large number of events drawn from this probability space: the average number of bits that would be needed in a series of messages communicating the result of each draw.

    These are both connected to physical entropy, because if you define your probability space by imagining drawing a microstate at random from the set of all possible microstates of a collection of a given isolated system, then if the person you are communicating with already knows the macrostate, the number of bits needed to communicate the microstate to them would be the self-information which is -log (1/Omega), or log (Omega), which is just the statistical mechanics entropy of that macrostate divided by k (keeping in mind that the macrostate is associated with Omega possible microstates, each equally likely given the assumption of drawing one at random in an isolated system). And the informational entropy is defined as -1 times (sum over all values of i) p_i * logarithm (p_i), so it’s obviously connected to the second definition of physical entropy I mentioned–in the example of the molecules connected to the heat bath, if you repeatedly pick a microstate at random from the set of all possible microstates of the larger system consisting of both the molecules and the heat bath, then that second definition of physical entropy will just be k times the informational entropy, which is the average length of a message about the state of the molecules over many repeated trials.
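    For the two-coin-flip example above, the numbers work out as follows (a Python sketch; the exact match between the entropy and the average Huffman code length holds here because all the probabilities are powers of 1/2):

    ```python
    import math

    # Two fair flips, order ignored: the three possible 'events'.
    probs = {'one head, one tail': 0.5, 'two heads': 0.25, 'two tails': 0.25}

    def self_information(p):
        """-log2(p): bits needed for this event under an optimal code."""
        return -math.log2(p)

    for event, p in probs.items():
        print(event, self_information(p))   # 1.0, 2.0, 2.0 bits

    # Informational (Shannon) entropy: the expected self-information.
    H = sum(p * self_information(p) for p in probs.values())
    print(H)  # 1.5 bits

    # A Huffman code matches this exactly here: '0' for the p = 1/2 event
    # and '10', '11' for the two p = 1/4 events gives an average length of
    avg_len = 0.5 * 1 + 0.25 * 2 + 0.25 * 2
    print(avg_len)  # 1.5 bits per outcome
    ```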

  65. Sniffnoy Says:

    On the topic of the black hole information problem, here’s some physicists saying basically, if you actually compute the off-diagonal terms in the density matrix, you can see that black hole radiation is unitary after all! I don’t really know enough to evaluate this, of course…

  66. Ben Standeven Says:

    @Uzi Awret #54:

    That’s true if the machines are Bayesians, but if they are frequentists, the non-self-aware ones will all be halfers: they will calculate that the frequency with which the coin is observed to be heads is 1/2. But the self-aware ones might be thirders, because they calculate, “the frequency with which I observe the coin to come up heads is 1/3.”
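    Ben’s frequency distinction can be checked with a quick Monte Carlo sketch (Python; the protocol, one awakening on heads and two on tails, is the standard Sleeping Beauty setup, and the trial count and seed are arbitrary). Counting per experiment gives heads a frequency near 1/2; counting per awakening gives the “I observe heads” frequency near 1/3:

    ```python
    import random

    random.seed(0)
    trials = 100_000

    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1   # heads: Beauty is woken once
            total_awakenings += 1
        else:
            total_awakenings += 2   # tails: Beauty is woken twice

    per_experiment = heads_experiments / trials          # the 'halfer' count
    per_awakening = heads_awakenings / total_awakenings  # the 'thirder' count
    print(per_experiment, per_awakening)  # ~0.5 and ~0.33
    ```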

  67. Ben Standeven Says:

    Jesse #64:
    Maybe he’s thinking of the fact that in classical statistical mechanics, the number of microstates is always infinite, so we can only consider its ratio to the number of microstates of some reference system. Thus we can only consider the difference of the system’s entropy from the entropy of that reference system.

  68. MadRocketSci Says:

    I have a few questions about black-holes: I’ll have to work the answers to these out myself one of these days once I have crammed general relativity:

    From the perspective of an outside observer, not an infalling one, does matter ever actually cross the event horizon? I was under the impression that time dilation ensured some arbitrarily red-shifted light is still making it out for all future time. If that’s the case, then when would information be destroyed? There should be some sort of (arbitrarily dim) time-frozen ghost of the entire history of the black hole stuck to the event-horizon surface.

    As for the classical perspective on black holes: matter does end up inside the event horizon somehow, so why can black holes have charge and magnetic moments that influence events outside the hole? Apparently some observed black holes seem to act as if they have powerful magnetic fields. But if photons can’t escape the hole, then dynamically how does this work, if “virtual photons” are what carry changes to electric and magnetic fields from sources?

    (Even if you think of all the matter as time frozen, arbitrarily redshifted, and stuck to the horizon, it seems like a charge having the same sort of influence as in flat space-time would be problematic, since all the “virtual photons” should end up arbitrarily redshifted as well.)

  69. MadRocketSci Says:

    Braithwaite Prendergast #46: “The measure of entropy is discretionary. One measure of entropy is to say that it is zero for water at 0° C. When water turns to ice, it can be arbitrarily said that it has no entropy.”

    Can you elaborate on the justification for this? Are you thinking of a particular rigorous definition of entropy in physics? I would say the most fundamental definition of entropy is usually understood to be the statistical mechanics one, S = k log Omega, where S is the entropy associated with a “macrostate”, Omega is the number of possible “microstates” associated with that macrostate, and k is Boltzmann’s constant.

    In my reading on thermodynamics, there is some arbitrariness in the choice of your measure over which you evaluate how microstates are weighted when summing or integrating.

    In classical statistical mechanics, you always have some continuous state space. If you want to compare the phase volumes of two macrostates, then log(Omega2/Omega1) = log(Omega2) – log(Omega1). You always end up having to compare with some reference state, though, to get an answer that isn’t infinite.

    (Actually, one question I’ve always had about quantum statistical mechanics: Why do you just count pure states consistent with some energy/volume? Why don’t you do something like integrate over all mixed states with expected energy and volume values consistent with the macrostate’s energy and volume? That seems to me like it would be more representative of the phase volume of the state space. All my engineering textbooks just seem to blow past the justification for this at high speed.)

  70. MadRocketSci Says:

    Here is an interesting exchange that seems to be referring to what I was trying to talk about above:

    Here, Everett is pointing out to Jaynes that you need to define a measure when quantifying Shannon entropy.

  71. Shmi Nux Says:

    It’s been almost a month since your last post, Scott, hope everything is OK with you and yours.

  72. Shmi Nux Says:

    Maybe you can comment on the research mentioned in

  73. Ryan Socha Says:

    More please. Love this.

  74. Rahul Says:

    Long break from blogging?

  75. Raoul Ohio Says:

    TSP in Popular Culture:

    Interesting variation of a TSP problem.

  76. Danielle Says:

    Scott, in your PBS “Is There Anything Beyond Quantum Computing?” article, you claimed that:

    “To accelerate exponentially close to the speed of light, you need an exponential amount of energy! And therefore, it will take you exponential time to accelerate to such a speed—*unless* your fuel tank (or whatever else is providing your energy) is exponentially concentrated, in which case it will exceed the Schwarzschild limit and indeed collapse to a black hole.”

    However, isn’t it possible to simply convert mass into a series of quantum bits, store that data into protons, then send that data out as light, then receive that light and rebuild the matter as before? This would allow movement at the speed of light.

  77. Danielle Says:

    Sorry I meant photons not protons.