## Time: Different from space

Over at Cosmic Variance, I learned that FQXi (the organization that paid for me to go to Iceland) sponsored an essay contest on “The Nature of Time”, and the submission deadline was last week.  Because of deep and fundamental properties of time (at least as perceived by human observers), this means that I will not be able to enter the contest.  However, by exploiting the timeless nature of the blogosphere, I can now tell you what I would have written about if I had entered.  (Warning: I can’t write this post without actually explaining some standard CS and physics in a semi-coherent fashion.  I promise to return soon to your regularly-scheduled programming of inside jokes and unexplained references.)

I’ve often heard it said—including by physicists who presumably know better—that “time is just a fourth dimension,” that it’s no different from the usual three dimensions of space, and indeed that this is a central fact that Einstein proved (or exploited? or clarified?) with relativity.  Usually, this assertion comes packaged with the distinct but related assertion that the “passage of time” has been revealed as a psychological illusion: for if it makes no sense to talk about the “flow” of x, y, or z, why talk about the flow of t?  Why not just look down (if that’s the right word) on the entire universe as a fixed 4-dimensional crystalline structure?

In this post, I’ll try to tell you why not.  My starting point is that, even if we leave out all the woolly metaphysics about our subjective experience of time, and look strictly at the formalism of special and general relativity, we still find that time behaves extremely differently from space.  In special relativity, the invariant distance between two points p and q—meaning the real physical distance, the distance measure that doesn’t depend on which coordinate system we happen to be using—is called the interval.  If the point p has coordinates (x,y,z,t) (in any observer’s coordinate system), and the point q has coordinates (x’,y’,z’,t’), then the interval between p and q equals

(x-x’)² + (y-y’)² + (z-z’)² - (t-t’)²

where as usual, 1 second of time equals 3×10⁸ meters of space.  (Indeed, it’s possible to derive special relativity by starting with this fact as an axiom.)

Now, notice the minus sign in front of (t-t’)²?  That minus sign is physics’ way of telling us that time is different from space—or in Sesame Street terms, “one of these four dimensions is not like the others.”  It’s true that special relativity lets you mix together the x,y,z,t coordinates in a way not possible in Newtonian physics, and that this mixing allows for the famous time dilation effect, whereby someone traveling close to the speed of light relative to you is perceived by you as almost frozen in time.  But no matter how you choose the t coordinate, there’s still going to be a t coordinate, which will stubbornly behave differently from the other three spacetime coordinates.  It’s similar to how my “up” points in nearly the opposite direction from an Australian’s “up”, and yet we both have an “up” that we’d never confuse with the two spatial directions perpendicular to it.

(By contrast, the two directions perpendicular to “up” can and do get confused with each other, and indeed it’s not even obvious which directions we’re talking about: north and west? forward and right?  If you were floating in interstellar space, you’d have three perpendicular directions to choose arbitrarily, and only the choice of the fourth time direction would be an “obvious” one for you.)

In general relativity, spacetime is a curved manifold, and thus the interval gets replaced by an integral over a worldline.  But the local neighborhood around each point still looks like the (3+1)-dimensional spacetime of special relativity, and therefore has a time dimension which behaves differently from the three space dimensions.  Mathematically, this corresponds to the fact that the metric at each point has (-1,+1,+1,+1) signature—in other words, it’s a 4×4 matrix with 3 positive eigenvalues and 1 negative eigenvalue.  If space and time were interchangeable, then all four eigenvalues would have the same sign.

But how does that minus sign actually do the work of making time behave differently from space?  Well, because of the minus sign, the interval between two points can be either positive or negative (unlike Euclidean distance, which is always nonnegative).  If the interval between two points p and q is positive, then p and q are spacelike separated, meaning that there’s no way for a signal emitted at p to reach q or vice versa.  If the interval is negative, then p and q are timelike separated, meaning that either a signal from p can reach q, or a signal from q can reach p.  If the interval is zero, then p and q are lightlike separated, meaning a signal can get from one point to the other, but only by traveling at the speed of light.
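The trichotomy above can be made concrete in a few lines of code.  Here is a minimal sketch (my own illustration, using units where c = 1, so that 1 second of time equals 1 unit of space):

```python
# Classify the causal relation between two events p = (x, y, z, t) and
# q = (x', y', z', t'), in units where c = 1.

def interval(p, q):
    """Squared invariant interval, with signature (+, +, +, -)."""
    return sum((a - b) ** 2 for a, b in zip(p[:3], q[:3])) - (p[3] - q[3]) ** 2

def separation(p, q):
    s = interval(p, q)
    if s > 0:
        return "spacelike"   # no signal can get from one event to the other
    if s < 0:
        return "timelike"    # a slower-than-light signal connects them
    return "lightlike"       # only a light-speed signal connects them

print(separation((5, 0, 0, 0), (0, 0, 0, 1)))  # spacelike
print(separation((1, 0, 0, 0), (0, 0, 0, 5)))  # timelike
print(separation((3, 0, 0, 0), (0, 0, 0, 3)))  # lightlike
```

Note that the classification is coordinate-independent: any other observer’s coordinates for the same two events would yield the same answer.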

In other words, that minus sign is what ensures spacetime has a causal structure: two events can stand to each other in the relations “before,” “after,” or “neither before nor after” (what in pre-relativistic terms would be called “simultaneous”).  We know from general relativity that the causal structure is a complicated dynamical object, itself subject to the laws of physics: it can bend and sag in the presence of matter, and even contract to a point at black hole singularities.  But the causal structure still exists—and because of it, one dimension simply cannot be treated on the same footing as the other three.

Put another way, the minus sign in front of the t coordinate reflects what a sufficiently-articulate child might tell you is the main difference between space and time: you can go backward in space, but you can’t go backward in time.  Or: you can revisit the city of your birth, but you can’t (literally) revisit the decade of your birth.  Or: the Parthenon could be used later to store gunpowder, and the Tower of London can be used today as a tourist attraction, but the years 1700-1750 can’t be similarly repurposed for a new application: they’re over.

Notice that we’re now treating space and time pragmatically, as resources—asking what they’re good for, and whether a given amount of one is more useful than a given amount of the other.  In other words, we’re now talking about time and space like theoretical computer scientists.  If the difference between time and space shows up in physics through the (-1,+1,+1,+1) signature, the difference shows up in computer science through the famous

P ≠ PSPACE

conjecture.  Here P is the class of problems that are solvable by a conventional computer using a “reasonable” amount of time, meaning a number of steps that increases at most polynomially with the problem size.  PSPACE is the class of problems solvable by a conventional computer using a “reasonable” amount of space, meaning a number of memory bits that increases at most polynomially with the problem size.  It’s evident that P ⊆ PSPACE—in other words, any problem solvable in polynomial time is also solvable in polynomial space.  For it takes at least one time step to access a given memory location—so in polynomial time, you can’t exploit more than polynomial space anyway.  It’s also clear that PSPACE ⊆ EXP—that is, any problem solvable in polynomial space is also solvable in exponential time.  The reason is that a computer with K bits of memory can only be in 2^K different configurations before the same configuration recurs, in which case the machine will loop forever.  But computer scientists conjecture that PSPACE ≠ P—that is, polynomial space is more powerful than polynomial time—and have been trying to prove it for about 40 years.
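The PSPACE ⊆ EXP argument is pure pigeonhole counting, and can be demonstrated with a toy model (my own sketch: a “machine” whose entire configuration is a K-bit counter, not a real Turing machine):

```python
# With K bits of memory there are only 2**K distinct configurations, so
# any deterministic computation that runs longer than 2**K steps must
# revisit a configuration -- and from then on it loops forever.

def step(config, K):
    # Hypothetical transition rule for the toy machine: increment the
    # K-bit memory modulo 2**K.
    return (config + 1) % (2 ** K)

def steps_until_repeat(K):
    """Run from configuration 0 until some configuration recurs."""
    seen = set()
    config, steps = 0, 0
    while config not in seen:
        seen.add(config)
        config = step(config, K)
        steps += 1
    return steps

for K in [3, 5, 8]:
    n = steps_until_repeat(K)
    assert n <= 2 ** K   # the pigeonhole bound
    print(K, n)
```

For this particular transition rule the bound is tight: the machine visits all 2^K configurations before repeating, which is why a polynomial-space machine can genuinely need exponential time.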

(You might wonder how P vs. PSPACE relates to the even better-known P vs. NP problem.  NP, which consists of all problems for which a solution can be verified in polynomial time, sits somewhere between P and PSPACE.  So if P ≠ NP, then certainly P ≠ PSPACE as well.  The converse is not known—but a proof of P ≠ PSPACE would certainly be seen as a giant step toward proving P ≠ NP.)

So from my perspective, it’s not surprising that time and space are treated differently in relativity.  Whatever else the laws of physics do, presumably they have to differentiate time from space somehow—since otherwise, how could polynomial time be weaker than polynomial space?

But you might wonder: is reusability really the key property of space that isn’t shared by time—or is it merely one of several differences, or a byproduct of some other, more fundamental difference?  Can we adduce evidence for the computer scientist’s view of the space/time distinction—the view that sees reusability as central?  What could such evidence even consist of?  Isn’t it all just a question of definition at best, or metaphysics at worst?

On the contrary, I’ll argue that the computer scientist’s view of the space/time distinction actually leads to something like a prediction, and that this prediction can be checked, not by experiment but mathematically.  If reusability really is the key difference, then if we change the laws of physics so as to make time reusable—keeping everything else the same insofar as we can—polynomial time ought to collapse with polynomial space.  In other words, the set of computational problems that are efficiently solvable ought to become PSPACE.  By contrast, if reusability is not the key difference, then changing the laws of physics in this way might well give some complexity class other than PSPACE.

But what do we even mean by changing the laws of physics so as to “make time reusable”?  The first answer that suggests itself is simply to define a “time-traveling Turing machine,” which can move not only left and right on its work tape, but also backwards and forwards in time.  If we do this, then we’ve made time into another space dimension by definition, so it’s not at all surprising if we end up being able to solve exactly the PSPACE problems.

But wait: if time is reusable, then “when” does it get reused?  Should we think of some “secondary” time parameter that inexorably marches forward, even as the Turing machine scuttles back and forth in the “original” time?  But if so, then why can’t the Turing machine also go backwards in the secondary time?  Then we could introduce a tertiary time parameter to count out the Turing machine’s movements in the secondary time, and so on forever.

But this is stupid.  What the endless proliferation of times is telling us is that we haven’t really made time reusable.  Instead, we’ve simply redefined the time dimension to be yet another space dimension, and then snuck in a new time dimension that behaves in the same boring, conventional way as the old time dimension.  We then perform the sleight-of-hand of letting an exponential amount of the secondary time elapse, even as we restrict the “original” time to be polynomially bounded.  The trivial, uninformative result is then that we can solve PSPACE problems in “polynomial time.”

So is there a better way to treat time as a reusable resource?  I believe that there is.  We can have a parameter that behaves like time in that it “never changes direction”, but behaves unlike time in that it loops around in a cycle.  In other words, we can have a closed timelike curve, or CTC.  CTCs give us a dimension that (1) is reusable, but (2) is also recognizably “time” rather than “space.”

Of course, no sooner do we define CTCs than we confront the well-known problem of dead grandfathers.  How can we ensure that the events around the CTC are causally consistent, that they don’t result in contradictions?  For my money, the best answer to this question was provided by David Deutsch, in his paper “Quantum Mechanics near Closed Time-like Lines” (unfortunately not online).  Deutsch observed that, if we allow the state of the universe to be probabilistic or quantum, then we can always tell a consistent story about the events inside a CTC.  So for example, the resolution of the grandfather paradox is simply that you’re born with 1/2 probability, and if you’re born you go back in time and kill your grandfather, therefore you’re born with 1/2 probability, etc.  Everything’s consistent; there’s no paradox!

More generally, any stochastic matrix S has at least one stationary distribution—that is, a distribution D such that S(D)=D.  Likewise, any quantum-mechanical operation Q has at least one stationary state—that is, a mixed state ρ such that Q(ρ)=ρ.  So we can consider a model of closed timelike curve computation where we (the users) specify a polynomial-time operation, and then Nature has to find some probabilistic or quantum state ρ which is left invariant by that operation.  (There might be more than one such ρ—in which case, being pessimists, we can stipulate that Nature chooses among them adversarially.)  We then get to observe ρ, and output an answer based on it.
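Deutsch’s resolution of the grandfather paradox is small enough to compute directly.  Here is a sketch with two states and the swap dynamics described above (my own illustration of the fixed-point idea, not code from the paper):

```python
# Deutsch's fixed-point resolution of the grandfather paradox.
# States: index 0 = "you are born", index 1 = "you are not born".
# The CTC dynamics swap them: if born, you go back and kill your
# grandfather (so: not born); if not born, he survives (so: born).

def apply(S, D):
    """Apply stochastic matrix S (S[i][j] = P(next=i | current=j)) to D."""
    return [sum(S[i][j] * D[j] for j in range(len(D))) for i in range(len(S))]

S = [[0.0, 1.0],
     [1.0, 0.0]]

# A deterministic history is inconsistent: "born" maps to "not born".
assert apply(S, [1.0, 0.0]) == [0.0, 1.0]

# But the probabilistic state D = (1/2, 1/2) satisfies S(D) = D:
# you are born with probability 1/2, and the story is consistent.
D = [0.5, 0.5]
assert apply(S, D) == D
print(D)
```

The general fact invoked in the text—every stochastic matrix has at least one stationary distribution—guarantees that Nature can always find such a D, whatever polynomial-time operation we specify.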

So what can be done in this computational model?  Long story short: in a recent paper with Watrous, we proved that

P_CTC = BQP_CTC = PSPACE.

Or in English, the set of problems solvable by a polynomial-time CTC computer is exactly PSPACE—and this holds whether the CTC computer is classical or quantum.  In other words, CTCs make polynomial time equal to polynomial space as a computational resource.  Unlike in the case of “secondary time,” this is not obvious from the definitions, but has to be proved.  (Note that to prove PSPACE ⊆ P_CTC ⊆ BQP_CTC ⊆ EXP is relatively straightforward; the harder part is to show BQP_CTC ⊆ PSPACE.)

The bottom line is that, at least in the computational world, making time reusable (even while preserving its “directionality”) really does make it behave like space.  To me, that lends some support to the contention that, in our world, the fact that space is reusable and time is not is at the core of what makes them different from each other.

I don’t think I’ve done enough to whip up controversy yet, so let me try harder in the last few paragraphs.  A prominent school of thought in quantum gravity regards time as an “emergent phenomenon”: something that should not appear in the fundamental equations of the universe, just like hot and cold, purple and orange, maple and oak don’t appear in the fundamental equations, but only at higher levels of organization.  Personally, I’ve long had trouble making sense of this view.  One way to explain my difficulty is using computational complexity.  If time is “merely” an emergent phenomenon, then is the presumed intractability of PSPACE-complete problems also an emergent phenomenon?  Could a quantum theory of gravity—a theory that excluded time as “not fundamental enough”—therefore be exploited to solve PSPACE-complete problems efficiently (whatever “efficiently” would even mean in such a theory)?  Or maybe computation is also just an emergent phenomenon, so the question doesn’t even make sense?  Then what isn’t an emergent phenomenon?

I don’t have a knockdown argument, but the distinction between space and time has the feel to me of something that needs to be built into the laws of physics at the machine-code level.  I’ll even venture a falsifiable prediction: that if and when we find a quantum theory of gravity, that theory will include a fundamental (not emergent) distinction between space and time.  In other words, no matter what spacetime turns out to look like at the Planck scale, the notion of causal ordering and the relationships “before” and “after” will be there at the lowest level.  And it will be this causal ordering, built into the laws of physics, that finally lets us understand why closed timelike curves don’t exist and PSPACE-complete problems are intractable.

I’ll end with a quote from a June 2008 Scientific American article by Jerzy Jurkiewicz, Renate Loll and Jan Ambjorn, about the “causal dynamical triangulations approach” to quantum gravity.

What could the trouble be? In our search for loopholes and loose ends in the Euclidean approach [to quantum gravity], we finally hit on the crucial idea, the one ingredient absolutely necessary to make the stir fry come out right: the universe must encode what physicists call causality. Causality means that empty spacetime has a structure that allows us to distinguish unambiguously between cause and effect. It is an integral part of the classical theories of special and general relativity.

Euclidean quantum gravity does not build in a notion of causality. The term “Euclidean” indicates that space and time are treated equally. The universes that enter the Euclidean superposition have four spatial directions instead of the usual one of time and three of space. Because Euclidean universes have no distinct notion of time, they have no structure to put events into a specific order; people living in these universes would not have the words “cause” or “effect” in their vocabulary. Hawking and others taking this approach have said that “time is imaginary,” in both a mathematical sense and a colloquial one. Their hope was that causality would emerge as a large-scale property from microscopic quantum fluctuations that individually carry no imprint of a causal structure. But the computer simulations dashed that hope.

Instead of disregarding causality when assembling individual universes and hoping for it to reappear through the collective wisdom of the superposition, we decided to incorporate the causal structure at a much earlier stage. The technical term for our method is causal dynamical triangulations. In it, we first assign each simplex an arrow of time pointing from the past to the future. Then we enforce causal gluing rules: two simplices must be glued together to keep their arrows pointing in the same direction. The simplices must share a notion of time, which unfolds steadily in the direction of these arrows and never stands still or runs backward.

By building in a time dimension that behaves differently from the space dimensions, the authors claim to have solved a problem that’s notoriously plagued computer simulations of quantum gravity models: namely, that of recovering a spacetime that “behave[s] on large distances like a four-dimensional, extended object and not like a crumpled ball or polymer”.  Are their results another indication that time might not be an illusion after all?  Time (hopefully a polynomial amount of it) will tell.

### 64 Responses to “Time: Different from space”

1. Anon Says:

What about having parallel computation?  We traditionally assume that we have a fixed number of processors.  But if we assume an exponential number of processors which run in poly time, even then we can solve PSPACE problems (I hope).  Doesn’t this make time similar to space?  Isn’t running parallel processors like reusing time?

Are there any models of physics where we can run more threads than just a constant number?

2. Scott Says:

Anon: That’s a great point! Allowing exponential parallelism is indeed another way to make polynomial time behave like polynomial space. On the other hand, “lack of exponential parallelism” doesn’t seem like a property of time—that is, it seems like we should be able to tweak the laws of physics to allow as many parallel processors as we want, without changing anything about the concept of time. But maybe that’s just a semantic distinction.

There are results from the 70s showing that physical theories with unlimited-precision real numbers (as well as the power to read out individual bits in those real numbers’ binary representations, e.g. using the floor function) can be used to simulate an exponential number of processors, and thereby simulate PSPACE in polynomial time. To me, that always seemed like a further argument against physical theories with unlimited-precision real numbers (that can be read and manipulated at will)!
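The underlying trick can be illustrated in a few lines (a toy sketch of my own, with Python’s exact rationals standing in for the unlimited-precision reals of those models): a single real number packs in arbitrarily many bits, and the floor function reads any one of them out in a single arithmetic step.

```python
# One "real number" can encode an arbitrarily long bit string; the floor
# function then extracts any chosen bit in O(1) arithmetic operations.
# Fractions stand in here for the exact reals an analog machine would need.
import math
from fractions import Fraction

def bit(x, k):
    """The k-th binary digit of x after the point (k = 0, 1, 2, ...)."""
    return math.floor(x * 2 ** (k + 1)) % 2

bits = [1, 0, 1, 1, 0, 0, 1, 0]
x = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

assert [bit(x, k) for k in range(len(bits))] == bits
print(x)  # 89/128
```

With n processors’ worth of data folded into one number like this, arithmetic on the number acts on all the encoded bits at once—which is the sense in which exact-real models smuggle in exponential parallelism.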

(By contrast, note that quantum mechanics is a physical theory with unlimited-precision real numbers that can’t be read and manipulated at will. Alright, complex numbers, although for computational purposes it doesn’t make a difference.)

3. Ian Durham Says:

Tom Moore clarifies the difference by pointing out that there are essentially three different kinds of time in the broadest sense of the term: coordinate time, the spacetime interval, and proper time. Invariant quantities are usually defined in terms of the latter two in some manner. So, when you say ‘t’ you’re usually talking about coordinate time.

The coordinate time between two events is that which is measured by two inertial clocks (i.e. in an inertial frame) with one clock present at each event (a single clock may be used if the two events occur at the same spatial location).

The spacetime interval is the time between two events as measured by a single inertial clock.

The proper time is the time between two events as measured by a single clock, not necessarily inertial. In this definition, the spacetime interval is the longest proper time between two events.

Obviously there are instances in which one can imagine a single clock measuring all three types of time. Nonetheless, it makes it very clear that not only is time different, but that, if you continue through the extended argument, the so-called ‘flow’ (arrow) of time is related to the ordering of events in certain coordinate systems.

4. Anon Says:

Then maybe we must start with physics theories which start with discrete objects instead of the continuum. Almost all of physics deals with continuous space or time. Even basic quantum computing assumes that. (Though if I remember correctly there are results which state that finite precision is enough for the existing non-trivial quantum algorithms.)

How do we prove that the ability to handle infinite-precision real numbers is not actually an engineering issue but something more fundamental?

5. Job Says:

If we restrict space to not be reusable, does PSPACE also become equal to P?

6. Scott Says:

Job: Great question! Answer: yes. Proof: exercise for you.

7. Anon Says:

It seems obvious that adding closed timelike curves should not affect the class PSPACE, but is it actually obvious, or are there any subtleties to showing that PSPACE_{CTC} = PSPACE?

8. wolfgang Says:

Scott,

does the dimensionality of space make any difference to your argument? In other words, can you make an argument that space has to have at least 3 dimensions or something like that?

9. Scott Says:

Anon: The quantum computing model does not assume continuous space and time. It assumes continuous amplitudes, which is extremely different! Amplitudes behave much more like probabilities than like physical observables.

Also, the Solovay-Kitaev Theorem and related results tell us that finite precision suffices for any quantum algorithm whatsoever. (The Threshold Theorem tells us that this finite precision can even be a constant independent of the problem size.)

My own view is that quantum gravity—in the form of the holographic bound, the Bekenstein bound, and related results—is what ultimately imposes a limit on our ability to store and manipulate real numbers to arbitrary precision. In particular, were it not for quantum gravity effects, I see no reason why we wouldn’t in principle be able to manipulate real numbers to arbitrary precision. (Saying the problem is “noise” is a technological answer, not a fundamental physics one.)

I admit that the above view seems to be a minority one among both computer scientists and physicists—but that’s just because most of them don’t take the question seriously! If they did take it seriously, they would agree with me. 🙂

10. Scott Says:

Anon: As you guessed, PSPACE_CTC = PSPACE. The proof is basically the same as the proof that P_CTC = PSPACE.

11. Scott Says:

Wolfgang: None of this has anything to do with the number of space dimensions, which is another one of my all-time favorite questions. FQXi should have their next contest about it!

12. AwesomeRobot Says:

The puzzles in some levels of the XBLA game “Braid” operate similarly to the “time-traveling Turing machine” you mention. There is a secondary time, and some worlds even have a tertiary time.

Also the flash game “Chronotron” allows you to solve puzzles by jumping back in time and cooperating with your past selves. The primary and secondary versions of time are even built into the scoring system there.

In both games, the other versions of time are expanded into real time so you can experience them in causal reality. Of course, they just operate by storing and repeating history, but you can sort of imagine how it could work if it were possible.

13. Cody Says:

Beyond treating a time dimension as circular or as traversable (like the spatial dimensions), can anyone think of a consistent concept of two time dimensions that would behave similar to our intuitive notions of time? I’m guessing no…? (Or maybe just what the consequences would be of having two time dimensions that each behave the way the one we currently think of appears to behave.)

14. Job Says:

With time as another spatial dimension, why would there be conservation of matter and energy along the t axis and not along the x, y or z? When walking along the x axis at every step (x1, x2, x3…) wouldn’t the 3D space given by (y, z, t) at those points have to also be equivalent in the matter and energy they contain?

15. Job Says:

So, if space isn’t reusable, then PSPACE ⊆ P, since a TM can only move forward on the tape, and since it writes and reads at most a polynomial number of bits, it takes at most a polynomial amount of time to finish.

Also, if space is reusable then P isn’t really affected, and P ⊆ PSPACE still holds because any TM in P still can’t write more than a polynomial number of bits. So PSPACE = P. I admit I didn’t try to figure out the answer before asking the question.

16. ScentOfViolets Says:

IIRC, Minkowski four-space does not require that all events be ordered, only that there is an invariant ordering on events inside the light cone. The question then is, can this type of metric be imposed by the guarantee that, on any given class of events, there is at least one pair for which the ordering is invariant? Certainly this is necessary, but is it sufficient?

My other question is what you mean by a backwards-in-time Turing machine, and what a ‘tertiary time’ would mean in this context. When you say ‘backwards-in-time’, do you mean that the tape-reader literally travels in a retrograde motion, or does this include time travel by going from an event at time t to an event t-k, where k>1? A discontinuous jump as it were? In this case, travel forward is always progressing from t to t+1, but travel from t to t-1 is strictly disallowed. In either case, I don’t see why one requires a tertiary time, by which I assume you mean some sort of super-time.

My mental model is something like a game of life, only instead of a square grid, it’s actually a volume of cubes, and instead of coordinates {x,y,t}, the coordinates are {x,y,z}. That is, the cells in one layer correspond to the state of the Conway game, but instead of the cells changing state in one layer, they determine the state of the cells in the layer adjacent(above) to them by the usual transition rules. In this model, as in the real universe, things happen in just one direction. But one can also set up connections so that the state of the cells in a finite range of x- and y-coordinates in layer z are transferred to the same range of coordinates in layer z-k. So one gets, to the inhabitants of this little universe at least, all of the conditions you ask for.

Am I missing something?

17. Scott Says:

ScentOfViolets: I didn’t understand your first question—what do you mean by “the guarantee that, on any given class of events, there is at least one pair for which the ordering is invariant”? I’d certainly be interested if the Minkowski metric (or something like it) could be derived from axioms purely having to do with causality.

As for your second question, I was just thinking of a Turing machine head that, at each step, can move one square left or right, and also one step backwards or forwards in the time parameter. (That is, from t to t+1 or t-1.) So basically, we treat the time dimension like just another space dimension, and have a 2D tape head. And we can implicitly define a “real” time parameter t’, which unlike t really does get incremented at each computation step.

Though presumably, we should still make one passing nod to temporality: namely, when we transition from time t to time t+1, the contents of the tape should get “copied automatically” (rather than replaced by something completely different, as they could with a spatially 2D Turing machine tape). But it’s not hard to see that this makes little difference for complexity: for we can just run T=poly(n) steps of a computation, then go back to time step 0 and run another T steps, then go back to step 0 and run another T steps, and so on until we’re finished.

But actually, now that I think about it, there is a technicality: how do we copy the intermediate results from step T back to step 0? The tape head might not be able to keep shuttling information back and forth—since every time it goes forward in time, it presumably overwrites whatever was “previously” stored at the Tth time step.

However, one way of solving this problem would be to use Barrington’s Theorem, that width-5 branching programs compute NC^1 (and the corollary, pointed out by Cai and Furst, that “width-5 bottleneck Turing machines” compute all of PSPACE).  This implies, in particular, that every time the tape head goes backwards in time to step 0, it “need not remember any more information than it can carry on its back” (i.e., it only needs to remember a constant number of bits, which it can store in its own internal state).  Even with that restriction, such a machine can compute all of PSPACE.  And in the other direction, it can clearly be simulated in PSPACE.

18. Leonard Ornstein Says:

With respect to the “Nature of Time”, and starting with axioms having to do with causality – but on ‘another’ tack:

Increase in entropy is often characterized as typically being associated with increase in ‘disorder’ and a smoothing out and ‘reducing’ of temperature differences of an isolated process which is allowed to proceed to a state of equilibrium. Such increases usually occur spontaneously. ‘Paradoxically’, entropy increase often can be associated with large increases in ordering, like the phase change of crystalization on cooling. At least this makes this ‘order to disorder’ characterization especially unsatisfying.

Born interpreted the square of amplitudes of the waves described by Schrödinger’s equation as probabilities. Shannon described information in terms of the logarithm of probabilities—and information has been ‘equated’ with entropy.

In statistical mechanics, entropy is expressed as being proportional to the logarithm of the number of possible ‘arrangements’ of the ensemble of ‘particles’ constituting a system. Increase in entropy is associated with the statistical tendency for a closed system (of an ensemble of more than a ‘few particles’) to spontaneously transition from less probable to more probable states. This formulation seems to eliminate the ‘paradoxes’.

In models of a “Cycling” steady-state universe or of oscillating universes, entropy (non-paradoxically) increases both during collapses ‘towards singularities’ as well as during expansion phases, and no zero entropy ‘origin’ is required.

Doesn’t this suggest that somehow starting with a primitive axiom having to do with causality, to remove the apparent ‘reversibility’ of classical and quantum physics, might unambiguously set the direction of time’s arrow?

19. Pascal Koiran Says:

Another puzzle about time is: why does the past behave so differently from the future?
For instance, we can remember the past but, surprisingly enough, we can’t remember the future (except the fortune tellers among us, of course).

20. Vi Hart Says:

Pascal Koiran’s puzzle used to be the only difference I thought there was between past and future. Thank you Scott for teaching me better!

I forget where I learned about this, but I heard that while in our culture we see the future as ahead and the past behind us, in another they saw the past as ahead (because we can see it) and the future behind us (because we can’t).

In this sense, the answer to Pascal’s puzzle is that we can only face one direction at a time, whether spatial or temporal (but as turning around is rather difficult in one, I know this is insufficient).

Now someone who knows more about physics will have to help me out. We used to think that matter “saw” in both directions (we could use the laws of physics to figure out the future and past states of things), but now?

21. Chris W. Says:

Remark by Scott: I’d certainly be interested if the Minkowski metric (or something like it) could be derived from axioms purely having to do with causality.

1. See the literature of causal set theory, sampled here.

2. See these papers co-authored by Prakash Panangaden:

22. Chris W. Says:

PS: It appears that gr-qc/0407094 or something close to it became a book chapter:

We prove that a globally hyperbolic spacetime with its causality relation is a bicontinuous poset whose interval topology is the manifold topology. From this one can show that from only a countable dense set of events and the causality relation, it is possible to reconstruct a globally hyperbolic spacetime in a purely order theoretic manner. The ultimate reason for this is that globally hyperbolic spacetimes belong to a category that is equivalent to a special category of domains called interval domains. We obtain a mathematical setting in which one can study causality independently of geometry and differentiable structure, and which also suggests that spacetime emerges from something discrete.

23. FJD Says:

Space is reusable in the sense that a domain of finite area (polynomial in the problem size) is all that is needed to solve a PSPACE problem because you can keep reusing chunks of the space. But a PSPACE problem might use an amount of energy exponential in the problem size (it must if the amount of time is exponential in the problem size).

If time can only go forward, time is a proxy for energy because the two are related by a polynomial. Let space(t) be the amount of space the algorithm uses at time t; unitpower(t), the power per unit area; T, the total amount of processing time in our forward-only universe. Then the amount of energy consumed in solving the problem is E = \int_0^T unitpower(t) space(t) dt. If unitpower(t) is bounded by a constant (a reasonable assumption in a physical computer), then E is a polynomial in the problem size times T.
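As a sanity check of the bound, here is a toy numerical version of FJD’s integral, with constant stand-ins for unitpower(t) and space(t) (all numbers illustrative, and the small T is just so the loop terminates; a real PSPACE computation might have T ~ 2^n):

```python
def energy(unitpower, space, T, dt=1.0):
    """Riemann-sum approximation of E = integral_0^T unitpower(t)*space(t) dt."""
    return sum(unitpower(i * dt) * space(i * dt) * dt for i in range(int(T / dt)))

n = 100                      # problem size (illustrative)
c = 1.0                      # constant cap on power per unit area
space = lambda t: n ** 2     # space usage capped by a polynomial in n
T = 1000                     # total processing time (kept small so this runs)

E = energy(lambda t: c, space, T)
assert E <= c * n ** 2 * T   # E is (polynomial in n) * T, as claimed
print(E)
```
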

In the case of the CTC-endowed computer, energy and processing time are probably not related by a polynomial; indeed, energy might still be exponential in the problem size (I speculate), while time is polynomial (as you have found). One might then (probably many have) argue that energy, and not space or time, is the fundamental unit of computational complexity.

24. Job Says:

I wonder then if PuSPACE_PrTIME (the set of problems solvable by a machine with a polynomial-unreusable-space and a polynomial-reusable-time) should be equal to P. This because:

P – can reuse polynomial space but cannot reuse time and is restricted to use at most polynomial time
PuSPACE_PrTIME – can reuse polynomial time but cannot reuse space and is restricted to use at most polynomial space.

Let me think about that for a second.

25. Job Says:

With unreusable space, a space restriction is also a time restriction (right?), because we can’t really perform EXP operations if we can only perform P read/writes – what would the operations act on?

We don’t see this with reusable space.

26. Vi Hart Says:

Ok, so we need a timeline to keep track of what’s going on in reusable space, but not of unreusable space, just like we need a new timeline to keep track of reusable time, but no new timeline to keep track of unreusable time?

Though operations using unreusable space still take time, so do ones with reusable space and unreusable time take up metatime?

(You’ll have to forgive me– I am a native English speaker and so don’t understand a lot of the above.)

27. Evan Says:

@FJD:

I am pretty sure (and I strongly suspect Scott will correct me if I am wrong) that it is known that any computation can be done reversibly with only a “small” amount of extra space and steps required. This would mean that energy can’t be a fundamental unit of complexity by itself. To have perfect physical reversibility (instead of just logical reversibility) you have to proceed infinitely slowly, but I don’t think there is a limit on how close to perfect reversibility you can achieve at a fixed speed.

28. Scott Says:

Evan: You’re not wrong. By Landauer’s principle, to get arbitrarily close to perfect reversibility (at a fixed speed) you “merely” need to cool your machine arbitrarily close to absolute zero. On the other hand, cooling the machine presumably also requires energy.
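For concreteness, here is a sketch of the Landauer bound, k_B · T · ln 2 of dissipation per erased bit (the 300 K and 0.01 K temperatures are just illustrative, showing how the cost vanishes as you cool toward absolute zero):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_limit(T_kelvin):
    """Minimum energy dissipated per erased bit: k_B * T * ln 2."""
    return k_B * T_kelvin * math.log(2)

print(landauer_limit(300))    # ~2.87e-21 J per bit at room temperature
print(landauer_limit(0.01))   # ~9.6e-26 J per bit near absolute zero
```
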

29. Moshe Says:

Let’s see if I understand the logic here: if your mathematical model (Turing machines) doesn’t satisfy one of the experimentally validated elements of reality (Lorentz invariance), surely that is a statement about the model, not about reality, right?

I also don’t see what is the big surprise, non-relativistic model yielding non-relativistic results. I happen to share your opinion that time really is different from space, but you need to give the opposite hypothesis some fighting chance.

30. Job Says:

There’s always a relative point of reference such that we’re not reusing space, since there’s a point of reference such that our computers are moving through space at some speed.

When we talk about space reusability, are we talking about just the “reusability of matter”, or are we talking about the “reusability of space in general” (independently of what it contains)?

31. Job Says:

Actually a 3 GHz CPU can write a value to a register 3 billion times per second (or something like that), so unless there’s a point of reference relative to which we’re moving faster than 3,000,000,000*R per second, where R is the length of the physical space occupied by a register, we’re really reusing space. Imagine that.

32. Douglas Knight Says:

I think that the references to fundamental physics are misleading. There are a lot of unclear assumptions between fundamental physics and its interpretation. In particular, the arrow of time shows up in our models of computation, but not physical law. Maybe the arrow of time, in the form of initial conditions of exploitable order, is the only ingredient needed to get from fundamental physics to models of computation, but I’ve never seen it spelled out.

In other words, Scott did not “leave out all the woolly metaphysics about our subjective experience of time, and look strictly at the formalism of special and general relativity”; quite the opposite.

33. Carter Pewterschmidt Says:

I think some of the confusion (arbitrariness) regarding the minus sign on the time dimension is partially resolved through the framework of geometric (Clifford) algebra as popularized by David Hestenes (although the book I personally found tremendously useful was Geometric Algebra for Physicists by Doran and Lasenby).

With geometric algebra, the minus sign arises naturally, whereas it is somewhat ‘forced’ in the standard derivations. Something to do with the correspondence between spinors and even elements of a Clifford algebra, and somehow needing 4π radians to complete a rotation. Maybe someone else can explain better…

34. Scott Says:

Moshe, I must not have expressed myself well. I wasn’t in any way questioning Lorentz invariance, or claiming that reality has to conform to some computational model that isn’t Lorentz-invariant.

By the way, the Turing machine model is neither Lorentz-invariant nor non-Lorentz-invariant! The question simply never comes up, since you never need to consider more than one reference frame. But certainly Turing machines are 100% happy to live in a relativistic universe.

I was mostly just trying (unsuccessfully, I guess 🙁 ) to do a little pedagogy about how time behaves differently from space, in computational complexity as well as SR. In both cases, the difference seems to boil down to one kind of dimension being “reusable” and the other not. This is an elementary (but important) point, which I don’t think has anything directly to do with Lorentz invariance. The same difference between time and space shows up in a Galilean universe as in a Lorentzian one.

35. Scott Says:

In particular, the arrow of time shows up in our models of computation, but not physical law.

Douglas, I don’t agree with the above claim. All the standard complexity classes are perfectly happy to be defined using reversible operations only—in which case, the arrow of time makes no more explicit appearance in computer science than it does in physics. (In other words, you could choose your input at the Big Crunch and read off the corresponding output at the Big Bang, exactly the same way you could solve a physics problem by choosing final conditions and then working backwards.)

36. Moshe Says:

Scott, I get the point about the re-usability and sympathize with it; I am mostly concerned about the “fighting chance” issue. The only reason you would want to make parallels between space and time is Lorentz invariance, and in a context without Lorentz invariance the parallels are not fully displayed.

Incidentally, in my mind Turing machine is non-relativistic, though I can also see the view point of it being a-relativistic. Basically in a Turing machine the currency for space and time represents a fixed structure (different for space and time) which is independent of the state your Turing machine finds itself in (which in a model for reality would include information about its state of motion). This makes it morally Newtonian for me, just semantics I guess.

37. Scott Says:

Hi Moshe,

I am mostly concerned about the “fighting chance” issue.

I try to be a decent sportsman—I gave polynomial time with CTCs a fighting chance to equal some complexity class other than PSPACE! On the other hand, setting aside the “relativistic Turing machine” (which spends billions of years on Earth solving your computational problem as you fly around at relativistic speed), I’m really not sure how to incorporate SR into complexity theory. Complexity theory just doesn’t seem to care too much about the questions SR answers, the same way it doesn’t care too much about details like the number of spatial dimensions. Any ideas?

Anyway, it seems like we basically agree, differing only in what we like to emphasize. Personally, I’ve read so many popular books and articles that wax poetic about the “unity of space and time,” that I thought it was worth calling some attention to the separate personalities they still retain in SR…

in my mind Turing machine is non-relativistic, though I can also see the view point of it being a-relativistic.

I’d suggest “irrelativistic.” 🙂

(Likewise, some of us have taken to calling complexity results “irrelativizing” if they neither relativize nor fail to relativize.)

38. Moshe Says:

Yeah, I do like your central message; I’d phrase it as “timelike is different from spacelike”, but that’s only nitpicking. Anyhow, compared to all the things that people wax poetic about in popular books and articles, the unity of space and time is pretty tame, I would think.

39. Douglas Knight Says:

Yes, you can ask these questions, but viewing time as a resource to be conserved has something to do with our subjective experience. Our ability to perform certain operations, in particular, our ability to set input, as opposed to output, reflects the arrow of time.

Let me retreat to a less woolly topic:
Yes, time is special in that there’s only one time dimension, hence two directions, but you never really made use of the fact that the directions of space are connected. What if we lived in 1+1 dimensions? Wouldn’t there be complete symmetry between space and time?

40. Scott Says:

What if we lived in 1+1 dimensions? Wouldn’t there be complete symmetry between space and time?

No, there wouldn’t. (You’re right that the number of space dimensions never enters the picture here.) I assume you were asking: given that the interval in 1+1 dimensions is (x-x’)² - (t-t’)², why can’t we (at least formally) apply some symmetry transformation that interchanges x and t? Basically, because the laws of physics are not symmetric under negation of intervals (i.e., replacing spacelike intervals by timelike intervals and vice versa). Timelike intervals allow the propagation of causal effects, and spacelike intervals don’t, and that’s a real, genuine physical difference between the two.
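The sign of the interval can be checked numerically in 1+1 dimensions (units with c = 1; the sample points and boost velocities are arbitrary choices of mine):

```python
import math

def interval2(p, q):
    """Squared invariant interval in 1+1 dimensions: (dx)^2 - (dt)^2.
    Positive = spacelike, negative = timelike, zero = lightlike."""
    (x1, t1), (x2, t2) = p, q
    return (x2 - x1) ** 2 - (t2 - t1) ** 2

def boost(p, v):
    """Standard Lorentz boost with velocity v (|v| < 1, c = 1)."""
    x, t = p
    g = 1.0 / math.sqrt(1.0 - v * v)
    return (g * (x - v * t), g * (t - v * x))

p, q = (0.0, 0.0), (1.0, 2.0)   # timelike separation: 1 - 4 = -3
assert interval2(p, q) < 0

# The interval is Lorentz-invariant: no boost turns timelike into spacelike.
for v in (0.1, 0.5, 0.9):
    assert abs(interval2(boost(p, v), boost(q, v)) - interval2(p, q)) < 1e-9
```
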

Our ability to perform certain operations, in particular, our ability to set input, as opposed to output, reflects the arrow of time.

No, it reflects the definition of input as that which we set! 🙂 Even if the “input” is in the future and the “output” is in the past, if we’re setting the “input” and reading the “output”, then the “input” is still the input and the “output” is still the output.

41. Douglas Knight Says:

But what is causality, really?
You can set initial conditions on a space-like hypersurface and causality tells you how those initial conditions propagate. But I think that you could equally well set “initial” conditions on a time-like hypersurface. Of course, x-like hypersurfaces only exist when there’s only one non-x direction, but 1+1 has two notions of causality.

42. Scott Says:

Douglas: No, it doesn’t work that way! Try it yourself. Pick some reversible 1-dimensional cellular automaton rule, choose “initial conditions” on a timelike hypersurface, and then try to extrapolate in the left and right directions. You’ll find that you can’t always do it: the solution might be overdetermined or underdetermined, which would never happen extrapolating forward or backward from a spacelike hypersurface.
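Scott’s “try it yourself” can be carried out by brute force. The sketch below uses a second-order reversible CA (update s(t+1) = f(s(t)) XOR s(t-1), reversible because s(t-1) = f(s(t)) XOR s(t+1)); the rule number, ring width, and depth are arbitrary choices of mine, not from the comment. It tallies, for every possible history, the worldline of cell 0 (“data on a timelike hypersurface”), exposing worldlines with zero or with many consistent histories:

```python
from itertools import product
from collections import Counter

W, T, RULE = 4, 5, 110   # ring width, time steps, elementary CA rule (illustrative)

def f(row):
    """One step of the elementary CA rule on a ring of width W."""
    return tuple((RULE >> (4 * row[(i - 1) % W] + 2 * row[i] + row[(i + 1) % W])) & 1
                 for i in range(W))

def history(r0, r1):
    """Full spacetime history; for a second-order CA it is fixed by two rows."""
    rows = [r0, r1]
    while len(rows) < T:
        rows.append(tuple(a ^ b for a, b in zip(f(rows[-1]), rows[-2])))
    return rows

# Map each of the 2^(2W) possible histories to cell 0's worldline.
worldlines = Counter()
for bits in product((0, 1), repeat=2 * W):
    h = history(bits[:W], bits[W:])
    worldlines[tuple(row[0] for row in h)] += 1

unrealized = 2 ** T - len(worldlines)                     # overdetermined: no history fits
ambiguous = sum(1 for c in worldlines.values() if c > 1)  # underdetermined: many fit
print("worldlines with no consistent history:", unrealized)
print("worldlines with multiple consistent histories:", ambiguous)
```

(Underdetermination is guaranteed here by pigeonhole: 2^(2W) = 256 histories share at most 2^T = 32 worldlines.)
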

More explicitly, consider frictionless Newtonian billiard balls in a (1+1)-dimensional universe. The physics of this universe is clearly deterministic and time-symmetric. But suppose you’re a creature in this universe, and you throw a billiard ball away from you, and a little while later it returns to you at the same speed. Why did that happen? Did it happen because your billiard ball collided with another ball with the same mass that was traveling at the same speed in the opposite direction? Or because your ball collided with (say) a much heavier ball that was traveling much more slowly in the opposite direction? Since you’ll never get to see that other ball, you’ll never know. So you can’t in general extrapolate from your “timelike hypersurface” (i.e. your worldline) to the rest of the universe.
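The billiard-ball ambiguity follows directly from the standard 1-D elastic-collision formula; the specific masses and speeds below are illustrative choices of mine:

```python
def bounce(m, v, M, u):
    """Post-collision velocity of the ball you threw (mass m, velocity v)
    after a 1-D elastic collision with a ball of mass M, velocity u."""
    return ((m - M) * v + 2 * M * u) / (m + M)

v = 1.0   # you throw your ball at unit speed

# Case 1: equal mass, equal and opposite speed.
# Case 2: ten times the mass, one tenth the speed.
# Both send your ball back at exactly the speed you threw it,
# so the returning ball can't tell you which universe you're in:
assert bounce(1.0, v, 1.0, -1.0) == -v
assert abs(bounce(1.0, v, 10.0, -0.1) + v) < 1e-12
```
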

43. Dan Says:

I believe that time is a human-made unit, just as the meter or foot is, for reference in our own lives. It helps us describe things. To take a step back, time is simply the transfer of energy, which is always moving forward. Time can stand still, when there is no energy transfer, but can never go backwards, always forwards. You can put units to the rate of energy transfer if you would like, but at its most basic principle, time is just the transfer of energy. Thinking about time in this sense brings an entirely new realization to our world.

44. Lucy Says:

Scott: I’d certainly be interested if the Minkowski metric (or something like it) could be derived from axioms purely having to do with causality.

Yes, I’ve been interested in that. There’s this old paper, ‘Causality implies the Lorentz group’ by Zeeman, which seems to do what it says on the tin, in a very abstract-algebra-y kind of way…

http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JMAPAQ000005000004000490000001&idtype=cvips&gifs=yes

(sorry for stupidly inelegant link, I don’t know what I’m doing…)

45. Chris W. Says:

Lucy, thanks for mentioning that much-cited paper by Zeeman. It is part of the background of more recent work on causal sets.

Also see The class of continuous timelike curves determines the topology of spacetime (David B. Malament).

46. Chris W. Says:

PS: Also see section 3.3, “Recovering Global Geometric Structure from ‘Causal Structure’,” of Malament’s 2005 review paper Classical General Relativity (gr-qc/0506065).

47. Douglas Knight Says:

OK, I’ll concede:
(1) that objects travel slower than light is definitely an asymmetry between space and time.
(2) this probably breaks causal symmetry between space and time.

But I still insist that many physical laws are symmetric between space and time. I suspect that some fragment of the standard model is both complicated enough to worry about causality and yet still has double causality in 1+1 dimension.

I’m pretty sure that Maxwell’s equations are symmetric between space and time. Can you derive elastic billiards from electrostatic repulsion? How can we reconcile these claims? Perhaps I’m just wrong about the symmetry of Maxwell’s equations in the presence of charge, but I suspect that the answer is that an elastic-in-the-limit collision emits light, determining the mass of the other ball.

48. harrison Says:

Slightly off-topic:

In general, it seems like there’s not much that one can do (starting with polynomially time-bounded resources) to get past PSPACE. IP = PSPACE, BQP_CTC = PSPACE, etc. MIP is the only exception I can think of off the top of my head.

Is there some “deep, philosophical” reason why this should be true? Or does it just have to do with the behavior of polynomials over finite fields?

49. harrison Says:

On a related note, I guess, what’s the conventional wisdom on QIP = PSPACE? The Zoo doesn’t even give any oracles relative to which they’re the same, or different.

50. John Says:

Can’t you travel back in time to an event by reassembling the physical makeup of that event?

51. Greg Kuperberg Says:

But I still insist that many physical laws are symmetric between space and time. I suspect that some fragment of the standard model is both complicated enough to worry about causality and yet still has double causality in 1+1 dimension.

The standard model is Lorentz-invariant and it makes no direct statement about causality. Causality is instead part of the interpretation of any dynamic equation in physics: Maxwell’s equations, Einstein’s equation, the standard model of particle physics, etc.

In fact there is no unique correct analogue of the standard model in 1+1 dimensions. If you wrote some analogue, then it would in principle be two different physical models, depending on which direction you took as space and which as time. I’m not sure that both models are necessarily well-posed; maybe only one of them is. A test case for that question would be Lorentz-invariant classical PDEs in 1+1 dimensions.

52. anon Says:

In principle, closed timelike curves can exist in some solutions of general relativity, although time is still “different” because of the minus sign. So, I guess one conclusion of the work is that there is no connection between the difference between PSPACE and P and the minus sign in special relativity, which is what most people would expect anyway.

You mentioned Euclidean gravity at the end. I will make a few comments on this but if anyone is an expert please correct me:

The reason people study Euclidean quantum gravity, or Euclidean quantum theories in general is to get rid of the i in quantum mechanics so that the path integral is exponentially decaying instead of exponentially rapidly oscillating. It has nothing to do with relativity; however, if the theory we are working with is relativistic, in effect it makes the metric ++++, so we use the word Euclidean.
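Schematically, the rotation being described is the standard Wick rotation (a textbook identity, stated here for context rather than taken from the comment):

```latex
t = -i\tau
\quad\Longrightarrow\quad
ds^2 = -dt^2 + dx^2 \;\to\; ds_E^2 = d\tau^2 + dx^2,
\qquad
\int \mathcal{D}\phi\, e^{iS[\phi]/\hbar} \;\to\; \int \mathcal{D}\phi\, e^{-S_E[\phi]/\hbar}.
```

The oscillatory weight becomes an exponentially decaying one, and the metric signature becomes ++++, which is exactly why the word “Euclidean” gets attached.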

The point is that the path integral is not defined as it stands (with the i in the exponent), so we need to use imaginary time to do any sensible calculations using path integrals. I think, in principle, these calculations could have been done directly using unitary evolution of some state (rotating qubits) instead of path integrals (summing over paths), and we would not need imaginary time at all. But path integrals are more convenient and practical for calculations in quantum field theory, so we have to introduce imaginary time.

So what is the connection to causality? The transition to a Euclidean metric makes some operators elliptic (like in electrostatics) rather than hyperbolic (like the wave equation), which results in some conceptual simplifications regarding the choice of propagators and the uniqueness of solutions to the differential equations in classical field theory. All this is sort of related to causality, but officially, this is just a side effect, and not the motivation for using imaginary time. For most theories (at least those without gravity), I think all these causality-related ‘effects’ of Euclidean time are artificial and can be obtained by carefully solving the original hyperbolic differential equations with appropriately chosen boundary conditions. (For some boundary conditions you might have trouble going to imaginary time, so it is probably not trivial.)

Sometimes undergraduates ask if there is any connection between the i in front of t in quantum mechanics and the minus sign in relativity. As far as I know, the answer is no, aside from the coincidence mentioned above. It could turn out some connection exists, but looking for one would probably just lead to idle speculation.

53. Daniel Says:

Scott, regarding your 1D cellular automaton example:
If I’m not mistaken, you say that you can set up some configuration on a spacelike curve, and extrapolate a consistent spacetime from that. And you say that the same is not true for timelike curves.

I think this difference is somewhat superficial. Note that it would not even be true if you didn’t choose to talk about reversible automata. But more importantly, in any kind of physics, even your simple example, it is not true that an _inside_ observer can set up a spacelike curve to be in any specific state she wishes. Some spacelike curve configurations are reachable, some are not. The same with timelike curves. I think this similarity goes deeper than the difference you talked about. 🙂 Please correct me if I misinterpreted something.

54. Douglas Knight Says:

Greg Kuperberg,
Do you equate causality with well-posedness on a space-like hypersurface? This is what I did, since I don’t know what else to do.

55. Chris W. Says:

From a completely different angle, and by no means entirely off-topic, see this wonderful essay about the late David Foster Wallace.

56. Greg Kuperberg Says:

Do you equate causality with well-posedness on a space-like hypersurface? This is what I did, since I don’t know what else to do.

I sort-of agree. I think that a reasonable way to express causality is to have some framework in which a second law of thermodynamics can hold. Whatever observables there are should be organized into an acyclic graph, or maybe some quantum or relativistic variation of that. It has to be acyclic because entropy increases along it. In practice that has meant what you say: well-posedness on a spacelike hypersurface. It is not clear that it will always strictly mean that.

You know, for many years quantum empiricism was a topic about which there seemed to be more to say, but people weren’t sure what. Eventually it became quantum information theory and quantum computation. Maybe causality is another question that could turn into a new research area, somehow. At the very least it’s a question that string theory or whatever should eventually explain, or if not explain, supplant.

57. P Dickinson Says:

Scott,
Nice post. I’ve used it to explain some of these notions to non-CS folks, and they mostly seem to get it.
I hope you’re enjoying MIT!
Cheers,
Paul

58. tomate Says:

Consider the correspondence between symmetries and conservation laws. It seems to me that the non-reusability of time (time flows from one instant on, and not backwards) should give rise to lower-boundedness of the energy spectrum (energy defined from some point on), whereas the same doesn’t happen for the space-momentum conjugate variables. When free negative-energy particles, moving back in time, were discovered, it was ASSUMED it couldn’t be so, and antiparticles were postulated.
This might have to do with information flows and the fact that energy and momentum are the observables that allow to define privileged reference frames (I “see” homogeneous time until I measure an energy growth or loss greater than some finite tolerance). Does this make sense? Do you know of any theorem connecting such things?

59. Job Says:

One other difference between Time and Space which I think was mentioned already (causality) is how (x+1, y, z, t) doesn’t follow from (x, y, z, t) as much as (x, y, z, t+1) seems to follow from (x, y, z, t). In other words, given one end of my table at time t, I can’t predict what the other end looks like (also at time t) as well as I can predict what the table will look like at time t+1 given the table at time t.

If we could, then I would be able to reversibly compress the table by cutting it in half: the other half would logically follow, so I’d be able to recreate it when needed. It just doesn’t work.

60. Job Says:

Why doesn’t the spatial plane going through x+1 at time t logically follow from the plane through x?

61. Scott Says:

Harrison: As far as I can tell, the reason why it’s hard to get beyond PSPACE is simply that PSPACE is an extremely robust class (“like a rock,” as John Watrous once put it), which contains an enormous range of nontrivial operations. Nondeterminism, randomness, quantumness, counting, etc. all might require an exponential increase in time, but can be simulated with only a polynomial increase in space. The ways of (potentially) getting beyond PSPACE that I know about typically involve either games and interaction, or else multiple non-communicating parties (besides QIP and MIP, you should also look at APSPACE, RG, and QMA(2)).

I don’t think there’s a consensus on whether QIP=PSPACE. The question hinges on whether a certain subclass of semidefinite programs can be approximated efficiently in parallel (i.e., in NC); I know some very smart people who’ve worked on this problem without success. Currently, it’s not even known whether QIP is closed under complement, which would be a prerequisite to its equaling PSPACE.

62. Scott Says:

Can’t you travel back in time to an event by reassembling the physical makeup of that event?

John, how do you reassemble the physical makeup of a past event—particularly if most of the information about the event is now in thermal or gravitational degrees of freedom, or has even receded past your causal horizon? (I’m sure many historians would love to find out… 🙂 )

63. Scott Says:

Why doesn’t the spatial plane going through x+1 at time t logically follow from the plane through x?

Job, see my comment #42: it readily generalizes to n spatial dimensions and 1 time dimension, for any n (just confine everything to a (1+1)-dimensional subspace).

64. CW Says:

The first paragraph of Roger Ebert’s review of 12/23/08:

“The Curious Case of Benjamin Button” is a splendidly made film based on a profoundly mistaken premise. It tells the story of a man who is old when he is born and an infant when he dies. All those around him, everyone he knows and loves, grow older in the usual way, and he passes them on the way down. As I watched the film, I became consumed by a conviction that this was simply wrong.

By “wrong,” of course, he means fundamentally unworkable as a premise for a film or a novel. (The film is based on a short story by F. Scott Fitzgerald.)