Which is to say: I had assumed that to put an upper bound on a WPO’s type was to constructively prove that it is a WPO, for some appropriate constructive definition of a WPO.

But, I don’t actually know if there *is* a standard constructive definition of a WPO! I had assumed there was, but the nLab doesn’t give one, only a classical definition. (Since I’m not actually a logician, I wouldn’t know if there is one if it’s not on nLab. 😛 )

So maybe at least the analogue is true for a WFPO? That to put an upper bound on a WFPO’s height is to constructively prove it to be well-founded? There is, at least, a standard constructive definition of a WFPO, so at least that isn’t a problem here.
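For reference, the standard constructive notion in question is well-foundedness via the accessibility predicate, as in Lean's core library (`Acc` / `WellFounded`; sketched here with primed names to avoid clashing with the built-ins). A WFPO is then a partial order whose strict order is well-founded in this sense:

```lean
universe u

-- The accessibility predicate: x is accessible iff every r-predecessor of x is.
inductive Acc' {α : Sort u} (r : α → α → Prop) : α → Prop where
  | intro (x : α) (h : ∀ y, r y x → Acc' r y) : Acc' r x

-- A relation is well-founded when every element is accessible.
def WellFounded' {α : Sort u} (r : α → α → Prop) : Prop :=
  ∀ a, Acc' r a
```

This definition is constructively well-behaved (it is an inductive definition, with no excluded middle or choice needed), which is why the WFPO side of the question is on firmer footing than the WPO side.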

(Note that a constructive definition of a WFPO does *not* immediately yield a constructive definition of a WPO via the power set definition, because, well, my understanding is that constructivists don’t like power sets. 😛 )

But somehow I failed to notice a way that this is trivially false — say you have a countable WFPO (which you have proved to be a WFPO in a nonconstructive manner). Then, obviously, its height is less than ω_1. That’s an easily proved upper bound, but it in no way provides a constructive proof that your order was well-founded. Oops.

IDK; it still might be possible to rescue this. Like, say we restrict to computable orders, and require that the upper bound be a computable ordinal. Maybe it could be true then? IDK, I am really not a logician.

So, while Diana Schmidt certainly did show in 1979 that the type of the WPO of plane trees is at most the small Veblen ordinal, this may not in fact constitute any sort of constructive proof of any form of Kruskal’s Tree Theorem. I may have just been wrong about that.

(…I’d kind of hope it does, but…)

---

Yeah, thinking about it more, what you’re asking for definitely seems impossible. Consider what this translates to for **N**; what you’re asking for, in that context, would be an algorithm that takes in a proof that a subset of **N** is finite, and returns its maximum element. That’s impossible. Say, for a Turing machine T, we define S(T) = {0} if T halts on empty input and S(T) = {0,1} otherwise. (So we didn’t even need to go to ω, we only needed to go to 2.)

Or, to make it more like the original context, with the context being 2 rather than ω, our two possibilities could be {1} and {0,1} so that they’re upward-closed instead of downward-closed. Regardless, it’s not possible.
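To make the obstruction concrete, here’s a minimal Python sketch of the S(T) construction above. Since halting is undecidable, the halting status is passed in as a boolean rather than computed — that gap is exactly the point:

```python
def S_membership(t_halts: bool, x: int) -> bool:
    """Membership in S(T): {0} if T halts on empty input, else {0, 1}.

    t_halts stands in for the undecidable question of whether T halts;
    no algorithm can compute it from T itself.
    """
    return x == 0 or (x == 1 and not t_halts)


def decide_halting(max_of_S: int) -> bool:
    """If some algorithm could extract max(S(T)) from a mere proof that
    S(T) is finite, we could decide halting: max(S(T)) = 0 iff T halts."""
    return max_of_S == 0
```

Each S(T) is (classically) provably finite, yet an effective bound-extractor applied to these sets would decide the halting problem.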

And if you try to rescue this by requiring that the input be a constructive proof, well, then the algorithm is unnecessary, right?

---

As always, if I did want to look at this, I’d probably start not with trees or with graphs but with **N**×**N**… or maybe just with **N**… I think **N** itself might be enough to give you trouble here. In short, while this is definitely getting outside of my expertise, I’m skeptical that this is possible.

> Like, what would it even mean to make that constructive? You’d have to give some procedure for, given an upward-closed set in your WPO, finding its set of minimal elements.

I would buy an algorithm that eats a formal proof that some set is closed under minors, and spits out its set of minimal forbidden minors. Obviously “constructive” is relative to “proof”: The weaker the permitted proof system / axioms in the input, the easier such a task would be.

Are you aware of any known results there?

---

True, but if you try to approximate a causal PDE by a CNF, the result should be a Horn formula and thus easily solvable given initial conditions. Doing the same with a PDE describing CTCs should introduce additional constraints representing violations of causality, and should yield general hard CNFs. If you add randomness to the model as Deutsch did, then CTCs should be described by Papadimitriou’s ‘Games against Nature’ SSat, which is PSPACE-complete. (One would hope to characterize BPP/BQP with quantified Horn formulas, but there are difficulties with doing so.)
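For what it’s worth, the “easily solvable” claim for Horn formulas is standard: HORN-SAT is decidable in polynomial time by forward chaining (unit propagation). A minimal sketch of my own, with clauses as lists of signed integers and at most one positive literal per clause:

```python
def horn_sat(clauses):
    """Decide a Horn formula by forward chaining: repeatedly fire
    clauses whose negative literals are all satisfied.

    Literals are nonzero ints: k means variable k, -k means its negation.
    Returns the minimal model as a set of true variables, or None if UNSAT.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            pos = [l for l in clause if l > 0]
            neg = [-l for l in clause if l < 0]
            if all(v in true_vars for v in neg):
                if not pos:
                    return None  # all-negative clause falsified: UNSAT
                if pos[0] not in true_vars:
                    true_vars.add(pos[0])
                    changed = True
    return true_vars
```

The extra clauses a CTC would introduce break the “at most one positive literal” shape, which is what pushes the instances back toward general (NP-hard) SAT.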

“I have a feeling that your line of thought is motivated by some specific hope or idea to do with hypercomputation, possibly fallacious, but I can’t quite make out what it is.”

Almost. Hypercomputation in general relativity likely requires P=PSPACE for there to be error correction strong enough to prevent ‘hyperstructures’ from being destroyed by noise. My actual thinking comes from realist/classical interpretations of quantum mechanics (can we think of QM as broken time travel?) and the idea that any computations should actually take place somewhere in spacetime.

If PSPACE computations in time travel don’t take place in spacetime, where do they happen? Is it just magic or is there something beyond spacetime where the calculations are done?

“Is this an argument about quantum gravity in all logically possible forms, and not just quantum gravity as constrained by empirical data?”

It’s an argument for quantum gravity defined by a spin-2 gauge field, and not something that just approximates it. A starting point for a proof might be showing that what makes quantum gravity hard is that conventional methods think P!=NP (and perhaps gravity is finite in string theory because strings “know” that P = NP).

I imagine that P = NP = PSPACE, but that the simplest algorithms for solving NP problems are significantly more complicated and more vulnerable to noise than the naive algorithms, and that solving PSPACE problems requires another leap in algorithmic complexity over solving NP problems.

The “specific hopes” I have are more like this:

1) QM is formed by “causal turbulence” in the presence of CTCs. There is an energy cascade to smaller and smaller length scales and the resulting fractal structure is exponentially large. None of this complexity can be used for a computational advantage, however, because the difficulty of nature finding a polytime SAT solver means that noise destroys this complexity and you are confined to unitary evolution.

2) Gravity can solve NP-hard problems, but gauge fields can’t. So, gravity will be much weaker because P=NP algorithms are fragile and the gravitational constant will be zero if P!=NP.

3) The emergence of spacetime likely depends on P=PSPACE. Simulating different spacetime dimensions or other data structures seems like it would require arbitrarily complex structures with good error correction, which would mean solving SSat. It seems natural to try to explain the emergence of spacetime with fixed dimension and signature as a product of PSPACE being “harder” (to find a polytime algorithm for) than NP. (Imagine an n-dimensional Turing machine simulating an m-dimensional Turing machine. Doing so in spacetime at all length scales seems like it would require really good error correction.) I also suspect that the violations of conservation of energy predicted in GR will be destroyed by noise without P=PSPACE algorithms.
