If some test of some predictor fails, how can one tell whether the predictor might nonetheless be a faithful model, with the failure due to inadequate precision in the measurement of one of its inputs? It seems impossible, even in principle, to separate a test of the predictor from a test of the adequacy of a proposed input set.

The other idea, which would be a Witten-Tao-Perelman type of theorem, would be to first prove the existence of a totally disconnected topological space in 3+1 dimensions with a "realistic" isomorphism to the equations of GR or QM. That would mean the space would have to converge with, let's say, Poincaré UV radiation at some frequency that is only mathematically computable, as an example.

http://en.wikipedia.org/wiki/Totally_disconnected_space
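For reference, the definition being invoked at that link, stated in brief (this is the standard topological definition, not anything specific to the construction sketched above):

```latex
% Definition (standard): a topological space X is totally disconnected
% if its only nonempty connected subsets are singletons.
X \text{ is totally disconnected}
  \iff \bigl(\, C \subseteq X \text{ connected} \implies |C| \le 1 \,\bigr)
% Classic examples: the rationals \mathbb{Q}, the Cantor set,
% and the p-adic numbers \mathbb{Q}_p.
```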

This still leads to the structural-unpredictability duality, which is the chaos-structure wavefunction. It's possible when measuring past current boundaries, except it can't be a transfinite space where the calculation breaches infinity; the exteriors in the probability distribution basically have to converge with the displacement of the totally disconnected space, for one case where the boundary (let's say 10^35) is actually proportional to the connected-space isomorphism. This isn't really non-commutativity, but instead something in between at a particular scaling, except the thing escapes in loops, recursions, and twistor states.

The last way would be to prove transfinite space from the physics equations; then it would be implicitly solved. That would basically mean using the infinity gaps to build a structure instead.

I guess if you cut off all the senses to the brain, you could ask: can I predict what the brain will be thinking 10 minutes from now? But the same argument could be made about the freebits.

So again the point is that freebits are just as unpredictable as any input to the brain. Can I predict what I will be thinking in the next instant (before I get any more inputs)? Maybe, but I won’t get any freebits before then either.

But OK, suppose classical chaotic effects *did* make it "obviously impossible" to duplicate the relevant information in a brain, as many people seem to think. In that case, we'd get the extremely interesting conclusion that uploading our brains to digital computers, as envisioned by the Singularitarians, should be impossible! And of course, other people have criticized me starting from the premise that brain-uploading should obviously be *possible*. If there's a central thesis of my essay, it's simply that those two groups can't both be right, let alone obviously right. 🙂

Thanks.

Therefore, while it's true that "determined doesn't imply predictable," as far as I can see the only realistic way to break the link between the two is if the *information* needed to determine future behavior isn't available to the would-be predictor even in principle.
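As a minimal toy illustration of "determined doesn't imply predictable" (a hypothetical example of my own, not from the essay): the logistic map at r = 4 is exactly deterministic, yet any finite-precision measurement of its input is amplified exponentially, so a would-be predictor with imperfect input data loses track of the trajectory.

```python
# Toy chaotic system: the logistic map x -> r*x*(1-x) with r = 4.
# The dynamics are fully determined, but a tiny error in the measured
# initial condition grows roughly like 2^n, so prediction fails in
# practice long before it fails in principle.

def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

exact = logistic_orbit(0.3, 50)
# Predictor's input, correct to 12 decimal places:
perturbed = logistic_orbit(0.3 + 1e-12, 50)

# After 50 iterations the two trajectories have decorrelated:
# the separation is no longer microscopic, despite determinism.
print(abs(exact - perturbed))
```

The point of the sketch is only the contrast: here the needed information *is* available in principle (measure x0 more precisely and the prediction horizon extends), which is exactly the loophole the freebit picture is meant to close.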