*reads the article carefully*

… so decoherence time is basically limited to ~4ms. I think we would need to improve this by *three orders of magnitude* before a “protect it and forget it”-type approach even becomes feasible. I’m fine with QC researchers moving underground now.

Dmitri Urbanowicz #41: It is well known that a PSPACE machine can iterate over all “quantum histories” to compute the “Feynman path integral” corresponding to any particular output amplitude, so that is what I would have written had I been required to write a polynomial-space quantum simulator. However, perhaps the recursive approach is more efficient.
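To make the “iterate over all quantum histories” idea concrete, here is a minimal sketch (my own illustration, not code from any paper) for a toy gate set of single-qubit X and H gates: a single output amplitude ⟨y|C|x⟩ is computed by recursing backwards through the gates and summing over every intermediate basis state. Space is polynomial in the number of qubits and gates; time is exponential, as expected for this kind of PSPACE-flavored simulation.

```python
import math

SQRT2 = math.sqrt(2.0)

def gate_amp(gate, q, before, after):
    """Amplitude <after| G |before> for gate G on qubit q.
    Basis states are integers; bit q of the integer is qubit q."""
    if (before & ~(1 << q)) != (after & ~(1 << q)):
        return 0.0  # a single-qubit gate cannot change the other qubits
    b, a = (before >> q) & 1, (after >> q) & 1
    if gate == "X":
        return 1.0 if a != b else 0.0
    if gate == "H":
        # H|0> = (|0>+|1>)/sqrt2, H|1> = (|0>-|1>)/sqrt2
        return -1.0 / SQRT2 if (a == 1 and b == 1) else 1.0 / SQRT2
    raise ValueError(gate)

def amplitude(circuit, x, y, n):
    """<y| G_k ... G_1 |x> on n qubits, summing over all histories."""
    if not circuit:
        return 1.0 if x == y else 0.0
    gate, q = circuit[-1]
    total = 0.0
    for z in range(2 ** n):  # every possible state just before the last gate
        a = gate_amp(gate, q, z, y)
        if a != 0.0:
            total += a * amplitude(circuit[:-1], x, z, n)
    return total
```

For example, two Hadamards in a row interfere back to the identity: the amplitude from |0⟩ to |0⟩ comes out 1, and from |0⟩ to |1⟩ comes out 0.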

Also, according to this paper that I just found, it turns out that non-abelian topological quantum field theories can solve #P-complete problems in polynomial time.

]]>I think this is exactly what you need: https://arxiv.org/abs/0812.3675

In short, you can simulate a single shot of QC while maintaining a definite state at every step of the computation. If the gate is classical, you simply transform the state. If it is, say, a Hadamard gate, then you need to take into account all possible histories that could lead to each of the two possible resulting states, and then choose between them randomly in proportion to the squared magnitudes of their computed amplitudes.

In other words, a universe simulated like this doesn’t need “measurements”. Its inhabitants never experience superpositions or world-splits directly.
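A toy sketch of that “definite state at every step” procedure, for the same illustrative X/H gate set as above (my own sketch — whether and how this yields a provably correct polynomial-space sampler in general is the subject of the linked paper): deterministic gates just update the state, and at each Hadamard we compute the coherent amplitude of both candidate successor states by a sum over all histories of the circuit prefix, then branch randomly in proportion to the squared amplitudes.

```python
import math
import random

SQRT2 = math.sqrt(2.0)

def prefix_amplitude(circuit, x, s):
    """<s| G_k ... G_1 |x>, recursing over the last gate (path sum)."""
    if not circuit:
        return 1.0 if x == s else 0.0
    gate, q = circuit[-1]
    rest = circuit[:-1]
    if gate == "X":
        return prefix_amplitude(rest, x, s ^ (1 << q))
    if gate == "H":
        lo = prefix_amplitude(rest, x, s & ~(1 << q))  # history with bit q = 0
        hi = prefix_amplitude(rest, x, s | (1 << q))   # history with bit q = 1
        sign = -1.0 if (s >> q) & 1 else 1.0
        return (lo + sign * hi) / SQRT2
    raise ValueError(gate)

def run_one_shot(circuit, x, rng=random):
    """Simulate one shot, holding a definite basis state throughout."""
    state = x
    for t in range(1, len(circuit) + 1):
        gate, q = circuit[t - 1]
        if gate == "X":
            state ^= 1 << q  # classical gate: just transform the state
            continue
        # Hadamard: weigh the two candidate successors by the coherent
        # amplitudes of the whole prefix (all histories, not just ours).
        prefix = circuit[:t]
        cand0, cand1 = state & ~(1 << q), state | (1 << q)
        p0 = prefix_amplitude(prefix, x, cand0) ** 2
        p1 = prefix_amplitude(prefix, x, cand1) ** 2
        state = cand1 if rng.random() < p1 / (p0 + p1) else cand0
    return state
```

Note how interference still works even though the simulated state is always definite: for the circuit H, H starting from |0⟩, the second Hadamard’s branch probabilities are computed from the full prefix, so the shot returns 0 every time, regardless of which branch was taken in the middle.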

]]>https://www.sciencedaily.com/releases/2020/08/200826113716.htm

]]>Also, a possibly important observation: Even in a world where vanilla NP-complete problems (such as 3SAT, graph coloring, etc.) could be solved efficiently, it is relatively well known that P is *not* equal to NP relative to a random oracle, which suggests that unconditionally provably hard cryptography does exist relative to a random oracle. Indeed, Optimal Asymmetric Encryption Padding, combined with RSA**, is provably secure***. So, assuming that computer scientists can find unconditionally provably hard cryptography relative to a random oracle, and organizations can coordinate with each other enough to set up a public, trusted, unhackable random oracle, we can enjoy our security in peace even if NP oracles start raining from the heavens.

*this is ok, right?

**this is not proven to be hard, although popularly conjectured to be

***under the assumption that RSA itself is secure
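To make the random-oracle idea concrete: in security proofs the oracle is a publicly accessible, truly random function, which in practice is heuristically instantiated with a hash function. Here is a minimal sketch (the construction is the standard hash-based commitment; the function names are mine) of a commitment scheme whose hiding and binding arguments go through in the random-oracle model, with SHA-256 standing in for the oracle:

```python
import hashlib
import secrets

def commit(message: bytes):
    """Return (commitment, opening). Commitment = H(nonce || message)."""
    nonce = secrets.token_bytes(32)  # fresh randomness hides the message
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(commitment: bytes, message: bytes, nonce: bytes) -> bool:
    """Check that the commitment opens to this message."""
    return hashlib.sha256(nonce + message).digest() == commitment
```

Hiding follows from the nonce’s entropy (the oracle’s outputs look random), and binding from the infeasibility of finding a second input with the same oracle output; with a real hash function, those properties are only as good as the hash.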

]]>Yes, Scott’s Ghost In The QTM is insightful, but I favor some kind of classical effect to supply the Knightian uncertainty, rather than quantum mechanics. I’ve said it once, I’ll say it again: I don’t think we understand classical physics!

The AdS/CFT correspondence should have been the tip-off. Exactly the *same* quantum rules can give rise to two *different* valid classical descriptions of spacetime! That shows you that even if our understanding of quantum mechanics is exactly right, it’s still logically possible for our classical theories to be badly wrong!

I think what’s missing is the understanding of compositionality, or how many simple parts combine to form complex systems.

I agree that the notion of intelligence as ‘a set of possible rational choices and then application of some optimizing scheme’ is laughable. There’s simply no separation between the goal system and the world model. I see the alignment problem, and I also see the outline of the solution at this point. The preferences (values) of minds are directly related to the complexity of their world models (ontologies); there’s no question. The value system is generated by the compositionality of the ontology (how the ontology grows over time).

At this point, I believe that I am now having faint perceptual awareness of actual post-human consciousness. It’s super-weird, as we expected.

In a nutshell, I think that it’s time that’s fundamental, and that there’s a much more general theory of complexity than ordinary computational complexity. Time itself is multi-dimensional, and the number of dimensions is not fixed but varies with the scale at which you look. It’s something like the growing block universe: the past is fixed, but not the future.

]]>Some thoughts I had while reading your paper-

Einstein’s apparent lack of understanding of QM is surprising, and his use of “Spooky…” to describe the lack of knowledge of initial conditions at the time of entanglement has had long-lasting consequences. It created an entire industry for physicists publishing popular books on mysterious quantum mechanics, and it consigned Alice and Bob to a horrible life of reading secret messages, opening strange boxes with dead cats inside, and, maybe worst of all, now being thrown into a black hole. Their treatment has been inhumane. I have always thought that of course God plays dice, since if not it wouldn’t be very interesting.

My view has been more along the lines that the observables of human cognition are actions in the classical world that follow from something akin to wave collapse. Just as with quantum observables, it is impossible to deterministically predict the action. It could even be the case (as you discuss) that the person has chosen to use some well-known external non-deterministic process to help decide his action, such as a fair coin or a probabilistic quantum measurement.

The schemes I have seen that depend on some set of possible rational choices and then application of some optimizing scheme are really not even close to capturing the range of individual human action. In the case of Newcomb’s Paradox, I would expect that the forecaster’s email would be hacked and the forecaster himself held at gunpoint until he divulged his prediction.

Thanks for the interview and the paper. You absolutely didn’t seem like a member of the Red Guard in the interview 🙂 and freebits and Knightian Uncertainty provide interesting subjects for thought.

Also, I really liked Agent 3203.7’s last comment and regret that I didn’t note it under the guest post.

]]>*Nevertheless, I suspect the author would be pretty lonely at a Trump rally, where logical thought is about as common as a facemask.*

Just another sneer, just another mindless “boo, my outgroup” comment.
