The *simplest* way to see the problem is the one Alan Turing already pointed out in 1950: namely, there seems to be no reason whatsoever, neither from first principles nor from the actual, hit-or-miss history of set theory, to imagine that human beings have a knowably sound procedure for deciding the consistency of an arbitrary formal system. So an AI that passes the Turing Test wouldn’t need such a procedure either. But then there’s no Gödelian obstruction to such an AI being written.

Or, to put the same point another way: there seems to be no mathematical obstruction to an AI *behaving indistinguishably* from Erdős or Grothendieck or Gödel himself or any other human mathematician who ever existed. The only obstruction is to an AI behaving like a hypothetical omniscient and infallible mathematician.

Or, to put the same point a third way: *if* we insisted on faulting an AI for not being able to assent to things like,

“The AI cannot assent to the truth of *this* mathematical sentence,”

then to be consistent, we should also build a wiring diagram for a given human mathematician’s brain, and then use that wiring diagram to build a mathematical sentence corresponding to

“The human mathematician cannot assent to the truth of this mathematical sentence.”

Either way, without saying something more about the nature of the human brain, etc., we haven’t said anything to *differentiate* humans from machines, as long as we’re scrupulous about applying the same rules to both.
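The symmetric trap can be written schematically (the notation below is mine, purely illustrative): let $\mathrm{Assent}_M(x)$ formalize “system $M$, whether silicon or neural, eventually assents to the sentence with Gödel number $x$.” For any such $M$, the diagonal lemma then supplies a sentence $G_M$ with

```latex
% Standard diagonal construction (symbols are illustrative):
%   Assent_M(x) := "M eventually assents to the sentence coded by x"
\[
  G_M \;\leftrightarrow\; \neg\,\mathrm{Assent}_M\bigl(\ulcorner G_M \urcorner\bigr)
\]
% If M's assents are sound, then M never assents to G_M, so G_M is
% true but forever beyond M's assent -- the same bind for brain or
% machine, which is exactly the symmetry the paragraph above demands.
```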

“I also think that the Penrose-Lucas argument, based on Gödel’s Theorem, for why the brain has to work that way is fundamentally flawed.”

Could you please elaborate a little bit on this? What is your main reason for dismissing the Penrose-Lucas argument? Many people have raised objections to the Lucas-Penrose argument (e.g. Bringsjord & Xiao 2000; LaForte, Hayes and Ford 1998). However, different critics usually attack different points in the argument, while granting that the points other critics object to are in fact sound. Thus, the opponents often contradict each other, and it appears to me that the Gödelian argument has not been refuted. What is your opinion about that?

To me it seems quite clear that there are random and uncontrollable/unpredictable fluctuations in our brains on a much bigger scale than the quantum scale. And I think their effect (and the effects on all smaller levels, including quantum effects) is more or less zeroed out, be it by some error-correcting mechanism or, more likely, simply by the tendency of randomness to average itself out.

And even if you could convince me that, through some freak chaos-theory butterfly effect, a certain bump in the Brownian motion of two atoms or a certain collapse of the spin of an electron caused me to change my decision on a question, I would still think that is the exception, not the rule, of how my brain works. I want to think my brain is a consistent, logical, and therefore completely deterministic and predictable system, rather than a set of quantum dice or any other “magic” that we will be forever unable to understand.
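As a toy illustration of that averaging-out intuition (a pure statistics sketch of my own, not a model of neurons): if a macroscopic quantity sums n independent zero-mean microscopic fluctuations, the surviving *relative* noise shrinks roughly like 1/√n, so a hundredfold increase in sources cuts the net effect about tenfold.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(n_sources: int, trials: int = 200) -> float:
    """Average |sum| of n independent zero-mean unit fluctuations,
    divided by n: the relative size of the randomness that survives
    aggregation into one macroscopic variable."""
    sums = rng.normal(0.0, 1.0, size=(trials, n_sources)).sum(axis=1)
    return float(np.abs(sums).mean() / n_sources)

# More microscopic sources -> proportionally smaller net effect:
for n in (100, 10_000):
    print(n, relative_noise(n))
```

The numbers track the theoretical √(2/π)/√n curve for Gaussian noise, which is the sense in which randomness “averages itself out.”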

Jr Re #66: there are at least three reasons you might want to pay software developers even if software had no legal or technical copy protection.

1. You pay software developers to develop software that is so useful for you that you are willing to pay for it even if other people can use the same software. There will never come a time when there is already perfect software for everything you want to do.

2. You pay software developers to develop software, and run the software on your own computers as services such that other people physically don’t have access to the program, only some of its input and output.

3. You pay developers to provide understanding or support for existing software, when you have the program and could in theory decode it yourself, but lack the time or ability to do so.

In all of these cases, “you” can mean a company, not an individual, and “develop software” can mean modifying existing software.

(The main relation that I can think of offhand: according to the No-Cloning Theorem, the set of quantum operations that succeed in cloning an unknown state is equal to the empty set. 🙂 )
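To make that concrete (my own sketch, not part of the comment above): a CNOT gate happily “copies” the basis states |0⟩ and |1⟩ into a blank qubit, but linearity forces it to produce an entangled state, not two copies, when fed a superposition. That linearity obstruction is the whole content of the No-Cloning Theorem.

```python
import numpy as np

# Two-qubit CNOT: flips the second qubit when the first is |1>,
# so it "copies" the basis states |0> and |1> into a blank qubit.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def try_clone(psi: np.ndarray) -> bool:
    """Apply the would-be copier to psi (x) |0> and check whether the
    output equals the genuine clone psi (x) psi."""
    blank = np.array([1, 0], dtype=complex)          # |0>
    out = CNOT @ np.kron(psi, blank)
    return bool(np.allclose(out, np.kron(psi, psi)))

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

print(try_clone(zero))  # True  -- basis states copy fine
print(try_clone(one))   # True
print(try_clone(plus))  # False -- linearity yields (|00>+|11>)/sqrt(2),
                        #          an entangled state, not |+>|+>
```

The same failure occurs for any linear “copier,” whichever pair of basis states it handles correctly; only a set of mutually orthogonal states can be cloned.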

In brief: (1) William Wootters’ and Wojciech Zurek’s “The no-cloning theorem” (2009) emphasizes the interlocking roles of causality, relativity, gauge-invariance, and entanglement; (2) Aaron Fenyes’ “Limitations on cloning in classical mechanics” carefully extends the definitions of cloning; and (3) Nicholas Teh’s “On classical cloning and no-cloning” (2012), the longest of the three, emphasizes the contrasting roles of symplectic versus metric isomorphism in the context of Kählerian/Hamiltonian dynamical flows.

These authors provide abundant references and plenty of concrete suggestions for further research; their suggestions have the great merit of being reasonably student-accessible; and finally, these articles, taken together, beautifully illustrate the synergistic roles of physicality, naturality, and universality in quantum research.

Specifically, in regard to the feasibility (or infeasibility) of demonstrating quantum superiority, it seems reasonable to me to foresee that considerable advances in our understanding of quantum cloning, including advances along the specific lines that Wootters, Zurek, Fenyes, and Teh survey, will be required before there is any very substantial likelihood of settling this question experimentally, theoretically, or mathematically anytime soon.

That’s one reason (among many) why it’s very good news (as it seems to me), for quantum researchers of all levels and all subdisciplines, that this marvelously stimulating no-cloning literature is out there.

I suppose you can look at it like this: you can prepare any quantum state you wish, so maybe the one you prepare happens to equal the state of some system whose state you don’t know.

As to whether it’s the universe’s way to ensure [the plausibility of] free will and personal identity, meh. People just don’t want to abandon animistic instincts, apparently.
