The Toaster-Enhanced Turing Machine

Over at Theoretical Computer Science StackExchange, an entertaining debate has erupted about the meaning and validity of the Church-Turing Thesis.  The prompt for this debate was a question asking for opinions about Peter Wegner and Dina Goldin’s repetitive diatribes claiming to refute “the myth of the Church-Turing Thesis”—on the grounds that, you see, Turing machines can only handle computations with static inputs and outputs, not interactivity, or programs like operating systems that run continuously.  For a demolition of this simple misunderstanding, see Lance Fortnow’s CACM article.  Anyway, I wrote my own parodic response to the question, which generated so many comments that the moderators started shooing people away.  So I decided to repost my answer on my blog.  That way, after you’re done upvoting my answer over at CS Theory StackExchange :-), you can come back here and continue the discussion in the comments section.


Here’s my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science’s dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can’t toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a “point”, but it’s a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a Turing machine to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster’s internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn’t have been obvious to Turing himself.

So, I’d say the reason why there’s been no “followup” to Wegner and Goldin’s diatribes is that theoretical computer science has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing’s notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be … err … THE CHURCH-TURING THESIS, the very same one you’ve been ragging on? To clarify, the Church-Turing Thesis is not the claim that “TURING MACHINEZ RULE!!” Rather, it’s the claim that any reasonable programming language will be equivalent in expressive power to Turing machines — and as a consequence, that you might as well think in terms of the higher-level languages if it’s more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

Update (Sept. 6): Check out this awesome comment by Lou Scheffer, describing his own tale of conversion from a Church-Turing skeptic to believer, and making an extremely apt comparison to the experience of conversion to the belief that R, R^2, and so on all have the same cardinality (an experience I also underwent!).

97 Responses to “The Toaster-Enhanced Turing Machine”

  1. Ungrateful_Person Says:

    A humorous post, Scott.
    Upvoted your answer!

  2. Pavel Sanda Says:

    I just want to point out two papers by Martin Davis that I read some time ago and enjoyed very much. Their topic is broader than just interactive computation vs. the Church-Turing thesis, but I think they present a clearly formulated stance related to this discussion:
    “The myth of hypercomputation” (Turing Festschrift, 2004), “Why there is no such discipline as hypercomputation” (AMC, 2006).

  3. Steven Says:

    I recommend reading a few of the comments. “As Martin said, there is a difference between interaction and toast…”

  4. Johan Says:

    Where can I order one of these TETMs?

  5. Sasho Says:

    I was trying to figure out why smart people make the claim that “higher-order functions” fall outside the reach of Turing. If we view function calls as something that happens inside a physical computing system, the only way I know of making a function call happen is by passing a reference to the function to be called. Of course this is easily modeled with Turing machines.

    Now, of course there are questions about higher-order functions that are clumsy to even ask let alone answer when we model higher-order functions by TMs accepting TM encodings. But to me it seems that the argument being made was not that TMs are a clumsy model for a particular type of questions, it was that TMs are fundamentally incapable of modeling higher-order functions. But if we talk about a real computational system and not about abstract algebra, higher-order functions can be reduced to something TMs can handle.

    So apparently, the “higher-order” people think of computation as algebra. From the point of view of a programmer that may very well be a great idea. Most programs (i.e. not systems programming) and issues around them are well understood as abstract systems of maps. But the Church-Turing thesis only makes sense in the real world and not in the “algebra” world of the process of programming. In the real world programs actually compile, and when they compile, function calls are resolved to simple jumps. Jumps are well within the grasp of TMs.

  6. Vijay D'Silva Says:

    I managed to lose my comment somehow.

    Scott,

    I think the issue some of us have is not with the Church-Turing Thesis. A general problem we face when discussing models of computation is that computer scientists (outside of theory) seem to blindly believe that all models of computation are equivalent *in every way* to Turing machines.

    Most of us from programming languages and logic do not agree with articles that bash the Church-Turing thesis. We are using the situation as an opportunity to emphasise that there are fundamental aspects of computation that are not captured by Turing machines in a satisfactory fashion.

    Consider for example, Martin Berger’s comment, which I believe overlaps with your points, but has been less controversial.

    —-
    I think the issue is quite simple.

    1. All interactive formalisms can be simulated by Turing machines.

    2. TMs are inconvenient languages for research on interactive computation (in most cases) because the interesting issues get drowned out in the noise of encodings.

    3. Everybody working on the mathematisation of interaction knows this.
    —-

  7. Vijay D'Silva Says:

    Also, if I take your second update literally, I don’t believe the *design* of a higher-order calculus requires or involves an explicit consideration of Turing machines at any point. We can study the Pi-calculus or any other calculus without worrying about whether it is a Turing-complete model. Moreover, people interested in expressivity may reduce higher-order computations to computations in the lambda-calculus, or some other model.

    So, these calculi can exist without any requirement of being compiled into machine code. Further, the computational issues can be studied and understood (and are understood) without making explicit connections to the specific computational device that is a Turing machine.

    Again, I’m not disagreeing with your fundamental points, but I am probably failing to communicate my own point.

  8. Vijay D'Silva Says:

    Let me try again. If I try to take an objective, outside view of the discussion, this is what I could conclude are the main points.

    1. There is more to the study of computation than Turing machines.

    2. Everyone who studies computation seriously knows this.

    3. A lot of people who do not study computation seriously do not know this.

    4. A discussion about computation (even if it is a misunderstanding of the Church-Turing Thesis) is a good opportunity to emphasise that there is more to computation than Turing machines.

    5. Though you are clarifying the misunderstanding or the misrepresentation of CTT, in the context of a discussion where people are emphasising the inadequacies of Turing machines, your post is not helping the general agenda.

    Or even more succinctly, they’re saying “there’s more to computation than Turing machines” and it appears to them that you’re saying “there isn’t more to computation than Turing machines”. Unfortunately each party is using “more” with a different semantics.

    I’ll stop (for now).

  9. Scott Says:

    Vijay: I completely agree that “there are fundamental aspects of computation that are not captured by Turing machines in a satisfactory fashion.” But are any applied computer scientists actually unaware of that fact? If they were, why would they spend so much time arguing which programming language is best? Indeed, why wouldn’t they forget all their bread-and-butter concepts (recursion, pointers, abstraction layers, invariants, objects, garbage collection, type-checking, namespaces, permissions, semaphores…), and just write all their code in Turing machine? 🙂

    In any case, if you want to teach people the importance of a good formalism for modeling interactions, all power to you! But my friendly advice would be to decouple that worthy message completely from cringe-inducing attacks on the “myth of the Church-Turing Thesis.” You need the Church-Turing bashers on your side like the environmental movement needs the Unabomber!

  10. keith Says:

    It occurs to me that if you’re taking positions in arguments that I, a layman, could easily take, then you’re either wasting your time or indulging a hobby.

  11. Scott Says:

    keith #10: Why not both?

  12. Zac Says:

      This, of course, was a radical new insight 60-75 years ago.

    I guess it’s still a radical new insight to each individual when they first start to understand it.

  13. Lou Scheffer Says:

    Scott #9 states:

    “Indeed, why wouldn’t they forget all their bread-and-butter concepts (recursion, pointers, abstraction layers, invariants, objects, garbage collection, type-checking, namespaces, permissions, semaphores…), and just write all their code in Turing machine? ”

    This has already been tried. No less a computer scientist than Don Knuth wrote his examples in assembler, which is rather closer to a Turing machine (and easily mapped onto one).

  14. Kenneth W. Regan Says:

    Scott, your “Rather, it’s the claim that any reasonable programming language will be equivalent in expressive power to Turing machines” is ostensibly short of what philosophers call the full CTT. I call the above the “CTT For Competent Programmers” when I lecture, and I say it’s pretty much universally accepted (especially in light of David Deutsch’s 1985 challenge to it being rebuffed). Of course, CTTFCP is all you’ve needed to be concerned with for the argument represented in the thread and this post.

  15. Joy Christian Says:

    Hello Smarty-pants!

    Have you resigned from FQXi yet?

    Or was it just another one of your egotistical bluffs—a cheap stunt to feed your own inflated ego?

    Remember you bragged: “I call on FQXi, in the strongest possible terms, to stop lending its legitimacy to this now completely-unmasked charlatan. If it fails to do so, then I will resign from FQXi, and will encourage fellow FQXi members to do the same.”

    Well, Smarty-pants, how is this for legitimacy: http://lccn.loc.gov/2012001131 ?

    Where is your resignation, Smarty-pants? Where is your resignation?

  16. Henning Dekant Says:

    Clearly Turing machines are inadequate as they cannot run Joy Christian’s algorithm that disproves the Bell theorem.

    Sheez, isn’t that self-evident?!

  17. Ajit R. Jadhav Says:

    @Vijay:

    Though it was not a part of my C-DAC’s six months’ diploma in 1994-95, in order to understand CS fundas better, I browsed through a few introductory texts on CS, and came to know about the Turing machine. (If I am not wrong, perhaps, there also was some mention of TMs in one of the popular science books by Penrose published before mid-90s.) A doubt which occurred to me back then, has been lingering on ever since. Please see if you (or anyone else) wish(es) to address it.

    Question: Does the idea that a TM can compute anything that a real (i.e. a build-able and workable) computer can compute, extend to analog computers? How? Any good (conceptually _not_ _in_adequate) links/references?

    … If it helps clarify my question: I still remember, I had picked up the soap-film membrane as “solving” or “computing” the Poisson-Laplace equation, as the model analog computer. Could the Turing machine model this kind of a computation, I had wondered. And, if not the Turing machine, at least an _easily_ extended version of the same, extended only so much that it could still be recognized as a Turing machine.

    Thanks in advance for clarifying the issue. [My email (no spaces etc.) a j 1 7 5 t p At y ah oo #dot# co? in.)]

    BTW, I, just right now, did a quick Google search and found this one: http://www.eetimes.com/electronics-news/4036696/Analog-computer-trumps-Turing-model. Comments on the same would be welcome too. (One possibly relevant bit expectedly unknown to most any CS theorist anywhere in the world: The soap-film model _had_ been put to practical use by engineers (off-hand, I suppose, during the WWI) for determining or “computing” torsion in rods of irregular cross sections, e.g, a shaft of circular c/s but with longitudinal slots of rectangular c/s.)

    Ajit
    [E&OE]

  18. jonas Says:

    And since convenient high-level programming languages have come up, let’s spend this comment remembering John McCarthy and Dennis Ritchie, two great people who proved that those high-level languages were actually practical, both of whom passed away almost a year ago.

  19. Scott Says:

    Joy #15: Thank you for reminding me! Max Tegmark and Anthony Aguirre told me that they’d investigate the matter and decide what to do. Since I like and respect them, I decided to let things rest for a few months; then (frankly) I forgot about you entirely. But you’re right: it’s time for a decision! I’ll talk to Max as soon as I can. (In the meantime, I was happy to see that Perimeter and Oxford did apparently both take some action to dissociate themselves from you.)

  20. Joy Christian Says:

    Nonsense! You are nothing but an egotistical Bell-extremist in desperate and constant need of feeding your own overinflated ego. Your stunt was nothing but a stunt. As is your 100k offer.

    “In the meantime, I was happy to see that Perimeter and Oxford did apparently both take some action to dissociate themselves from you.”

    Dream on, Smarty-pants! As always, you are getting your info from the Enquirer (a tabloid in the US).

  21. wolfgang Says:

    The toaster-enhanced Turing machine is interesting, because toasting bread is a robust solution to a non-trivial problem.
    Try doing the same with a microwave + Turing machine and you will only get warm, soggy bread. Even the best microwave-equipment + supercomputer would not solve the problem as well as a cheap 10$ toaster.

    My point is that there are computationally difficult problems (think turbulence) and yet in many cases we stumble upon working solutions to our problems (think airplanes).

    So imho the really interesting question is: why are we able to find good solutions to many of our problems while living in an NP-difficult world?

  22. Neel Krishnaswami Says:

    Dear Sasho, there are two issues here.

    1. First, there’s the Church-Turing thesis itself. By this, I mean “all computable functions f : ℕ → ℕ can be implemented on Turing machines”.

    Now, note that there’s nothing especially sacred about the natural numbers. So we might hypothesize a generalized Church-Turing thesis: “for all types A and B, all computable functions f : A → B can be implemented on Turing machines”.

    This generalized thesis is false. The reason it is false is because different models of computation realize different sets of higher-order functions. For example, consider the parallel-or operation. This operation has the type por : (unit → bool) → (unit → bool) → bool. It takes two computations, and is specified to return true if *either* of its arguments returns true.

    This function is realizable with Turing machines, since you receive two Gödel codes as arguments, and so you can dovetail their execution. That way, if either one ever returns true, you can return true. With the lambda-calculus, you cannot examine the syntax of a lambda-term — the only thing you can do is to apply it. As a result, you cannot do dovetailing, and so there is no implementation of parallel-or.

    This means that under a TM-model of higher-type computation, the type (unit → bool) → (unit → bool) → bool has inhabitants that it doesn’t have under a lambda-model. This means, in turn, that there are functions of third order ((unit → bool) → (unit → bool) → bool) → bool which do exist in lambda-models, but don’t exist in TM-models.

    An example of such a function is the operation uniform, which is specified to take a second-order function p, such that if for all arguments a and a’ we have p a a’ = b, then uniform p = b, and it diverges otherwise. In a lambda-model, we know that p can return a value for all inputs *only* if it doesn’t look at its arguments at all, and so we can implement uniform as p ↦ (p ⊥ ⊥), where ⊥ is a looping computation. But that implementation doesn’t work for a TM-model, since you can implement timeouts through dovetailing, and so you can construct arguments p that terminate for both p true true and p ⊥ ⊥, but return different values.

    John Longley’s “Notions of Computation at Higher Type” is the go-to reference for this stuff. According to him, there are about half-a-dozen mutually incomparable notions of sequential higher-type computation. It’s really weird that the Church-Turing thesis, which is ridiculously robust at first order, falls apart so comprehensively at higher type.
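
    Here is a minimal sketch of the dovetailing construction (an illustrative Python rendering, with generators standing in for Gödel-coded computations: each next() is one simulation step, and a generator’s return value is the result of a halting computation):

      def dovetail_or(comp1, comp2):
          # Advance both computations one step at a time, in alternation.
          # Return True as soon as either halts with True; return False if
          # both halt with False; loop forever otherwise -- which is exactly
          # the specified behaviour of parallel-or.
          pending = [comp1(), comp2()]
          while pending:
              for gen in list(pending):
                  try:
                      next(gen)                    # one more simulation step
                  except StopIteration as done:
                      if done.value:
                          return True
                      pending.remove(gen)
          return False

      def halts_with_true():                       # halts after three steps
          for _ in range(3):
              yield
          return True

      def loops_forever():                         # never halts
          while True:
              yield

      print(dovetail_or(loops_forever, halts_with_true))   # prints True

    Nothing analogous is available for pure lambda-terms, since the only thing you can do with a lambda-term is apply it; there is no “run it for one more step and then switch”.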

    2. Second, don’t confuse functions with their implementation. There are plenty of implementations of higher-order functions which have nothing to do with references or jumps. Two interesting classes of this arise from vector space models, and the geometry of interaction.

    Recall that the category of finite dimensional vector spaces and linear maps between them has duals and tensor products. As a result, you can represent linear maps A ⊸ B as elements of B ⊗ A^*. Since you can represent elements of vector spaces with matrices, this gives you a representation for programs involving linearly-typed higher-order functions that does not have pointers in it anywhere.

    This is actually useful in program analysis, since it lets you turn lambda-terms into matrices, and GPUs are much happier with matrices than the pointer indirections of usual ASTs.
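
    As a small illustration of the “matrices instead of pointers” point (a toy numpy sketch of only the simplest instance): post-composition with a fixed linear map g : B → C is itself a linear higher-order operation (A ⊸ B) → (A ⊸ C), and it can be represented by the single matrix I_A ⊗ G acting on the flattened matrix of f, with no function pointers anywhere.

      import numpy as np

      A, B, C = 2, 3, 4                         # dimensions of the three spaces
      G = np.random.rand(C, B)                  # a fixed linear map g : B -> C
      F = np.random.rand(B, A)                  # a "program" f : A -> B, itself a matrix

      # The higher-order map  f |-> g o f  is linear, so it too is a matrix:
      # with column-major flattening, vec(G @ F) = kron(I_A, G) @ vec(F).
      higher_order = np.kron(np.eye(A), G)      # a (C*A) x (B*A) matrix
      vec = lambda M: M.reshape(-1, order="F")  # column-major "vec"

      print(np.allclose(vec(G @ F), higher_order @ vec(F)))   # True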

    The geometry of interaction (or Int construction) is a way of getting a compact closed category out of a monoidal category that supports recursion. The basic idea is that you can represent types as pairs of inputs and outputs, and then you can create duality by swapping inputs and outputs. Then you can represent a function space in the same way as with vector spaces. Formally, the geometry of interaction construction creates a compact closed category out of a traced symmetric monoidal category. (This is a categorification of the Grothendieck group construction, which you can also use to get integers from naturals.)

    Dan Ghica has applied this idea to circuit synthesis. Since you can’t pass around references to circuits — you can only send signals on wires – the Int construction gives you a way of representing higher-order functions without having to pass references to them.

    In deference to our host’s interests, I should mention that the combination of these ideas has applications to quantum computation: identifying closed structure in the way I outlined above gets you a long way towards Coecke and Abramsky’s work on graphical notation for quantum computation. I only know this work vaguely, though.

  23. anon Says:

    > the moderators started shooing people away

    Andrej is not a mod, but he is right about the discussion being appropriate somewhere else like this blog post.

  24. Vijay D'Silva Says:

    I’d like to be more generous to my fellow computer scientists, Scott, but recent experiences have made me believe otherwise. Some of what you say is true. I would expect (hope?) that most computer scientists are aware that standard programming language constructs and mechanisms compile into Turing machines.

    I have encountered post-PhD level computer scientists who also believed that all paradigms for structuring computation are *abstractions* of Turing machines. That everything compiles into a Turing machine eventually and has to be implemented on something like a von Neumann architecture.

    I do not agree with this view because it focuses on the result of a computation and ignores the structure of the computation. Maybe the distinction between an abstraction and an encoding is a theoretical (or pedantic) one.

    Also, I agree with staying away from Church-Turing bashing. I didn’t view that situation as an opportunity myself but was trying to explain what the debate looked like from an external point of view (sympathetic to the other side).

  25. Paul Beame Says:

    Appreciation of the implications of the Church-Turing thesis, and the proofs that give evidence for it was not always universal, even among Turing Award winners!

    When I was a grad student at Toronto, I heard an anecdote about a talk by Ken Iverson given at U of T shortly after he received the Turing Award in 1979 for his development of APL. In the talk he apparently claimed that there were programs that you could express in APL that you simply couldn’t express in any other programming language. In response to questioning from someone (Charlie Rackoff?) about this inexpressibility in other languages, he affirmed that he really meant that it was impossible to write programs in other languages to do this. The questioner brought up the simulation by Turing machines but Iverson still would not acknowledge that this refuted his claim. Things became heated and the talk became notorious as having been a complete fiasco.

  26. Somebody Says:

    @Paul #25:

    Thanks for sharing :). In Ken Iverson’s defense though: not all Turing award winners are necessarily (complexity) theorists. And just because it’s a Turing award does not mean that they are required to know the CTT. It’s “desirable” but it’s okay if they have never heard of it! And of course, when that is happening (at least in the early days of the CTT), not having a universal acceptance of the power and implications of the CTT is not that surprising. Rather, it is actually an expected event!

  27. Somebody Says:

    Ohh by the way, I just can’t help but laugh hard at the “Joy-Christian and Scott Aaronson” stuff ! This Joy guy, ohh my god. I guess Scott did well not to respond directly to his further comments.

    Scott, in honour of Joy-Christian’s “victory”, may I suggest that you change your name on your official homepage from “Scott Aaronson” to “Smarty Pants”, just for this long weekend? You’ll give him that much, right?

  28. Noam Zeilberger Says:

    I just want to repeat, though it was already said several times, that no one on that thread–or in the original article– was questioning the validity of the Church-Turing thesis in its precise semiformal mathematical form as a statement about computable functions on the naturals. The point that some of us were making is that it is unproductive and at times damaging to try to apply the CT thesis in situations beyond its original scope (analogous to the way it is disappointing to see wild claims “deduced” from Gödel’s incompleteness theorems). In some cases because these generalizations are false, as Neel explained above. And in other cases because they give a formal veneer to statements which are too imprecise to have real mathematical content. For example, I view the claim that “any reasonable language will be equivalent in expressive power to Turing machines” as such a statement: it’s true if we interpret the terms in a certain way, but that requires taking a very limited view of concepts deserving more rigorous scientific thought, such as “expressive power”.

  29. Scott Says:

    Paul Beame #25: LOL! Thanks so much for sharing that story. I suppose Iverson would feel proud that even today, folks like Wegner and Goldin are carrying on the intellectual legacy of his lecture.

  30. Scott Says:

    Noam Zeilberger #28: Thanks for clarifying! I understand your position but strongly disagree with it. Let T be the statement, “all reasonable programming languages are equivalent in expressive power.” Then I’d say that the sense in which T is true is a thousand times bigger, more profound, more surprising, more exciting, and more interesting than the various senses in which T is false.

    Now, why would I say that? Because, when I was an 11-year-old first learning how to program in BASIC, it seemed self-evident to me that T would be totally, 100% false—not imprecise, not missing some technical qualifications, but just false in the sense that Ken Iverson apparently believed it was false. That is, I took it as obvious that, once I learned a “grownup” language like C, I’d be able to write programs of a kind that were flat-out impossible to write in BASIC. I dreamed of inventing the most powerful programming language of all time, one so awesome that it would let you write programs no one could ever have written before. I thought that that would be the key to artificial intelligence. So, given those misconceptions, the Church-Turing Thesis hit me as a sort of life-changing revelation; I was far more impressed by its fundamental truth than by the various caveats (e.g., that one programming language might have better built-in graphics facilities than another, that one might support UTF16 where another only supports ASCII). Maybe I would’ve felt differently if my experience had been different—but in that case, maybe I would’ve been a completely different person!

    Incidentally, I don’t think statement T misrepresents the content of the Church-Turing Thesis, any more than “given any formal axiomatic system powerful enough to encompass natural-number arithmetic, there exist true arithmetical statements that can’t be proved in that system” misrepresents the content of Gödel’s theorem. Yes, protest when someone translates a technical claim into plain English in a way that mangles its essence—I’ve often done it myself! (“There are statements that are both true and false” is one of many common Gödel-manglings, and not even the most egregious one.) But it’s not sporting to protest when someone translates a technical claim in a way that more-or-less does capture the essence. I’ve found that one of the worst things you can do, when teaching, is to encrust everything you say with so many details and caveats that you never even get across the original claim that the details and caveats were modifying. Sure, it feels “rigorous” to talk that way, but the price is to turn science into a narrow game played with invented words—one that never dares to state firm conclusions about anything outside the game, even when (as in the case of statement T) those conclusions are something like 98% right.

  31. Smarty-pants Says:

    Somebody #27:

      Scott, in honour of Joy-Christian’s “victory”, may I suggest that you change your name on your official homepage from “Scott Aaronson” to “Smarty Pants”, just for this long weekend? You’ll give him that much, right?

    I don’t want to throw off Google, but fine—in honor of Joy, I will post to this blog under the handle “Smarty-pants” for a little while. (Smarty-pants is now registered as an official Shtetl-Optimized administrator.)

    Personally, I was amused that Joy chose this particular post to drop his comment-turd on, rather than any other. Did he think it would be on-topic, given the general theme of “endlessly-repeated, egregiously-wrong claims to have overturned major 20th-century scientific insights”?

    PS to “Somebody”: I know who you are, dude.

  32. Joy Christian Says:

    @Smarty-pants:

    “…major 20th-century scientific insights”

    LOL

    You moron, Bell’s so-called “major insight” is a joke of the past century. Do you ever read anything apart from your plumbing manuals?

    Bell claimed that for local functions of the form

    A(a, L) = +1 or -1 with 50/50 chance for any a in R^3

    and

    B(b, L) = +1 or -1 with 50/50 chance for any b in R^3,

    together with

    AB(a, b, L) = A(a, L) x B(b, L) = -1 always when b = a,

    it is mathematically impossible to reproduce correlation of the form

    E(a, b) = -a.b.

    Now for this claim to have any physical meaning at all Bell’s prescription A(a, L) has to be *complete* in the sense defined by EPR and accepted by Bell (read the title and first para of Bell’s famous paper). But this is possible if and only if the co-domain of the functions A(a, L) and B(b, L) is a parallelized 3-sphere. And when Bell’s A(a, L) and B(b, L) are thus completed, it is not only possible but inevitable that they produce the correlation E(a, b) = -a.b. Although not for idiots, the proof of this is elementary and can be found in my one-page paper as well as in my book. It is scandalous to continue to believe in Bell’s so-called theorem despite my explicit one-page proof reproducing exactly what Bell thought was mathematically impossible to reproduce.

    By the way, FYI, Lucien Hardy has now verified and confirmed my calculations in explicit detail. Don’t take my word for it. Go ahead and ask him yourself.

    Better still: A top experimentalist is now working on performing my proposed experiment. And, FYI, as I have said before, I will publicly admit that I was wrong if the result of my experiment goes against my prediction.

    Now go and resign from FQXi if you have any honor left in you.

  33. Hersch Says:

    Are you sure comments #15 and #20 really are Joy Christian? They sound like someone doing a parody of him. I mean really, “your stunt was nothing but a stunt”?

    There’s also something to be said about people interpreting your own comments as parodies of yourself.

  34. Scott Says:

    Hersch #33: Yeah, that’s Joy. He had the same repetitive, spittle-laced eloquence in his previous comments (go read them if you’ve forgotten), and even sounded similar when I met him in real life.

    And yes, this is I. Sorry if I sometimes sound like a self-parody, but what can I do, when I wake up in the morning and find people again trashing the Church-Turing thesis, Bell inequality, etc.? If they’d stop doing that for a year or so, maybe I could develop new sides to my online persona. 🙂

  35. spacetime Says:

    Scott,

    (This from a believer in Bell but not so totally in Turing.)

    How can one REALLY BE CERTAIN that any given NP-hard problem would be tractable given appropriate resources (temporal, maybe spatial)? Or am I ignorant to believe that’s what CTT maintains?

  36. Smarty-pants Says:

    spacetime #35:

    1. What exactly does it mean to be “not so totally” a believer in Turing? Do you not believe that Turing existed, or that computers exist, or … ? 🙂

    2. The original (computability) Church-Turing Thesis has nothing to do with the difficulty of NP-hard problems—it talks only about which functions are computable, not about how much time is needed. The extended (polynomial-time) Church-Turing Thesis does have something to do with that, but even there the relation is indirect: it says that the limit of what’s efficiently computable in the physical world coincides with P (or in the modern version, BQP). Now, it’s also widely and very plausibly believed that P and even BQP don’t contain NP, but those are separate mathematical conjectures.

    3. The most important point is that “How can one REALLY BE CERTAIN…?” is the wrong question in science. Outside (the settled parts of) pure math, it’s always a question of likelihoods: you can’t “REALLY BE CERTAIN” even that the earth is round, but you’re still an idiot if you entertain serious doubts on that issue. In the case of the intractability of NP-complete problems, I think a convergence of evidence from everything we’ve learned about physics makes it an extremely plausible conjecture—not as secure as the earth being round, but probably comparable to the Second Law of Thermodynamics or the impossibility of faster-than-light signaling. For more, check out my NP-complete Problems and Physical Reality essay.

  37. Ram Says:

    C’mon Scott, why do you degrade yourself by allowing Joy here?
    Please ignore him completely.

  38. Express Says:

    Scott, you wrote that the CT thesis is “the claim that any reasonable programming language will be equivalent in expressive power to Turing machines”. The problem with this statement is that it’s too vague, since it relies on an intuitive notion of expressive power. It would raise the quality of our exchange if the participants were clear what they meant by expressive power.

  39. Garl Eskimo Says:

    Amusing post and an entertaining discussion!
    I also mostly agree with Scott about CTT.

  40. Noam Zeilberger Says:

    Scott #30: observe that your statement T, “all reasonable programming languages are equivalent in expressive power”, is different from the one you originally wrote (let’s call it S), “any reasonable programming language will be equivalent in expressive power to Turing machines”. I find T much less objectionable than S, because it is transparently an informal, open-ended statement. And there are clearly deep and important senses in which T is true, of which the Church-Turing thesis provides *one* partial explanation. On the other hand, to assert T as a question *settled* decades ago in theoretical computer science is to display profound ignorance of the problem of semantics of programming — and in particular, ignorance of our ignorance, because we don’t *know* what are the right mathematical definitions of “programming language” or of “expressive power” upon which such an assertion could be rigorously evaluated.

    Suppose that we replaced “reasonable programming languages” with “human languages”, and asserted:

    T’: all human languages are equivalent in expressive power

    Linguists and philosophers have debated this question for a long time (it’s a sort of contrary to the Sapir-Whorf hypothesis). Would you be so confident as to say that “the sense in which [T’] is true is a thousand times bigger, more profound, more surprising, more exciting, and more interesting than the various senses in which [T’] is false”? Perhaps. But you would have to acknowledge that this is merely an inspiring *insight* you have (based on whatever evidence) into a question which is not fully formed, and which will probably not be fully formed within our lifetimes (though indeed, perhaps this insight could lead you to taking steps towards clarifying the question).

    I view statement T as on par with statement T’.

    The reason I object strongly to S is because it gives the appearance of providing a nice and easy “solution” to T (just apply transitivity and symmetry!), when in fact it merely scratches the surface of a much deeper and more inspiring *open question*.

  41. Sam Tobin-Hochstadt Says:

    In fact, the notion of “expressive power” for programming languages can be made precise, and it shows that not all programming languages have the same expressive power. Further, these differences are not about silly things like ASCII vs Unicode or graphics libraries, but about the programmer’s ability to abstract or express powerful concepts locally. The key work in this area is Matthias Felleisen’s paper “On the expressive power of programming languages”, available at http://www.ccs.neu.edu/racket/pubs/#scp91-felleisen .

  42. David Speyer Says:

    To add a concrete example to Scott’s discussion: I definitely remember taking a programming course in PASCAL when I was 11 or 12 and asking the instructor how to make an array whose size expanded as I needed it to. I had written a game of life simulator, and I wanted to make it so that, if my pattern grew to the edge of the board, the board would expand to give it more room. The instructor told me that I would need a language with pointers, like C, to do that and I couldn’t do it in PASCAL. (Oddly, I just googled for it now, and it turns out that PASCAL does have pointers. I guess he didn’t know?)

    Undeterred, I wrote a function which generated a game of life instance of size N and, if that instance reached the edge of the board, recursively called itself to generate an instance of size 3N. (Not surprisingly, this ran slowly and produced frequent stack overflows.) I felt very relieved that the designers of PASCAL, when trying to lock me away from unlimited memory, had forgotten to ban recursion. But I still had the idea that you could have a language which was reasonably powerful but didn’t allow dynamic memory access; I just thought that the PASCAL designers had screwed up.

    I also remember a number of years later reading the Art of Computer Programming and coming across the explanation of how any code which can be written recursively can be written iteratively. I was completely disbelieving and started thinking of all the recursions I had written, and going through Knuth’s explanation of how to unroll them into iterative code. It was mind blowing!
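
    A minimal illustration of the unrolling Knuth describes (a toy Python example; the function and names are chosen only for illustration): the same computation written recursively, and again with an explicit stack and a loop.

      def nested_sum_recursive(x):
          """Sum all numbers in an arbitrarily nested list, recursively."""
          if isinstance(x, list):
              return sum(nested_sum_recursive(item) for item in x)
          return x

      def nested_sum_iterative(x):
          """The same computation, unrolled into a loop with an explicit stack."""
          total, stack = 0, [x]
          while stack:
              item = stack.pop()
              if isinstance(item, list):
                  stack.extend(item)        # defer the children instead of recursing
              else:
                  total += item
          return total

      data = [1, [2, [3, 4]], [], 5]
      print(nested_sum_recursive(data), nested_sum_iterative(data))   # 15 15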

    So I really do think it is not obvious that “low level” languages can achieve the same things that “high level” ones can.

  43. Smarty-pants Says:

    David Speyer #42:

      So I really do think it is not obvious that “low level” languages can achieve the same things that “high level” ones can.

    I completely agree, it’s not obvious — that’s what makes it not merely a truth but a big, important one!

    Regarding array size: even if Pascal had provided neither pointers nor recursion, you could also define an array to have the biggest (fixed) size allowed, then use that array as a virtual memory for the rest of the computation. In that case, the issue would become essentially the same as an issue that’s always present when interpreting the Church-Turing Thesis: that in the real world, we only ever have finite memories, not unlimited Turing-machine tapes. (There is, of course, the quantitative issue that some languages let you access more of the memory that is there than others. QBASIC only lets you access 64k!)

    Actually, I have (arguably) some even more extreme examples of hacky workarounds to falsely-believed limitations of a programming language. When, as a 12-year-old, I wanted to write a music-playing program in QBASIC, I didn’t even know the concept of an array, let alone a dynamically-allocated array. So I’d store the entire song in one string (which BASIC treats as a single variable, with special commands for pulling out substrings that I happened to know). A friend of mine, who also didn’t know about arrays, did something even more extreme. He encoded an array into the prime factorization of a single integer, and factored the integer every time he wanted to read out an array element. (As I’d later learn, this wasn’t so different from the hacky solution Gödel found in 1931!)
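
    A rough Python rendering of that prime-factorization trick (helper names invented for illustration; practical only for tiny “songs”, which is part of the joke): store the list a[0], a[1], a[2], … as the single integer 2^(a[0]+1) · 3^(a[1]+1) · 5^(a[2]+1) · …, and “index” into it by trial division.

      def nth_prime(i):
          # The i-th prime, counting from 0 (0 -> 2, 1 -> 3, ...); naive but enough here.
          count, candidate = -1, 1
          while count < i:
              candidate += 1
              if all(candidate % d for d in range(2, candidate)):
                  count += 1
          return candidate

      def store(values):
          # Pack a list of non-negative integers into one integer, Gödel-style:
          # entry i becomes the exponent of the i-th prime (plus 1, so zeros survive).
          n = 1
          for i, v in enumerate(values):
              n *= nth_prime(i) ** (v + 1)
          return n

      def read(n, i):
          # Recover entry i by repeated trial division -- i.e., by factoring.
          p, count = nth_prime(i), 0
          while n % p == 0:
              n //= p
              count += 1
          return count - 1

      song = [7, 0, 3, 12]
      packed = store(song)
      print(packed, [read(packed, i) for i in range(len(song))])   # recovers [7, 0, 3, 12]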

  44. Smarty-pants Says:

    Express #38, Noam Zeilberger #40, Sam #41: By “expressive power,” I meant probably the first formalization that a reasonable person would think of — namely, which mappings from inputs to outputs can this programming language implement? (Where we make allowances for the finiteness of the accessible memory, and also for transformations of the inputs and outputs that are “trivial” but beyond the capabilities of the programming language for, e.g., character encoding reasons.) I agree that there are other notions of “expressive power” according to which different languages are not equivalent. But as I said in comment #30, I regard such inequivalences as obvious, even just from surface features (e.g., some programming languages support objects, others don’t). It’s the equivalence in the face of such differences that’s deep, surprising, and worth shouting from the rooftops! Indeed, it’s only Church-Turing equivalence that lets us compile the “more expressive” languages (the ones with objects, etc.) down to machine code in the first place.

  45. Noam Zeilberger Says:

    Smarty-pants #44: I am well aware that this was the notion of expressive power you had in mind. Are you aware that many people (across different disciplines in computer science, mathematics, philosophy and linguistics) have spent a lot of time trying to understand and address the limitations of the “first formalization that a reasonable person would think of”?

    I urge you to spend a bit of time trying to formally define “inputs” and “outputs”, and then think about Neel’s counterexamples (#22) to the generalized CT thesis.

  46. Travis Says:

    Smarty-pants #43: Your anecdotes remind me of when I didn’t know about arrays when learning javascript back in middle school. My workaround was to create a bunch of variables foo1, foo2, foo3, …, and access arbitrary ones with eval(“foo” + i).

  47. Anon Says:

    His name is Peter Wegner, btw. Or just “Professor Wegner” to you, Smarty-pants!

  48. Express Says:

    Smarty-pants #44: Your proposed formalisation is immediately false, because non-terminating programs, such as the operating systems of modern computers, are not describable in this way.

  49. Smarty-pants Says:

    Anon #47: Sorry, fixed.

    Express #48: No, I beg to differ. If it were merely the set of terminating programs (i.e., mappings from finite inputs to finite outputs) that didn’t care whether we used Pascal, C++, Java, Python, or Turing machine to express the mapping, that would already be striking enough. But the same equivalence extends even if we generalize our notion of “input” and “output” so that both can be infinite strings, or include interaction with an oracle (including a dynamic oracle that changes as the result of each interaction). How much more do you want? 🙂

  50. Jiav Says:

    Scott, are there two (Turing-equivalent) languages such that we cannot translate from one to the other in linear or low-polynomial time?

  51. asdf Says:

    There are three types of programs one can write in a Turing-complete language:

    1) Those known to halt, e.g. do some finite calculation and print the answer
    2) Those known not to halt, e.g. an OS or server that runs “forever” until one pulls the plug or sends a “stop” command
    3) Those for which it is not known whether they halt, e.g. an unbounded search for a counterexample to Goldbach’s conjecture (a sketch follows at the end of this comment).

    There is some sentiment in the programming-language community that Turing-completeness is overrated and that type 3 (above) programs aren’t useful all that often. By giving up the ability to write type 3 programs we can gain various other nice benefits from a PL perspective.

    See: http://www.jucs.org/jucs_10_7/total_functional_programming

    David Turner, “Total Functional Programming”.
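
    For concreteness, here is a minimal Python sketch of a type-3 program: an unbounded search for a counterexample to Goldbach’s conjecture. Nobody knows whether this loop ever terminates, and that is the whole point.

      def is_prime(n):
          return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

      def goldbach_counterexample_search():
          # Halts iff some even number > 2 is NOT the sum of two primes.
          # Whether that ever happens is exactly what nobody knows.
          n = 4
          while True:
              if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
                  return n      # a counterexample (none has ever been found)
              n += 2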

  52. Neel Krishnaswami Says:

      By “expressive power,” I meant probably the first formalization that a reasonable person would think of — namely, which mappings from inputs to outputs can this programming language implement?

    Given this simple idea of expressive power, the claim that all Turing-complete languages are equivalent in expressive power is false. It is only true when inputs and outputs are numbers (or some other simple inductive type). I don’t mean false in some esoteric philosophical sense, either: I mean there are counterexamples, some of which I’ve posted in this very thread.

    Note that higher-type inputs and outputs have a lot of practical applications, too, since they form the mathematical foundation of linking and loading, without which it is basically impossible to build a computer system. So the fact that the higher-type generalization of the Church-Turing thesis fails is of immense interest, both theoretically and practically.

  53. Scott Says:

    Neel #52: I disagree with the idea that we can or should worry about “higher types” when formulating what the Church-Turing Thesis is supposed to mean. From the perspective of the end user, a computer program is something that takes various strings of information as input and produces other strings as output. The entire concept of “higher types” only makes sense from the perspective of a veteran programmer (or programming theorist)—i.e., someone who’s already internalized the Church-Turing Thesis in the sense that I’m talking about, and now wants to write code that’s more modular, linkable, etc. I’d say that, when formulating a principle as basic and enduring as the Church-Turing Thesis, we have no right to weigh it down with concepts that only make sense “internally,” within certain kinds of programming languages. To do so is to beg the question, no less than with my toaster or character-encoding examples! We ought to be able to explain the Church-Turing Thesis to a 12-year-old who hasn’t yet learned any programming language, and wants to know what can be done in one language that can’t possibly be done in another. I would explain to such a 12-year-old that some programming languages seem more powerful than others: for example, in BASIC, there’s no way to write a function that takes another function as input and modifies it, whereas in LISP there is. But the amazing truth is that in BASIC you can write a LISP interpreter, then have all the higher-typed fun you want! And more deeply, were there no way to “unravel” sophisticated languages to less sophisticated ones like that, no LISP interpreters could ever have been written in the first place.
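
    To make the “write an interpreter, then have the higher-typed fun” point concrete, here is a minimal sketch (in Python rather than BASIC, stripped to the bone, with all names invented for illustration): a few dozen lines interpret a Lisp-like language with first-class, higher-order functions, and the closures of the interpreted language are represented as plain data rather than as host-language functions.

      import operator as op

      def tokenize(src):
          return src.replace("(", " ( ").replace(")", " ) ").split()

      def parse(tokens):
          tok = tokens.pop(0)
          if tok == "(":
              expr = []
              while tokens[0] != ")":
                  expr.append(parse(tokens))
              tokens.pop(0)                           # drop the closing ")"
              return expr
          return int(tok) if tok.lstrip("-").isdigit() else tok

      GLOBAL = {"+": op.add, "-": op.sub, "*": op.mul, "<": op.lt}

      def evaluate(x, env=GLOBAL):
          if isinstance(x, str):                      # variable reference
              return env[x]
          if isinstance(x, int):                      # literal
              return x
          if x[0] == "if":                            # (if test conseq alt)
              _, test, conseq, alt = x
              return evaluate(conseq if evaluate(test, env) else alt, env)
          if x[0] == "lambda":                        # (lambda (params...) body)
              return ("closure", x[1], x[2], env)     # a closure is just data
          fn = evaluate(x[0], env)                    # application
          args = [evaluate(a, env) for a in x[1:]]
          if isinstance(fn, tuple) and fn[0] == "closure":
              _, params, body, defenv = fn
              return evaluate(body, {**defenv, **dict(zip(params, args))})
          return fn(*args)                            # built-in operator

      # "twice" takes a function and returns a new function: higher-typed fun.
      prog = "((lambda (twice) ((twice (lambda (n) (* n 3))) 5)) (lambda (f) (lambda (x) (f (f x)))))"
      print(evaluate(parse(tokenize(prog))))          # prints 45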

    Now, I don’t mind if someone formulates a “Higher-Type Church-Turing Thesis” and then knocks it down. But they should make it clear that their new, false thesis has nothing to do with what Turing was writing about in 1936, or with (what I think is) the clearest way to understand the Church-Turing Thesis today: as a falsifiable claim about what kinds of computers can and can’t be built in physical reality.

  54. Scott Says:

    Jiav #50:

      Scott, are there two (Turing-equivalent) languages such that we cannot translate from one to the other in linear or low-polynomial time?

    That’s an extremely interesting question! Certainly we could construct such pairs of languages artificially (for example, by having one of them require unary encoding, or by defining a “programming language” whose programs are obtained by starting with C programs, interpreting them as giant integers, and then factoring the integers—or applying any other invertible mapping of your choice). But I can’t think of any “real-life” examples of inherently-inefficient translation between programming languages. Even if you want to convert (say) a quantum Turing machine to a classical one, the conversion itself can be perfectly efficient (linear-time or close to that). It’s only running the converted program that will be exponentially slow! Can anyone else suggest an example?

  55. Express Says:

    SmartyPants #49: I see you’ve already moved from one model, TMs with finitary inputs and outputs, to another, oracle-TMs with infinitary tapes. This leads to further complications:

    1. Which infinitary computations would you allow as computable? What is an accepting infinitary computation?

    2. What kind of infinities are you permitting?

    3. What is the interpretation of an oracle tape? Is it a channel for interaction (e.g. like a name in a process calculus)? In this case your model cannot express computation with dynamically varying numbers of communication channels. If you code up communicating channels on a single oracle tape, exactly how is this done? Or are oracle tapes modeling input? In that case, what are the outputs?

  56. Neel Krishnaswami Says:

    Scott, the concept of type is not tied to a programming language, or indeed even to computation — they were invented before computers were! (Indeed, neither the lambda calculus nor your favorite universal Turing machine has an intrinsic language-specific notion of type.) Types serve to structure the purely mathematical concept of equality, which is the concept upon which your formalization of expressive power relies.

    The interpreter loop of most languages today lets you build arbitrary values, and so those bright 12-year-olds can construct and interact with non-elementary values (such as file handles, processes, and function values). The natural notion of equality for these gadgets is a higher type one, which makes your claim about expressive power equivalent to the false higher-type Church-Turing thesis, and not to the original Church-Turing thesis.

    It is accurate to say that you can write an interpreter for one Turing complete language in any other, but we can’t extend that to a general statement about expressive power. But in my experience, there’s no need to — bright 12 year-olds usually find interpreters intrinsically fascinating, and we don’t need to act like used-theorem salesmen.

  57. Jiav Says:

    Scott #54

    I don’t get how these two artificial languages can prevent efficient translation. Of course a “word to word” translation would be inefficient, but it seems efficient strategies still exist for these two cases (e.g. for the unary language, replace each number above some threshold by a (far shorter) subroutine that will produce this number while running the program; for the language based on invertible mapping, add the inefficient mapping as a subroutine to be run later). Am I missing something here?

  58. Lou Scheffer Says:

    Jay #50 and Scott #54,

    I believe such language pairs (no polynomial translation) exist, and we are currently using them!

    In C++, programs with template specialization can take exponential time to compile. This is illustrated by the joke C++ program that, given any number, prints (correctly) “Prime” or “Composite” in small and constant time. The trick is that the number must be specified at compile time, and the compilation may take exponential time.

    If you tried to translate this to a language without templates (such as original C or Pascal), a literal translation will also take exponential time, and end up with one of two programs, one that prints “Prime” and one “Composite”.

  59. Jay Says:

    Lou Scheffer #58

    Thanks, but you’re actually answering “Is there a compiler that doesn’t work well on some input?” while the question is closer to “Is it impossible to get a compiler that works well in all cases?”. The former is indeed easier: “Loop for an exponential time, then do the actual job”. 😉

  60. Scott Says:

    Express #55: The nice thing about Church-Turing being a thesis rather than a theorem is that I don’t need to answer your questions! Well, I can answer the question about infinity: I meant Aleph0. But I don’t have to explain what “acceptance” means for infinitary TMs, etc. etc.—I can leave all that stuff to you! I can then challenge you to give any way of formalizing those concepts, such that what you can or can’t compute depends interestingly on the choice of reasonable programming language—i.e., in a way that can’t be trivially remedied by fiddling with the definitions.

    (So for example, if we restrict to polynomial-time computation, then quantum computers seem able to do interestingly more than classical computers, not merely more because we tie the classical computer’s hands behind its back by restricting the form of its input and output. That’s an example of a “real” separation between computational models.)

  61. Scott Says:

    Jiav #57: This is exactly the issue. With virtually any “practical” programming language, you can save the task of solving hard computational problems for runtime, and thereby make the “translation” itself extremely efficient. That’s why, in defining my artificial language, I stipulated that the integer had to be factored even to write the program in the first place. You’re not allowed to save the factoring for runtime; if you feed the compiler a composite number then it returns an error message.

  62. Scott Says:

    Neel #56: I fear we’re never going to agree, simply because we start from such different intellectual traditions. Your viewpoint—that the logicians’ abstract concept of a “type” comes prior to talking about computation—is one that’s extremely alien to my way of thinking. (If it were so, how could I have spent a whole career worrying about computation without ever needing ‘higher types,’ as far as I remember or know? 🙂 ) For me, the notion of computation comes before types, before categories, before groups, rings, and fields, even before set theory. Computation is down at the rock bottom, along with bits and integers. Everything else, including types, comes later and is built on top.

    As I said, I don’t begrudge the type theorists if they want their “higher-level” versions of the Church-Turing Thesis that are false for higher-level reasons. All I ask is that they let the original Thesis be, for the people who think about computation as an actual, potentially-physical process acting on bits, rather than an abstract transformation of types.

  63. Jay Says:

    Scott #61

    Thanks, I see it now! Here is a reformulation to avoid this odd trick:

    Is there a Turing-complete language such that one cannot translate it into a standard Turing machine in linear or low-polynomial time?

  64. John Sidles Says:

    Jay asks: “Is there a Turing-complete language such that one cannot translate it into a standard Turing machine in linear or low-polynomial time?”

    Turing machines whose atomic instructions decide cryptic languages cannot be translated into any standard programming language. Whether cryptic languages exist is an open problem.

  65. Scott Says:

    Jay #62: Once again, we could define such a language artificially. For example, let C’ be the language whose programs are obtained by starting with C programs, then applying a nasty one-way function. To run a C’ program, you have to invert the OWF, then run the original C program.

    An obvious problem here is that C’ programs not only can’t be converted back to C programs in polynomial time (assuming that the OWF is indeed hard to invert), they can’t even be run in polynomial time! But this is inherent: if the C’ programs could be run in polynomial time, then whatever we were using to run them, we could use that same thing to get a polynomial-time translator back to C programs (or Turing machines, etc).

  66. Noam Zeilberger Says:

    Scott #61, I think you should keep an open mind. I can assure you that many type theorists think about computation as an actual, potentially-physical process, and are interested in type theory/category theory as a tool to help them organize these thoughts. (Neel #22 already mentioned a few of them, and heck, you have been speaking with several more on this thread!) Type theory does not force you to put types before computation, or to choose high-level languages over low-level ones—it is just a mathematical theory for organizing certain widely recurring patterns in computational systems.

    As for why you haven’t yet found a need for worrying about type theory…perhaps because it is a well kept secret? 🙂

  67. Fernando Pereira Says:

    “For me, the notion of computation comes before types, before categories, before groups, rings, and fields, even before set theory. Computation is down at the rock bottom, along with bits and integers. Everything else, including types, comes later and is built on top.” Kronecker is supposed to have said (a German version of) “God made the integers; all else is the work of man.” Fans of Dedekind and Cantor won’t forgive him to this day. There’s more than a whiff of that old but unresolved feud in the present discussion.

  68. Neel Krishnaswami Says:

    Scott #61: I don’t disagree with the Church-Turing thesis at all — it’s just that the common explanation of it as the claim that all Turing-complete languages have equal expressive power is incorrect. That’s because the claim about expressive power is actually equivalent to a claim (the higher type Church-Turing thesis) that has been proven false.

    You’re free to think about computation as acting on bits — I do, too — but for those bits to do us any good, they have to actually represent something (e.g., data structures). And that’s where the error in too-glib claims about expressive power lies.

    Let X be some mathematical set, and let Ω represent basic data (e.g., bit strings, numerals). Then we can define an implementation relation as a set ⊧ ⊆ Ω × X, so that ω ⊧ x is read to mean that ω ∈ Ω is a representation of x ∈ X. For this to be a proper representation relation, we’ll need some additional conditions on ⊧, such as ω ⊧ x and ω ⊧ x’ implies x = x’, and for each x ∈ X to have an ω ∈ Ω such that ω ⊧ x, but those are mostly intuitive and you can probably guess them.

    Then, the right formalization of expressive power is, given two representable sets (X, ⊧) and (Y, ⊢), which functions f : X → Y can be realized by a program p, such that for every x ∈ X and ω ⊧ x, p takes input ω and computes an output ω’ such that ω’ ⊢ f(x)?

    Different models of computation give different sets of expressible functions. So if you think about data abstraction at all, then the expressive power of different Turing-complete programming languages can indeed differ.
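
    As a small concrete illustration of these definitions, here is a Python sketch: bit-strings represent natural numbers once in binary and once in unary, and we check that a string-transforming “program” realizes f(x) = x + 1 with respect to those two relations. The finite test universe and the particular encodings are illustrative choices only, not part of the formalism above.

      # omega |= x : the string omega represents the number x in binary
      def binary_rep(omega: str, x: int) -> bool:
          return omega == bin(x)[2:]

      # omega |- y : the string omega represents the number y in unary
      def unary_rep(omega: str, y: int) -> bool:
          return omega == "1" * y

      def successor_program(omega: str) -> str:
          """A 'program' on raw strings: read binary, add one, write unary."""
          return "1" * (int(omega, 2) + 1)

      def realizes(program, f, in_rep, out_rep, xs, omegas) -> bool:
          """For every x in xs and every omega in omegas with in_rep(omega, x),
          the program's output must be related by out_rep to f(x)."""
          return all(out_rep(program(omega), f(x))
                     for x in xs
                     for omega in omegas
                     if in_rep(omega, x))

      xs = range(1, 16)
      omegas = [bin(x)[2:] for x in xs]        # a finite test universe of inputs
      print(realizes(successor_program, lambda x: x + 1,
                     binary_rep, unary_rep, xs, omegas))   # True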

  69. Alexander Vlasov Says:

    Regarding the contest for the weirdest way to implement arrays: before I learned about arrays in Python, I wrote something like

      if x == 1: y = a1
      elif x == 2: y = a2
      …

    Regarding the initial question, the implementation of interactivity may indeed resemble the implementation of arrays, because instead of a single input we may write an array:

    Input = (initial input, input after first interaction, input after second interaction, …)

    Yet there may be a conceptual question here. Let us consider the simplest interactive TM, in which only one interaction is allowed: the “1-interactive TM” starts with some tape x, calculates f(x), waits for some new input y, and finally calculates g(f(x), y). Certainly, we may instead use a usual TM calculating G(x, y) = g(f(x), y), or simply G(Y) with Y = (x, y).

    Sorry for repeating well-known stuff, but recall that y is the reaction of some “environment” to the output f(x). Even for the trivial reaction y = f(x), we would have to suppose that such an interactive TM is equivalent to a usual TM only if that TM can be given the input Y = (x, f(x)), even though at that moment f(x) has not yet been calculated. But if we can obtain the pair (x, f(x)) from some mysterious source for any function f (since we did not place any limitation on that function), then what is the reason for the concept of a TM at all?
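
    A bare-bones rendering of the 1-interactive machine just described, next to the ordinary machine G that absorbs the single interaction into its input; the particular f, g, and environment below are arbitrary placeholders.

      def f(x):                 # first phase of the interactive machine
          return x * 2

      def g(fx, y):             # second phase, after the environment replies
          return fx + y

      def one_interactive_tm(x, environment):
          fx = f(x)             # compute f(x) and "emit" it
          y = environment(fx)   # wait for the environment's reaction to f(x)
          return g(fx, y)

      def G(x, y):              # the equivalent non-interactive machine
          return g(f(x), y)

      env = lambda fx: fx + 1                   # some fixed environment strategy
      print(one_interactive_tm(3, env))         # 13
      print(G(3, env(f(3))))                    # 13, given the pair (x, y) up front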

    In fact, this mysterious-source argument resembles argumentation against some ancestors of the idea of computability, used around 1929 (before the invention of the TM, it seems) by J. E. Littlewood. His idea was to define a function as “the pair (x, y)” rather than as “the method of obtaining y from x”.

  70. Jay Says:

    Scott #65

    >C’ programs not only can’t be converted back to C programs in polynomial time (assuming that the OWF is indeed hard to invert)

    C’ programs can of course be translated back efficiently, for the reasons you explained yourself at length: there is no need to invert the OWF; just construct a C program that says “invert the OWF, then run the result”.

    PS: my answer #59 to Lou Scheffer is awaiting moderation

  71. Scott Says:

    Jay #69: Duhhhh, yes, you’re right! Provided the translation from C to C’ is computable, some translation in the other direction can be done in linear time. (But of course, the retranslated C program will in general be horrendously less efficient than the original one—if you want a program of comparable efficiency, then the translator needs to solve a hard computational problem.)

  72. Scott Says:

    Fernando Pereira #67:

      “For me, the notion of computation comes before types, before categories, before groups, rings, and fields, even before set theory. Computation is down at the rock bottom, along with bits and integers. Everything else, including types, comes later and is built on top.” Kronecker is supposed to have said (a German version of) “God made the integers; all else is the work of man.” Fans of Dedekind and Cantor won’t forgive him to this day. There’s more than a whiff of that old but unresolved feud in the present discussion.

    It gives me no pleasure to agree with Kronecker about the exclusive divine creation of the integers, given Kronecker’s totally unjustified persecution of Cantor. Cantorian set theory might indeed be a “work of Man,” but it’s one of the greatest works of Man ever, and Kronecker was foolish if he didn’t recognize that!

    Actually, there are echoes of lots of other debates here, in what some might consider a relatively-narrow debate about the CT Thesis. For example, I’ve seen some variant of the following conversation dozens of times in my life:

    “Naïve” person: Given an arithmetical statement like Goldbach’s conjecture, clearly it’s either true or it’s false, even if we can never decide which.

    “Sophisticated” person: Haha, you don’t understand logic! Even the truth or falsehood of Goldbach’s conjecture could be model-dependent—it might be “true” for the standard integers but “false” in a nonstandard model of Peano arithmetic.

    Personally, I think the “naïve” person is right here and the “sophisticated” person is wrong. For when we use words like “true” or “false” in arithmetic, there’s a presumption that we’re talking about the standard, pre-model-theoretic integers (and that we already know what those are—otherwise, how could we even be discussing math?). The nonstandard models of PA are best thought of as formal artifacts of Gödel’s theorem—i.e., they simply reflect the fact that no recursively-enumerable set of first-order axioms can capture the set of true statements about those standard, pre-model-theoretic integers. (For whatever it’s worth, this was also Gödel’s view.)

    Now, in a precisely analogous way, I’d also say that when we talk about computation and the Church-Turing Thesis, unless we specify otherwise, there’s a presumption that we mean the “naïve,” pre-type-theoretic kind of computation that maps bit-strings to other bit-strings—not the kind that acts on abstract “higher-level objects” such that we’re not allowed to access those objects’ representations as bit-strings. Obviously Neel and Noam disagree with me here.

  73. Itai Bar-Natan Says:

    #50, #54:

    As a matter of fact, there is an industry-used programming language that is Turing-complete, but into which you cannot efficiently compile programs. Programs in this language consist of JavaScript programs signed with Google’s private key. Note that there is a polynomial-time compiler; it’s just not efficient in practice.
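
    Operationally, a sketch of what such a language looks like (with a locally generated Ed25519 key standing in for Google’s, and Python’s cryptography package assumed to be installed); producing a new valid program without the private key amounts to forging a signature.

      # A "program" is a pair (JS source, signature); the runtime refuses to run
      # anything whose signature does not verify. The key here is generated on the
      # spot purely for illustration.
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      private_key = Ed25519PrivateKey.generate()     # stands in for Google's key
      public_key = private_key.public_key()          # what the runtime ships with

      source = b"console.log(6 * 7);"
      signature = private_key.sign(source)           # "compiling" into the language

      def run_if_signed(js_source: bytes, sig: bytes) -> None:
          try:
              public_key.verify(sig, js_source)
          except InvalidSignature:
              raise RuntimeError("not a valid program in this language")
          print("would now hand to a JS engine:", js_source.decode())

      run_if_signed(source, signature)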

  74. John Sidles Says:

    Here is a complexity-theoretic riff upon Scott’s #72, following Scott’s reasoning rather closely, but with the argument transposed one full octave lower upon the practical scale, so as to sound themes that are due very largely to Doron Zeilberger’s Opinion 125: “Now that Alan Turing turned 100-years-old, it is time to have a Fresh Look at His Seminal Contributions, that did lots of Good But Also Lots of Harm”

    —————–
    Chanting the Light of Insight

    “Actually, there are echoes of lots of other debates here, in what some might consider a relatively-narrow debate about the CT Thesis. For example, I’ve seen some variant of the following conversation dozens of times in my life:”

    ‘Fundamentalist’ person: “The very notion of a language’s complexity class is flawed, since only oracles can assess the runtime complexity even of concrete Turing machines.”

    ‘Naïve’ person: “Haha, you don’t understand algorithmic complexity theory! Given that a Turing machine is in P, clearly it has a runtime exponent, even if we can never determine that exponent.”

    “Personally, I think the “Fundamentalist” person is right here and the “Naïve” person is wrong. For when we use exponents “n^1” or “n^3” in algorithmic complexity theory, there’s a presumption that we’re talking about provable runtime exponents (and that we have the proofs at-hand — otherwise, how could we even be discussing math?). The standard models of complexity theory are best thought of as formal artifacts associated to oracles — i.e., they simply reflect the fact that P contains cryptic languages that are recognized efficiently by no Turing machine having a decidable runtime exponent. (For whatever it’s worth, this was (?) (is ?) also the view of Hartmanis/Zeilberger.)”

    “Now, in a precisely analogous way, I’d also say that when we talk about computation and algorithmic efficiency, unless we specify otherwise, there’s a presumption that we mean the “naïve,” pre-oracular kind of Turing machine whose runtime properties are explicitly proved, such that we access these machines as concrete bit-strings whose properties we rigorously grasp. Obviously Scott disagrees with me here! :)”

    ——————

  75. Vlad Says:

    Scott #62: I cannot comment on the CTT, but in regard to the Goldbach conjecture, is it possible that it sheds light on the inadequacy of our intuitive ideas about integers? If so, I guess the “sophisticated” person may have a point…

  76. Jon Sneyers Says:

    > Scott, are there two (Turing-equivalent) languages such that we cannot translate from one to the other in linear or low-polynomial time?

    It is always possible to translate between two Turing-equivalent programming languages in linear time. Call the languages A and B, and suppose you want to translate A into B. Take a compiler from A to Turing machines, written as a TM program (this must exist); call it C_A. Now take a TM simulator written in B (this must also exist); call it TM_B. To translate an A program P_A into a B program, just output the B program TM_B(C_A(P_A)), i.e., a program that feeds the literal string P_A to the fixed compiler C_A and then simulates the resulting TM with TM_B. Since TM_B and C_A are fixed, the output’s length is linear in that of P_A.
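
    A sketch of this construction, with B taken to be Python itself and a_interp naming a hypothetical fixed-size interpreter for language A (it must exist, but is not written out here); the translator just embeds the A source as a string constant, so the output grows linearly with the input.

      # Emit a B (Python) program of the form: fixed interpreter + literal A source.
      FIXED_PRELUDE = "import a_interp    # hypothetical fixed-size interpreter for A\n"

      def translate_A_to_B(p_a_source: str) -> str:
          """Runs in time (and output length) linear in len(p_a_source)."""
          return (FIXED_PRELUDE
                  + "SOURCE = " + repr(p_a_source) + "\n"
                  + "a_interp.run(SOURCE)\n")

      p_a = "PRINT 2+2"                       # some program in language A
      p_b = translate_A_to_B(p_a)
      print(p_b)
      print(len(p_b), "characters: a constant overhead plus the embedded source")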

    For typical imperative programming languages there are also more direct translations that are still linear time, because you can essentially implement each statement of one language in a couple of lines of the other. For languages in other, higher-level paradigms a direct translation is not possible, or at least not known; e.g., you can’t convert LISP to Prolog in a direct way, and you’ll essentially have to write an interpreter for one language in the other.

    The statement “all reasonable programming languages are equivalent in expressive power” crucially depends on how you define “reasonable”: there are a lot of interesting and useful programming languages that are less than Turing-complete by design, e.g. because they want to keep termination decidable. There are also (theoretical) “programming languages” that are stronger than Turing machines (e.g. they can compute with infinite data or take infinitely many steps); obviously these have no practical implementation, but they are still interesting to study. So in a way one could consider those languages “reasonable” too, but then the statement becomes false. For clarity, it’s probably better not to say “reasonable programming languages”, but rather something like “programming languages that you can execute (i.e. there is a compiler or interpreter for them) and that aren’t too weak (i.e. they have something like recursion or unbounded loops, basic arithmetic (increment and decrement are enough), and conditional branching).”

  77. Lou Scheffer Says:

    This is somewhat off topic, and if the moderator cans it I’ll understand, but this debate brings to mind one of the more interesting differences between the hard sciences and other fields. This occurs when you firmly believe A, someone makes a compelling argument, and within a few seconds you firmly believe not-A, to the point of arguing for not-A with even more vigor than you used for A just a few seconds ago.

    I mention this here since this happened to me with this exact question. I was largely self-taught in programming, and the first three languages I learned were Fortran, assembler, and APL. It was obvious to me they had very different capabilities. But then one day someone pointed out that each of them was plenty capable of running an instruction set emulator, and hence could run not only these languages, but *any* language that could run on *any* other computer. Immediately language choice became convenience and not capability.

    Likewise I knew there were different sizes of infinity, and assumed that one was for integers, one for points on a line, the next one for points on a plane, then points in 3D space, and so on. Someone who knew math told me this was wrong, and I did not believe them – it was obvious that planes had more points than lines. They trotted out the old interlace-the-digits argument. Immediately it was clear that, although counter-intuitive, the number of points in spaces of various dimensions was exactly the same.
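
    The trick, sketched for finite decimal strings (the full argument needs a little care with trailing 9s, which this toy ignores): a pair of points in the unit square maps to a single point in the unit interval by alternating digits.

      def interlace(x_digits: str, y_digits: str) -> str:
          # alternate the digits of the two coordinates
          return "".join(a + b for a, b in zip(x_digits, y_digits))

      print("0." + interlace("1415", "7182"))   # (0.1415, 0.7182) -> 0.17411852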

    In yet another case, when structured programming was first introduced, the instructor made the outlandish claim that any algorithm could be rewritten to give *EXACTLY* the same results with just while loops, if-then-else, and returns at the bottom of routines. Having looked at (and written) lots of spaghetti-code Fortran, I thought this could not be true. Sure, it might be preferable, and you could write similar routines, but I could not imagine that all the junk I had seen, with branches straight into the middle of nested overlapping loops, and returns eight levels deep in if statements, and so on, could be *exactly* reproduced with this limited vocabulary. The other students were skeptical as well. However the instructor pointed out that you could mechanically transform the code by numbering the statements, introducing a virtual program counter, and having each statement specify its successor. Then you just need an if-then-else chain inside a big while loop to execute them all. I was immediately convinced and from then on helped the instructor convince others.
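
    The transformation, sketched in Python on a contrived GOTO-style routine: every pattern of jumps reduces to one while loop, an if/elif chain, and a virtual program counter.

      def spaghetti_gcd(a, b):
          pc = 1                          # virtual program counter
          while True:
              if pc == 1:                 # 1: if b == 0 goto 4
                  pc = 4 if b == 0 else 2
              elif pc == 2:               # 2: a, b = b, a % b
                  a, b = b, a % b
                  pc = 3
              elif pc == 3:               # 3: goto 1
                  pc = 1
              elif pc == 4:               # 4: return a
                  return a

      print(spaghetti_gcd(252, 105))      # 21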

    I cannot imagine a similar occurrence in law, religion, politics, or philosophy where, in the middle of a debate, someone makes a compelling argument and the other person switches sides, becoming an advocate for the view they argued against just a few minutes ago. Even if it did happen, it would be scorned as weakness rather than welcomed as a rational reaction to a good argument. To me this seems a hallmark of fields that are little more than opinion, rather than fields founded on facts.

  78. Scott Says:

    Lou Scheffer #77: Thanks so much! I just highlighted your comment in the OP; it was too good to leave buried.

  79. Raoul Ohio Says:

    Lou,

    Usually in law, religion, politics, philosophy, etc., you can argue either side of an issue, depending on who you are arguing with.

    A related situation is when there is someone who is adamant on some issue. Even if you agree about 98% with them, it is hard not to want to poke them with a sharp stick when they “go off” on the topic.

  80. Dr. Ajit R. Jadhav Says:

    @Lou #77:

    1.
    >> “…each of them [ed: Fortran, assembler, and APL] was plenty capable of running an instruction set emulator, and hence could run not only these languages, but *any* language that could run on *any* other computer.”

    I had a bit of difficulty parsing this sentence. Languages don’t run an instruction set emulator; programs written in a language run on top of an instruction-set-executing machine (say, an emulator), right? But, here, you allude to a language running an emulator. What did you mean, exactly?

    Let me make the reasonable assumption that you meant that since all the three languages (and similar others) supported compiling to (or interpreting in reference to) the same machine instruction set, therefore, all these languages were ultimately all “the same.”

    I think the argument is insufficient. Suppose that there is a “Turing++” computer, say an analog + digital device. It sure is capable of running both a Turing language program, as well as the additional programming capabilities supported by and specific to Turing++. Both Turing and Turing++ do support compiling to the same instruction set. Do they therefore become equivalent? Obviously not.

    So, the missing ingredient is this: all the three (or similar) languages not only should compile to the same machine code but _each of them should also support the *entirety* of that machine instruction set_.

    2.
    Your instructor’s reply addresses the Fortran (F) to structured programming (SP) mapping, not the other way around. The doubt about so few constructs in SP being sufficient by themselves, however, made reference to the SP -> F mapping. I wonder if anyone raised this particular objection, whether the instructor had a simple enough way to convince him on that count, and if so, what his answer was.

    3. Both of the above points dealt with the ideas of necessity and sufficiency. Now, let me venture to the argument concerning the number of points in lines vs. in planes—the idea of different sizes of infinity. If by “the old interlace-the-digits argument” you mean things like the Dedekind cut, then I think I have found a loophole in that argument. I think that I do have a good argument to show that planes carry more points.

    4.

    As to the other fields you mentioned:

    Law: I hardly know anything, but lawyers conceding the case once fresh and compelling evidence is offered, should be commonplace. After all, even Pakistanis now admit that Kasab indeed was the guy involved in 26/11. Initially, they had denied it.

    Religion: Religion basically depends on faith, not on reason, and so one shouldn’t expect debates to do anything really useful or essential in those matters.

    Politics: Sudheendra Kulkarni, an IIT Bombay graduate, was initially a card-carrying communist and then switched over to the Hindu right-wing BJP. It might be fruitful to ask him whether he could share insights on whether he could have done it right in the middle of a debate.

    Philosophy: Aristotle is a counter-example, if you allow debates entirely within a man’s mind and also allow one’s own discoveries or arguments to change one’s views in those debates. He was initially a Platonist, and then became an Aristotelian.

    But yes, you are right. Facts do inestimably matter, though not just the facts. (I am not being a philosophic nuisance; it’s just that matter of necessity and sufficiency, once again.) It’s also concepts and logic, in physics as well as in philosophy. Assuming honest people, it would be the conceptual complexity of different fields that would really be responsible for the difficulty of switching sides in them, not just the absence or presence of facts. The ease with which people are able to reason something _new_ out right on the fly—in the middle of a debate—does depend on such things as the scope of the argument, the complexity of the conceptual chains, the intensity of the psychological process required to untangle and hold them, etc.

    But, yes, you are right. Facts do matter—and they matter in any field of knowledge. However, it’s true, certain fields seem to have more entrenched traditions of not changing once-held opinions.

    As to opinions and their changes, I personally often have difficulty imagining Americans (e.g. academics, researchers, etc.) switching sides at any time at all—let alone right in the middle of a debate—despite a compelling argument, if the argument is being provided by an Indian, esp. if he is settled in India. I mean, in serious issues, not as a matter of taunts and mockings. (And, I mean, if that Indian is a man of his own mind, not merely a hiddeen stooge of an influential American.) Please supply serious counter-examples, if you know any.

    – – –

    Tch. No, Scott, I will never agree that the ability to switch sides or change opinions is a hallmark of objectivity or even of active-mindedness.

    Enough for the time being.

    Ajit
    [E&OE]

  81. John Sidles Says:

    Lou Scheffer #77, your remarks are excellent in that they articulate a Great Truth … and so they induce us to reflect upon researchers who have articulated the dual Great Truth that is its opposite.

    Wolfgang Pauli’s description of Rudolf Peierls to Hans Bethe, in the early days of quantum theory, provides a humorous start:

    Peierls thinks so fast, when I finally start to understand what he is telling me, he is already explaining to me why it is wrong.

    What Pauli was complaining about is not mistaken reasoning by Peierls, but rather the instability of Peierls’ postulates.

    The point is that mathematicians, scientists, and engineers change their minds readily when shown mistakes in reasoning, and yet they are as likely as members of any other community to cling inflexibly to postulates learned in youth.

    Deplorably, it commonly happens that students are provided with little or no rationale for the postulates of an academic discipline. Even in grade-school we encounter the mnemonic “Minus times minus equals plus; the reason for this we will not discuss.” Ouch!

    Moreover, it commonly happens that the postulates of a field are clearly understood only after decades or even centuries of study — Hamiltonian mechanics is an example. An à propos observation is Paul Dirac’s self-criticism of his own renowned 1930 textbook The Principles of Quantum Mechanics:

    It’s not a bad book, but the first chapter is missing.

    Even today, 82 years later, we are still struggling to conceive that first chapter, for classical and quantum mechanics alike.

    It is natural to reflect: what postulates associated to the Church-Turing thesis might be substantially amended in the 21st century? One reasonable candidate for amendment is the postulate that Turing machine tapes are infinitely extended; Doron Zeilberger (for example) argues in his Opinion 125 that explicitly finitistic postulates are preferable. Another reasonable candidate is a postulate restricting assertions regarding Turing machines (TMs) and/or TM-decidable languages to assertions that are provable; Juris Hartmanis (for example) argues for this restriction in his 1978 monograph Feasible Computations and Provable Complexity Properties.

    Conclusion  Differing reasoning processes are readily reconciled; differing postulates are not. Moreover, it is neither necessary, nor feasible, nor even desirable that researchers embrace a strictly uniform set of postulates, on grounds stated by Mark Twain’s immortal character Pudd’nhead Wilson:

    It were not best that we should all think alike; it is difference of opinion that makes horse-races.

    🙂

  82. Dr. Ajit R. Jadhav Says:

    Correction. Read hidden, not hiddeen, in my above comment.

    Continuing on conceptual complexity and all, as touched on in my above comment. I think Lou does have half of a point, namely the role of facts. (I call it a half of a point because he contrasts facts and opinions, not facts and abstractions.) It does seem true that the closer to the world of concretes a field is, the easier it is for the conceptual linkages not to break, to hold together, and therefore easier on a(n honest) debater. For instance, I think it should be easier to imagine biologists, archeologists or chemists as being able to switch sides, even in the middle of a debate, rather than physicists, computer scientists (and I do mean scientists, not programmers) and mathematicians (the last three, in the increasing order of worseness). Indeed, the first kind would have difficulty having any well-defined side as such, in the first place. The ability to switch sides or to change one’s opinions, therefore, in a way, seems to exist in inverse proportion to the ability to have a well-reasoned side in the first place, keeping all other things (like honesty) constant.

    Ajit
    [E&OE]

  83. Scott Says:

    Ajit #80:

      As to opinions and their changes, I personally often have difficulty imagining Americans (e.g. academics, researchers, etc.) switching sides at any time at all—let alone right in the middle of a debate—despite a compelling argument, if the argument is being provided by an Indian, esp. if he is settled in India. I mean, in serious issues, not as a matter of taunts and mockings. (And, I mean, if that Indian is a man of his own mind, not merely a hiddeen stooge of an influential American.) Please supply serious counter-examples, if you know any.

    Uh, may I offer myself as a counterexample? My opinions were changed about countless issues by my adviser Umesh Vazirani—as well as by Ketan Mulmuley, Manindra Agrawal, and countless lesser-known Indians and Indian-descended friends throughout my entire friggin’ life. You, on the other hand, are heading for troll territory and a temporary blog ban.

  84. nagatoism Says:

    I think the debate is being conducted linguistically, not on the basis of the scientific intuition needed to build the theories we are talking about.

    I checked the wiki about the merits of second-order logic. It tells me that SOL can express properties such as countable infinity. I think it is reasonable to understand this as follows: if we need SOL to express something that first-order logic cannot express, we have actually involved infinity.
    A little more about infinity. Does “infinity” exist? That is a philosophical problem ceaselessly debated between formalists and Bourbakists. But, in another way of understanding, I could affirm that infinity does exist, because you have seen this eight-letter word “infinity”.

    This really happens in second-order logic. Imagine that you want to check whether a SOL identity is right.
    I do not know the language of logicians, but I think it would more or less amount to checking whether the result of the calculus done on the left equals the right. But note that when you are doing the calculus, what you are actually doing has fallen into first-order logic. You see that you cannot process the entity “infinity” yourself, so you replace the entity infinity with the symbol “infinity” and then do the calculus. The calculus procedure you have done, in the sense of calculus, indeed has nothing to do with the abstract notion of “second-order logic”.

    I think Scott wants to address the claim that the CT thesis fails for higher-order logic; but anyone who wants to check a higher-order-logic identity should admit that such “checking” could be simulated by a Turing machine, and the CT thesis prevails here.

    The CT thesis is about practical computation, not “the pure theoretical calculus”. And if anyone really wants to do the pure theoretical calculus, I maintain that the Turing machine would be competent enough to fulfill the task.

  85. Vlad Says:

    Lou, it is true that it is wonderful when people are able to conduct a discourse based on rational thought and mutual respect.

    Ajit, it sounds to me like you have been burned in the past. Racism persists in every society, but blogs such as Scott’s do a wonderful job of giving everyone a voice. It is my hope that the Internet will help us all to foster open conversations, ones in which we come with a willingness to learn rather than a desire to bludgeon other people into submission.

  86. Dr. Ajit R. Jadhav Says:

    @Scott #83:

    Still no answer to my questions on the other thread?

    Would Umesh help?

    LOL!

    I know Ketan possibly couldn’t, because I know his collaborator Milind Sohoni, and came to know through Milind that neither of them could (which was in 2003/4). I don’t think they have progressed sufficiently in this matter in the meanwhile. And, separately, I do know Manindra couldn’t—and that he would be ready to admit that he couldn’t.

    As to one of those lesser known Indians, _if_ they fulfill the other important criteria, e.g. that they are not stooges of Americans (and, if I may expand, also Brits/Russians/etc.), sure do feel free to ask them. The matter is simple enough for highly honest and reasonably intelligent people.

    Despite what so many Indians settled in the USA might make it look like, here, in India, a reassuringly large number of Indians are far more thoughtful about such things. They do acknowledge the existence of so many Indians’ willingness to trade integrity for mere material comforts or powerlust as found in the advanced countries, the existence of stooges (a routine matter), etc. A relatively small minority of Indians settled in the USA also is thoughtful about such things, and does reach the same judgment about stooges and (really) influential Americans’ sick use of such stooges, etc. (Vivek Randive might be a borderline case of those who reach the right judgment in this issue, though personally I tend to have my own doubts. Vinod Khosla certainly isn’t one to fall on the better side. So many IITians very certainly aren’t. They would rather willingly obfuscate issues to hide the said trade of integrity for material comforts, powerlust, etc.)

    As to people of Indian descent: if you mean those raised in the USA and all, I have not met enough of them to be able to form a judgment about them as a group. I know one Stanford CS graduate girl of Indian descent who seemed a bit ill at ease while interacting with Indians (I mean people born and raised in India), esp. in front of the white folks of Abrahamic religions, esp. males. And, overall, I think she represents a trend, though I can’t be too sure—just too small a sample size, just too few interactions with these folks (popularly known as ABCDs). On the other side of the spectrum, I know of one Canadian girl of Indian descent (a Saradaran) who seemed perfectly at ease with people of all kinds: Indians/NRIs/people of Indian descent, and white/black Abrahamics/others. I remain open on this count—not even a temporary judgment thus far.

    Let me check out if this comment makes it or not—sure, it is _your_ blog, Scott, not mine. But, yes, would Umesh help? Would it be possible for him to help? I do wonder.

    Ajit
    [E&OE]

  87. Vijay D'Silva Says:

    This discussion reminds me of a perspective I have taken on one or two other topics and I wonder how to deal with them too.

    I recall my experience of learning formal logic. Prior to my university education, I would never have dreamed or imagined that natural language could be given a mathematical semantics. Learning formal logic was one of the most tedious experiences of my sophomore year of university and, to date, I think the presentation of a lot of logic texts does not help this state of affairs. Still, it was completely amazing that the difference between “some child likes all chocolates” and “all chocolates are liked by some child” can be turned into a mathematical theorem. Even more amazing was the realisation that it was, in principle, possible to codify the language of mathematical theorems and proofs entirely in logical terms. This universality led to the mind-blowing results of Goedel. Mind-blowing for me, not because of their content, but because it was possible at all to make statements that were about an entire area of mathematics, and not just about specific objects.

    That was cool, but it is infeasible and feels rather inhuman to work purely with logical symbols (though I have observed people doing that, including a few programming language researchers). It is more comfortable for most working mathematicians and computer scientists to work with natural language, while being aware of the formalist underpinnings.

    Moreover, once I understood better, the results of Goedel didn’t seem as universal. Gentzen, for example, did prove a consistency result for arithmetic.

    The second example is about the independence of the continuum hypothesis from the ZFC axioms. I was fascinated to learn that there were models of set theory that could be constructed using forcing. This in turn led to the realisation that the ZF and ZFC axioms for set theory are axioms like any other, and the universe of sets we work in, in day-to-day mathematics, is a model of a set of axioms, like any other model. There are many axiom systems possible and many models.

    Here’s the point of the reminiscence. Learning formal logic was a bit tedious but it felt like a rite of passage to a world where I could continue to do mathematics but with a far deeper understanding of the internal structure of that mathematics. The same applies to set-theory. At some point I realised that the kind of sets and operations on sets that we use are just specific models and there are many more possible and alternative interpretations of the membership predicate, for example. Once I knew better, the results I found mind-blowing initially remained fascinating, but definitely seemed less universal.

    In a discussion of these areas and results, it is not clear to me how much this matters. Turing machines and the Church-Turing thesis feel similar to me. Learning to reduce between computational models and about the Church-Turing thesis feels like a rite of passage and one that reveals something deep and fascinating about computation. But the Church-Turing thesis also has a context, which from a specific perspective is not as universal as it may have first seemed.

    And here’s where I feel at a bit of a loss. In the case of certain logical theorems, or independence results in set theory, or the Church-Turing thesis, there is a context in which those results are true and for most computer scientists that context is always in the background, but for some it is not. Does anyone see what I’m trying to say?

  88. Vijay D'Silva Says:

    I forgot to add: where I feel at a loss is regarding when it is relevant to bring up and emphasise that some results that seem universal are not so universal if you take the context of those results very seriously. For the majority of the population there is only one context, so it seems bizarre to do anything like discuss different contexts.

  89. Gil Kalai Says:

    Itai Bar-Natan #73, very nice comment! How nice to see you here.

  90. John Sidles Says:

    In tribute to both Lou Scheffer (post #77 above) and Gil Kalai (many terrific posts!), here are a couple of quotations that apply equally to complexity theory and its phylogenetic partner, theoretical physics:

    “Common-sense must be used here to avoid embarking on an over-axiomatized, and hence misguided, piece of theoretical physics. We […] should not be trapped into axiomatizing theoretical ideas out of existence.”
       — Christopher Isham (1984)

    (quoted in Matthias Blau’s popular on-line lecture notes Symplectic geometry and geometric quantization (1992))

    Isham’s remark descends from

    “If one finds a difficulty in a calculation which is otherwise quite convincing, one should not push the difficulty away; one should rather try to make it the centre of the whole thing.”
       — Werner Heisenberg (1971)

    (which a Google search finds quoted in many places)

    Here the point is that a great deal of heat commonly is generated — especially in the blogosphere — when postulates are mistaken for and/or claimed to be deductions. And the rhetorical motivation for such claims is evident: the correctness of deductions can be verified by more-or-less irrefutable reasoning … the correctness of postulates is susceptible to no such checks!

    Textbooks on complexity theory (example: Arora and Barak, Computational Complexity: A Modern Approach) and books on quantum theory (example: Nielsen and Chuang, Quantum Computation and Quantum Information) generically begin with a statement of postulates. Three questions then are natural.

    Q1  What is the likelihood that the reasoning of standard textbooks is globally and seriously flawed? Both common-sense and history indicate that this likelihood is vanishingly small.

    Q2  What is the likelihood that, in coming decades, the postulates of 20th century textbooks will be appreciated as mathematically limiting and/or physically unrealistic? Both common-sense and history indicate that this likelihood is sufficiently great as to amount to a near-certainty.

    Q3  What new and/or amended postulates, regarding complexity theory and/or quantum dynamics, will the great textbooks of the 21st century embrace? We are very unlikely to collectively appreciate the answer to this question … until someone writes these textbooks.   🙂

    It is a pretty safe bet — if we are guided by Isham and Heisenberg — that the amended complexity/quantum theoretic postulates of the 21st century will originate in physical concepts rather than reasoning from axioms (Isham’s principle) and that these postulates will embrace as central, difficulties that at present are regarded as peripheral (Heisenberg’s principle).

    Conclusion  Should we be skeptical of deductions that follow from the CT thesis? Reasonably, no. Should we be skeptical that the CT thesis provides uniquely optimal foundations for complexity theory? Reasonably, yes.

  91. plm Says:

    Thanks for the post.

    What do you people think about nondiscrete models of computation as possible counterexamples to the Church-Turing thesis?

    http://edelstein.huji.ac.il/staff/shagrir/papers/physical_computation.pdf
    (section 3)

    I remember other papers but cannot find the references.

    I think the Church-Turing thesis for those models is that continuous computation, modulo finite-precision measurement and setup of the computing apparatus, is equivalent to classical computation.

    And do you think that continuous models of computation with beyond-Turing capability (before imposing feasibility assumptions) are useful?

    Any reference?

  92. Anima Ex Machina » Blog Archive » Hypercomputation in A Computable Universe - The blog of Hector Zenil Says:

    […] Other models of hypercomputation are just fundamentally wrong, such as Peter Wegner’s model of “interactive computation” that is claimed to break Turing’s barrier. For a recent commentary pointing out the flaws of yet another hypercomputational claim, you can consult this entertaining blog post by Scott Aaronson and his Toaster-Enhanced Turing Machine. […]

  93. David Harris Says:

    I think I can give an example where Turing machines do not have the expressive power of something like C++.

    Certain programs have the property that they manipulate their input data in a “type-safe” way. That is, if the encoding of the input data is changed, then the program will continue to run correctly as long as the data is accessed through the proper channels. This is an extremely important property in practice, and one of the major focuses of high-level language design is ensuring type safety. In fact, I would say one of the most important things that a program expresses, perhaps more important even than its semantics, is the interaction of its data structures.

    In a Turing machine, all the data accesses are “flattened out” to pure bit manipulation. In this way, a Turing machine can never be type safe, because it lacks the formalism to specify which data accesses are properly controlled.

    This is an example of an important aspect of computation that can be expressed only in high-level languages, not in Turing machines.
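
    A small toy illustration of the contrast being described (the class and the byte layout below are arbitrary, chosen only for the example): code written against an abstract interface keeps working when the representation changes, whereas code that pokes directly at the flattened bits silently depends on the layout.

      from dataclasses import dataclass

      @dataclass
      class Point:
          x: int
          y: int
          def mirrored(self) -> "Point":
              return Point(self.y, self.x)

      def through_the_interface(p: Point) -> Point:
          return p.mirrored()              # survives any re-layout of Point

      def through_the_bits(raw: bytes) -> bytes:
          # hard-codes "x in bytes 0-3, y in bytes 4-7"; silently breaks if the
          # width, order, or layout of the representation ever changes
          x, y = raw[0:4], raw[4:8]
          return y + x

      print(through_the_interface(Point(1, 2)))
      print(through_the_bits(bytes([1, 0, 0, 0, 2, 0, 0, 0])))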

  94. Jeffrey Shallit Says:

      “The questioner brought up the simulation by Turing machines but Iverson still would not acknowledge that this refuted his claim.”

    This sounds really, really doubtful to me. I knew Iverson reasonably well and worked with him at the IBM Philadelphia Scientific Center. He knew an awful lot of mathematics and computing, and I’m absolutely sure he knew about Turing.

    Iverson was concerned with notation and expressibility. The claim above has all the hallmarks of a misunderstanding that has been repeated multiple times and changed in the telling.

  95. Peter Shor Says:

    Actually, for the very best toast, you need to enhance your toaster with a time machine.

    “Timing Toast”, by Piet Hein:

    There’s an art of knowing when.
    Never try to guess.
    Toast until it smokes and then
    twenty seconds less.

  96. richard mullins Says:

    I love this post (the Toaster-Enhanced Turing Machine) and first read it a couple of years ago. I looked again today and found references to good papers (e.g. “The Myth of Hypercomputation”) by Martin Davis, which I downloaded. Davis is 3 or 4 years older than Peter Wegner.

    I did not come here to discredit Wegner. I loved the book he wrote in the early 1970s, with the sections on SECD and the lambda calculus. Perhaps all Wegner is saying is that it is good to look at concurrent programming; if so, many would heartily agree with him. Presumably this is what Dijkstra and Wirth were saying in 1970, writing on cooperating sequential processes, and also what Ken Thompson was saying in 1970, in inventing Unix.

    At present it seems a lot more important for me to read the Davis papers than to puzzle over papers by Wegner.

    Perhaps pedagogically there is something wrong with much of the presentation of Turing machine material. If Wegner has difficulties with it, how much more so will the general reader. So Wegner’s rants may ultimately have a beneficial effect in encouraging people to restate results in a better way.

  97. What We Talk About When We Talk About Computation Says:

    […] Aaronson, who represents the TOC view, replies: […]