Busy Beaver Updates: Now Even Busier

Way back in the covid-filled summer of 2020, I wrote a survey article about the ridiculously-rapidly-growing Busy Beaver function. My survey then expanded to nearly twice its original length, with the ideas, observations, and open problems of commenters on this blog. Ever since, I’ve felt a sort of duty to blog later developments in BusyBeaverology as well. It’s like, I’ve built my dam, I’ve built my lodge, I’m here in the pond to stay!

So without further ado:

  • This May, Pavel Kropitz found a machine demonstrating that $$ BB(6) \ge 10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}}}}}}} $$ (a tower of fifteen 10s)—thereby blasting through his own 2010 record, that BB(6) ≥ 10^36,534. Or, for those tuning in from home: Kropitz constructed a 6-state, 2-symbol, 1-tape Turing machine that runs for at least the above number of steps, when started on an initially blank tape, and then halts. The machine was analyzed and verified by Pascal Michel, the modern keeper of Busy Beaver lore. In my 2020 survey, I’d relayed an open problem posed by my then 7-year-old daughter Lily: namely, what’s the first n such that BB(n) exceeds A(n), the nth value of the Ackermann function? All that’s been proven is that this n is at least 5 and at most 18. Kropitz and Michel’s discovery doesn’t settle the question—titanic though it is, the new lower bound on BB(6) is still less than A(6) (!!)—but in light of this result, I now strongly conjecture that the crossover happens at either n=6 or n=7. Huge congratulations to Pavel and Pascal!
  • Tristan Stérin and Damien Woods wrote to tell me about a new collaborative initiative they’ve launched called BB Challenge. With the participation of other leading figures in the neither-large-nor-sprawling Busy Beaver world, Tristan and Damien are aiming, not only to pin down the value of BB(5)—proving or disproving the longstanding conjecture that BB(5)=47,176,870—but to do so in a formally verified way, with none of the old ambiguities about which Turing machines have or haven’t been satisfactorily analyzed. In my survey article, I’d passed along a claim that, of all the 5-state machines, only 25 remained to be analyzed, to understand whether or not they run forever—the “default guess” being that they all do, but that proving it for some of them might require fearsomely difficult number theory. With their more formal and cautious approach, Tristan and Damien still count 1.5 million (!) holdout machines, but they hope to cut down that number extremely rapidly. If you’re feeling yourself successfully nerd-sniped, please join the quest and help them out!

100 Responses to “Busy Beaver Updates: Now Even Busier”

  1. Shawn Ligocki Says:

    For anyone interested in exploring the inner workings of this record breaker, I wrote up an analysis on my blog in June: https://www.sligocki.com/2022/06/21/bb-6-2-t15.html . It requires a bit of number theory (Euler’s totient theorem) to prove, since this number is not only too long to run any simulator for directly, but also too large to even write all the digits directly!

  2. Scott Says:

    Shawn Ligocki #1: Thanks so much!!

  3. Justin Kee Says:

    I have a question for Scott.

    If the BB problem is uncomputable, how do people compute upper and lower bounds for the runtime? Isn’t the whole point that the BB problem is uncomputable, and thus the answer cannot be computed to exactitude without it being fully run, i.e. computed?

    If there are shortcuts to provide bounds, doesn’t that imply that there exist computable aspects of the actual termination points, obtainable without doing the ultimate computations?

    I am genuinely interested in this answer.

  4. Scott Says:

    Justin Kee #3: For BB(n) to be uncomputable means that there’s no general algorithm that takes n as input and always halts after finitely many steps with BB(n) as its output. It doesn’t mean that we can’t hope to figure out BB(n) for any specific values of n. We can—and it’s precisely the general uncomputability that makes the quest fun and interesting.

    More concretely: for each n, there are only finitely many Turing machines with n states. You can try running all of them in parallel. Whenever one of the machines halts, that tells you a new lower bound on BB(n). Eventually, the last n-state machine ever to halt will do so, and then you’ll “know” BB(n) itself.

    But wait, how is that consistent with BB’s uncomputability? Well, the trouble is that you won’t know that you know the correct value! For what about all the machines that haven’t halted yet: how do you know none of them ever will? That requires a proof of non-termination, a separate one for each non-halting machine. Turing tells us that there can be no systematic method to find such proofs (if there were, then BB and the halting problem would be computable), and Gödel tells us that, in any fixed axiomatic theory, the proofs don’t even always exist.

    Nevertheless, by hook or by crook, by a combination of automated methods and case-by-case cleverness, people managed to find proofs of non-termination for all the non-halting 3-state machines (in the 1960s) and 4-state machines (in the 1980s). They’re now trying to do the same for the 5-state machines, and thereby prove that a 5-state candidate discovered by Marxen and Buntrock in 1990 is actually the Busy Beaver. Even if they succeed, it’s entirely plausible that our civilization will never be able to do this for the 6-state machines. Or maybe the superintelligent AIs that take over from us will manage, but then they’ll struggle forever with the 7-state machines. 🙂
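
    To make that concrete, here’s a toy Python sketch (my own illustrative code, nothing any actual searcher uses): enumerate every n-state, 2-symbol machine, run each for a bounded number of steps (a step cap standing in for true dovetailing), and keep the longest halting time seen. Whatever comes out is a certified lower bound on BB(n). The representation (a dict from (state, symbol) to either (write, move, next_state) or None for a halting entry) is just one convenient convention, not anyone’s official format.

      from itertools import product

      def run(tm, max_steps):
          """Simulate a 2-symbol TM on an initially blank tape; return the number of
          steps if it halts within max_steps (reaching a None entry counts as the
          halting step), else None."""
          tape, pos, state, steps = {}, 0, 0, 0
          while steps < max_steps:
              action = tm[(state, tape.get(pos, 0))]
              steps += 1
              if action is None:          # halting transition reached
                  return steps
              write, move, nxt = action
              tape[pos] = write
              pos += move
              state = nxt
          return None                     # still running at the cap

      def bb_lower_bound(n, max_steps):
          """Run every n-state, 2-symbol machine for up to max_steps steps and return
          the largest halting time seen: a certified lower bound on BB(n)."""
          actions = [None] + [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in range(n)]
          best = 0
          for table in product(actions, repeat=2 * n):
              tm = {(s, b): table[2 * s + b] for s in range(n) for b in (0, 1)}
              t = run(tm, max_steps)
              if t is not None:
                  best = max(best, t)
          return best

      print(bb_lower_bound(2, 100))   # prints 6, the known step count for the 2-state champion

    Raising max_steps only ever improves the bound; the uncomputability bites when you try to certify that none of the machines still running at the cap will ever halt.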

  5. pku Says:

    A philosophical question: You’ve shown before that there’s an N (iirc about 700) for which BB(N) is undefined under ZFC. Would that mean it’s not a concretely specified finite number? Would you accept BB(1000) as a contestant in a “who can name the biggest number” contest? What about BB(BB(1000))?

  6. Scott Says:

    pku #5: The correct answer to your philosophical question 🙂 is that BB(n) is perfectly well-defined for every value of n—it’s just that the particular theory ZFC doesn’t prove its values after some finite point, and you need to learn to distinguish those two concepts, and the BB function itself is as good a place as any to learn.

    I mean, consider this: for every k for which the statement is true, ZFC actually does prove that BB(1000)≥k, since you just need to simulate the appropriate halting 1000-state Turing machine until it halts, and observe that it ran for at least k steps. The problem is “merely” that there are other 1000-state Turing machines that run forever, but for which ZFC can’t prove it (as shown by Stefan O’Rear’s improvement to my and Yedidia’s result).

    But I don’t see why that problem should in any way disqualify BB(1000) from the biggest number contest. After all, if the other contestant writes anything that’s naïve to computability theory—Ackermann(1000000) or whatever—then not only will it be true that BB(1000) vastly exceeds that, but by the above argument, it will actually be a theorem of ZFC!

  7. fred Says:

    I can’t wrap my mind around the type of math/proof required to come up with numbers that big. At some point the number “15” has to come up as a parameter, no?

    Btw, isn’t there a compact notation to represent (…(((((10^10)^10)….^10), like 10@15?
    [edit] I saw in that first comment link that it’s 10↑↑15.
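
    (For anyone who wants to play with the notation: here’s a tiny, purely illustrative Python definition of Knuth’s up-arrow, with the usual convention that a↑^n 0 = 1. Only the smallest cases are actually computable, which is rather the point.)

      def up_arrow(a, n, b):
          """Knuth's up-arrow a ↑^n b: n = 1 is exponentiation, n = 2 is tetration,
          so 10 ↑↑ 15 is the tower of fifteen 10s in the new BB(6) bound."""
          if n == 1:
              return a ** b
          if b == 0:
              return 1
          return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

      print(up_arrow(2, 2, 4))   # 2↑↑4 = 2^2^2^2 = 65536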

  8. pku Says:

    Hm, is there some concrete finite computational problem in this style that you could make undecidable? E.g. using the beeping busy beaver numbers, I think you’ve shown that for large k BBB(k)>BB(k+1). But is it decidable to, say, compare BBB(1000) and BB(1002)? (Or if there’s some bound that solves that specific one, is there some way to build non-zfc comparable finite numbers using these methods)?

    I guess this isn’t really any worse than “is 1>x, where x is 2 if the continuum hypothesis is true and 0 otherwise”. But it feels like BB numbers have a specific definition in ways that CH doesn’t, so it feels weirder.

  9. Nick Drozd Says:

    In the Bigger Number Game, contestants write down descriptions of numbers on index cards. Assuming a minimum character size, there are only finitely many such descriptions, and in principle they could all be enumerated and compared.

    Scott proposed the Busy Beaver function as a way to win the BNG. But we can also think of each BB value as being its own Bigger Number Game played out, where the number of states available to be used correspond to the size of the “index card”.

    This version of the BNG is not a duel; instead, we can imagine that each program in the particular Turing machine class has been submitted by a different contestant. Adjudicating this contest requires figuring out which of the halting programs runs the longest. But it also requires figuring out which ones don’t halt at all.

    The number of contest entries gets out of hand fast, growing like O(n^n) (what do you call that kind of growth?) with the number of states. Even with just three states it isn’t feasible to look at each entry individually. Instead, the review process has to be automated. You write up some code that will detect certain patterns, typically patterns that prove non-halting. Then you pass each entry through that detector, and any matches can be discarded. Anything left over afterwards gets reviewed manually.

    But the difficulties have only just begun. Any “holdout” programs that have made it to manual review by definition exhibit novel or unusual behavior, for otherwise they would have been detected earlier. So the holdouts are an especially difficult set of programs to analyze. And for any given automated detector, the number of holdouts will also grow exponentially with the number of states.

    And remember, it’s not known whether the holdouts halt or not. For all that you (the judge) know, some of them might halt! In principle a program that actually does halt can be proved to halt, but in practice this can be quite difficult. Look at Shawn #1 to see how the sausage is made. Proving non-haltingness is even worse, since for a given program such a proof might not even exist in principle. And that’s just for a single entry to be considered! Again, there are an awful lot of them to go through.
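
    To make the “automated reviewer” idea concrete, here’s a toy Python sketch (illustrative only, and far cruder than the detectors actually in use): it certifies non-halting only when a machine repeats an exact configuration, which is sound but weak, so anything that drifts off down the tape, even trivially, already lands in the holdout pile.

      def classify(tm, bound):
          """Run a 2-symbol TM (dict (state, symbol) -> (write, move, next_state),
          or None for a halting entry) for up to `bound` steps.  Returns "halts",
          "loops" (an exact configuration repeated, so it provably runs forever),
          or "holdout" (undecided within the bound)."""
          tape, pos, state = {}, 0, 0
          seen = set()
          for _ in range(bound):
              config = (state, pos, tuple(sorted((p, v) for p, v in tape.items() if v)))
              if config in seen:
                  return "loops"
              seen.add(config)
              action = tm[(state, tape.get(pos, 0))]
              if action is None:
                  return "halts"
              write, move, nxt = action
              tape[pos] = write
              pos += move
              state = nxt
          return "holdout"

      # A 2-state machine that bounces between two cells forever: caught immediately.
      bouncer = {(0, 0): (1, 1, 1), (0, 1): (1, 1, 1), (1, 0): (1, -1, 0), (1, 1): (1, -1, 0)}
      print(classify(bouncer, 1000))   # "loops"

      # A 1-state machine that just shuttles right over blanks: obviously never halts,
      # but never repeats a configuration either, so this detector can't prove it.
      drifter = {(0, 0): (0, 1, 0), (0, 1): None}
      print(classify(drifter, 1000))   # "holdout"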

  10. dankane Says:

    Scott #6:

    Actually, I think that whether or not BB(n) is well defined depends on your philosophical interpretation. If you are a formalist, then the fact that BB(10,000) is independent of ZFC means that there are different models of set theory in which BB(10,000) is actually a different number. According to this view, the correct answer to the question of what BB(10,000) is would be to ask what exactly the questioner means by a Turing machine running forever.

  11. f3et Says:

    Oh lucky day! Another subject (besides go and go programs) where I can boast of expert knowledge. So first, as mind-boggling as is the Ackermann function, it is dwarfed by the fast-growing hierarchy (https://en.wikipedia.org/wiki/Fast-growing_hierarchy), itself dwarfed, of course, by things like the function TREE (and similar things for which the pertinent ordinal is not known; see https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem#TREE(3) ). Then come non-computable functions like BB, and the winner (for the game of the biggest number) is Rayo’s number: https://en.wikipedia.org/wiki/Rayo%27s_number.

  12. Scott Says:

    pku #8:

      But is it decidable to, say, compare BBB(1000) and BB(1002)?

    Yes, it should be! ZFC and even PA almost certainly prove that BBB(1000) is vastly larger. Crucially, to do so, they don’t need to determine the actual value of either number. They just need to prove that 1000-state beeping TMs can suitably simulate (and thereby dominate) 1002-state ordinary TMs for the purpose of running for huge finite amounts of time. Admittedly, I don’t know off the top of my head exactly how to do that, but I strongly conjecture that it can be done. It’s actually a great question. Does anyone here have ideas? 🙂

    “In practice,” in a bigger-number contest, if one contestant used a more powerful naming formalism than the other, then the hope would be that the contestant who used the weaker formalism would simply concede, seeing how untenable their situation was. If they actually demanded to see proof that their number was smaller, in many cases that would require a large and tedious mathematical effort, but it should normally be possible in principle.

  13. Scott Says:

    dankane #10:

      Actually, I think that whether or not BB(n) is well defined depends on your philosophical interpretation.

    Yeah, I’m asserting that the philosophical interpretations according to which BB(n) isn’t well-defined are simply wrong. 😀

    If we’re unwilling to say that there’s a fact of the matter about whether a given TM halts or runs forever on a blank tape, then there also need not be any fact of the matter about Goldbach’s Conjecture, the Riemann Hypothesis, or any other arithmetical statement. Or rather, it’s only a proof or disproof that would create a fact of the matter about these statements. But why on earth would we want to talk in such a convoluted way—one, indeed, that seems to lead to an infinite regress, where there need not even be a fact of the matter about whether a given statement is provable, whether it’s provable that it’s provable, and so on? Why not just distinguish the set of statements that have definite truth-values (which includes, at the least, all arithmetical statements), from the smaller set of statements that have definite truth-values that we can prove from a given system of axioms? That, at any rate, was Gödel’s position, and he was right. 🙂

  14. Josh Jordan Says:

    dankane #10:

    Why do you need ZFC to define what it means for a Turing machine to run forever? You should be able to define that even in Peano Arithmetic with something like: a machine halts on a tape if there exists a natural number N such that the machine is in the halt state after N steps (starting with the initial tape). Now Peano Arithmetic may not be able to prove that a particular machine runs forever, but that’s a separate issue from whether halting is well-defined in the first place.

  15. Scott Says:

    Josh Jordan #14: Dan would presumably reply that in PA, you can’t distinguish whether N is a standard or a nonstandard integer. The response to that is that if you talk about arithmetic in the straightforward, realist way that Gödel did and I do, then there’s no need to treat “nonstandard integers” as anything more important than artifacts of the incompleteness of various formal systems. Or rather: if you didn’t already have a clear enough conception of what a “standard positive integer” was, prior to any formal system, then why were you doing math in the first place? For that matter, why did you even have a clear conception of what a formal system was? 🙂

  16. dubious Says:

    Nick Drozd #9: What you write depends on unbounded external information: accepted definitions and notations. This would include even the symbols you can write. What if we define `BB_n(m)` as `BB(BB(BB…(m))`? What if we come up with something else that in turn uses this? I take “index card” to be a constraint intended to make one think sooner of better options than a long sequence of 9s.

  17. dankane Says:

    Scott #13:

    In order for those statements to have definite truth values, you need to first specify exactly what model of the natural numbers you are talking about. For any given model of PA, Goldbach is either definitively true or definitively false, but given that we don’t know a proof or disproof it is plausible that there are valid models in both camps. If this were known to be true, the formalist response to the question “is every even integer at least 4 the sum of two primes” would be “what exactly do you mean by ‘integer’?”

    Now for statements about the integers, there is at least a reasonable philosophical argument that there really is a natural set of natural numbers that we just can’t uniquely specify using any decidable set of axioms. However, for things like the continuum hypothesis, it seems like whether or not it is true really does depend on what exactly you mean by a set.

  18. asdf Says:

    Pku #8, I think if M is a 1000 state TM, then there is an obvious 1001 state TM whose initial state simply jumps to the initial state of the 1000 state TM. So if the longest-running halting 1000 state TM halts after N steps, the corresponding 1001 state TM halts after N+1 steps. Therefore BB(1001) is at least 1 greater than BB(1000).
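
    In the dict-style representation sketched under comment #4, that padding trick is a short transformation (my own illustrative code; the one-cell shift of the starting position is harmless only because the initial tape is entirely blank):

      def add_dummy_start(tm, n):
          """Turn an n-state machine into an (n+1)-state one: new state 0 writes back
          the blank, steps left, and hands control to the old start state (renumbered
          to 1).  On an all-blank tape the shift is irrelevant by symmetry, so the new
          machine halts exactly one step later -- hence BB(n+1) >= BB(n) + 1."""
          new_tm = {(0, 0): (0, -1, 1), (0, 1): (0, -1, 1)}   # the wasted first step
          for (s, b), action in tm.items():
              if action is None:
                  new_tm[(s + 1, b)] = None
              else:
                  w, m, nxt = action
                  new_tm[(s + 1, b)] = (w, m, nxt + 1)
          return new_tm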

  19. Scott Says:

    dankane #17: Already addressed the point about nonstandard models (see comment #15). When people say “integer,” it should always be understood to mean “standard integer” unless specified otherwise.

      Now for statements about the integers, there is at least a reasonable philosophical argument that there really is a natural set of natural numbers that we just can’t uniquely specify using any decidable set of axioms.

    I’d say not merely “reasonable,” but utterly compelling! Again: if this weren’t true, then why should we even trust our manipulations of first-order statements, which already presuppose an understanding of basic arithmetic (e.g., to check whether the parentheses are balanced or not)? In this way, the extreme formalist position, the one that denies the reality even of N, seems to me to collapse in internal inconsistency.

    In other words: it’s simply a mistake to think that a formal system like PA could “make the positive integers meaningful,” and not only because of Gödel. If the positive integers weren’t already meaningful, then we couldn’t even work sensibly with PA itself.

      However, for things like the continuum hypothesis, it seems like whether or not it is true really does depend on what exactly you mean by a set.

    Aha, here I completely agree with you! Note, in particular, that my “sawing off the branch you’re sitting on” argument no longer applies once we’re dealing with transfinite statements like CH, which could indeed be plausibly argued to have no meaning outside the formal systems we use to discuss them.

    Of course, this has no effect on the well-definedness of the BB function (by, for example, the Shoenfield Absoluteness Theorem, which shows that the truth or falsehood of statements like AC or CH has no effect on purely arithmetical questions).

  20. Shawn Ligocki Says:

    I can slightly lower the upper bound for Ackermann dominance:

    $$ BB(16) \gg A(16) $$

    Specifically, Daniel Nagaj found a 16-state TM which runs for more than Graham’s number of steps (\(\approx A^{64}(4) \gg A(16)\)). I wrote up a detailed analysis last month: https://www.sligocki.com/2022/07/11/bb-16-graham.html . This upper bound could be easily lowered several more steps by simplifying this machine, since beating Ackermann is much easier than beating Graham’s number.

    As for predicting the actual value where BB passes A, I agree with Scott’s prediction that \(BB(7) > A(7) \approx 2\uparrow^5 10\) and could certainly believe that \(BB(6) > A(6) \approx 2\uparrow^4 9\). However, I’m very worried we will never be able to prove these things. For example, in April I found some BBB(5) (Beeping Busy Beaver) machines that seem like they will probably quasihalt, but require solving a Collatz-like problem in order to actually prove it. If they do quasihalt, they would be the current BBB(5) champions. (See https://www.sligocki.com/2022/04/03/mother-of-giants.html ). I expect this same sort of problem to come up in BB(6) or BB(7) and then we’ll need some new math to make any progress!
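
    For reference, here’s a minimal Python sketch of the standard two-argument Ackermann–Péter function (conventions for the one-argument A(n) differ slightly, but the diagonal A(n,n) satisfies A(m,n) = 2↑^(m−2)(n+3) − 3, matching the approximations above). Only tiny inputs are feasible, of course.

      def ackermann(m, n):
          """Ackermann-Peter function; the diagonal A(n, n) grows like 2 ↑^(n-2) (n+3)."""
          if m == 0:
              return n + 1
          if n == 0:
              return ackermann(m - 1, 1)
          return ackermann(m - 1, ackermann(m, n - 1))

      print(ackermann(2, 3))   # 2*(3+3) - 3 = 9
      print(ackermann(3, 3))   # 2**(3+3) - 3 = 61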

  21. dankane Says:

    Scott #19:

    You can check whether parentheses are balanced or not without knowing exactly which model of PA you are working in. The truth of any statement of the form “Turing machine T on input X runs for exactly N rounds and returns Y” should be independent of your model of arithmetic.

    In fact, you can do the kinds of finitistic manipulations needed to check proofs in PA without using anywhere near the full power of PA (just like you *also* don’t need to use ZFC to check the validity of proofs).

    As for whether you should always be assumed to be talking about the “standard integers”, the issue is that actually defining what you mean by the “standard integers” is impossible to do rigorously.

  22. Scott Says:

    Shawn Ligocki #20: Thanks so much for the updates!!

    I completely agree that determining BB(6) or BB(7) would surely require solving ridiculously-hard Collatz-like problems, but what about just proving that they dominate A(6) or A(7), if that’s indeed the case?

  23. Scott Says:

    dankane #21:

      You can check whether parentheses are balanced or not without knowing exactly which model of PA you are working in. The truth of any statement of the form “Turing machine T on input X runs for exactly N rounds and returns Y” should be independent of your model of arithmetic.

    So maybe the core of the matter is that I’ve never understood the point of denying the Law of the Excluded Middle. If there’s no n for which it’s a fact that my TM halts in n steps, then I’m just going to go ahead and declare it a fact that my TM never halts—regardless of whether that fact can ever be known to humans. Do you agree that, once we’re willing to make that move, we’re then basically Platonists about N?

      As for whether you should always be assumed to be talking about the “standard integers”, the issue is that actually defining what you mean by the “standard integers” is impossible to do rigorously.

    My answer, again, is that the goal is not merely impossible, but misguided. I.e., even supposing there were first-order axioms that picked out N and only N, a skeptic could then object that you didn’t define the primitives of first-order logic in terms of yet more basic concepts, and so on, trapping you in an infinite regress.

    For this reason, I maintain that we might as well start with N, a clear-enough conception of which appears as soon as the Sumerians (or a toddler) start engaging in mathematical reasoning at all.

  24. dankane Says:

    Scott #23:

    This isn’t the law of the excluded middle, which is about being able to prove “X OR (NOT X)” whether or not one can prove either “X” or prove “NOT X”. This is about omega-completeness. And the issue, of course, is what you mean when you say that *for each n* it is a theorem that your TM doesn’t halt in n steps.

    As for the naive conception of the integers, I’m not convinced that the ancient Sumerians or toddlers have a conception that allows them to think about
    10^10^10^10^10^10^10^10^10^10^10^10^10^10^10
    in a way that usefully distinguishes it from a supernatural number.

  25. Scott Says:

    dankane #24: Alright then, the “generalized” Law of the Excluded Middle, which you can’t actually apply as an inference rule (since it involves countably many input statements) but which nevertheless holds Platonically.

    If I couldn’t even say that any given TM either halts or doesn’t halt, then I’d give up and not even try to do math or TCS.

    But I think we’ve already made our respective positions clear!

  26. asdf Says:

    dankane #21, whether the TM halts can depend on your model of PA. Maybe it does not halt in the standard model, but halts after N steps in a nonstandard model, where N is a nonstandard number.

  27. Nick Drozd Says:

    dankane #21

      As for whether you should always be assumed to be talking about the “standard integers”, the issue is that actually defining what you mean by the “standard integers” is impossible to do rigorously.

    They may be impossible to define rigorously, but they can certainly be defined in an I-know-it-when-I-see-it sense. Here are some of the official rules of the Busy Beaver game, as defined by Rado:

    iii. [The contestant] submits their entry, as well as the shift-number s, to any member (in good standing) of the International Busy Beaver Club.

    iv. The umpire first verifies that the entry actually stops exactly after s steps. Note that this is a decidable issue; the umpire merely operates the entry, persisting through not more than the specified number s of shifts. If the entry fails to stop after s shifts, it is rejected…

    An entry program has to be accompanied by a number, and the program has to halt within that many steps. I can tell you that if you attempt to submit a nonstandard number, the International Busy Beaver Club will not be impressed. How would you even specify it? Is it possible to meaningfully refer to a particular nonstandard number?

  28. Mario Says:

    How long does it take to simulate the current BB(5) champion?

    I have been wondering this since the present champion is at 47,176,870 steps, which is big, but not unreasonably big considering modern hardware. Of course, I have dozens of minutes in this subject so I may have missed something.

  29. Ben Says:

    pku #8

    It is provable that BB(n) < BB(n+1) – just take the busy beaver for n and add a starting state that transitions into it.

    But if you have 2 slightly different formal machines, it is likely impossible to prove whether the variant of BB for the one machine is larger or smaller than the variant of BB for the other machine.

    Scott #13
    We disagree on philosophy. In particular I believe that concepts are only meaningful to the extent that we can understand them. Formalization is a method of understanding. Therefore finite is only a meaningful concept to the extent that we can formalize it. BB(n) requires us to accept the finiteness or not of something for which a human-understandable formalization cannot possibly exist. This is as meaningless a concept as asking whether the omnipotence of God allows Him to make a rock that He can't move.

    And my invocation of God is relevant. Gödel, whom you invoked, firmly believed in the existence of God, and settled the question of what was REALLY true on God's infinite wisdom. As an atheist, this approach to accepting infinite and unknowable things as facts is not something I can accept.

  30. Shawn Ligocki Says:

    Scott #22

    Yeah, I certainly agree that proving that BB(n) > A(n) is much easier than proving the exact value of BB(n) (or even evaluating the runtime of the longest TM if the Busy Beaver God hands it to you). My concern is: What if BB(6) > A(6), but all 6-state TMs which run longer than A(6) have some complex Collatz-like behavior that makes them unanalyzable to us for the foreseeable future?

    I think it could go either way. As it is, we sort of got “lucky” with the current BB(6) champion. It happened to have rules that we were able to accelerate using Euler’s totient theorem magic. But if you tweak those rules slightly you get the “Mother of Giants” type stuff that will require solving Collatz-like problems. Of course, when I say that we were lucky, I really mean that we were only able to search effectively in the space of machines which could be accelerated like this. Pavel has said that he thinks that we might have found all the machines which were acceleratable using this procedure in the 6-state space, so my concern is that most/many of the remaining machines that beat it might be “Mother of Giants” types.

    In other words, perhaps BB(6) > A(6), but maybe the best we can do is prove BB(8) > A(8) without having to solve a Collatz-like problem. Of course, who knows, maybe next year, someone will be able to find an analyzable TM that beats A(6) 🙂

  31. Scott Says:

    Ben #29: I enthusiastically endorse the motto of Paul Erdős: “You don’t have to believe in God, but you should believe in the Book” (the one wherein God records the most beautiful proof of each theorem). 😀

    More seriously, though, regardless of the Platonic definiteness of the BB function, would you at least agree that there’s a clear meaning to statements like “BB(5)≥47,000,000” or “BB(n+1)>BB(n) for all n” or “BB(1000) is independent of ZFC,” and that we can know each of these statements to be true by means of proving them?

    If you don’t, then your philosophy actually gets in the way of (a corner of) my mathematical research, so I have no choice but to fight back! 🙂

  32. Scott Says:

    Mario #28: Simulating the current BB(5) champion is easy enough that I was able to do it in about 10 seconds on my laptop, using QBasic. If you want to be precise, though, it takes 47,176,870 Turing machine steps. 🙂

  33. Shawn Ligocki Says:

    Mario #28

    I can simulate the 5-state champion in 0.2s. Simulating it is easy, but proving that no other TM halts at a later time, that is the trick! At this point I have about 7500 5-state TMs that have not yet been proven to halt or to run forever. Maybe about 100 are really quite hard for a human to figure out. The concern is that one of these might run a googolplex steps and then halt or something similar.

  34. Justin Kee Says:

    Scott, thanks for the answer to #3 above.

    I am trying to understand what process causes that TM to actually halt. From the linked page, it appears that there is some modulo mathematics going on and when the remainder is 0 the TM takes the branch into the halting state. Would that be a fair assessment?

    Also, the decrementing of the exponents in the values of A sub n and K sub n, in the chart under the section Finishing Evaluation, reminds me of the decreasing ordinals used to demonstrate that the generalized Goodstein sequence terminates, even if that fact is unprovable in ordinary arithmetic. Would that be roughly analogous here?

    Disclosure: I am not a mathematician by any stretch.

  35. Scott Says:

    Justin Kee #34: I’m going to hand you off to Shawn Ligocki or the other experts on this thread—both because

    (1) while I took the time to understand exactly what the BB(5) champ is doing, I haven’t yet taken that time for the BB(6) champ, and because

    (2) I just arrived in Toronto for the CQIQC conference—my first trip to Canada since before covid—and I need to run to the banquet!

  36. Shawn Ligocki Says:

    Justin Kee #34

    Yes, at a high level, my analysis of this machine is that it is essentially iterating a Collatz-like function (incidentally, basically all known Busy Beaver candidates simulate Collatz-like functions). Simulating a Collatz-like function is equivalent to saying: Branch on the remainder (mod some n) and evaluate a different function on each branch (or halt). In this case, as you say, it halts if the remainder (mod 4) is 0 and applies some exponential function \(\approx k \to 3^k\) otherwise.

    In this case, this TM got “lucky” in that it took 15 iterations of the Collatz-like function before hitting a 0 (The expected time to hit a 0 is 4 assuming it’s effectively randomly choosing a remainder each time). Of course, we will tend to find the “luckiest” machines since we’re looking for the outlier here (the machine which runs the longest before halting).

    The challenging math necessary to prove this TM’s runtime is the question: how do you evaluate \(3^N \pmod{4}\) for very large N? How about \(3^{3^N} \pmod{4}\)? Etc. In this case, these questions turned out to be solvable using Number Theory results discovered by Euler 300 years ago. But if you tweak the TM behavior slightly, you can get problems that are basically unsolvable with today’s cutting-edge mathematics (requiring solving problems akin to the Collatz conjecture (3x+1 problem)).
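
    Not the machine’s actual rule or Shawn’s actual code, just a toy Python sketch of the Euler-totient trick being described: to evaluate a power tower modulo a power of 2, recursively reduce the exponent modulo the totient of the modulus. (The base 3 and modulus 4 are the illustrative choices from the paragraph above; the real analysis is in the linked write-up.)

      def totient_pow2(m):
          """Euler's totient restricted to powers of 2: phi(1) = 1, phi(2^j) = 2^(j-1)."""
          return max(m // 2, 1)

      def tower_mod(height, m):
          """(3 ↑↑ height) mod m, i.e. a tower of `height` threes reduced mod m.
          Assumes m is a power of 2, so 3 is coprime to every modulus in the chain and
          Euler's theorem 3^x ≡ 3^(x mod phi(m)) (mod m) applies at each level."""
          if m == 1:
              return 0
          if height == 0:
              return 1 % m                       # the empty tower is 1
          reduced = tower_mod(height - 1, totient_pow2(m))
          return pow(3, reduced, m)              # fast modular exponentiation

      print(tower_mod(2, 4))    # 3^3 = 27 ≡ 3 (mod 4)
      print(tower_mod(15, 4))   # a 15-high tower of 3s is ≡ 3 (mod 4): the non-halting branch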

    As for the Goodstein sequence comparison, I don’t see the connection. The proof of halting for this machine is much simpler than the Goodstein proof. We don’t need to assume the well-orderedness of any large ordinals, instead I’m basically just applying a fancy Number Theory algorithm 15 times and getting remainder 0. The decreasing exponents in the \(A_n\) and \(k_n\) columns are just an artifact of how the number theory works here. Knowing \(A_n \pmod{2^m}\) allows us to evaluate \(A_{n+1} \pmod{2^{m-1}}\).

  37. Shion Arita Says:

    @asdf#26:

    I’m not really sure how that could be, or how it could have any physical meaning. For one, if you actually ran the machine there would be a definite outcome. And also, due to the way the integers work, and Turing machines work, all the step counts will be standard integers, right? I suppose you could construct a different kind of Turing machine that operated in the other PA with nonstandard integers, but in that case, it would be a different Turing machine, wouldn’t it?

  38. mls Says:

    @ Dr. Aaronson, dankane

    Generally speaking, intuitions about “true arithmetic” are actually grounded in the categorical second-order Peano axioms. One cannot simply pretend that this can be maintained when speaking of first-order arithmetic.

    Recursion theory, however, does not use first-order logic. The very fact that it deals with the meaningfulness of expressions by including a form of equivalence setting all meaningless expressions “equal” to one another is a clear demonstration of this fact.

    Turing’s work on computation had been specifically motivated by the manipulation of syntax. The role of recursion theory in first-order metamathematics applies to its syntactic elements. Unfortunately, platonism — and Goedel’s platonism in particular — confuses the distinction between mathematics and metamathematics.

    It would be much more fruitful to couch this sort of disagreement in terms of Markov’s constructive views. The syntactic basis is compatible with Turing and the notion of metamathematical enumeration associated with Hilbert’s original distinction. Restricted to purely syntactic manipulation, bivalent truth valuation is realized through Markov’s strengthened implication. This relies upon a notion of “givenness.” This notion is criticized by philosophers who demand realism for mathematical truth.

    This is certainly not a criticism of the effectiveness of Goedel’s platonism when formulating the incompleteness theorems. The significance of his constructions lay with being quantifier-free. Hilbert sought security in finiteness because it eliminated the problems attributed to quantification. But, his platonism is a stronger assumption than needed for verification of well-formed syntax.

    Non-standard models arise from the general problem of non-categoricity in the first-order paradigm and not with incompleteness.

    My interest in foundations lies elsewhere. But, I believe I have stated the situation correctly. People who study computation ought to be taught about Markov and these distinctions. Unfortunately, there is a bias against constructive mathematics in the teaching of “mathematical logic.”

  39. Shion Arita Says:

    @Shawn Ligocki

    Sorry if this is a question where the answer becomes obvious once I’ve tried digging deeply into what you wrote about the machine, but if it only iterates the Collatz-like function 15 times, what makes it take so long to accomplish that?

    Second, more abstract question: I wonder what the first BB is where the TM is doing something fundamentally different from a Collatz-like problem. I would be very surprised if that’s the longest-running kind of process you can make a TM do. So there has to be a crossover point somewhere. Who knows, maybe even the real BB(5) or BB(6) is one of them??

  40. Scott Says:

    Shion Arita #39: I can answer your first question. Shawn explained that a function close to f(k)=3^k gets recursively applied to its own output 15 times. If so, then the final output will be 3^3^… (15 times), and even just printing it will take roughly that number of steps.

    As for your second question, it’s true that as n increases, it has to become harder and harder to explain what the nth BB is doing—since even repeated exponentiation processes like the above are dominated by the Ackermann function, even Ackermann is dominated by fast-growing computable functions that are more abstract still, and so on. Even so, it now seems plausible to me that the BB’s will continue to be “Collatz-like” for a sufficiently broad definition of “Collatz-like”! I.e. that they’ll continue to be understandable as applying some iterative process to positive integers, quickly producing larger and larger ones, until some simple arithmetical halt condition happens to be reached, but with the nature of the iterative process becoming more and more abstract.

  41. Scott Says:

    mls #38: My view couldn’t possibly be more different from yours!

    For me, recursion theory isn’t grounded in either first-order logic or second-order logic. It’s just the opposite: Turing machines—and equivalent concepts like cellular automata and Post rewrite systems—are themselves the most basic concept in the foundations of math, since they capture what you can do by applying clear, deterministic rules to discrete symbols. If that isn’t clear, then nothing else in math is clear, and certainly not first- or second-order logic.

    First-order logic is “grounded” by the fact that you can program a Turing machine to do the deductions for you. “Second-order logic,” by contrast, isn’t grounded by anything, which is why (e.g.) Quine regarded it as more of an aspiration than a logic.

  42. Joshua Zelinsky Says:

    @Scott #40.

    It is also worth noting that Collatz-like problems are Turing complete. So in some sense, everything can be thought of as a sufficiently abstracted Collatz-type problem. It isn’t clear if there’s a dividing line. That said, the current champions for BB(5) and BB(6) seem to be of Collatz type in a stronger sense, in that the actual numbers they are manipulating on the tape correspond directly to the Collatz-like procedure they are doing.

  43. Justin Kee Says:

    Shawn, thanks for the reply. It was edifying.

    “We don’t need to assume the well-orderedness of any large ordinals, instead I’m basically just applying a fancy Number Theory algorithm 15 times and getting remainder 0. The decreasing exponents in the \(A_n\) and \(k_n\) columns are just an artifact of how the number theory works here. Knowing \(A_n \pmod{2^m}\) allows us to evaluate \(A_{n+1} \pmod{2^{m-1}}\).”

    Ah, I see. I thought I saw a connection where it is simply decrementing based on the formula.

  44. Ben Says:

    Scott #31

    I agree that there is a clear formal game we can both understand the rules of which settles what it would mean to prove those statements, and I am capable of following the rules of that game to present what we both understand to be proofs of them.

    And it is a fun formal game that David Hilbert would have thoroughly approved of.

    However to the extent that mathematicians insist that everyone else should accept any particular meaning to the statements they make, or insist that they map to concepts that lay people may have, I do not wish to play that game. Because the formal game does NOT map to those concepts. Not even slightly.

    For an important example, it is reasonable for people to debate the extent to which a concept like 1 actually exists. It certainly feels like it should. But there is no meaningful existence to, say, real numbers that cannot be uniquely specified in any finite statement. And there remains no meaningful existence to them despite the formal proof that almost all real numbers fall into that category.

    Yes, if you play the formal game enough, the concepts it names start to feel real to you. That’s because of reification. Likewise if you spend enough time talking about theology as if it were real, you’ll likely start to feel that God exists. But that is only a feeling in your head. It doesn’t make it real.

    That said, play your game. You’re an expert and it is fascinating to watch. Even if I don’t understand all the steps. In the same light I also like watching chess, even though I do not have the positional understanding to be particularly good myself.

  45. Christopher Says:

    Nick Drozd #9:

    > The number of contest entries gets out of hand fast, growing like O(n^n) (what do you call that kind of growth?) with the number of states.

    Given how hard the problem is already, I wonder if using a programming language whose number of programs only grows like 2^n would be better?

  46. Scott Says:

    Ben #44: At risk of stating the obvious, math is a formal game that actually works if your goal is to build anything from nuclear weapons to spacecraft to microchips, or to understand the workings of black holes or subatomic particles or the early universe. That, if nothing else, would seem to make it of somewhat broader interest than chess, much as I also like chess.

  47. Ben Says:

    Scott #46

    I agree that math is generally more interesting and important than chess for exactly the reasons you state.

    But the philosophical points under dispute result in complete agreement on any question that can be settled by computation. Either in theory or practice. And it is only computable math that affects our ability to build anything from nuclear weapons to spacecraft to microchips, or to understand the workings of black holes or subatomic particles or the early universe.

    Therefore any dispute over the parts of math where our philosophical differences would lead to a disagreement can have no practical importance. And within that domain, the comparison to chess becomes perfectly fair.

  48. Nick Drozd Says:

    Scott #40

      Even so, it now seems plausible to me that the BB’s will continue to be “Collatz-like” for a sufficiently broad definition of “Collatz-like”! I.e. that they’ll continue to be understandable as applying some iterative process to positive integers, quickly producing larger and larger ones, until some simple arithmetical halt condition happens to be reached, but with the nature of the iterative process becoming more and more abstract.

    It’s true that the current best theory of BB machines is that they apply Collatz-like functions. This theory is falsifiable, but it hasn’t been falsified by recent discoveries even though it could have been. Every time anyone has tried to find a new champion, they find something Collatz-like. (This includes Pavel’s new champ, as well as the Beeping Busy Beaver machines discovered by me and Shawn.)

    However, it is also the case that the best simulators (like Shawn’s) are specifically designed to detect Collatz-like behavior. If there are any long-running machines that are not Collatz-like, they will go undetected. Counterexamples are unobtainable.

    So although there is significant evidence for the Collatz theory, there is plenty of room for skepticism. Personally I think the Collatz trick must fail at some point, and a better strategy will overtake it. And that strategy will also get overtaken eventually, etc, forever. It would be convenient from a theoretical perspective if it was Collatz all the way. That’s a nice clear picture to paint. But it also sounds awfully close to making BB computable, and is therefore too good to be true.

  49. Nick Drozd Says:

    By the way, Conjecture 13 from the Frontier survey says that there is an n_0 such that for all n >= n_0, BB(n + 1) > 2^BB(n), “perhaps even with n_0 = 6”. It seems that this is already true with n_0 = 5! (Still not proved though.)

  50. Danylo Yakymenko Says:

    Scott #12:

    They just need to prove that 1000-state beeping TMs can suitably simulate (and thereby dominate) 1002-state ordinary TMs for the purpose of running for huge finite amounts of time. Admittedly, I don’t know off the top of my head exactly how to do that, but I strongly conjecture that it can be done. It’s actually a great question. Does anyone here have ideas? 🙂

    There must exist a (meta) Turing machine that simulates any Turing machine by using its description encoded on the tape. This meta machine has some finite number of states. Next, we modify our meta machine to do the following. In a cycle, it enumerates the descriptions of all n-state machines and starts parallel simulations of each. If a machine halts, the meta machine beeps. Clearly, the meta machine stops beeping after some time bigger than BB(n).

    But the thing is, the description of such a modified meta machine depends only on n. We don’t need to incorporate the description of the BB(n) record machine in the meta machine. In other words, the Kolmogorov complexity of the meta program depends only on the Kolmogorov complexity of n, and not on the Kolmogorov complexity of BB(n), which is n, roughly speaking.

    I believe that this argument can be used to prove bounds that involve comparison of BBB(n) with BB(BB(n)) like numbers.

  51. Scott Says:

    Ben #47: Ironically, my philosophy of math is that the parts of math that are definitely meaningful, are precisely the parts that can ultimately be “compiled down” to statements about Turing machines. The remaining parts, like transfinite set theory, might be meaningful, but might also have no meaning above and beyond statements about the formal systems that we use to discuss them.

    This doesn’t sound far at all from your position! Except that I draw the “circle of meaningfulness” a bit wider, not merely around what actually has been computed, but around anything that can in principle be reduced to questions about computation (e.g., “does this Turing machine halt?”).

  52. Scott Says:

    Nick Drozd #48:

      It would be convenient from a theoretical perspective if it was Collatz all the way. That’s a nice clear picture to paint. But it also sounds awfully close to making BB computable, and is therefore too good to be true.

    “Sounds awfully close” maybe, but this theory interestingly fails to imply that BB is computable (which would of course falsify it)! The issue is that there’s no way to know in advance when a Collatz-like iterative process will halt—indeed, John Conway showed in the 70s that reasonable formalizations of that question are simply equivalent to the halting problem.

  53. Scott Says:

    Nick Drozd #49:

      By the way, Conjecture 13 from the Frontier survey says that there is an n_0 such that for all n >= n_0, BB(n + 1) > 2^BB(n), “perhaps even with n_0 = 6”. It seems that this is already true with n_0 = 5! (Still not proved though.)

    Ah, good catch!

  54. Scott Says:

    Danylo Yakymenko #50:

      Next, we modify our meta machine to do the following. In a cycle, it enumerates the descriptions of all n-state machines and starts parallel simulations of each. If a machine halts, the meta machine beeps. Clearly, the meta machine stops beeping after some time bigger than BB(n).

    Aha, beautiful! In principle, you’ve nailed this question. To answer the original question, all that remains is to construct the meta machine and show that it has fewer than 1000 states.

  55. mls Says:

    Dr. Aaronson #41

    While I might disagree with any particular statement you might make beginning with “Mathematics is…” you have basically stated the distinction I had been trying to make. It is precisely why I brought up Markov’s constructive approach and how the recovery of bivalence requires a different paradigm from what is taught in “mathematical logic.”

    My remark had been motivated by the fact that you had been engaged in a discussion about arithmetic and its models. Among other things, an appeal to the authority of Goedel’s platonism had also been made. This had been why I spoke of the categoricity of second-order Peano arithmetic as the mathematical source for intuitions about “standard numbers” which are often subsumed under the expression “true arithmetic.” And because the existence of non-standard models of first-order Peano arithmetic had been attributed to incompleteness, I offered a correction which I believe to be correct: non-categoricity arises because of the paradigmatic idiosyncrasies of the first-order paradigm.

    I am under the impression that Turing machines still assume the existence of an infinite tape. If you can ever find the time, you should look at the first few sections of Markov’s “Theory of Algorithms” to see how the elimination of this infinitary presupposition forces the logic to be based upon being provided a “given expression.” You will find that Markov’s work yields Turing completeness and is comparable to any rewrite systems you may study. The issue lies with the fact that different accounts of truth are in play. That is a matter which most mathematically inclined people prefer to avoid.

    I am well aware of the kind of characterizations you have made with regard to first-order and second-order logic. If you choose to believe that mathematics is nothing more than proofs built from the well-formed formulas of formal systems, it is a belief which you can certainly defend. I have made that very defense without believing a word of it.

    It becomes problematic when you try to account for truth and applicability. Quine also busied himself with questions of ontological commitments. For the sake of science-as-truth we are asked to ascribe a material existence to whatever is denoted by the singular terms of mathematical discourse ( with “material existence” not referring to a physical existence in this statement).

    You may ridicule second-order constructs, but, please demonstrate how knowledge of abstract objects is objective. Although I am unaware of an exact source, I ran across an internet post about a year ago claiming that Goedel wondered how we could even communicate with one another. Yes, I navigate my personal experience using abstract objects. I expect their existence to cease with my own. How could they possibly have an independent existence?

    If mathematics is purely linguistic, why should any explanation requiring a knowledge of mathematics be believed? Is our physical description of the universe “true” by dumb luck?

    As I understand matters, this is the difficulty posed by the view of mathematics you seem to have expressed in your reply. So, we clearly have a different understanding of the subject (as you surmised correctly).

  56. Ben Says:

    Danylo Yakymenko #50 and Scott #54:

    This kind of meta-programming idea is exactly why Gregory Chaitin preferred working in his LISP variant over Turing machines. But Ken Thompson remarked in Reflections on Trusting Trust that writing quines in FORTRAN had the same appeal as running 3-legged races does. Trying to meta-program in Turing machines should have the same appeal.

    Scott #51:

    Agreed. The difference between us is that you think it is meaningful to talk about the truth of a statement that is both unknown and in principle unknowable by us. I don’t. But other than that, we are on the same page. 🙂

  57. mls Says:

    Concerning comment #55

    How did that get posted? After submission I regretted how it went too far off topic at the end. So, I actually deleted the confirmation message from my email.

    In any case, it has been my experience that people who wish to emphasize effective mathematics often do not recognize when their views become essentially constructive. The first time I ran across the situation I was being castigated about the semantics of first-order logic on the basis of Herbrand logic. And, my original posting had been intended to clarify that a similar mismatch had been manifesting itself in the discussion about “standard numbers” and non-standard models of Peano arithmetic.

    Oh well, I will have to keep this in mind for the next time I write something quickly before work.

    Thank you again for this wonderful blog.

  58. Scott Says:

    mls #57: I can only see who confirmed their email address from a laptop, not from my phone. And in any case, when someone doesn’t confirm, I’m put in the difficult position of deciding whether the email might’ve gotten lost or whatever, and whether the comment is anodyne enough that I should just approve it anyway. So please don’t use this as a way to delete submitted comments!

  59. Bram Cohen Says:

    When are people going to start working on non-deterministic busy beaver numbers?

  60. Scott Says:

    Bram Cohen #59: Nick Drozd and I worked on Beeping Busy Beavers, which have the same growth rate as nondeterministic Busy Beavers but which I find a lot nicer. You’re welcome either to join us or to propose and study your own definition of NBB!

  61. Bram Cohen Says:

    Scott #60: I’m pretty sure that non-deterministic busy beaver numbers go up vastly faster than beeping busy beaver numbers, although I’m not sure how to do a reduction one way or the other, and there are also beeping non-deterministic busy beaver numbers which should be larger than either of them.

    To clarify what’s meant here: A non-deterministic busy beaver is one where in some states there are multiple possible transitions it can do and some external input source picks which one. Its number is the longest it can possibly run on any input assuming that that value is finite. A beeping non-deterministic busy beaver has the same input structure but can sometimes beep and its number is the absolute last time it can make a beep on any input as long as that number is finite. Obviously you can reduce either the beeping busy beaver or the nondeterministic busy beaver to the beeping non-deterministic busy beaver but the relation between the other two isn’t so obvious and it’s funny that our intuitions are so completely at odds on that one.

  62. Scott Says:

    Bram Cohen #61: No, one can prove that BBB and NBB both have growth rates like BB for Turing machines with BB oracles (in other words, the second level of the arithmetical hierarchy). Would you like to prove this or do you want pointers? 🙂

  63. Joshua Zelinsky Says:

    By the way, Nick Drozd at one point noted that he expected BB and BBB machines to generally have more “write 1” than “write 0” commands. Proving this seems hard. However, it isn’t that tough to prove the following:

    Set BB(m,n) to be the Busy Beaver problem on n states but only over those Turing machines which have at most m write-1 instructions. Then BB(1,n) is computable.

    Conjecture: Given k, there is a computable function f such that for all n, BB(k, n) < f(n).

    Note that this is not the same as asserting that BB(k,n) is in general computable; and it shouldn't be. Since BB(n,n) = BB(n), the choice of f functions should themselves grow very fast. But it seems like this should be an attackable problem to show that such an f exists. I'd be interested even if someone can show that BB(2,n) is bounded above by some computable function.

  64. Scott Says:

    Joshua Zelinsky #63: Did you mean the number of “write 1” transitions in the machine, or the number of times such transitions are actually followed when the machine is run? Whichever one you meant, what can you say about the other one? 😀

  65. Scott Says:

    Oh alright, presumably computability is trivial for TMs that write “1” to their tape at most k times, since such machines should be able to run for at most O(sk^2) steps or something, where s is the total number of states, right?

  66. Bram Cohen Says:

    Scott #62: Oh I think I see it now. A BBB can simulate an NBB and whenever the thing it’s simulating finishes it beeps and moves on to the next possible input string. My brain isn’t used to thinking ‘That’s just going from X to C^X which isn’t a big slowdown’. But this makes me even more confused about what the runtime of a BNBB is.

    Joshua #63: An interesting question is what do busy beaver numbers look like on the subset of machines which never overwrite 1s with 0s.

  67. Joshua Zelinsky Says:

    Scott #64, and 65. Sorry for poor phrasing on my part! I meant number of “write 1” transitions in the machine itself.

  68. Ben Says:

    Joshua Zelinsky #63

    Your conjecture seems unlikely. Doubly so since it is not hard to prove the following:

    Given an axiom system that formalizes arithmetic, there is a finite K such that if it can prove a computable upper bound to BB(k, n) for any k > K, then it is inconsistent.

    Proof: Consider a Turing machine that provably searches through all proofs from your axiom system for a proof of the consistency of said axiom system. This is a finite Turing machine with some fixed number K of write-1 instructions. Note that from a contradiction you can conclude anything, so if the system is inconsistent, then the machine will eventually find a proof of its consistency. The axiom system can prove all of these facts about the machine in question.

    But, by assumption, it can also prove a computable upper bound to BB(K, n). Therefore it can just trace a simulation of the machine for that time. If the machine halts, it produced a proof of consistency. If it did not halt, the axiom system has now produced a proof that the machine never halts, so there must never be a contradiction (because consistency immediately follows), and either way the axiom system proved itself consistent.

    And, having proved itself consistent, it must in fact be inconsistent, by Gödel’s second incompleteness theorem.

  69. mls Says:

    Dr. Aaronson #58

    It had been entirely unintentional. You have my sincere apology.

  70. Joshua Zelinsky Says:

    Also, one other thing: I wrote that BB(n,n) = BB(n), but that’s not necessarily true. It is true that BB(2n,n) = BB(n), since there are at most 2n transition commands in an n-state machine.

  71. Joshua Zelinsky Says:

    Ben #68,

    Hmm, that’s a good point, and that does make my conjecture seem substantially less likely.

  72. Danylo Yakymenko Says:

    Scott #62:

    There is a simple way to understand why beeping machines can utilize other beeping machines as subroutines. Say a beeping machine \(M_1\) computes a function \(f(n)\) in the sense that \(f(n)\) is encoded on the tape at the moment of the last beep (where \(n\) was the input). The main beeping machine \(M\), which simulates \(M_1\), can use the tape state of \(M_1\) as the value of \(f(n)\) just after its first beep – and recompute everything if \(M_1\) beeps again.

    There is a beeping machine that computes \(BB(n)\) (like the one that simulates all \(n\)-state Turing machines in parallel), so we can use it as a subroutine.
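
    To make that construction a little more tangible, here is a minimal Python sketch (my own illustration; the machine encoding and the toy machines are made up, and a real construction would enumerate all \(n\)-state machines rather than take a fixed list): dovetail the machines, track the longest halting run seen so far, and “beep” each time that record improves. The record at the moment of the last beep is the true maximum, which is exactly what makes this a limit computation rather than an ordinary one.

    # Dovetail a list of machines; beep whenever the longest-halting-run record improves.
    def dovetail_and_beep(machines, budget=1000, halt="H"):
        configs = [({}, 0, "A", False) for _ in machines]   # (tape, head, state, halted?)
        record = 0
        for step in range(1, budget + 1):
            for i, table in enumerate(machines):
                tape, pos, state, halted = configs[i]
                if halted:
                    continue
                write, move, new_state = table[(state, tape.get(pos, 0))]
                tape[pos] = write
                configs[i] = (tape, pos + move, new_state, new_state == halt)
                if new_state == halt and step > record:
                    record = step
                    print(f"beep! new record: {record} steps")
        return record

    toy_machines = [
        {("A", 0): (1, +1, "H"), ("A", 1): (1, +1, "H")},                  # halts in 1 step
        {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
         ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")},                  # halts later
    ]
    print(dovetail_and_beep(toy_machines))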

    I find beeping machines a beautiful generalization of the Turing formalism for the “next level” computability. Even though it’s easier to think in more abstract terms, like functions, compositions and programming in general.

    P.S. sorry for spamming the same comment, but after an edit on site TeX tags appear broken to me (not rendered).

  73. Joshua Zelinsky Says:

    Ben #68,

    Thinking about this more, I’m not convinced that your result is a good argument against the conjecture (although it is interesting). If we look at Scott’s variant, where we count the number of times a 1 is actually written to the tape instead of the number of write-1 instructions in the transition table, then Scott’s comment in #65 shows that the corresponding functions are bounded above by computable functions for any fixed k; but the implicit constants in the O should grow very fast, to the point where a similar argument to yours might seem to go through for those functions as well – even though it can’t, since those functions really are computably bounded.

  74. Nick Drozd Says:

    Scott #52

    I don’t know, iterate-until-condition just seems like too wimpy of a technique. It’s a simple idea and so it can be made to fit into these small machines, but I find it hard to believe that there aren’t any better ideas at all.

    But more importantly, suppose that all BB champion machines were Collatz-like. This fact could be put to use in searching for champions. It would allow winnowing down the set of machines to be considered by discarding anything that wasn’t Collatz-like. This would be very helpful from the searcher’s perspective. This sounds too good to be true, and I don’t think that kind of help will be available.

    My feeling is that this holds generally: nothing non-trivial can be said about all BB champion machines, because otherwise it would be too helpful.

    For example, Joshua Zelinsky conjectured that the control flow graphs of champion machines should be strongly connected. This is definitely, obviously true (and maybe even provable), but it turns out not to be of much use in searching because the best known enumeration process already mostly produces machines that are strongly connected. So from a search perspective, that fact doesn’t amount to much.

  75. Bram Cohen Says:

    TMs which must write a 1 every time are limited, but they may be superpolynomial. When they hit an edge of the written region they can bounce back and traverse to the other end. Which state they’ll be in at the other end depends on the current length of the run of 1s mod the length of the state cycle they went through on the way. Because they can’t distinguish any two strings of 1s whose lengths differ by a multiple of N!, there’s an exponential limit on their potential runtime.

    I have a conjecture now that a beeping non-deterministic busy beaver has the same strength as a TM with a beeping busy beaver oracle. Curious whether that’s obviously true or obviously false to anyone.

  76. Ben Says:

    Joshua Zelinsky #73

    My argument indeed does not prove the conjecture wrong. But does make it seem more dubious.

    Now here is a machine that I’m not positive can be written with a fixed number of 1s. But I think it can be. And if it can, then your conjecture is certainly wrong.

    Write a 1, then some string of m 0s, then another 1 and advance. (Takes m+2 states so far.) Now shuttle back and forth writing 1s until you encounter the first 1. (A state to walk forward through 1s, then write one and reverse. A state to walk backwards through 1s then write one and walk forward to look. A state to decide whether you’re done or to walk backwards.) We are now at m+5 states and a string of 2m+2 1s.

    And now for where I wave my hands wildly because I don’t know for sure it is possible. Though it seems likely to me that it is. (With a better general computing model it would definitely be.)

    With a fixed number m1 of additional states, compute the computable function that you believe to be an upper bound, applied to that string of 1s, and record both the result and the original string. Now, one by one, go through each Turing machine of length at most 2m+2 with at most m1+5 write-1 instructions, and simulate it for that many steps (the value of the computable function). After you are done, halt.

    If that is possible, then simply by making `m` large enough we have a machine with a fixed number of write-1 states that takes longer to finish than any such machine is supposed to take.

    Therefore if k is large enough, then a computable upper bound on BB(k, n) would lead to a contradiction.

  77. Will Says:

    I only have a vague grasp of the concepts involved here, so apologies if this is a nonsensical line of thought.

    We know that BB(748) is independent of ZFC.

    Furthermore, there is a \(k \leq 748\) that is the smallest integer such that BB(k) is independent of ZFC.

    Likewise, for an axiomatic theory X, which may be stronger than ZFC, there is a \(k_X\), which is the smallest integer such that \(BB(k_X)\) is independent of X.

    If we consider X = ZFC + Con(ZFC), do we expect (or know) that \(k_X > k_{ZFC}\)?

    And then if we continue to iterate on this (considering X’ = X+Con(X)), can we extend the grasp of “mathematics” (i.e. things provable with a reasonable set of axioms) to BB(748) and beyond?

    How far can we go?

    Do we get this “for free” or is there some price we have to pay for it?

  78. Joshua Zelinsky Says:

    Ben #76

    Yeah, I don’t see precisely how to fill in the details of that construction either. But it seems like that construction or something very similar to it should be doable. So that makes me strongly update in the direction that my conjecture is false! That also raises the question of what the smallest k is where it breaks down.

  79. Joshua Zelinsky Says:

    @Will #77

    We don’t have any reason to think that ZFC + Con(ZFC) is stronger in the direction of higher Busy Beaver numbers. In general, a lot of the undecidable statements we care about don’t become decidable just from adding stronger consistency claims. For example, ZFC + Con(ZFC) doesn’t decide the Continuum Hypothesis.

    You can, if you want, construct a series of theories, say ZFC0 = ZFC, and then ZFCn = ZFC(n-1) + Con(ZFC(n-1)). There are some serious issues with making this well-defined if you try to keep going, but for any finite set of these we don’t have an issue. We don’t have any reason to expect that any of these theories are better about Busy Beaver. But they might be. This is an interesting question.

    In the other direction, we can however construct very strong theories by adjoining to ZFC some finite list of correct Busy Beaver values. Set ZBn to be ZFC with axioms for the “correct” values of BB(1), BB(2), …, BB(n). Then for any n, the theory ZFCn is contained in ZBm for some sufficiently large m (assuming that ZFC is actually consistent), since for any ZFCn we can make a Turing machine which searches for contradictions in ZFCn and halts if it finds one. That ZFCn-searching machine has some number of states m, and thus ZBm knows that that machine doesn’t halt, and so ZBm can prove that ZFCn is consistent. (I think this last paragraph has been pointed out by someone in the comments on this blog before. Possibly it was Scott himself.)

    I have not seen anyone before ask your question of whether ZFC with consistency axioms knows more about Busy Beaver values though, and that seems like an interesting but probably difficult question!

  80. Scott Says:

    Will #77 and Joshua Zelinsky #79: I actually discussed this question in my survey (see especially the comment from Oscar Cunningham on page 6).

    Basically, as you add more axioms—from ZF to ZF+Con(ZF) to ZF+Con(ZF)+Con(ZF+Con(ZF)) and so on through the computable ordinals or beyond—the least k for which BB(k) is independent of your system certainly doesn’t decrease and it very plausibly increases. But of course, it never increases beyond the size of the smallest Turing machine that can iterate through all the theorems of your latest system.

  81. Joshua Zelinsky Says:

    @Scott #80, Ah, so that’s where I saw this before. I did think that that point wasn’t original to me, but couldn’t remember where I had seen it. Thank you.

  82. Bram Cohen Says:

    Building on the BNBB idea, here’s a much more elegant construction:

    Define a beeping and booping Turing machine such that one of its states is a beep and another is a boop. To not halt, it must not only make an infinite number of beeps, but the number of boops which occur between successive beeps must be unbounded. The beeping booping busy beaver number for a given machine is the maximum number of boops which occur between successive beeps (or maybe the number of beeps after which that happens, or the greater of those two numbers).
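
    To make the scoring concrete, here is a tiny Python sketch (my own illustration; the event trace is made up, and of course a real BBBB would be judged over its entire infinite run, not a finite prefix) of the “maximum number of boops between successive beeps” statistic:

    # Largest number of boops observed between two successive beeps in a finite trace.
    def max_boops_between_beeps(trace):
        best, current, seen_beep = 0, 0, False
        for event in trace:
            if event == "beep":
                if seen_beep:
                    best = max(best, current)
                seen_beep = True
                current = 0
            elif event == "boop":
                current += 1
        return best

    trace = ["boop", "beep", "boop", "boop", "beep", "boop", "boop", "boop", "beep"]
    print(max_boops_between_beeps(trace))   # 3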

    Obviously a BBBB is a much more elegant construction than a BNBB, but there are questions:

    Does a BBBB correspond to a level 2 oracle? If so, does that imply that its busy beaver numbers grow vastly faster? Does its max boops or number of beeps where that happens grow faster?

    I could be all wrong about this and a BBBB may be no stronger than a BBB, but I think there’s something in this general direction. BBBBs have the nice property, like BBBs, that they don’t change the underlying TM’s execution; they just ask for a more complete characterization of its behavior.

  83. Douglas Knight Says:

    Joshua 79,

    My understanding is that CH is the exception and most interesting undecidable statements do fit in the large cardinal hierarchy. Also, it’s very surprising that the large cardinal hierarchy is approximately totally ordered.

  84. Bram Cohen Says:

    Answering my own dumb question again: A beeping and booping busy beaver, at least as far as I described it above, can be emulated with a beeping busy beaver by simulating it and then beeping whenever it gets a longer run of boops than has ever happened before. So at least that method of mixing in booping doesn’t help any. The same applies for a BNBB, because you can run every possible input and beep whenever any of them beeps. It feels very odd that there are multiple things which are equivalent to a level 1 oracle but piling them together stays at a level 1 oracle. I’m not entirely sure there isn’t something which can be done with beeping and booping to get to level 2 and will think about it some more.

  85. Bram Cohen Says:

    Okay I think I have it now. Let’s interpret the output of a beeping booping busy beaver as a series of numbers, found by counting the number of boops between successive beeps. We say that it ‘halts’ if every positive integer only occurs a finite number of times in the output. If it doesn’t ‘halt’ then the BBBB number for that BBTM is the first number it spits out which it later repeats an infinite number of times. I *think* this is level 2, and hence should make much larger numbers than those tiny ones which vanilla beeping busy beavers spit out.

  86. Will Says:

    Thanks for answering my questions!

    I’ve got one more. We know there’s a number k, the least number such that BB(k) is independent of ZFC. And we know an upper bound on this is 748. But what about the lower bound? Is our current best lower bound just 4? Is there any way to get a lower bound other than just proving what the value of BB(n) is?

  87. Scott Says:

    Will #86: Indeed, the only known lower bound on that k is 5! (I mean 5, not 120.)

    I discussed this in my survey, where I also raised the question of whether the least k such that the value of BB(k) provably implies Con(ZFC), coincides with the least k such that the value of BB(k) is independent of ZFC.

  88. Nick Drozd Says:

    Bram Cohen #84

    In the real world, we can only ever execute a finite number of steps in a computation. There’s no number of finite computations that can be stacked together to get to an infinite number of steps — that stack of computations is still finite.

    An oracle for the halting problem would magically allow compressing an infinite number of steps into a single call. But then you’re faced with the same problem. That oracle can still only be called finitely often. And no number of finite calls to the oracle can be stacked together to get to an infinite number of calls to the oracle. To do that, you need another oracle, etc.

    As far as defining BB analogues further up the hierarchy, it would be helpful if the definitions came with an example of a TM that exhibits the behavior in question. If Scott had just defined “beeping” machines and left it at that, it would have been like, wtf is he talking about? But he went a step further and exhibited one, so it was clear exactly what he meant.

  89. Joshua Zelinsky Says:

    I strongly suspect that Scott is already aware of the following, but there’s a definite limit to how much of Busy Beaver you can decide just by adding stronger consistency claims to ZFC.

    Let ZFC0=ZFC, and let ZFCn = ZFC(n-1) + Con(ZFC(n-1)) as before.

    Claim: There exists a k such that for all n, ZFCn cannot decide the value of BB(k).
    Proof: Assume there was no such k. Then we could make a Turing machine which calculates all BB values by dovetailing through all theorems of ZFC0, ZFC1, ZFC2… . Since we can’t do that, there must be a k such that BB(k) is not decidable by ZFCn for any n.
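
    For readers who haven’t seen dovetailing before, here is a minimal generic Python sketch of the interleaving step (nothing here is an actual theorem enumerator; the streams are hypothetical stand-ins for “the theorems of ZFC0”, “the theorems of ZFC1”, and so on):

    # Generic dovetailing: interleave infinitely many streams so that every element of
    # every stream is eventually reached.
    from itertools import count

    def dovetail(make_stream):
        """make_stream(i) returns an iterator for the i-th stream."""
        streams = []
        for n in count(1):
            streams.append(iter(make_stream(len(streams))))
            for s in streams[:n]:
                yield next(s)

    # Demo with toy stand-in streams: stream i yields "T_i proves phi_0", "T_i proves phi_1", ...
    def toy_stream(i):
        return (f"T_{i} proves phi_{j}" for j in count())

    gen = dovetail(toy_stream)
    for _ in range(10):
        print(next(gen))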

    One can try to extend this to ordinals rather than just integer values of n, but I suspect that doing so is here-be-dragons territory, given the delicate issues that show up when trying even to make such extensions of ZFC well defined.

  90. Scott Says:

    Joshua Zelinsky #89: Yes, I’m aware of all that, since Oscar Cunningham pointed it out to me. But have the courage of your convictions and keep going! 🙂

    For every computable extension of ZFC, there will exist an integer k such that BB(k) is independent of that extension. Conversely, though, nothing rules out the possibility that for every k, there’s some “natural” extension of ZFC that decides BB(k). The “dragons” are in deciding what counts as a “natural” extension. We could always create one artificially by just adding the BB(k) value as a new axiom. But if we want to do it via iterated consistency statements, then the answer might depend on the seemingly trivial issue of which Turing machine we use to enumerate the consistency statements, as Alan Turing observed in his 1939 PhD thesis!

  91. Joshua Zelinsky Says:

    Also, people interested in this topic may be interested in this new preprint by Joel David Hamkins, https://arxiv.org/abs/2208.07445 , which may connect to some of these issues. At a minimum, it explicitly connects to the name-the-biggest-number game and how that ties in to large cardinal axioms.

  92. Bram Cohen Says:

    Nick Drozd #88:

    By the latest definition above, beeping booping busy beaver numbers are well defined, but the numbers can be very hard to calculate. As far as finding a good one, probably the best approach is to take a failed beeping busy beaver which beeps infinitely and assign one of its other state transitions to be booping. Somewhat amusingly, the exact same machine with different places assigned to beeping and booping can potentially make a much larger number than even a large beeping Turing machine number. As long as the behavior is ‘messy’ it makes sense, because calculating beeping and booping requires a much more complete characterization of the machine’s behavior than just beeping.

  93. Christopher Says:

    Scott #6 and related comments:

    I am myself a platonist for arithmetic at least. That being said, there is a way to *formalize* the position that BB(8000) doesn’t exist, even if you don’t agree with it.

    One thing people attempt to say is that since ZFC can’t upper bound BB(8000), ZFC doesn’t think BB(8000) exists. This isn’t very satisfying though, because ZFC still proves the statement “there exists n such that n is the 8000th busy beaver number”, it just doesn’t know the exact value. There’s nothing wrong with ZFC believing that a question has an answer it doesn’t know.

    However, there are theories in what are known as “constructive mathematics” that *don’t* even prove the theorem “there exists n such that n is the 8000th busy beaver number”. As such, it’s even consistent to assume it’s false. Some such theories are Heyting arithmetic and CZF (constructive ZF set theory).

    It is possible to do a lot of “ordinary” mathematics in constructive theories (even analysis and set theory); it’s just a bit awkward. Also, theories like Heyting arithmetic still think that the statement “n is the 8000th busy beaver number” is a perfectly meaningful statement, just that it might have no solution. So if you want to work with BB(8000) constructively, you can say “forall n. n is the 8000th busy beaver number -> phi(n)” where phi is what you want to say about it.

    I don’t think constructive mathematics is “more accurate” than classical mathematics, but I find its meta-mathematical properties interesting. A good constructive system T will satisfy what’s known as Church’s rule, which states:

    > For all formulas phi(x, y), if T |- “forall x in ℕ. exists y in ℕ. phi(x, y)” then there exists a computable function f_phi such that T |- “forall x in ℕ. phi(x, f_phi(x))”.

    So we can turn proofs in T into programs. (There is already software to do this.) Note that T doesn’t *disprove* that uncomputable functions exist (generally speaking); it just can’t prove the existence of any particular uncomputable function.

  94. Getting started without a pre-existing understanding of non-standard natural numbers | Gentzen translated Says:

    […] Beavers and Turing machines halting or not, depending on your version of the natural numbers. Scott repeated his argument that you would get trapped in an infinite regress if you conceded to the objection of the sceptic […]

  95. Joshua Zelinsky Says:

    By the way, I asked the question about BB(k,n) over at Mathoverflow, https://mathoverflow.net/questions/430649/a-variant-of-the-busy-beaver-function , and Ville Salo has an answer there showing it is false. The smallest such k for which it is false is still open, but it may be very small.

    On the bright side, this may give an interesting family of functions which grow faster than any computable function but still slower than BB.

  96. Scott Says:

    Joshua Zelinsky #95: Thanks!!

  97. Nick Drozd Says:

    Joshua Zelinsky #63, 95

    > Set BB(m,n) to be the Busy Beaver problem on n states, but only over those Turing machines which have at most m write-1 instructions. Then BB(1,n) is computable.

    Could you explain why this is true? It seems to me that you could do a lot with a gajillion states, even if only one of them ever prints a mark.

    By the way, I would choose a different name for this one. BB(m, n) could be interpreted to mean BB with m states and n colors.

  98. Joshua Zelinsky Says:

    @Nick Drozd,

    I thought I had an argument that these machines needed to always either halt, march out in one direction forever, or go into a simple loop, but now that I’m thinking about it more, my argument seems fatally flawed.

    (And yes I agree that my choice of notation may not have been ideal either. If you have a better notation to propose I’ll adopt it.)

  99. Joshua Zelinsky Says:

    @Nick Drozd #97,

    Here’s an argument that I think works to show that every machine with a single write-1 instruction has easily classifiable behavior on the all-zero tape.

    If the write-1 instruction is in response to reading a 1, then it will never be triggered, and our machine will only move around writing 0s, which will obviously either get stuck in a loop or spiral out (just trace the path it takes on the graph). So, it must be in response to reading a 0.

    We may also assume that the write-1 instruction is in the start state, since if it is not, we will just have written a few 0s over other 0s before first reaching that state.

    So now assume we have started and written a single 1. All other transitions write 0s. If we get into a process that gets stuck outside the write-1 state, this is easily classifiable. So assume we eventually get back to that state, again just by tracing things through. Then either that start state halts on reading a 1, or it continues and writes a 0. In that case the tape is back to an all-zero state, and the machine will then loop the same way it did earlier until it returns to the start state, where it writes a 1 again; it is now in a functional loop. The position of the 1 may change, but the machine doesn’t have any memory of that.
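
    A quick way to poke at this empirically: here is a minimal Python simulator sketch (my own illustration; the encoding and the toy machine are made up). It runs a 2-symbol machine from the blank tape and stops with a verdict when the machine halts or when the configuration (state plus tape contents relative to the head) repeats, which is enough to catch the “functional loop” behavior described above on small examples.

    # Simulate a machine from the blank tape; detect halting or a repeated configuration.
    def run(table, limit=10_000, start="A", halt="H"):
        tape, pos, state = {}, 0, start
        seen = set()
        for step in range(limit):
            if state == halt:
                return ("halted", step)
            ones = tuple(sorted(p - pos for p, v in tape.items() if v == 1))
            if (state, ones) in seen:
                return ("looping (up to translation)", step)
            seen.add((state, ones))
            write, move, state = table[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
        return ("no verdict within limit", limit)

    # Toy machine whose single write-1 instruction is in the start state, on reading 0:
    toy = {
        ("A", 0): (1, +1, "B"),
        ("A", 1): (0, +1, "B"),
        ("B", 0): (0, -1, "A"),
        ("B", 1): (0, -1, "A"),
    }
    print(run(toy))   # reports a loop: it writes a 1, erases it, and repeats forever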

  100. P.A.R.T.Y. Says:

    #41 Scott, it’s significant that you didn’t include lambda calculus in that list. I highly recommend that you learn it at least at a basic level.

    In fact, if we are talking about an untyped lambda calculus, then there is nothing simpler – even in principle! Think of it this way: imagine the simplest entity that can do something. We call such entities functions – they take an argument and return an output. But what if we have nothing at all but functions? In this case, all functions can take and return are other functions! (They can even take themselves as an argument – which, however, is not surprising).
    Here are some of these functions explicitly:

    \a -> a (identity function, or “idiot”);

    \a -> a a (understudy, or “mockingbird”);

    \a -> \b -> \c -> a c (b c) (S-combinator).

    (There are some other notations without letters, like de Bruijn notation (which eliminates the need to sometimes rename variables to avoid collisions), but I won’t cover them here; Wikipedia has an explanation of how they work).

    You can see that we don’t even need multi-argument functions – a function can simply take the first argument and return another function with that argument “captured inside” (this is also called a “closure”, and the process of “expanding” a function of many arguments into a function of one argument, returning another function of one argument, etc. until they are exhausted is called “currying”).

    Thus, we have two “fundamental constructions” that don’t even make sense without each other. They are called “abstraction”, i.e. the construction of a function, which I denoted by an arrow (in \a -> a the first character after the slash (a lambda in the original notation) is just a placeholder saying “you need to put an argument here”, the arrow marks the actual function, and the last character *a* means “replace me with whatever was supplied for \a”), and “application”, denoted by simple juxtaposition. Here’s how it works:

    (\a -> a a) (\a -> a a) =>
    (\a -> a a) (\a -> a a) =>
    (\a -> a a) (\a -> a a) =>

    (By => I meant the reduction process – in fact, reduction is a “calculation step”).

    What was just above is actually an infinite loop!

    And here is the recursion in the lambda calculus:

    (\f -> (\x -> f (x x)) (\x -> f (x x))) g =>
    (\x -> g (x x)) (\x -> g (x x)) =>
    g ((\x -> g (x x)) (\x -> g (x x))) =>
    g (g ((\x -> g (x x)) (\x -> g (x x)))) =>

    Demonstrating that the lambda calculus is Turing complete is very simple – you just need to construct the basic functions for arithmetic, choice between two options, and recursion (which we already did above), and then write anything you want in it – anything that any other programming language can express. (On the subject of arithmetic in the untyped lambda calculus, there is a Wikipedia article on Church encoding.)

    In fact, abstraction is also directly related to implication in logic, and application corresponds to modus ponens. This is called the “Curry-Howard isomorphism”. But untyped lambda calculus is unsuitable as a deduction system, since the fixed-point combinator (that is, unbounded recursion) leads to the liar paradox (or, more precisely, Curry’s paradox), so all computer reasoning systems (including theorem provers) are not Turing complete, although it’s still possible to write algorithms of arbitrary complexity.

    Why I wrote this at all: Scott, you are one of the best researchers on computational complexity, to my knowledge, and I’m a bit sorry that you don’t consider category theory, type theory, lambda calculus, etc. worthy of attention. The problem with Turing machines and automata is that they are generally not equipped with a “natural algebra” (for example, type exponentiation is a function type, type multiplication is a tuple, type addition is a “disjoint union”), unlike lambda calculus, which, thanks to this property, allows you to write and study algorithms in a compositional style. (In fact, I often think that functional programming would properly be called compositional programming.) Moreover, for a long time, the study of computational complexity using the lambda calculus was considered impossible, until this article was written: arXiv:1405.3311. I understand that Turing machines initially look very “natural”, but learn functional programming and you may literally stop understanding the imperative style! I myself initially didn’t understand the functional style at all – I thought that it was all “esoteric” – before I fell into this rabbit hole through the back door.
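
    For anyone who wants to play with these ideas in an ordinary language, here is a tiny sketch in Python (used purely as a host language; the names are my own): Church numerals, currying, and the strict fixed-point (“Z”) combinator, which is the eager-evaluation variant of the recursion trick shown above.

    # Church numerals: the numeral n is "apply f, n times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: n(m(f))

    def to_int(n):                      # decode a Church numeral for printing
        return n(lambda k: k + 1)(0)

    three = succ(succ(succ(zero)))
    print(to_int(add(three)(three)), to_int(mul(three)(three)))   # 6 9

    # The Y combinator above loops forever under Python's eager evaluation, so one uses
    # the Z combinator, which eta-expands the self-application:
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))   # 120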
