BusyBeaver(6) is really quite large
For overdetermined reasons, I’ve lately found the world an increasingly terrifying and depressing place. It’s gotten harder and harder to concentrate on research, or even popular science writing. Every so often, though, something breaks through that wakes my inner child, reminds me of why I fell in love with research thirty years ago, and helps me forget about the triumphantly strutting factions working to destroy everything I value.
Back in 2022, I reported an exciting advance in BusyBeaverology: namely, whereas we previously knew merely that BB(6) > 10^36,534, Pavel Kropitz managed to show that
BB(6) > ¹⁵10.
For those tuning in from home, here BB(6) is the 6th Busy Beaver number, i.e. the maximum number of steps that a 6-state Turing machine with a {0,1} alphabet can take before halting, when run on an initially all-0 input tape. Also, the left superscript means tetration, or iterated exponentiation: for example, ¹⁵10 means 10 to the 10 to the 10 and so on 15 times.
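For concreteness, tetration can be sketched in a few lines of code. This is purely illustrative (the function name is my own, and anything beyond tiny arguments instantly overflows the physical universe):

```python
def tetrate(base: int, height: int) -> int:
    """Left-superscript tetration: a power tower of `height` copies of `base`.
    E.g. tetrate(10, 3) = 10**(10**10). Only sane for tiny inputs!"""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetrate(2, 3))  # 2**(2**2) = 16
print(tetrate(2, 4))  # 2**16 = 65536
```

So ¹⁵10 is `tetrate(10, 15)`, a tower of fifteen 10s, already far too large to evaluate.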
By comparison, last year the international “BBchallenge” team determined that BB(5) is “merely” 47,176,870 (see also Quanta magazine’s superb feature article on that milestone). So, between 5 and 6 is where the Busy Beaver function makes its leap, from the millions to beyond the bounds of observable reality.
But if you thought that was the end of the BB(6) story, think again! Eleven days ago, Tristan Sterin, who organized the BBchallenge team, emailed to tell me that a team member with the handle “mxdys” improved the BB(6) bound yet further, to
BB(6) > ¹⁰⁰⁰⁰⁰⁰⁰10
(i.e., 10 to the 10 to the 10 and so on 10 million times), with a correctness proof in Coq. Then, three days ago, Tristan wrote again to say that mxdys has improved the bound again, to
$$ BB(6) \gt {}^{{}^{{}^{9}2}2}2 $$
I.e., BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. So in particular, BB(6) is at least 2 pentated to the 5, where pentation is iterated tetration, i.e. the operation that is to tetration as tetration is to exponentiation, exponentiation is to multiplication, and multiplication is to addition.
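The whole hyperoperation ladder can be sketched in code. The following is an illustrative toy (my own function and base-case conventions, nothing standard), in which each level is defined by iterating the level below it:

```python
def hyper(level: int, a: int, b: int) -> int:
    """Hyperoperation ladder: level 3 = exponentiation, level 4 = tetration,
    level 5 = pentation. Each level iterates the one below. Tiny inputs only!"""
    if level == 3:
        return a ** b
    if b == 0:
        return 1  # convention: a tetrated (or pentated) to 0 is 1
    # a ?_n b  =  a ?_(n-1) (a ?_n (b-1))
    return hyper(level - 1, a, hyper(level, a, b - 1))

print(hyper(3, 2, 10))  # 2**10 = 1024
print(hyper(4, 2, 3))   # 2↑↑3 = 2**(2**2) = 16
print(hyper(5, 2, 3))   # 2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4 = 65536
```

Already `hyper(5, 2, 4)` (2 pentated to 4) is a tower of 65,536 twos, hopelessly beyond computation; the new bound, 2 pentated to 5 and beyond, is one rung higher still.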
Last week, when we “merely” knew that BB(6) > ¹⁰⁰⁰⁰⁰⁰⁰10, I talked to a journalist who asked me to give an intuitive sense of how big such a number is. So I said, imagine you had ¹⁰⁰⁰⁰⁰⁰⁰10 grains of sand. Then you could … well, uh … you could fill about ¹⁰⁰⁰⁰⁰⁰⁰10 copies of the observable universe with that sand. I hope that helps people visualize it!
The journalist also asked: have these new discoveries about BB(6) caused me to rethink any broader beliefs about the Busy Beaver function? And I mean, yes and no: it was always completely within the realm of possibility that BB(6) would already be, not some puny little thing like 10^36,534, but way out in iteration land. Now that we know for sure that it is, though, maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9, rather than when it’s 20 or 30 or whatever. (Currently, the smallest n for which we know BB(n) to be independent of ZFC is n=643.)
Unrelated Update: I’m just now returning to the US from STOC’2025 in Prague, where I saw lots of old friends and learned many interesting new things, again helping to distract me from the state of the world! Maybe I’ll write about some of those things in a future post. For now, though, anyone who’s interested in my STOC plenary lecture, entitled “The Status of Quantum Speedups,” can check out the PowerPoint slides here.
Comment #1 June 28th, 2025 at 12:12 pm
I eagerly await the day when you’ll stop listing “Physics simulations” under “In-Principle Quantum Advantage”.
Comment #2 June 28th, 2025 at 12:19 pm
> Now that we know for sure that it is, though, maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9, rather than when it’s 20 or 30 or whatever.
Why is that? There are many more countable big numbers.
Comment #3 June 28th, 2025 at 12:31 pm
“maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9”
Why not be even bolder at this point and conjecture that BB(6) is independent of ZFC?
Comment #4 June 28th, 2025 at 12:33 pm
Vladimir #1: Why?
Comment #5 June 28th, 2025 at 12:38 pm
@Matus #2,
“Why is that? There are many more countable big numbers.”
We already know that BB(745) is independent of ZFC. But more importantly, we know that one of the ways things can get to be independent of some reasonably tame axiomatic system is if they involve extremely fast growth. This is thematically similar to that. Also, this is evidence that there are Turing machines with very few states which don’t halt until after extremely many iterations when run on the blank tape, which makes it more plausible that there are some machines whose halting or lack thereof is not easily settled in something like ZFC, since it suggests that there are just a lot of really complicated machines with a small number of states.
Comment #6 June 28th, 2025 at 12:41 pm
Matus #2: Truthfully I have no idea. But wherever I previously believed independence happened, it seems like I ought to adjust downwards.
Comment #7 June 28th, 2025 at 12:42 pm
Joshua Zelinsky #3: Oh, I left that bolder conjecture for a commenter like you to make! 🙂
Comment #8 June 28th, 2025 at 12:52 pm
According to your own post from a year ago, it’s down to 643!
I really wish the BBChallenge people kept better track of this stuff. I have tried to follow along to some extent, but they rarely update the main website, and the newer, faster-updating wiki is confusingly organized… and on this particular question it has no information at all, just this little stub. Gotta say the Mario 64 A-button challenge people have done a much better job of this sort of thing. 😛
Comment #9 June 28th, 2025 at 1:09 pm
> imagine you had ¹⁰⁰⁰⁰⁰⁰⁰10 grains of sand. Then you could … well, uh … you could fill about ¹⁰⁰⁰⁰⁰⁰⁰10 copies of the observable universe with that sand.
Since they are the same number, wouldn’t this mean that each copy of the observable universe has exactly one grain of sand in it? Is this a typo?
Comment #10 June 28th, 2025 at 1:12 pm
Sniffnoy #8: [forehead slap] I’d totally forgotten, thanks!!! Just updated the post.
Comment #11 June 28th, 2025 at 1:44 pm
Scott #4
As I’ve repeatedly failed to convince you but hope someone someday yet will, there’s certainly a known in-principle advantage for unitarily evolving a given initial state, but not for actually finding an interesting initial state, e.g. the ground state of the Hubbard model or some large molecule. The latter is what most condensed matter physicists and quantum chemists think of as “physics simulations”.
Comment #12 June 28th, 2025 at 2:08 pm
Interesting!
How does BB(6) compare to other really large numbers at this stage? I assume it’s still not the “largest” named number, but I’m not sure what actually is at this stage. I usually go with Rayo’s number in my head.
Question for Scott (or anyone):
I’ve been brushing up on my basic set theory, and I think I’ve learned enough at this stage to ask – what do I need to learn to understand the proof of BB(745) (or anything else) being independent of ZFC? I want to understand this formally. Is basic Set Theory enough?
Comment #13 June 28th, 2025 at 2:11 pm
I apologize if this is already answered in the links, but is there any insight into how the explosion is *so* large by adding a single state? What widget or computation are the 6-state machines capable of expressing which can blast so far past 47,176,870? If they are just doing Collatz-like computations, what accounts for the growth there?
(I know it’s just a bound here, so you can’t answer this question for *the* busy beaver, but I assume there are many longer programs, and it is not the case that all of the BB(6)s do less than or equal to 47,176,870 but then there is one pathological TM which does the headline ‘at least 2 pentated to the 5’; and so you could relatively easily find a specific instance of a BB(6) which does, say, 47 billion instead of a mere 47 million, and analyze it to try to see what trick it is using. If that is not possible, why not?)
Comment #14 June 28th, 2025 at 2:17 pm
Ajay #9: It breaks my heart to explain the joke, but … ¹⁰⁰⁰⁰⁰⁰⁰10 is so incomprehensibly more enormous than 10^100, or whatever your bound for the number of sand grains that could fit in the observable universe, that even after you divide the former by the latter, you still get a number that’s most easily approximated as ¹⁰⁰⁰⁰⁰⁰⁰10 (try it and see!).
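To see why, work one level down the tower in log scale. This little demonstration uses the tiny tower 10↑↑3 in place of ¹⁰⁰⁰⁰⁰⁰⁰10, but the same arithmetic applies at any height:

```python
# log10 of 10↑↑3 = 10**(10**10) is 10**10; dividing the tower by 10**100
# just subtracts 100 from that exponent.
tower_log = 10 ** 10            # log10 of 10↑↑3
divided_log = tower_log - 100   # log10 of (10↑↑3) / 10**100

print(divided_log / tower_log)  # 0.99999999 (the exponent barely budged)
```

Even the *exponent* changes by only one part in a hundred million; the number itself is, for all practical purposes, unchanged.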
Comment #15 June 28th, 2025 at 2:20 pm
Ajay #9. No, this is correct. The point is that tetration is fast enough that the number of universes is dramatically more than ⁹⁹⁹⁹⁹⁹⁹10 (since this number is more than ⁹⁹⁹⁹⁹⁹⁹10 times smaller than ¹⁰⁰⁰⁰⁰⁰⁰10).
Comment #16 June 28th, 2025 at 2:22 pm
Vladimir #11: Oh, ok, that. One issue is that I talk pretty often to quantum chemists and materials scientists who are engaged with this topic and who are much more optimistic about quantum speedups than you. In any case, though, “quantum simulations” very explicitly includes dynamical problems (eg reaction rates), not just properties of ground states where getting a quantum speedup that matters in practice will indeed be harder.
Comment #17 June 28th, 2025 at 2:30 pm
Scott #16
I’m not pessimistic about quantum computers being useful for physics and chemistry in practice, just pedantic about claiming they have a known in-principle advantage in the same sense that Shor’s algorithm has one.
Comment #18 June 28th, 2025 at 2:32 pm
It’s known that BB(14) is bigger than Graham’s number, but this new finding leads me to believe that BB(7) is probably bigger than Graham’s number. Intuitively, the technology required to go from pentation to Graham’s number feels simpler than the technology required to go from `47,176,870` to `2↑↑↑5`.
Comment #19 June 28th, 2025 at 2:33 pm
Edan Maor #12: The new lower bound on BB(6) is still tiny compared to Graham’s number, although huge compared to Skewes’ number.
I don’t accept that “Rayo’s number” is actually well-defined, since Rayo’s definition depends on second-order logic—and hence, on which transfinite sets “really exist” and which don’t, potentially even on questions like AC and CH.
For BB(643) being independent of ZFC, conceptually there’s almost nothing to understand. You just build an explicit 643-state Turing machine M that (very slowly) enumerates all the theorems of ZFC, halting if and only if it finds a contradiction. If ZFC could determine the value of BB(643), it could also prove that M ran for more steps than that and hence forever, thereby proving that ZFC is consistent, which would contradict the Second Incompleteness Theorem. Where you need to work your ass off and invent all sorts of tricks is just to reduce the number of states (a naïve construction will have millions of states). For more see Riebel’s thesis, which I linked in the post, or my paper with Adam Yedidia.
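To make the shape of that argument concrete, here is a toy sketch of the searcher’s control flow. To be clear, `refutes_zfc` is a hypothetical stub standing in for a real proof checker, and the step cap exists only so the demo terminates (the real 643-state machine has no cap):

```python
from itertools import count

def refutes_zfc(candidate: int) -> bool:
    """Hypothetical stub for a real proof checker: True iff `candidate`
    encodes a valid ZFC proof of a contradiction. A faithful checker is
    believed to return False on every input, assuming ZFC is consistent."""
    return False

def contradiction_searcher(max_steps=None):
    """Enumerate candidate proofs, halting if one refutes ZFC. With
    max_steps=None this runs forever iff ZFC never proves 0=1."""
    for n in count():
        if max_steps is not None and n >= max_steps:
            return None  # demo cutoff only; the real machine never gives up
        if refutes_zfc(n):
            return n

print(contradiction_searcher(max_steps=100_000))  # None
```

The point of the independence argument is that ZFC, by the Second Incompleteness Theorem, can never prove that this loop runs forever, so it can never pin down any BB value large enough to upper-bound the loop’s runtime.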
Comment #20 June 28th, 2025 at 2:36 pm
Vladimir #17: If you really want to be pedantic, I’m going to out-pedant you by pointing out that “physics simulation” includes, as one special case, simulating a quantum computer running Shor’s algorithm. 😉
Comment #21 June 28th, 2025 at 2:49 pm
@scott#19: Thanks, I was just asking what it meant that BB(n) becomes independent of the ZFC axioms.
Comment #22 June 28th, 2025 at 2:54 pm
.mau. #21: It means that there’s no proof in ZFC of any statement of the form “BB(n)=k,” for any explicit positive integer k.
Comment #23 June 28th, 2025 at 2:55 pm
Hi Scott, This is cool!
When these improved bounds were found, was it a new machine being discovered each time, or the runtime analysis was improved? Was it all collatz like iterations that stop by chance, or some new ideas involved with the new discoveries?
I know you talked about a machine that almost certainly doesn’t halt statistically but hard to prove (some growing sequence needing to have more than twice as many evens than odds or something like that). Do we know of any machine that almost certainly does halt statistically but really hard to prove that it does (maybe because its pseudo-random sequence grows so fast, that it’s hard to calculate the iteration at which it will halt?)
Comment #24 June 28th, 2025 at 3:01 pm
Ajay #9
> you could fill about
The joke is in the “about” the same number, in relative terms. Because BB(6) is so large compared to the number of sand grains in the universe, you can easily say BB(6) times that number is about the same as BB(6).
Comment #25 June 28th, 2025 at 3:07 pm
gwern #13: At a very high level, the 5-state champion does a Collatz-like iteration that happens to halt after a dozen or two iterations, having expended ~47 million Turing machine steps in doing so (the number of steps growing quadratically with the positive integers that are generated).
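For readers who want the flavor of “a Collatz-like iteration that happens to halt,” here is the classic Collatz map itself. To be clear, this is not the champion’s actual rule, just an illustration of how a simple residue-based iteration can wander for a surprisingly long time before terminating:

```python
def collatz_steps(n: int) -> int:
    """Classic Collatz map: halve if even, else 3n+1.
    Return the number of iterations until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111: a tiny seed wanders a long way before stopping
```

The champions replace this map with rules whose intermediate values grow (quadratically for BB(5), exponentially and beyond for BB(6)), which is where the astronomical step counts come from.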
As of a few days ago, the 6-state champions again did Collatz-like iterations that happen to halt after a bunch of steps. But now, each step of the iteration involves exponentiating the number from the previous step, which is how you quickly generate stacked exponentials.
I don’t understand yet how mxdys’s most recent 6-state champion gets from there to pentation. Maybe you or someone else would like to look at the github and explain it to us! 🙂
Comment #26 June 28th, 2025 at 3:39 pm
Pretty soon we’ll see that BB(6) is > ♾️ -1 but < ♾️.
😅
And yes, I know. This is a joke…
Comment #27 June 28th, 2025 at 4:08 pm
Hi Scott,
I’m pretty interested in the use of quantum computers for drug discovery. However, I recently watched a video by Looking Glass Universe (https://www.youtube.com/watch?v=pDj1QhPOVBo&t=877s) in which she mentioned a paper written a while ago by Garnet Chan et al which seemed to indicate that for most generic chemical systems, (so nothing extremely strange that’s fully correlated like superconductors), it is unlikely to get an exponential quantum advantage. From what I understand, this is because you need to already have a very good approximation of the ground state to get the ground state energy, and if this is the case it means a classical computer can probably already simulate the problem well.
As such, I just wanted to get your thoughts on how likely you think it is that quantum computers will have a significant impact in this area (drug discovery, medicine, biology etc..)? And if it’s not through quantum phase estimation, how will they be of use? Is there still a lot that pure Hamiltonian simulation can do in this area?
Also I see in your recently posted slides that you think that recent developments (Yamakawa-Zhandry) etc seem to indicate that there are a lot of quantum algorithms with quantum advantage out there that remain to be discovered. I guess my question just is….if you had to guess, would you say you think there is a greater chance than not that there is something “Shor/QPE/HHL like in usefulness” still lurking out there that we haven’t found yet? Or most of what remains to be found will be highly esoteric (some new cryptographic scheme, complex physics system etc…).
Comment #28 June 28th, 2025 at 4:56 pm
Yes! All developments since 2020 point overwhelmingly in the direction of stronger conjectures. For example, one of the conjectures says something about when 2BB(N) BB(5)2!
Comment #29 June 28th, 2025 at 5:06 pm
Great post! Just wanted to point out a small typo, “expenentiation”. Cheers!
Comment #30 June 28th, 2025 at 5:15 pm
#13 To expand on Scott’s comment, all 4 of these champions and former champions described here have the same basic structure: compute some big function, then apply a Collatz-like test; if the remainder is one of the designated values, halt; otherwise repeat. As Scott mentioned, for the BB(5) champ that function is multiplication and for Pavel’s 10↑↑15 TM it is exponentiation. For the newest pentation TM, it is tetration. So it applies a tetration function several times and thus gets to this low pentation level of iteration.
Of course, the next question you may ask is, how are they computing these helper functions (multiplication, exponentiation, tetration)? The main method here is just loops. TMs can very easily add, do that over a loop and you get multiplication, do that over a loop and you get exponentiation, etc. This is how Pavel’s 10↑↑15 TM worked as well.
The two newest champions take advantage of a slightly different technology we have called “Shift Overflow Counters”. The basic idea is that they go through phases where they repeatedly increment a binary counter on the tape until it “overflows” (expands to require an extra digit). At those points it does a new behavior. This shift overflow counter technology roughly computes exponential functions (because binary counters take exponential steps to expand). With one extra loop on top of this you get a tetration level helper function.
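A quick sketch of why a binary counter yields exponentially many steps. This is just the counting argument, not the actual tape-level machinery of a Shift Overflow Counter:

```python
def increments_until_overflow(k: int) -> int:
    """Increment a k-bit binary counter (list of bits, LSB first) from
    zero until the carry propagates out of the top bit. The answer is
    2**k increments, which is why counters give exponential behavior."""
    bits = [0] * k
    increments = 0
    while True:
        increments += 1
        i = 0
        while i < k and bits[i] == 1:  # carry ripples through the 1-bits
            bits[i] = 0
            i += 1
        if i == k:
            return increments  # carried past the top bit: overflow
        bits[i] = 1

print(increments_until_overflow(10))  # 1024
```

So a machine that does something special only at overflow events naturally computes an exponential-level function, and one extra loop on top lifts that to tetration.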
Comment #31 June 28th, 2025 at 5:45 pm
amohr #29: Thanks! Fixed.
Comment #32 June 28th, 2025 at 5:47 pm
Shawn Ligocki #30: Thanks so much; that’s extremely helpful!
Comment #33 June 28th, 2025 at 6:25 pm
I apologise in advance in case the question is stupid for some reason, but is it plausible that AlphaProof-style proof assistants trained on a corpus of solved busy beaver instances (and given the right tools) or maybe language models with reinforcement fine-tuning on beaverology tool use could in the near future be useful in resolving new BB(6) champion candidates?
My understanding is that most beavers are decided by automated tools that do hard-coded tests, and that people make extensive use of various custom sped-up simulation and cycle detection tools, but that the hardest cases (both record holders and difficult non-halters) do require a lot of human reasoning. If so, it seems to me that busy beavers could be an interesting AI reasoning test case.
Comment #34 June 28th, 2025 at 7:14 pm
Aron #33: Yes, we might soon see if it’s possible – for one, the unsolved conjecture that a particular 6-state Turing machine halts (“Antihydra”: https://www.sligocki.com/2024/07/06/bb-6-2-is-hard.html , https://bbchallenge.org/antihydra , https://wiki.bbchallenge.org/wiki/Antihydra )
is now part of a growing Google Deepmind repository of open problems formalized in Lean: https://github.com/google-deepmind/formal-conjectures/blob/main/FormalConjectures/Other/BusyBeaverAntihydra.lean for which proofs may now be regularly attempted by various AI systems (e.g. AlphaProof, AlphaEvolve etc.)
Comment #35 June 28th, 2025 at 7:35 pm
Finding a small number k such that BB(k) is independent of ZFC comes down to having a very efficient compiler for optimizing the program behind the BB(643) result. But program equivalence is itself undecidable.
Wouldn’t it then be possible that there is some grey zone in BB’s, such that e.g. the statement “BB(8) is independent of ZFC” is itself independent of ZFC? And so forth. How many levels could such a tower have?
Comment #36 June 28th, 2025 at 8:16 pm
Scott #19: “The new lower bound on BB(6) is still tiny compared to Graham’s number, although huge compared to Skewes’ number.”
Rather than conjecture that BB(6) is independent of ZFC I’m going to conjecture that it is larger than Graham’s number. Pulling this out my ass based only on how outrageously the lower bound has grown as communicated via this post 😛
Comment #37 June 28th, 2025 at 10:01 pm
As with ZFC, there is some k such that BB(k) is independent of (your favorite fragment of) Peano arithmetic. Do we have any better bounds for (your favorite fragment of) PA than for ZFC? I’d also be curious to know what you’d conjecture on this question.
Comment #38 June 28th, 2025 at 11:21 pm
Jack #35: I think the answer here is a little boring. If ZFC proves a thing, it also proves “ZFC proves that thing”. In fact, for any recursively enumerable proof system T, if T “proves” X (for whatever T means by proving something), ZFC proves “T proves X”. Conversely, if ZFC does not prove X, ZFC also can’t prove “ZFC doesn’t prove X” because any statement of that form immediately implies ZFC is consistent. Independence proofs therefore can only ever take place in some stronger theory, such as ZFC + “ZFC is consistent” or ZFC + some large cardinal axiom. So every “tower” of this sort is either infinitely tall, or a flat piece of ground.
Comment #39 June 28th, 2025 at 11:58 pm
By the way, I hope there’s enough matter in the universe to write down a formula for the exact value of BB(6)
Comment #40 June 29th, 2025 at 1:02 am
Hi Scott,
Truly fascinating stuff. I appreciate your comment #19 that summarizes the entire line of thinking about the constructed BB(643) in one paragraph.
Thinking about this I realized something that is probably completely obvious to you and most people reading this blog, but I’d still like to put it out there: the problem with a hypothetical machine H that decides the halting problem is never with classifying machines that halt – that’s ‘easy’. The difficulty is always with classifying non-halting machines as such.
Non-halting machines must exist that cannot be proven to be non-halting, neither by the hypothetical machine H, nor by the cleverest mathematicians in the world bringing their entire arsenal of ZFC-rooted mathematics to bear.
Funnily enough, we cannot by definition produce examples of such pathological non-halting machines – but we know they must be out there.
One could define a function NONBB_NASTY(k) that counts the number of non-halting k-state TM’s that cannot be proven within ZFC not to halt. We now know that NONBB_NASTY(k) is 0 for k <= 5, and your interest in finding the smallest TM that is independent of ZFC, if I understand correctly, is equivalent to finding the smallest k for which NONBB_NASTY(k) is non-zero.
Comment #41 June 29th, 2025 at 1:04 am
Actually I have some vague memory of seeing it stated that the number had been reduced below 643 (only by like 1 or 2 though, not by more than that), but it’s likely that I’m simply misremembering. Hard to say though because this doesn’t seem to be being kept track of in a centralized way! 😛 (Then again, I guess that’s how most mathematical literature is, but when you have a project website and wiki like this…)
Comment #42 June 29th, 2025 at 4:27 am
Sidney #40: there must be a misunderstanding: the machine used to prove that BB(643) is not determinable in ZFC is itself one of those non-halting paradoxical machines, since we “know” it doesn’t halt if ZFC is consistent, yet we cannot prove it (Gödel). While we’re at it, the Gödel speed-up theorem guarantees “short” halting machines, easily proved halting if we admit Con(ZFC), whose shortest halting proofs within ZFC are about the same length (unimaginably huge) as the full simulation of the machine.
Comment #43 June 29th, 2025 at 5:43 am
FYI, this post is now trending on Hacker News: https://news.ycombinator.com/item?id=44406171
I don’t remember if I ever saw you Scott visiting HN before, but there’s some interesting discussion happening over there about this. (Well, interesting to me because a lot of people are saying things I vaguely believe are true or false, but that I’m not 100% sure about.)
Comment #44 June 29th, 2025 at 6:01 am
Two questions:
1. It is trivial that the lowest value n such that BB(n) is independent of PA is smaller or equal to the same number for ZFC. But looking at the approach that gave us the 643 bound, it doesn’t seem able to let us show any gap. This is because the axioms of PA don’t look simpler and hence enumerating proofs in PA with the same idea will result in a Turing machine with roughly the same number of states. Do you take this to be evidence that the gap is actually not large or non-existent?
2. I know you’re not a fan of lambda-calculus, but suppose we define the same function for lambda expressions. Could we expect this to shed any light on the Busy Beaver function? (I myself don’t think so, but I couldn’t refrain from asking.)
Comment #45 June 29th, 2025 at 6:22 am
@Sniffnoy, #41,
I think you are thinking of how 748 went to 745 at one point.
Comment #46 June 29th, 2025 at 7:55 am
Michael Dickens #18: We have gotten pretty close! In May 2025, Pavel Kropitz (who discovered the previous BB(6) bound) found that BB(7) > 2{11}2{11}3 (where {11} denotes 11 arrows in up-arrow notation), which can be considered Ackermann level.
https://wiki.bbchallenge.org/wiki/1RB0RA_1LC1LF_1RD0LB_1RA1LE_1RZ0LC_1RG1LD_0RG0RF
If we can find a TM that iterates the Ackermann function, we could reach Graham’s number.
Comment #47 June 29th, 2025 at 9:10 am
My comment #28 got garbled. Too much fancy HTML. Let me try again.
All developments since 2020 (the year of Busy Beaver Frontier) point overwhelmingly in the direction of stronger conjectures. For example, one of the conjectures says something about when 2^BB(N) < BB(N + 1). But it is now known that BB(6) > 2↑↑BB(5)!
I will also add that there are various numbers floating around that are taking an outsized importance in popular imagination. ZF cannot prove BB(748), BB(643), whatever the latest is. IMO, these numbers (643, etc) are not even remotely close to the true values. Even for the specific strategy of implementing a TM that enumerates theorems of ZF until it finds a contradiction, I think there are dramatically smaller TMs that do it.
Of course this is not a knock against anyone who has worked on this. But nobody should take away the impression that the true limit of provability has been approached, because it definitely has not.
Stated a little more positively, if anyone wants to take a crack at lowering that number (of TM states), there is plenty of opportunity for doing so! Probably even just refactoring the existing record TM would be good enough to do better.
Comment #48 June 29th, 2025 at 9:18 am
There appears to be a typo: “alphanet” instead of “alphabet”.
Comment #49 June 29th, 2025 at 9:43 am
Following up to #46: And that bound “BB(7) > 2{11}2{11}3” was found after a small amount of search, so I’m sure there are bigger things there. I’m totally convinced that BB(7) > Graham’s number. So much so that I just last week made a $1000 bet with John Tromp that this bound would be proven in the next 10 years. I think the only hesitation I have in that bet is whether we’ll be able to prove it in that time, but at the current rate of improvement, I’m hopeful 🙂
Comment #50 June 29th, 2025 at 9:56 am
#44: (1) I agree, the current method for proving ZFC-independent BB values seems like it would produce very similar bounds for most axiomatic systems. Basically, it would produce bounds that look something like “size(axioms) + constant”, and it feels like the constant is large and the size of most axiom lists is roughly similar.
But, of course, this is not the only way for a TM to be independent of ZFC/PA. For example, maybe there is some way to use the fact that PA cannot prove that Goodstein sequences are finite or that the Hydra game always ends or that \( f_{\epsilon_0} \) is total in order to find a TM independent of PA (but not ZFC). (Although annoyingly these are not easy to encode b/c they are not Pi_1 sentences). And then of course, we may never be able to understand the smallest TM that is independent of PA/ZFC b/c it may do so in some really confusing/convoluted way that is not easily explainable.
Comment #51 June 29th, 2025 at 10:02 am
#44: (2) There has in fact been a bit of study recently into BB for lambda calc. See https://wiki.bbchallenge.org/wiki/Busy_Beaver_for_lambda_calculus where we’ve recently solved it for programs of up to 37 bits. This BB lambda function has the nice advantage that it measures program size in bits, which gives a lot more granularity. I personally am hoping that it helps shed some light on which aspects of BB champions are sort of universal and which are more specific to the twiddly details of that model. Unsurprisingly, recursion is coming much quicker in lambda calculus, where that is basically the entire basis for the model (vs. TMs, where building recursion requires a bit of work). So far the lambda calculus version has not taken advantage of Collatz-like problems in the same way that the TM one has; perhaps this is also connected to the differences between the models (or perhaps we just need to search a bit further).
Comment #52 June 29th, 2025 at 10:31 am
#40
> the problem with a hypothetical machine H that decides the halting problem is never with classifying machines that halt – that’s ‘easy’. The difficulty is always with classifying non-halting machines as such.
While classifying halting machines is “theoretically trivial” (just run them until they halt), we have started to find some 6-state, 2-symbol TMs where proving that they halt (before the heat-death of the universe) will require some math breakthroughs in accelerating applications of Collatz-like functions. See (for example) [Lucy’s Moonlight](https://www.sligocki.com/2025/04/21/lucys-moonlight.html) which (like Pavel’s 10↑↑15 former champion) repeatedly applies an exponential function and then does a Collatz-like check to decide whether it halts or repeats. But in this case, the exponential function itself uses Collatz-like iteration, and so we cannot efficiently accelerate that function without new math results in order to decide the remainder mod 3 on each repeat. Based on a probabilistic argument, this seems like it must eventually halt … but the only way I see to prove that is to actually evaluate at precisely which iteration it halts.
Comment #53 June 29th, 2025 at 10:41 am
#39: Interestingly, this depends on what you mean by a formula. Specifically, if you allow it to be any algorithm that computes the answer, then there is guaranteed to be a (very) small algorithm by design: namely, simulate the specific TM rules starting from a blank tape and count the number of steps until it halts. Heh.
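That “very small algorithm” is nothing more than a step-counting simulator. Here is a minimal sketch, run on the known 2-state champion, for which BB(2) = 6 (the `max_steps` cap is my addition so the demo can never loop forever):

```python
def run_turing_machine(table, max_steps=10**6):
    """Simulate a 2-symbol TM on an all-0 tape; return steps taken until
    halting. `table` maps (state, symbol) -> (write, move, next_state),
    where next_state None means halt."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state is not None and steps < max_steps:
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps

# The 2-state Busy Beaver champion (1RB1LB_1LA1RH): halts after exactly 6 steps.
bb2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, None),
}
print(run_turing_machine(bb2))  # 6
```

The catch, of course, is that running this “formula” for BB(6) would take more steps than the universe has room for.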
But probably we should restrict formulas to things which are always total / decidable. In other words, general algorithms aren’t allowed, because not all algorithms halt and you don’t know until you have simulated one to halting. So instead, perhaps a formula should be a thing which you are guaranteed a priori will halt before running it. In that case, I suppose that a Rocq/Coq proof of BB(6) would count, since that is written in the CoC, in which all programs are guaranteed to halt.
I too hope that a Rocq proof of BB(6) is small enough to fit into our universe … and I’m guessing it is … but I would certainly believe that BB(10) might not be!
Comment #54 June 29th, 2025 at 10:57 am
Scott #19, that is an interesting point about Rayo’s number potentially not being well-defined. I hadn’t thought about this before but having looked at the definition a bit more carefully, I can see your point.
I wonder then what the point is of the complicated looking definition of Sat in the definition of Rayo (https://web.mit.edu/arayo/www/bignums.html) when we could simply define Sat with respect to some model of (say) ZFC, given the former doesn’t seem to be any more uniquely defined than the latter.
Of course you could instead define Sat in terms of provability with respect to some axiom system, say ZFC, and that would indeed be uniquely defined, but then it would be easy to bound the resulting “Rayo number” by the busy beaver function, with a similarly sized argument.
Comment #55 June 29th, 2025 at 11:12 am
Just considering a preliminary implementation plan: won’t it be necessary to keep adding sand grains until there is a black hole the size of the observable universe, in order to effectively fill it with sand grains (neglecting expansion during implementation)? So about 3×10^60 fine-sand-to-silt-size grains, for a mass of 3×10^53 kg (thanks ChatGPT), in a fresh universe with no resident matter or energy.
The observable universe is tiny compared to what might be but vast compared to what is. Construction of this black hole may be like a puzzle in an escape room for our species to reach what might be.
Comment #56 June 29th, 2025 at 11:53 am
Sniffnoy #41, Joshua Zelinsky #44: See here for the reduction to 643 states: https://github.com/sorear/metamath-turing-machines/pull/7
(on another note it would be nice if the comments here would be structured in a tree format)
Comment #57 June 29th, 2025 at 3:02 pm
Another way to explain Scott’s joke:
That brought the following story to my mind: a physics professor at a university was talking about the temperature at the core of the sun. He told the students that it was 15 million degrees. Of course, this being a formal physics class, these would be metric units and not Fahrenheit, but there was still some ambiguity, so someone asked "Is that Celsius or Kelvin?" The professor's response: "Does it matter?"
Comment #58 June 29th, 2025 at 3:10 pm
Aron #27:
As such, I just wanted to get your thoughts on how likely you think it is that quantum computers will have a significant impact in this area (drug discovery, medicine, biology etc..)?
I guess my question just is… if you had to guess, would you say there is a greater chance than not that there is something "Shor/QPE/HHL-like in usefulness" still lurking out there that we haven't found yet? Or will most of what remains to be found be highly esoteric (some new cryptographic scheme, complex physics system, etc.)?
You’re asking huge questions to which I don’t have simple or confident answers!
For drug discovery and medicine, I think the honest thing to say is that there's a pathway to getting a benefit from QCs that isn't foreclosed by anything currently known. Namely, to use quantum simulation for estimating rates of chemical reactions, etc., and thereby get lots of information about drug candidates much more cheaply than if you needed to test things in the lab. Even there, though, the usual caveat applies: namely, you need to win (in speed, accuracy, cost, or some combination thereof) against the cleverest things people can do on classical computers. But, on the positive side, it might take only a small number of wins to generate billions of dollars in value, commensurate with the billions now being invested in QC. Among my colleagues who know quantum chemistry and whose judgment I trust, some are optimistic and others are more skeptical.
On quantum algorithms, where I can speak with somewhat more firsthand knowledge: yes, the discovery of Yamakawa-Zhandry, DQI, quartic speedups, etc. within the past few years has made it clear that there are deep new quantum algorithm ideas still to be found. Whether any of those ideas will turn out to be practically important is of course the much harder question. I do have a personal suspicion: that if something is practically important, it's unlikely to be a quantum algorithm for graph isomorphism, lattice problems, or probably even NP-hard optimization problems, or any of the other stuff that people have been bashing their heads against for decades. It's more likely to be a quantum algorithm that solves some problem that isn't even on our community's radar yet.
Comment #59 June 29th, 2025 at 3:15 pm
Aron #33:
is it plausible that AlphaProof-style proof assistants trained on a corpus of solved busy beaver instances (and given the right tools) or maybe language models with reinforcement fine-tuning on beaverology tool use could in the near future be useful in resolving new BB(6) champion candidates?
Another hard question! My personal suspicion is that pure, shallow ML pattern recognition on Turing machines isn’t going to work well, because there’s too much sensitive dependence on the tiniest details of the transition table … but ML in a tight feedback loop with actually running the machines and observing their behavior, and also calling a proof assistant to generate proofs that machines run forever or halt after an Ackermannian number of steps or whatever? If someone manages to implement that well, watch out! 🙂
Comment #60 June 29th, 2025 at 3:24 pm
Jack #35:
Wouldn’t then it be possible that there is some grey zone in BB’s, such that e.g. the statement “BB(8) is independent of ZFC” is itself independent of ZFC? And so forth. How many levels could such tower have?
To add to what Fergus #38 wrote: yes, you could imagine that there’s some 8-state Turing machine whose running forever is not only unprovable in ZFC, but unlike with the Gödelian machines, can’t be proven in ZFC to be equivalent to Con(ZFC) or any large cardinal axiom or anything else of the kind. Indeed, you can imagine that we’ll only ever have an empirical case that that machine runs forever, not a case grounded in any statement we can prove.
I was talking a month ago to the logician (and now AI person) Elliot Glazer. He aptly calls such a situation a “black hole of mathematical justification,” and he strongly conjectures that the value of BB(7) (for example) is already such a black hole. Of course, as I teased him, if he’s right, he’ll never be able to prove it!
Comment #61 June 29th, 2025 at 3:30 pm
Adam Treat #36:
Rather than conjecture that BB(6) is independent of ZFC I’m going to conjecture that it is larger than Graham’s number.
Yeah, that now seems plausible. If not BB(6), then BB(7) for sure.
Relatedly, my daughter Lily asked five years ago what’s the smallest n such that BB(n) exceeds Ackermann(n). We only knew then that it was at most 18. It’s now looking like it might be 6 or 7.
Comment #62 June 29th, 2025 at 3:38 pm
Patrick #37:
Do we have any better bounds for (your favorite fragment of) PA than for ZFC? I'd also be curious to know what you'd conjecture on this question.
That’s another great question, one that I raised explicitly in my 2020 survey article!
Alas, so far as I know we don’t yet have any upper bounds for PA better than what we have for ZFC. I think this is a great project for someone.
A priori, my intuition would’ve been that independence from PA should happen earlier than independence from ZFC. After all, we have “natural” examples of arithmetical statements independent of PA (Goodstein’s theorem, the hydra game, etc), in a sense that we don’t have them for ZFC. But this past month’s developments are at least an update in favor of the hypothesis of a “hard takeoff singularity” 🙂 — i.e., that maybe you just completely zoom off into hyperspace by BB(7) or so, eluding PA and ZFC and every other formal system at nearly the same time.
Comment #63 June 29th, 2025 at 3:45 pm
Jefferson Carpenter #39:
By the way, I hope there’s enough matter in the universe to write down a formula for the exact value of BB(6)
Whether there is or isn’t will mostly just depend on what you allow as a “formula”!
If you insist on using “standard operations” (addition, multiplication, and exponentiation) only, then what we learned this month implies that, no, there’s not a “formula” for BB(6) that you could write down using all the matter in the observable universe.
At the other extreme, if you allow any halting computer program that specifies BB(6) as a “formula,” then there definitely is a small formula: namely, the 6-state BB champion itself, whatever it is! 🙂
Comment #64 June 29th, 2025 at 3:51 pm
Sidney #40:
Non-halting machines must exist that cannot be proven to be non-halting, neither by the hypothetical machine H, nor by the cleverest mathematicians in the world bringing their entire arsenal of ZFC-rooted mathematics to bear.
Funnily enough, we cannot by definition produce examples of such pathological non-halting machines – but we know they must be out there.
To add to what f3et #42 wrote, there's a crucial caveat to this. Namely, the whole point of what Adam Yedidia and I and others did, was to produce explicit Turing machines—initially with 8000 states, now with 643 states—such that one can formally prove, within ZFC or even PA, that they halt if and only if ZFC is inconsistent. (And therefore, via Gödel, ZFC can prove that it can't prove that these machines run forever, unless it itself is inconsistent.)
In a sense, then, we do understand these machines, modulo a loose end that requires going beyond the resources of ZFC to tie up. 🙂
Comment #65 June 29th, 2025 at 3:57 pm
Edan Maor #43: Thanks, yes, I scanned the Hacker News thread last night! Incidentally, I did do an Ask Me Anything on Hacker News years ago.
Comment #66 June 29th, 2025 at 4:01 pm
Navid Rashidian #44:
1. It is trivial that the lowest value n such that BB(n) is independent of PA is smaller or equal to the same number for ZFC. But looking at the approach that gave us the 643 bound, it doesn’t seem able to let us show any gap. This is because the axioms of PA don’t look simpler and hence enumerating proofs in PA with the same idea will result in a Turing machine with roughly the same number of states. Do you take this to be evidence that the gap is actually not large or non-existent?
Maybe, but weak evidence at best. If the BB function “zooms off into hyperspace” already at n = 6 or 7 (see my comment #61), that’s actually stronger evidence for the lack of any significant gap between PA and ZFC.
2. I know you’re not a fan of lambda-calculus, but suppose we define the same function for lambda expressions. Could we expect this to shed any light on the Busy Beaver function? (I myself don’t think so, but I couldn’t refrain from asking.)
You could certainly study a lambda-calculus variant of BB, and that might be interesting in itself, but I’m not sure whether it would shed any light on the Turing-machine variant.
Comment #67 June 29th, 2025 at 4:03 pm
flergalwit #54:
that is an interesting point about Rayo’s number potentially not being well-defined. I hadn’t thought about this before but having looked at the definition a bit more carefully, I can see your point.
FWIW: Agustin Rayo is a great guy, I discussed this with him years ago, and he agrees that given my philosophical commitments I should reject his definition.
Comment #68 June 29th, 2025 at 11:42 pm
Scott #62: Nick Drozd wrote about the “hard takeoff singularity” scenario, where the cut-offs for many theories, from Robinson arithmetic to ZF with large cardinal axioms, all show up at once. https://nickdrozd.github.io/2024/07/15/undecidability-all-at-once.html
Comment #69 June 30th, 2025 at 1:30 am
Why BB(5) = 47,000,000?
BB(5) = 4098!
It is not even correct to call the function of the number of steps a busy beaver!
A beaver is a Turing machine that marks "1" on a tape with its teeth.
This function was defined by its founder, Tibor Radó!
Who changed everything? And for what purpose?
Comment #70 June 30th, 2025 at 2:23 am
The timing of Pavel’s \( 10 \uparrow \uparrow 15 \) Turing Machine discovery was kind of incredible because, as Adam P. Goucher pointed out on his blog (https://cp4space.hatsya.com/2022/06/23/tetrational-machines/), a different person named Pavel (Pavel Grankovskiy) made a tetrationally long lasting pattern in Conway’s Game of Life about a month earlier: https://conwaylife.com/forums/viewtopic.php?p=144077#p144077
And just like Pavel Kropitz’s discovery, Pavel Grankovskiy’s Life pattern has now been beaten by a pentationally long lasting pattern: https://conwaylife.com/wiki/Engineered_diehard#Double-tetrational_designs
Comment #71 June 30th, 2025 at 4:23 am
Scott #19:
> The new lower bound on BB(6) is still tiny compared to Graham’s number, although huge compared to Skewes’ number.
Apparently I don’t have a good intuitive grasp of the size of different large numbers yet 🙂
> I don’t accept that “Rayo’s number” is actually well-defined, […]
So what do you consider the biggest number nowadays? *Is* there something that’s semi-officially accepted in the community?
Comment #72 June 30th, 2025 at 4:35 am
Scott #65:
Some questions/comments in HN confused me and I wasn’t sure what’s accurate, hoping you/someone could chime in here.
Firstly, based off of this comment (https://news.ycombinator.com/item?id=44408279), I have this question:
If we knew by some method the value of BB(643), and then ran that machine for BB(643) steps, would that prove ZFC is consistent? Someone claimed this and something seemed off to me, but I think that’s basically true now?
It seems weird because it’s a “mechanical” method for proving ZFC consistency, which doesn’t seem like it should be allowed per Incompleteness, unless Turing Machines somehow are a more powerful axiomatic system than ZFC itself? (I don’t really think of Turing Machines as an axiomatic system, this whole line of questioning feels like a category error, so I’m guessing I’m missing something.)
Secondly, this comment led to a whole discussion, but it seems wrong to me (https://news.ycombinator.com/item?id=44407440):
> I think the more correct statement is that there are different models of ZFC in which BB(748) are different numbers.
I haven’t studied much formal logic (yet?), but this doesn’t sound right to me. My understanding is that BB(N) is well-defined under standard math, so it has a specific value under ZFC.
Comment #73 June 30th, 2025 at 7:30 am
@ konkhra #69,
You ask "Why BB(5) = 47,000,000?
BB(5) = 4098!
It is not even correct to call the function of the number of steps a busy beaver!
A beaver is a Turing machine that marks "1" on a tape with its teeth.
This function was defined by its founder, Tibor Radó!
Who changed everything? And for what purpose?"
Radó originally thought about both this Busy Beaver function and the function you mention. It turns out that the two functions have nearly identical behavior, but the one Scott labeled "Busy Beaver" is more intuitive for many people, so over time it has become the one people focus on. But the two functions are closely connected, and work on understanding the function you mention is also ongoing.
Comment #74 June 30th, 2025 at 8:07 am
@Kai Middleton #57:
I’m reminded of a statistics FAQ “When I estimate a standard deviation from a sample, should I use N or N-1 in the denominator?” whose reply was “If you’re worrying about that, N is not big enough.”
Comment #75 June 30th, 2025 at 8:09 am
Edan Maor#72
I think, but could be mistaken, that the people on Hacker News are wrong. Non-standard or standard models of ZFC, or of any other axiomatic system, have nothing to do with BB, and it doesn't depend on them. I'm sure you could create some "non-standard Turing machine" in ZFC, but usually we use "metamathematical" numbers when describing BB. Btw, we use this metamathematical-numbers intuition when describing the axioms of ZFC or PA. To create formal systems, you already need to know in some way what numbers are, what logic is, etc.
Comment #76 June 30th, 2025 at 8:43 am
Edan Maor #72:
– The resolution of the difficulty is that, in order to know BB(643), you would already need to know ZFC was consistent. Otherwise, the machine that checked for inconsistencies in ZFC might halt after a larger number of steps.
– No, there are no models of ZFC where BB(n) has any (standard) value other than its actual value k. There are only models that “wrongly believe” that the value is greater than k, but when you try to pin them down on what it actually is, it’s always greater than any integer you can name, so you can never refute their false belief by running all the n-state machines and showing that none of them halt in that number of steps. This is what it means for the value to be a “nonstandard integer.”
Comment #77 June 30th, 2025 at 8:50 am
Edan Maor #71:
So what do you consider the biggest number nowadays? *Is* there something that’s semi-officially accepted in the community?
The thing about big numbers is, you can always define a bigger one… 🙂
For real though, a biggest number contest between two professionals would probably quickly degenerate into arguments about the well-definedness of ordinals. This is because both contestants would want to use a function that was “BB-like” but for souped-up oracle Turing machines that can solve the halting problem for machines that can solve the halting problem for … and so on through some computable ordinal. Which ordinal? The supremum of all the ordinals whose existence can be proven in ZFC? Why not assume even stronger axioms and go further? See the difficulty? 🙂
Comment #78 June 30th, 2025 at 10:11 am
Which of the following is true (or maybe something else is).
1) People in the area thought BB(6) was much smaller so are surprised at the result.
2) People in the area thought BB(6) was quite large but that this would be hard to prove so they are surprised that its been proven.
3) People in the area thought BB(6) was quite large and the proof was within reach of current techniques, so, while impressed, they are not surprised.
If you don't want to speak for "people in the area," then replace that with "Scott Aaronson."
ALSO: do people in the area (or just Scott) think that BB(6) will be shown to be much bigger?
Comment #79 June 30th, 2025 at 10:43 am
William Gasarch #78: I don’t think most people had firm opinions at all! If you’d asked me, I would’ve been like “dunno, it could be bigger, who knows by how much” (which is again my feeling now). Maybe some people were confident that it was vastly bigger than 10^^15 or whatever, and are again confident today that it’s vastly bigger than 2^^2^^2^^9. I’ll let those people comment if they’re out there.
Comment #80 June 30th, 2025 at 11:27 am
Is it well defined to ask what is the weakest extension of ZFC which determines the value of BB(643)?
Comment #81 June 30th, 2025 at 11:46 am
Josiah #80: Alas, not really. You could always just add an axiom that says “BB(643)=k” (whatever the true k is), so in some sense that’s the “weakest.” But when you try adding iterated consistency statements to ZFC in an ordinal hierarchy, the power of the resulting theory partly just depends on what Turing machine you use to encode the ordinal, allowing you to cheat and smuggle in whatever new axioms you want. Source: Alan Turing’s 1939 PhD thesis, which he wrote just before he left to join Bletchley Park! See my blog post about it.
Comment #82 June 30th, 2025 at 11:52 am
Scott #76:
> The resolution of the difficulty is that, in order to know BB(643), you would already need to know ZFC was consistent. Otherwise, the machine that checked for inconsistencies in ZFC might halt after a larger number of steps.
I’m not sure I understand why you need to know the consistency of ZFC to know BB(643).
Aren’t we able to calculate the value of BB(643) (in theory), under a “stronger” axiom system if need be? At that point, we can run TM_643 for BB(TM_643) steps, after which we’ve “proved” that ZFC is consistent. We’re not proving “BB(643)=k” in ZFC, we’re proving it in some stronger system, so that doesn’t lead to a Godelian contradiction.
Comment #83 June 30th, 2025 at 12:57 pm
Edan Maor #82: Again — as soon as BB(643) is known, proving the consistency of ZFC is reduced to a “mere finite computation.” That is why ZFC can’t determine BB(643). That’s also why any stronger theory that determines BB(643), also proves Con(ZFC).
Comment #84 June 30th, 2025 at 1:04 pm
Scott #66
Checking for busy beaver analogs in slightly different models of computation would tell us if there was anything significant about the exact takeoff points.
Comment #85 June 30th, 2025 at 1:22 pm
Scott #83:
> Again — as soon as BB(643) is known, proving the consistency of ZFC is reduced to a “mere finite computation.” That is why ZFC can’t determine BB(643). That’s also why any stronger theory that determines BB(643), also proves Con(ZFC).
Ah, yes, that’s what I understood, it just seemed “weird” to me that we could prove ZFC consistent by merely calculating the value of a finite function, and then running a Turing machine for that many steps. I guess it helps that the number of steps is, while finite, pretty big.
Btw, that post about Turing's PhD thesis is fascinating.
Comment #86 June 30th, 2025 at 3:51 pm
Edan Maor #85 – another way of making the point is that you can run a suitable 643-state Turing machine for \(k\) steps, and if it doesn't halt you have a proof of the implication \(BB(643)\leq k \Rightarrow Con(ZFC)\). (If it does halt, you've proved the inconsistency of ZFC of course.) However, to get from there to a proof of \(Con(ZFC)\), you'd have to have a proof of \(BB(643)\leq k\) for some integer \(k\). We therefore know the latter isn't possible in ZFC (assuming ZFC is actually consistent).
If you have a proof of \(BB(643)\leq k\), e.g. in some system strictly stronger than ZFC, then indeed you’d be able to convert this into a proof of \(Con(ZFC)\) in that system, after running a \(k\)-step computation (again assuming ZFC actually is consistent). However it seems highly likely that proving \(Con(ZFC)\) directly in such a system would be easier than going via the bound on the value of \(BB(643)\).
If you were simply informed by a reliable oracle that \(BB(643)\leq k\) for some \(k\) (whether that’s God, or maybe you’re lucky enough to live in a universe whose physics allows you to run every step of a non-halting Turing machine in finite physical time) then after running the \(k\)-step computation, you still wouldn’t technically have a proof of \(Con(ZFC)\) (in the sense of being able to exhibit a finite length logical argument) though you’d know it’s true, if you trust the oracle. On the other hand if you can talk to an omniscient God about computer science, or can live to see the full run of a non-halting Turing machine, you’d have other ways of generating proofs of \(Con(ZFC)\) or whatever else you wish. 🙂
Comment #87 June 30th, 2025 at 8:03 pm
Since people seem to be interested in the biggest number question, I may as well plug my paper about the subject: https://arxiv.org/abs/1809.08676 Spoiler: you can define an analogue of Rayo’s number that fits with basically any set of philosophical commitments you might have, and that’s pretty much the best you can do.
Comment #88 June 30th, 2025 at 9:41 pm
Prague (or, as they say, Praha) is an amazing city.
Anyone who has not been there at least once is missing out big time.
I hope you enjoyed their famous beer tour!
Comment #89 June 30th, 2025 at 9:43 pm
Scott #58
May I ask what quantum "quartic speedups" you are referencing in passing? I don't mean that it has to be rigorously proven (even nondeterministic TMs, representing NP, and in a sense pop-science quantum computer explanations, don't have such proven speedups; they'd prove lower bounds for SAT solving if they did), just in what cases the speedup appears to be quartic.
Comment #90 July 1st, 2025 at 4:59 am
anonymous #89: Didn’t I already say in the slides? There’s the quartic speedup for tensor PCA from Matt Hastings in 2019, and then the more recent quartic speedup for Sparse LPN from Schmidhuber et al. Both are speedups over the best known classical algorithm, which is based on something called the Kikuchi method (the quantum algorithm also uses Kikuchi).
Comment #91 July 1st, 2025 at 11:00 am
Scott #81:
Thanks for the reply, the post about Turing’s thesis was interesting.
On a different note you say that provided you know the true answer k, one can add the axiom ‘BB(643)=k’, but unless I’m missing something presumably there exists a q != k such that ZFC cannot prove ‘BB(643) != q’, in which case ZFC + ‘BB(643)=q’ is just as consistent as ZFC + ‘BB(643)=k’. How hard is it to find such a q (or rather, some x which either is such a q, or else is k)?
Comment #92 July 1st, 2025 at 11:58 am
Josiah #91: No, as I explained in a previous comment, the axiom “BB(643)=k” will be consistent for exactly one standard integer k. If k is too small, you’ll be able to see in ZFC that there’s a Turing machine that runs for longer, while if it’s too large, you’ll be able to see in ZFC that no Turing machine halts in exactly that number of steps. In the models where BB(643) is “larger” than its correct value, it’s always a nonstandard integer.
Comment #93 July 1st, 2025 at 1:38 pm
Dacyn #87,
That’s a pretty dense paper. Maybe this is super naive, but I think it can be boiled down to: LEM causes problems. The debate about the largest number becomes much easier when we do not assume LEM and hence every number must be constructed, right?
Comment #94 July 1st, 2025 at 4:54 pm
Aron #27 and Scott #58:
I thought I would share this online tool https://futuretech.mit.edu/quantum-economic-advantage-calculator that is designed to allow users to explore different combinations of algorithmic problems and quantum hardware. Users can freely deviate from known projections and default parameters to derive their own insights as to the general timeline of when certain scenarios may become economically advantageous to run on a quantum machine.
Comment #95 July 1st, 2025 at 6:31 pm
Adam Treat #93: Hmm, I never really thought of it as being a “dense” paper except for the appendices, but maybe you’re right that it is. Regarding your attempted summary, I think there are two important qualifications: (1) it’s not just LEM that causes problems, but the entire concept of negation when interpreted in the standard way. So the languages discussed in the paper don’t have negation except for atomic formulas; you can use de Morgan’s laws to construct a negation operator for more general formulas but it doesn’t have quite the meaning you might naively expect. This distinguishes those languages from languages like constructive/Heyting arithmetic, which define negation in terms of implication, but see my Appendix D for why I don’t think this definition makes much sense.
(2) working in a language that doesn’t have LEM/standard negation doesn’t mean that “every number must be constructed” at least if you’re using “constructed” in anything like the standard sense. For example, one of the languages I discuss (called GST / gradualist set theory in the paper) allows one to refer to arbitrary numbers definable in ZFC* (or ST / set theory in the language of the paper), and these numbers aren’t “constructible” in any standard sense.
* There is a technicality here in that we need to fix a model of ZFC/ST in order to discuss numbers definable in ZFC/ST, and the whole premise of GST is that there’s no canonical model of ST, but rather an increasing ordinal-indexed sequence of such models. This can be reconciled by fixing a model of ZFC to call the canonical model, e.g. the set V_\alpha where \alpha is the smallest inaccessible cardinal.
Comment #96 July 2nd, 2025 at 5:10 am
Can you say something about how these bounds are proved? Surely not by letting the machine run until it halts and counting the number of steps. But then how do we know that they do halt?
Comment #97 July 2nd, 2025 at 5:37 am
Vincent #96: Yes, obviously they don’t run the machines step by step! 😀
Instead they understand what’s going on at a higher level of abstraction. Which is usually something like: “this machine takes a positive integer x, replaces it by f(x), and keeps repeating that until x mod 8 = 3 (or some other such condition holds).” And if for example each f(x) is exponentially larger than x, then you can get really staggering growth before the termination condition kicks in. And if you’re careful, you can know that without needing to compute f(x) explicitly, just knowing its values mod 8 or whatever.
Anyone who knows the details of the most recent bounds (Shawn Ligocki?) is welcome to provide more detail on them… 🙂
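To make that flavor concrete, here's a toy sketch in Python. The rule below is invented for illustration (it's not from any actual BB machine), but it shows the key trick: you can count the iterations, and hence lower-bound the final value, while tracking only a single residue, never the value itself.

```python
# Toy illustration (an invented rule, NOT an actual BB(6) machine):
# suppose analysis shows a machine repeatedly replaces its counter x by
# f(x) = 3*x + 1 and halts once x mod 8 == 3.  Since f(x) mod 8 depends
# only on x mod 8, we can count the iterations without ever computing x.

def iterations_until_halt(x0, modulus=8, target=3, max_iters=10**6):
    """Count iterations of x -> 3x + 1 until x mod `modulus` == `target`,
    tracking only the residue of x."""
    r = x0 % modulus
    for n in range(max_iters):
        if r == target:
            return n
        r = (3 * r + 1) % modulus
    raise RuntimeError("residues cycled without hitting the target")

n = iterations_until_halt(2)   # start from x = 2: 2 -> 7 -> 22 -> 67
print(n)                       # 3 iterations; and indeed 67 mod 8 == 3
# After n iterations, x > 3**n * 2, even though we never computed x.
```

In the real analyses, f is exponential or worse and the iteration count is itself astronomical, so even the counting is done by closed-form reasoning rather than a loop; but the principle of arguing about the trajectory through residues instead of values is the same.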
Comment #98 July 2nd, 2025 at 8:10 am
Hi Scott,
I was watching the STOC 2025 slides and I saw the Betti number algorithm near the "Ewinization" of HHL.
May I ask what you mean by that?
I haven't understood the true meaning of the slide.
Which one of these is right?
1) Betti number calculation could be Ewinized by somebody, so it is not a quantum advantage.
2) Betti number algorithms are just QPE in another form (so no new quantum algorithm).
3) Betti number algorithms are just QPE and they could be Ewinized.
Thank you in advance.
Comment #99 July 2nd, 2025 at 8:29 am
Scott, I am sorry for a layperson's question, but is there such a thing as:
a) Asymptotically fastest-growing computable function, or
b) Asymptotically slowest-growing incomputable function and
c) are there computable functions that grow faster than some incomputable functions?
If “no” in c), d) is there a “finite” growth rate gap between computable and incomputable functions? Can we, perhaps, enumerate the growth rates of incomputable functions by infinite ordinals, and if yes, is there an incomputable function that has the growth rate \omega, thus answering b)? I would bet on a) being false, because you can multiply any computable function f(n) by n and still obtain a computable function, b) being somehow true if c) is false and d) is true (because \omega / n would still be \omega, the class of slowest-growing incomputable functions thus incorporating an infinite number of elements), but I am highly skeptical of my understanding of questions related to incomputable functions (and ordinals), and ChatGPT returned some obvious nonsense like using BB(n) to prove there is no fastest-growing computable function.
Comment #100 July 2nd, 2025 at 10:00 am
Turing note:
The other day, in a thrift store, I spotted an “age 8 and over” game called “Turing Tumble” (TT) where (apparently) you solve logic puzzles by placing deflectors on a board with marbles rolling around, or something like that.
I look forward to an analysis of TT by students in a TCS class. TT is not likely to be as fast as a QC, but it appears to work (even at room temperature!), so there is that. TT might make a good gift for any kids you might be connected to.
Comment #101 July 2nd, 2025 at 12:10 pm
QuantumPie #98: “Ewinized” is my humorous term for “dequantized.”
Yes, it’s conceivable that the Betti number algorithm could be dequantized.
Even if it isn't, though, the bigger issue is that it only tells you the Betti number up to an additive error of order 2^n/poly(n). And it's hard to think of practical ML situations where such a crude estimate would suffice, and where (e.g.) "always output 0" wouldn't provide an equally good estimate. That's why I regard the applicability of this algorithm as still an open question.
Comment #102 July 2nd, 2025 at 12:14 pm
Fulmenius #99: There obviously can’t be a fastest-growing computable function, because whatever it was, you could square it. Similarly for slowest-growing.
The function “H(n)=1 if the nth Turing machine halts or 0 otherwise” is uncomputable, but many functions grow faster than it. 🙂
Comment #103 July 2nd, 2025 at 5:21 pm
Scott #92
Suppose it's consistent in ZFC to add the axiom BB(643) = k.
But what if we run all relevant Turing machines in lockstep for k steps anyway? We will find out whether this is the true value or not.
Or are you saying that the only consistent axiom is the one that states the true value of BB(643)?
So if BB(643) is uncomputable, then this axiom is also uncomputable, and so we can't add it?
Comment #104 July 2nd, 2025 at 6:57 pm
Elias #103: One more time…
No, you will not find out if k is the true value by running all the Turing machines for k steps. You won’t know that some other machine won’t halt after a longer number of steps.
There is only one value of k for which the axiom “BB(n)=k” can be consistently added to ZFC — namely, the true value. But that doesn’t mean that ZFC can prove that BB(n)=k; you might really need to add it as a new axiom. You can rule out every other possible value k’≠k, one by one by one, but that’s not a proof of BB(n)=k because it takes infinite time.
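To make the asymmetry concrete, here's a minimal sketch with two toy machines of my own invention (not real BB candidates): lockstep simulation can certify that a machine halts within k steps, but a machine still running at step k yields no verdict at all.

```python
# A minimal sketch of the "run them all for k steps" idea, with toy
# 2-symbol machines in an ad hoc encoding (these are illustrative
# examples, not real BB candidates).
from typing import Dict, Optional, Tuple

# (state, symbol) -> (write, move, next_state); a missing entry means halt.
Table = Dict[Tuple[int, int], Tuple[int, int, int]]

def run(table: Table, max_steps: int) -> Optional[int]:
    """Run a machine on an all-0 tape. Return the step at which it halts,
    or None if it's still running after max_steps."""
    tape: Dict[int, int] = {}
    pos, state = 0, 0
    for step in range(max_steps):
        key = (state, tape.get(pos, 0))
        if key not in table:
            return step               # halted: a definite answer
        write, move, state = table[key]
        tape[pos] = write
        pos += move
    return None                       # still running: no answer at all

halts_fast = {(0, 0): (1, 1, 1)}      # steps into state 1, then halts
never_halts = {(0, 0): (0, 1, 0)}     # moves right forever

print(run(halts_fast, 1000))          # 1
print(run(never_halts, 1000))         # None (undetermined, not "never")
```

The second machine obviously never halts, but the simulator can't report that; all it can ever say is "not yet." That's exactly why running all 643-state machines for k steps can't confirm that k is the true value of BB(643).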
Comment #105 July 3rd, 2025 at 1:35 am
Elias #103: regarding the last sentence, although the value of BB(643) cannot be proved in ZFC, the value is not uncomputable. Indeed it’s just an integer, and all integers are computable. It’s the function n->BB(n) that is uncomputable. Scott wrote about this point a while ago: https://scottaaronson.blog/?p=8106 .
Whether any unprovable-in-ZFC values such as BB(643) can in principle be reliably determined by humanity (and what that even means) is another matter, and as far as I know there’s no real consensus on that.
Dacyn #87, thanks for the paper. I’d like to understand this point better, so it’s now on my reading list.
Comment #106 July 3rd, 2025 at 2:38 am
Scott #104:
An utterly pedantic note, but if BB(n)=k is independent of ZFC (where k is the “true” value in the standard model) then you should be able to consistently add either “BB(n) = k” *or* “BB(n) != k” as axioms, right? By the completeness theorem for first-order logic. It’s just that all models of the latter will contain non-standard n-state “Turing machines” that “run” for longer.
In the same vein as the paradoxical result that PA plus “PA is inconsistent” is itself a consistent theory (by Gödel’s theorem).
Comment #107 July 3rd, 2025 at 2:54 am
Bubblyworld #106: Yes, that’s consistent with what I said (one correction: it’s not the Turing machine itself that becomes “nonstandard,” just its runtime).
You can consistently add the axiom “BB(n)≠k,” even though there’s no actual integer k’≠k for which you can consistently add the axiom “BB(n)=k’.”
Comment #108 July 4th, 2025 at 5:03 am
Incredible stuff. Scott, do you know if there’s a problem now that, with the numbers being so huge and halting proofs so difficult, we’d struggle to recognise or analyse any potential new champions that aren’t doing Collatz-like iterations?
I feel like the work done on BB(6), though extremely impressive, wouldn’t have been possible without the BB(5) champion to analyse. So with no simpler example of a Busy Beaver champion utilising a different type of operation, I wonder if it’s now become too difficult to find other types of champions.
Comment #109 July 4th, 2025 at 8:39 am
Simon #108,
That question may be rephrased as a two-parter:
1) Are emergent growth strategies possible as halting Turing machines grow in size?
2) Do we have any hope conceptualizing such emergent growth strategies?
Does that sound right? I’m going to guess both are intractable?
Comment #110 July 4th, 2025 at 8:54 am
Simon #108 and Adam Treat #109: These questions would be better directed to mxdys or Shawn Ligocki or someone else who’s been engaged in the search for these machines.
But I’ll comment that “Collatz-like” is actually an extremely broad category, if you include these most recent machines in it. It encompasses machines that run for numbers of steps that we can’t compactly express without up-arrow notation!
One good concrete question would be: what happens if BB(6), or BB(7), or BB(8) turns out to be so huge that we can’t compactly express its value even with up-arrow notation? Will we then still be able to understand the machine well enough to prove halting? Maybe yes but it will surely be more difficult…
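To make the up-arrow talk concrete, here is a tiny Python sketch of Knuth’s hyperoperation hierarchy (the function name and structure are my own; it is of course only usable for very small arguments, which is rather the point):

```python
def hyper(k, a, n):
    """Knuth's a ↑^k n: k=1 is exponentiation, k=2 tetration, k=3 pentation.
    Defined by a ↑^k 0 = 1 and a ↑^k (n+1) = a ↑^(k-1) (a ↑^k n)."""
    if k == 1:
        return a ** n
    result = 1  # base case of the recursion: a ↑^k 0 = 1
    for _ in range(n):
        result = hyper(k - 1, a, result)
    return result
```

For instance, the post’s earlier bound, 10 tetrated to the 15, is `hyper(2, 10, 15)`; already `hyper(2, 10, 2)` is 10**10, and 2 pentated to the 3, `hyper(3, 2, 3)`, is 2 tetrated to the 4, i.e. 65536. Anything much bigger will (by design) never finish computing.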
Comment #111 July 4th, 2025 at 2:21 pm
Since the search for BB numbers “properly” (i.e. by outputting concrete numerals) is basically done, why not switch to computing upper and lower bounds on Chaitin’s constant? It should be based on largely the same techniques (proving whether or not small Turing machines halt), but it will never get “boring” to write out the digits properly, since the constant looks like a random sequence of bits. In particular, any time you determine whether a given machine halts, you can decrease the gap; the smaller the machine, the better. Although in theory there is a lower bound on how small a gap you can prove, you can never reach it, which is fun because it means you get to keep *approaching* the optimal gap forever and ever!
Comment #112 July 4th, 2025 at 3:02 pm
Christopher #111: You’re welcome to work on that! But note that defining Chaitin’s constant requires a prefix-free Turing-universal programming language, and I don’t know if there’s a “canonical” choice for that, analogous to how 1-tape 2-symbol Turing machines became the canonical choice for the BB function with Rado’s 1962 paper. (Also, it seems like there’s still plenty of fun to be had putting astronomical lower bounds on BB(6) and BB(7)!)
Comment #113 July 4th, 2025 at 6:24 pm
Adam #109 Yes excellent rephrasing of my slightly long-winded question.
Scott #110 Thanks, I appreciate that “Collatz-like” is a broad category as you say, but with approx 200 quintillion 6-state machines, I’d have thought on sheer numbers that the vast majority of promising machines would be exhibiting non-Collatz-like behaviour. If so, I just wondered if anything like that had been spotted, or if the absence of “exotic” champion candidates so far indicates that such non-Collatz machines have become too difficult to identify, given the complexity that’s now being dealt with at BB(6).
Comment #114 July 6th, 2025 at 10:01 pm
#108: This is basically the question that nobody knows how to answer without getting your hands dirty and just trying to understand every single 6-state (or onward) Turing machine 🙂
Basically you can categorize all TMs in some domain (such as 6-state, 2-symbol) into 4 categories: Proven Halting, Proven Non-halting, “Cryptids” and “Holdouts”. Holdouts are the TMs that we cannot yet categorize.
Cryptids is a term I coined to describe machines like [Antihydra](https://www.sligocki.com/2024/07/06/bb-6-2-is-hard.html), whose halting problem seems quite hard because of its close connection to the Collatz conjecture (we need some new math, or at least some cutting-edge number theory research, to solve them).
We have reduced the Holdouts list for BB(6) down to almost 3000 TMs, and this number is shrinking every month. So it really does feel within reach that we could get to 0 Holdouts. Of course this does not mean that we’ve solved BB(6), because we still have the Cryptids. But it does mean that we have understood the problem to a very high degree and basically understand the behavior of all machines; it’s just that some of those behaviors are mathematically hard to resolve.
It certainly helps when analyzing TMs to have “analogies” that are simpler, but that does not have to come from smaller BB domains, it can also come from considering the rules of the TM on different tape configurations.
One of the things I’ve found so fascinating so far with BB research is that most TM champions (and Cryptids) are reasonable to “reverse engineer”: we can discover “how they work”, at least well enough to write a tidy little blurb about them (they loop an exponential function an arbitrary number of times and wrap that in a Collatz-like check).
Of course, the problem with this sort of statement is that it is so biased by the Streetlight Effect (all the things we’ve been able to analyze are things that are reasonable to analyze), which is why it’s so important to track the Holdout counts. When that number reaches 0, we will have truly understood (up to the “Cryptid” hard problems) all TMs, and so can make definitive statements. While that number is large, it is best to assume that our results are quite biased and that the real champion could be of a totally different (and perhaps incomprehensible) variety.
Comment #115 July 6th, 2025 at 10:04 pm
@Scott
It seems that Daniela Rus was recently threatened by pro-Palestinian thugs:
https://x.com/sfmcguire79/status/1942054531459613056
Comment #116 July 6th, 2025 at 10:20 pm
Shawn Ligocki #114: Thanks so much; that was extremely enlightening! Especially the perspective wherein resolving the behavior of all 6-state machines, modulo the Cryptids, would be very clear progress on the BB(6) problem short of resolving it, roughly analogous to figuring out the full range of parameters where your favorite problem is in P or is NP-hard and thereby leaving “only” the P vs NP problem unsettled.
Comment #117 July 7th, 2025 at 7:06 am
https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
Comment #118 July 7th, 2025 at 7:39 am
Don’t we expect the rate of increase of BB(N) to start to cap at some point?
I.e. once a program is big enough, the best way to generate large numbers becomes a straightforward function of the size of the program?
Comment #119 July 7th, 2025 at 5:34 pm
Once N is large, wouldn’t the BB(N) winning programs rely on pseudo randomness?
E.g. once a program is big enough to generate PI indefinitely, can’t we then have it stop when it finds a matching sequence of arbitrary length (like BB(5) worth of consecutive ‘1’s)?
We know that the expansion of PI has all possible arbitrary sequences in it, so we know the program will stop at some point, it’s just a matter of searching long enough.
Comment #120 July 7th, 2025 at 6:21 pm
@Fred #118,
You might have that intuition, but the entire point of the theorem that BB(n) grows faster than any computable function is that this never happens. It will always keep growing absurdly fast, far faster than any computable function of the number of states that you could write down.
However, one thing we still cannot rule out is that BB(n) has most of its growth on a somewhat sparse set, though this is very likely false. Scott’s survey on the Busy Beaver function is still the best place to read about what is known here, even if it’s not quite up to date on a lot of specific records.
Comment #121 July 7th, 2025 at 8:55 pm
fred #118: It depends on what you mean by a “straightforward function”. In a sense, the function \(N\mapsto BB(N)\) is already such a function, since it can be defined in a straightforward manner. But you probably meant something closer to “computable function”, and then the answer is no — the function \(N\mapsto BB(N)\) grows faster than any computable function, so in particular it is not eventually equal to any such function.
Comment #122 July 8th, 2025 at 5:54 am
fred #118:
That’s incorrect. BB(n) cannot eventually become computable for big enough n. If it did, then the initial finite stretch of “erratic” values (which would be computable anyway, being finite) could be handled separately, so the whole function would be computable. But BB(n) can’t be upper-bounded by any computable function: if some computable f(n) upper-bounded BB(n), then by running every n-state machine for f(n) steps we could solve the halting problem, since any machine that hasn’t halted within f(n) >= BB(n) steps never will. The halting problem is known to be uncomputable; the gist of the proof is that one derives a contradiction by turning the hypothetical halting-problem solver, itself a Turing machine, against a construction built from it.
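To spell out that reduction: if a computable upper bound on BB(n) existed, the following Python sketch would decide halting. The machine encoding and the `bb_bound` argument are my own conventions here, and the bound function is of course hypothetical; no computable one exists, which is the whole point.

```python
def run(tm, bound):
    """Simulate a 2-symbol Turing machine on an all-0 tape for at most
    `bound` steps; return the step count if it halts in time, else None.
    `tm` maps (state, symbol) -> (write, 'L' or 'R', next_state); 'H' halts."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < bound:
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return steps if state == 'H' else None

def decides_halting(tm, bb_bound):
    """If bb_bound(n) >= BB(n) were computable, this would solve the
    halting problem for n-state machines -- the promised contradiction."""
    n = len({s for (s, _) in tm})
    return run(tm, bb_bound(n)) is not None
```

For a 1-state machine, any bound of at least BB(1) = 1 suffices, so e.g. `decides_halting(tm, lambda n: 10)` correctly classifies both the halting and the looping 1-state machines.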
Comment #123 July 8th, 2025 at 3:31 pm
I took a (short) look at the Coq code and I doubt that this really gives you more confidence than a handcrafted proof.
– how do you know the datatypes, theorems and lemmas are correctly formulated in Coq?
– how can you be sure that the Coq engine, with all its complex libraries, is correct?
– how can you be sure that you haven’t introduced a hidden contradiction?
– it’s difficult to gain any insight or intuition from such syntactic Coq proofs.
Comment #124 July 8th, 2025 at 5:09 pm
Thanks for the answers.
Yeah, so BB(N) is only a “function” because it matches each N to some unique value.
And I didn’t realize the winners of BB(N) are always going to be machines based on very unlikely coincidences, so that making the machine just a tiny bit bigger expands its space of potential such numbers.
As Scott pointed out, something that generates a very particular (i.e. not generic) series of digits that happens to meet some characteristic by chance only way, way far into the expansion.
My suggestion regarding pi isn’t that it’s a BB(N) candidate; I was just wondering what could be the best hand-crafted algorithm to generate a truly huge number which we know will finish some day.
When we say that the expansion of PI has all finite sequences of digits in it, would that include a partial copy of itself up to that point?
E.g. at each new digit at position i we generate, we would look whether that digit matches the first digit of the sequence, and whether the i+1 digit matches the second digit, etc
so sequences like this:
28928901… (the match is at 4th digit)
or
12345678912345678901… (the match is at 10th digit)
but we do such a search within pi itself (314…).
Such a sequence is ever growing, but always finite.
I’m not sure if there’s a way to estimate anything about this (e.g. the probability that every new digit would be a match).
Does anyone know a better way to generate a sequence of unknown gigantic length that we know is guaranteed to finish some day?
But since we’ll never be able to actually run such things to completion, we can’t compare them 😛
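[For what it’s worth, this self-matching search is easy to experiment with, since pi’s digits can be streamed exactly using Gibbons’ unbounded spigot algorithm. A minimal Python sketch, with helper names of my own invention:]

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def first_self_match(prefix_len, search_len):
    """Index of the first re-occurrence of pi's own digit-prefix within its
    first `search_len` digits (the idea described above); -1 if none found."""
    digits = ''.join(str(d) for d in islice(pi_digits(), search_len))
    return digits.find(digits[:prefix_len], 1)
```

For example, `first_self_match(1, 100)` returns 9: the digit ‘3’ first recurs at position 9 of 3141592653… Longer prefixes are found roughly 10^n digits in, consistent with the reply in #125.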
Comment #125 July 9th, 2025 at 4:37 am
fred #119: actually, pi being normal (in base ten) is conjectured, but not proven. But this is not the point: to find a specific sequence of length n, you usually have to wait not much longer than 10^n digits. So already at BB(6), those ideas give functions ridiculously small relative to tetration, not to mention higher up-arrow values.
Comment #126 July 9th, 2025 at 8:59 am
Dear Scott,
I have a few questions about computability which are loosely related to this post, I hope it’s ok for me to ask here.
My favourite intuition about statements independent of an axiom system comes from non-Euclidean geometries: the 5th postulate is not a theorem of the first four because there exist models that satisfy those and not the 5th. I find this harder to visualise for set-theoretical examples, e.g. the continuum hypothesis, but I can chalk those up to “infinities being weird”. However, what really breaks my intuition is the undecidability of statements about natural numbers, like solubility of Diophantine equations or indeed Busy Beaver numbers. Let’s say we have a statement Q of the form “∀n P(n)” where P(n) is checkable in finite time for given n. I believe the statement BB(k) = m (or > m) is of this form, because the statement that a given Turing machine doesn’t halt is. Let’s say we know Q is undecidable. Then my understanding is that Q must be true, because if it were false it would be provably false (there would exist some n* such that ¬P(n*)). But then why does this reasoning not *prove* that Q is in fact true? I’ve read that it is false in non-standard arithmetic, but surely (unlike the case of Euclidean/non-Euclidean geometry) this model is not on the same footing as standard arithmetic (can’t we just axiomatise that then if the true natural numbers are what we really care about)? What is the “axiomatic dial” one must turn to change the value of Busy Beaver numbers, just as changing the curvature of 2D space results in different values of the sum of interior angles of a triangle?
I find it interesting that many open problems in maths are of the form of Q (Goldbach conjecture, i think RH), however some like the twin primes conjecture are not. Does there exist a Turing machine that halts iff the twin primes conjecture holds/does not hold?
One final question related to some previous comments on this blog post. I understand the proof that BB grows faster than any computable function. However, I get the feeling it slightly misses the point: BB does grow very fast and it is not computable, but it does not fail to be computable *because* it grows so fast. My understanding is that what happens with BB is that for some n (possibly even n = 6) there are some *specific* n-state Turing machines for which the halting problem cannot be answered (possibly provably unanswerable in ZFC? though my intuition about this is very poor, as described above). Eventually you reach some N for which BB(N) is not computable (not because it’s too large to be computable), and this is not specifically about growth because N is fixed. What’s the intuition I’m missing that links very fast growth to lack of computability?
Thank you very much for this blog and for your lectures on youtube, which have given me many hours worth of fascinating material. I would of course be very grateful to anyone for any input on my questions.
Emilio
Comment #127 July 9th, 2025 at 9:48 am
@Fred #124,
It seems like you want a computable function that grows very fast. In that case, it may help to look up TREE(n), which is a very fast-growing function that is nevertheless computable, with a guaranteed (finite) value for every n.
Comment #128 July 9th, 2025 at 9:55 am
@Emilio Ferrucci #126,
Busy Beaver’s fast growth really is connected to its lack of computability. There are two ways of seeing this. The more straightforward way is to note that if f(n) is another function such that f(n) ≥ BB(n) for all sufficiently large n, then f(n) is also uncomputable. The second way is to note that BB(n) is not the only function where something thematically like this happens. There are fast-growing functions whose totality is provable in ZFC but not in PA, and if you instead use just a fragment of PA, one can do the same with slightly slower-growing functions. If you get down to a small enough fragment of PA, then even a^b is not provably total in the system.
Regarding the twin prime conjecture and related questions: the term you are looking for is “Pi_1”. The Pi_1 sentences are essentially those sentences which are checkable by a Turing machine in the way you want. Currently no such machine is known for the twin prime conjecture.
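To make the “checkable by a Turing machine” idea concrete: a Pi_1 sentence like Goldbach’s conjecture (mentioned in #126) corresponds to a program that halts if and only if the sentence is false. A minimal Python sketch of that correspondence (function names are mine; deliberately naive and slow):

```python
def is_prime(n):
    """Trial-division primality check (fine for an illustration)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def violates_goldbach(n):
    """True iff the even number n is NOT a sum of two primes."""
    return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    """Halts (returning n) if and only if Goldbach's conjecture is false."""
    n = 4
    while not violates_goldbach(n):
        n += 2
    return n
```

Whether `search_for_counterexample()` halts is exactly the (negation of the) conjecture; no analogous single program is known whose halting is equivalent to the twin prime conjecture, which is the point of the comment above.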
Comment #129 July 9th, 2025 at 10:55 am
f3et
“So as soon as BB(6), those ideas gets functions ridiculously small relative to tetration”
To clarify again, I didn’t mean to come up with a generic algorithm to reach BB(n), just was wondering what’s a good generic way to potentially reach really large numbers.
The other thing is, the machine winning BB(6) still must be using some describable strategy that has to do with natural numbers. All winners of BB(N) must be relying by chance on a bunch of numbers that happen to only exhibit some match very far into some expansion.
For all we know, matching “1111111111111111” in PI could be at a point in the expansion that’s way further than BB(6), just by sheer luck. The matching sequence could itself be growing at a ridiculously high rate, like if we take the sequence of PI so far, S, and then match forward against 10^S, so that the match probability is 10^10^S, etc.
That’s kind of what Scott wrote in #97:
““this machine takes a positive integer x, replaces it by f(x), and keeps repeating that until x mod 8 = 3 (or some other such condition holds).” And if for example each f(x) is exponentially larger than x, then you can get really staggering growth before the termination condition kicks in.”
For almost all starting numbers x, functions f(x), and conditions mod y = z, most runs would never reach levels of BB(6) either. But all we need is one instance that does.
In a way, the staggering growth of BB(N) is all based on special relations between natural numbers, assuming that 1) the numbers can be generated with a tight algorithm, 2) the relation is rare enough that it will only occur far enough…
I guess we get biased as humans, because the size of a natural number as big as BB(6) still is ~0 when compared to all the space ahead that’s left to explore (i.e. infinity). I don’t think there’s a theorem that natural numbers start to become “less special” once they are big enough (that would make no sense to have some arbitrary threshold). And the growth of BB(N) is the proof of that.
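[A toy version of the mechanism quoted from #97, with a made-up map f(x) = 3^x and termination condition x mod 8 = 3; purely illustrative, not any actual champion’s rule:]

```python
def collatz_like(x):
    """Iterate x -> 3**x until x % 8 == 3 (an invented rule in the spirit
    of the #97 description). Returns (steps taken, final value)."""
    steps = 0
    while x % 8 != 3:
        x = 3 ** x
        steps += 1
    return steps, x
```

Starting from 2: 2 -> 3**2 = 9 -> 3**9 = 19683, and 19683 mod 8 = 3, so just two exponentiations terminate here. Had the condition taken longer to trigger, the value would grow like an exponential tower, which is exactly how staggering-but-finite runtimes arise.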
Comment #130 July 9th, 2025 at 11:02 am
Although there’s provably no computable function F(n) that would let you bound BB(n) from above for ALL n>0, is it possible to find one that would work for a finite subset {a, b, …} before the actual values of BB(a), BB(b), etc. were known?
Such an F(n) is impossible for an infinite subset of the positive integers, like the prime numbers or perfect squares, because BB(n) is a monotonic function with BB(n+1) > BB(n) for all n ≥ 1. So no such function could exist; if it did, you could just plug in the smallest m > n that’s within the subset and get F(m) > BB(m) ≥ BB(n). No more halting problem.
But on the finite side, would it be possible to bound BB(6) or BB(7) from above before their actual value is known? For example, BB(6) < f_Epsilon0(6) from the Fast Growing Hierarchy?
Clearly it is analysis of these 6-state TMs' recursive behaviors that's allowing BB(6) to be bounded from below, and BB(6) is already so enormous that I don't see any other way TO bound it from above.
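[For the finite levels of the fast-growing hierarchy mentioned here — f_ε₀ itself needs ordinal notations, which this sketch does not attempt — a minimal Python version, with my own function name:]

```python
def fgh(k, n):
    """Fast-growing hierarchy at finite index k:
    f_0(n) = n + 1, and f_{k+1}(n) = f_k applied n times to n."""
    if k == 0:
        return n + 1
    for _ in range(n):  # range(n) is fixed at entry, even as n is updated
        n = fgh(k - 1, n)
    return n
```

Here fgh(1, n) = 2n and fgh(2, n) = n·2^n, and already fgh(3, n) is tetration-scale; continuing through transfinite indices up to ε₀ is what gives bounds like the f_ε₀(6) mentioned above.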
Comment #131 July 9th, 2025 at 12:30 pm
Scott #112
The two suggestions I’d have for the probability distribution would be:
1) Randomly choose a natural number k, and then randomly choose a 2-symbol k-state Turing machine. This is very compatible with the current work on the Busy Beaver numbers, but there is arbitrariness in choosing the probability distribution for k. Maybe just 1/k^2? Or perhaps let k be a random number from 1 to N, where N is a parameter approaching infinity (or thought of as a hyperinteger)?
2) Take the first discovered universal turing machine, and ask for the probability that it halts when the input tape is a random oracle.
*Unfortunately* neither of these is particularly canonical, though.
Comment #132 July 9th, 2025 at 1:27 pm
fred#124
> My suggestion regarding PI isn’t that it’s a BB(N) candidate, but I was just wondering what could be the best hand crafted algorithm to generate a truly huge number which we know will finish some day.
Check out googology wiki
https://googology.fandom.com/wiki/Googology_Wiki
As far as I know, none of those numbers rely on pseudorandomness in any way.
Comment #133 July 9th, 2025 at 1:33 pm
#123: We (bbchallenge) have put quite a lot of effort into making this proof high quality, trustworthy and interpretable. Here are some of the steps that we have done:
1. All definitions necessary to understand the statement of the theorem are collected into a single (121 line) file: https://github.com/ccz181078/Coq-BB5/blob/main/CoqBB5/BB5/BB5_Statement.v which should be relatively straightforward for anyone familiar with both Turing Machines and Coq syntax to read and confirm. If you can confirm these definitions and trust the Rocq (previously called Coq) engine, you can confirm the proof without having to validate every single line of the proof.
2. These definitions were reviewed by Rocq experts Yannick Forster and Théo Zimmermann, who confirmed that they correspond to the expected definitions and should be considered trustworthy.
3. While we can never be 100% sure that the Rocq engine is bug-free, I think it is not an exaggeration to say that it is one of the most carefully reviewed systems in the history of mankind. If you are going to trust any code, I think you should trust the Rocq engine.
4. We are currently writing an article describing the proof in a more human-readable format. The preprint will be submitted to arXiv in the next few months. Our goal was never to write a Rocq proof and call it a day. Instead, the Rocq proof has served as an extremely strong validation of a process that we describe and that can be reproduced by anyone else.
5. This proof is miles ahead of any previous proof of a Busy Beaver value. Previous proofs were generally accomplished by using unproven software followed by manual analysis of “holdouts” (remaining TMs not proven by software). For example, in Allen Brady’s proof of BB(4), he simply states that “Direct inspection of each of the 218 holdouts from these programs reveals that none will ever halt … All of the remaining holdouts were examined by means of voluminous printouts of their histories along with some program extracted features. It was determined to the author’s satisfaction that none of these machines will ever stop.”
Comment #134 July 9th, 2025 at 9:24 pm
Emilio Ferrucci #126:
> can’t we just axiomatise that then if the true natural numbers are what we really care about
No. The issue is that we would need to axiomatize the assertion “all natural numbers can be constructed by starting with zero and repeatedly applying the successor function”, but this is an English language assertion, not something that can be naturally stated in mathematical language. Peano arithmetic (and other axiomatizations of arithmetic) attempt to bridge the gap via the axiom schema of induction, but while induction is intuitively a consequence of our English language assertion, the reverse is not true. Also, the induction schema depends on a choice of a category of predicates with respect to which you can apply induction, and there is no single canonical choice for such a category — the induction schema gets stronger the larger the category you take, and you can always take larger and larger categories. Finally, some statements appear to be independent of PA in a way that isn’t resolvable by the induction schema over any category, so getting a “standard” model of the natural numbers appears to require simply guessing the truth-value of some statements and hoping you got it right.
Comment #135 July 10th, 2025 at 12:23 am
Am I correct in seeing a newer commit (6fc33be, from the end of August last year) in CatsAreFluffy’s metamath-turing-machines github repo that says there is a ZFC-independent TM with 636 states? This would beat the 643-state machine that is being mentioned above.
I’ve not seen this until today, and no discussion of it, or independent verification, though I suppose that would just consist of someone who knows their business looking at the (relatively minor) change in encoding and say “sure, I can tell that’s functionally equivalent”.
Comment #136 July 10th, 2025 at 4:50 am
Emilio Ferrucci #126:
> can’t we just axiomatise that then if the true natural numbers are what we really care about
We can axiomatise the untrue natural numbers, within a given theory of true and untrue natural numbers. So if you want, take their difference in the meta theory. But of course, that doesn’t buy you much, because the halting problem must stay undecidable.
Let me clarify my personal position with respect to Scott’s
I see no point in assuming that the natural numbers of the meta theory should be non-standard. Or at least, if they are non-standard, then because the meta theory itself was already a model in some “higher” meta theory, with standard natural numbers.
But I do think that deciding whether “some object” in “some given representation” is not a standard natural number is in general undecidable. And it is quite possible to appreciate why this is so. My point is that if you don’t implicitly cheat, then no representation capable of representing all natural numbers allows you to always decide whether “some object” in that representation is not a standard natural number. Good representations have the “you know it when you see it” property: at least for any actual standard natural number, you will eventually know that it is a standard natural number.
Comment #137 July 10th, 2025 at 4:06 pm
Scott,
I wonder about the smallest n for which the value of BB(n) can’t be proved in ZFC.
Let’s denote by n the smallest number s.t. the value of BB(n) cannot be proved in ZFC.
Let’s denote by n′ the smallest number s.t. there is a Turing machine with n′ states whose halting behavior is independent of the ZFC axioms.
I assume 6 <= n <= n′ <= 643, if I get it right.
Must n equal n′? I have a feeling that n is much smaller (say n ~ 8, as you conjecture, and n′ ~ 100).
In other words, is designing a TM whose behavior is independent of ZFC the best strategy for upper-bounding n?
Comment #138 July 10th, 2025 at 4:48 pm
@Joshua Zelinsky Thank you for the intuitions on growth and the \Pi_1 reference. I guess I still don’t understand what non-computability has to do with growth, given that (if I understand correctly) there will be a positive integer N for which BB(N) cannot be determined. Then I could just define the constant function n \mapsto BB(N), and this would still be out of reach despite not growing at all. I’m not quite sure how to square this with your comment, though the a^b example is interesting; out of curiosity, what’s the formal system under which this function is not provably total?
@Dacyn, @gentzen Thank you for your replies. I knew there was some issue with axiomatising “standard arithmetic”, but my intuition still fails me on these matters (in a way that it doesn’t when I think about undecidability in other axiom systems, e.g. the 5th postulate). Your comments and links are very interesting though.
Comment #139 July 11th, 2025 at 10:02 am
Another tangentially related question:
In the nth Busy Beaver machine, how close to uniformly distributed is the number of times each state occurs, as a proportion of BB(n)? That is, if the states of the Busy Beaver machine on n states are s(1), s(2) … s(n), and the machine is in state s(i) a total of t(i) times before halting, what does t(i)/BB(n) look like? In particular, is there a function f(x) where f(x) = o(1/x) and where for all i,
|1/n – t(i)/BB(n)| < f(n)? The context is the bound BB(n+2) > BB(n)(1+2/n), which was proven by someone in the comments here a while ago, by taking the BB machine on n states, finding the state that occurred the most times in its run, and adding a “do-nothing” pair of states right before any entry into that state. If the states really are close to uniform in how often each appears, then this argument cannot be substantially improved. But if the nth BB machine involves some states appearing much more than other states, then this argument can potentially be tightened.
Comment #140 July 11th, 2025 at 7:10 pm
#114 Shawn
Apologies for the delayed reply and thanks so much for your detailed response. I’m in awe of the work done by the BB experts so really appreciate you taking the time to explain it.
Interesting then that, if the 3,000 holdouts are resolved with no surprises, it seems that Collatz-like iterations still rule at BB(6) as they did at BB(5). It would be fascinating though if a potential champion doing something different was found and it was possible to decipher its behaviour. Maybe something like that will be found for BB(7).
But regardless of the type of operation performed by the eventual BB(6) champion, the progress achieved really is amazing. After BB(5) was finally proved, my layman opinion was that BB(6) would be forever out of reach beyond enthusiastic speculation.
Comment #141 July 12th, 2025 at 11:07 am
Emilio Ferrucci #126:
“can’t we just axiomatise that then if the true natural numbers are what we really care about”
It is important to understand that the debate in the foundations of mathematics has never been sorted out. Gentzen #136 points to analyses utilizing “first-order” axiomatizations. You ought to ask what that could possibly mean and why it could matter. Applied mathematicians who “do not believe in infinities” certainly would cringe if confronted with a claim that their mathematics incontestably depends upon infinite first-order axiom schemes.
First-order model theory is an established field which arose relative to algebraic considerations developed with Peacock and de Morgan. It involves polynomials formulated over language signatures. It is intrinsically suited for combinatorial foundational studies, but one must clearly delineate combinatorial foundations from semantic foundations. One may speak of first-order logic as it has been subsumed into this paradigm. Yet, in so far as one might hope a logic is characterized by its inference rules, this is not the case. The expression ‘first-order’ is vague.
“First order” applies to the pre-established harmony of declaring a class of syntactic marks as denoting “simple” (that is, without distinguishable or, more strongly, separable components). One might compare this to a geometric point. Yet, although useful, it also conflates ideas. Applied mathematicians reason with localized vectors, whereas algebraic vectors are not spatially localized.
With regard to inference rules, logicism, standard first-order logic, and Herbrand logic all have the same inference rules. Yet, the first is metaphysical, the second ontological, and the third constructive. All are “classical” in the sense of being extensions of bivalent propositional logic.
Yet, bivalent propositional logic is not categorical. In 1999 Pavicic and Megill demonstrated this for algebraic logics. In 2006, Schechter developed a syntactic analogue — the hexagon interpretation.
When Gentzen describes the construction of a non-standard natural number, he uses the compactness theorem of first-order model theory. Yet, recursive function theory is not subsumed under this paradigm. Diagonalization arguments are significant for both first-order semantics and recursive function theory, but one occurs in semantic foundations while the other occurs in combinatorial foundations.
An important observation made by Gentzen is the significance of circularity when asserting that computational foundations is “concrete” in some way. Another important observation is that the role of the sign of equality must be questioned. The first-order inference rules for the above-mentioned paradigms all assert (pre-established harmony) the necessary truth of reflexive equality statements.
With regard to the latter, Hilbert understood that the assumption of well-construed “identity” had been a presupposition of formal axiomatics. The questioning of this presupposition arises with Brouwer and intuitionism. This is why “apartness” is an important relation in constructive mathematics (different from Herbrand logic). Bivalent reasoning is re-established for “concrete” computational contexts by Markov.
This has consequences, however, for semantics. Standard first-order logic interprets Tarski’s schema as a correspondence theory of truth. So, “all” is interpreted as a well-construed logical domain (a set or class). By contrast, “all” in constructive semantics (or, at least, Markov’s) invokes an omega-rule,
https://en.m.wikipedia.org/wiki/%CE%A9-consistent_theory#%CF%89-logic
Arguably, this view traces back to Poincare’s dismissal of Hilbert’s metamathematical program. This is why Hilbert argued that structural induction is distinct from mathematical induction. In general, the French school is often characterized as semi-intuitionistic — not widely studied in foundational contexts. Relative to the problem of induction, Poincare had been an unabashed intuitionist — “all natural numbers” cannot be “concretely realized.”
Of course, there is an idea that mathematics “begins with axioms” since Euclid. But, at around 13:00 in the talk by Atiyah,
https://online.kitp.ucsb.edu/online/plecture/atiyah/rm/jwvideo.html
one has the view that mathematics originates with physical reality (eliminative materialism). And, even Russell described formal axiomatics for the natural numbers as circumventing “honest toil.”
Funny how mathematics is easy until it is not.
Comment #142 July 12th, 2025 at 5:36 pm
Evyatar #137: Yes, that’s an excellent question, and one that I explicitly raised in my 2020 survey!
We don’t know the answer. The trouble is that, in practice, essentially the only method we have for proving arithmetical statements independent of ZFC is the Gödel method.
Comment #143 July 12th, 2025 at 10:54 pm
Mathematics engenders such interesting, positive, and polite discussions in the general case. I guess this can break down when someone states they know something to be true because of a proof while others state there is no proof. This may then collapse, nearly, to the usual style of political arguments.
As a social experiment I would support a one term test of requiring a PhD in mathematics/computer science/physics to qualify for national elected office. Press conferences would be outstanding.
Comment #144 July 13th, 2025 at 6:40 pm
After sharing the current state of the art of ZFC-independent BB numbers online, Andrew Wade brought the upper bound down below 600 (!). Commit 30d2e31 in the GitHub repo andrew-j-wade/metamath-turing-machines uses just 588 states, by doing some intelligent checking to see if and when certain parts of the control logic can be skipped.
Comment #145 July 13th, 2025 at 8:33 pm
David #144: Wow, thanks for telling me! Could you provide a direct link? Also: if Andrew Wade is reading this, any chance we might get a written description of the result?
Comment #146 July 14th, 2025 at 5:13 pm
If I understand correctly, what makes the Busy Beaver numbers grow so large is the presence of an infinite tape to work with.
Has anyone looked at finite-tape Busy Beaver numbers? What is BB(6) if the tape is 32 bits long? 64? 128? How does that specific function grow as the tape grows longer?
Comment #147 July 14th, 2025 at 10:49 pm
Here’s the link that David didn’t include for whatever reason: https://github.com/andrew-j-wade/metamath-turing-machines/commit/30d2e31
It looks like the commit description only covers how it got down from 636 to 588, and you’ll have to trace back further in the repo to see how he got down from 643 to 636! (Were there really that many do-nothing states? Huh…)
Comment #148 July 15th, 2025 at 7:20 pm
@Scott #145
I didn’t know what the link policy was, but Sniffoy gave the link in #147. In comment #135 I pointed out that CatsAreFluffy had done the reduction to 636 states (this commit), but I was not sure if this was known privately and not used as the current record due to issues. But if people that know this stuff are confident that the 636 and 588 state machines are legit, then I’m satisfied.
I will ping Andrew (who I don’t know personally, he responded to me out of the blue) to come here and give more info.
(BTW, twice now I have attempted to verify my address using the link in the email, and it’s told me my address could not be verified. Not sure what’s going on there!)
Comment #149 July 15th, 2025 at 11:28 pm
theHigherGeometer let me know of the comments here.
The initial improvement to 636 was not done by me, and involved changing zf2.nql to use the decz function in a couple of places which compiles down to one register machine instruction where used.
I’ve gotten the number of states down to 549, though as a warning I haven’t done any regression tests. I may do so next weekend: the portion of the tape corresponding to the first 16 registers of the machine should go through exactly the same transitions as the 636-state machine once initialized. I’m surprised how far I’ve gotten; it’s taken a large number of changes, big and small.
Turing machine level changes
(1) Removing do-nothing states. As an example consider:
main()[000000000001000] = 0 r reg_decr.0 1 r main().jump(3,1,0)
main().jump(3,1,0) = 0 l main().jump(3,1,1) 1 l main().jump(3,1,1)
main().jump(3,1,1) = 1 l dispatch.1.carry 1 l dispatch.1.carry
if state main()[000000000001000] is instead:
main()[000000000001000] = 0 r reg_decr.0 1 l dispatch.1.carry
the TM ends up in the same place, and it might be that the intermediate state is no longer referenced.
(4) Track and optimize based on possible old tape values. Consider this state again:
main().jump(3,1,0) = 0 l main().jump(3,1,1) 1 l main().jump(3,1,1)
Assume it hasn’t been optimized away. Because of the way the register machine works, this state will never encounter a ‘1’ on the tape. Think of it as:
main().jump(3,1,0) = 0 l main().jump(3,1,1) * * *
Where * * * is a don’t care, and is available for other purposes: this state can be combined with any state whose ‘0’ slot is the don’t care instead.
If you pick the right two states to combine into one, this can be a powerful enabler of additional optimizations.
The don’t care slots are marked during the initial TM codegen, since the intermediate layer of the compiler has an understanding of the register machine and its correspondence to the TM tape and states.
(5) Change how registers are initialized. This was inspired by BB(5) which leaves the tape in a state that almost looks like initialized registers. Through a brute-force search I was able to find a suitable 5-state TM to create a fair number of registers initialized to 0 on the tape. There are only a few states to be saved here as the old method was pretty efficient.
Register machine level changes
(2) Jumps to the halt instruction should just halt instead. There’s no point in going through all the states of a jump if you already know the TM is going to halt.
(3) Dead code elimination. After change (2) one of the halt instructions is now dead code and can be removed. There was already logic there, just commented out; I changed it slightly to make the handling of conditional jumps more robust, though I don’t know if there would have been a problem in practice.
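For readers unfamiliar with the idea, dead code elimination at the TM level is the classic reachability pass. Here is only a toy sketch on a hypothetical dict-based transition table (the real compiler’s representation is surely different):

```python
def reachable_states(table, start):
    """table maps state -> (entry_on_0, entry_on_1), where each entry is
    (write_bit, move, next_state) and next_state == 'HALT' means halt.
    Returns the set of states reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen or s == 'HALT':
            continue
        seen.add(s)
        for _write, _move, nxt in table[s]:
            stack.append(nxt)
    return seen

def drop_dead_states(table, start):
    """Dead code elimination: keep only states the machine can reach."""
    live = reachable_states(table, start)
    return {s: entries for s, entries in table.items() if s in live}
```

Anything orphaned by an earlier rewiring pass (like the do-nothing-state removal above) simply disappears on the next sweep.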
NQL Code level changes
(various commits) Staring at dumps of the register machine code and adding in additional align directives so the compiler puts the noop instructions in better places, allowing a few more subtrees in the decision tree to be shared. I would like to teach the compiler to do this but I wouldn’t expect to be able to improve on hand-optimization here. I did have the compiler insert an align directive in the transfer subroutines it generates.
github
Comment #150 July 18th, 2025 at 9:52 pm
Scott P. #146: If \(BB(N,M)\) is the maximal runtime of an \(N\)-state Turing machine running on an \(M\)-bit tape, then \(BB(N,M) \leq N M 2^M\), since the combined system consisting of the Turing machine state, head location, and tape has \(N M 2^M\) possible states. I think that this bound is reasonably close to optimal (i.e. up to a constant factor), since you can define a Turing machine that just repeatedly increments the integer whose binary representation is given by the tape state until it reaches the end of the tape, then repeats \(N/C\) times for an appropriate constant \(C\), but I haven’t worked out the details.
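The pigeonhole bound can be checked by brute force for the tiniest cases. Below is a sketch under one arbitrary convention of my own for the tape ends (walking off either end counts as halting); other conventions change the numbers but not the \(N M 2^M\) configuration-counting argument:

```python
from itertools import product

def finite_tape_bb(n_states, m_bits):
    """Max steps any n-state machine takes to halt on an all-0 m-bit tape.
    Halting = entering the extra HALT target, or walking off the tape.
    Any run longer than the configuration count n*m*2^m must repeat a
    configuration and therefore loops forever."""
    HALT = n_states
    limit = n_states * m_bits * 2 ** m_bits
    # An entry is (write_bit, move_direction, next_state_or_HALT).
    entries = list(product((0, 1), (-1, +1), range(n_states + 1)))
    best = 0
    for table in product(entries, repeat=2 * n_states):
        tape, state, head, steps = [0] * m_bits, 0, 0, 0
        while steps <= limit:
            write, move, nxt = table[2 * state + tape[head]]
            tape[head] = write
            steps += 1
            if nxt == HALT:
                best = max(best, steps)
                break
            head += move
            if not 0 <= head < m_bits:   # fell off the tape: halt
                best = max(best, steps)
                break
            state = nxt
    return best
```

Only tiny (n, m) are feasible this way, since the number of tables is (4n + 4)^(2n), but it is enough to watch the function stay under the \(N M 2^M\) ceiling.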
Comment #151 July 19th, 2025 at 1:47 pm
In a Platonist mood, I just took a piece of paper and wrote “BB(6)”.
While the new lower bound sounds massive, Jefferson Carpenter’s question about its real-world scale is easy: BB(6) fits on a desk 😀
In other news, there’s an OpenAI IMO gold (claimed) today! (and there’s one professor on my blog that correctly predicted it). I wonder what Paul Christiano thinks, his LessWrong 2022 post puts the probability at 8%.
Comment #152 July 19th, 2025 at 2:30 pm
Dacyn #150, so if I understand correctly, this definition of \(BB(N,M)\) would actually have \(BB(N,M) > BB(N)\) for sufficiently high \(M\) (given any fixed \(N\))? The reason \(N\)-state Turing machines on a finite tape can actually take longer to halt than their counterparts on infinite tape is that the former can detect when they’re at the end of the tape, and use that in their halting criterion.
Maybe therefore it would be more natural to define \(BB(N,M)\) as the maximal runtime of an \(N\)-state Turing machine on an infinite tape that only uses \(M\) of the bits? (Or maybe instead use a circular \(M\)-bit tape?) That should have the property that \(BB(N,M)\leq BB(N)\) with equality for sufficiently high \(M\), given fixed \(N\). In particular \(M\geq BB(N)\) should suffice to get equality. Presumably that bound is approximately tight, though I wouldn’t know how to prove it.
Comment #153 July 21st, 2025 at 8:32 am
But … WHY? (A question I have been meaning to ask you for a while ..)
In my mis-spent youth I remember arguing with friends about how many iterations of Ackermann’s function we could manage on the supercomputer in the next-door basement. And you could kinda see why Ackermann’s function gets large quickly.
But BB(n) goes beyond any intuition I can manage. Have you any insight on why it grows so fast?
Ta.
Comment #154 July 22nd, 2025 at 5:09 pm
In person you mentioned a concept of RAM busy beavers as ones using a tree. This is an elegant model, where instead of two moves, left and right, there are three: left down, right down, and up. Instead of there being an explicit exit condition it can be when you pop up past the top, which is what you start on. Has anyone ever looked at busy beaver numbers on that sort of machine?
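I don’t know of prior work on this model, but it is easy to formalize for experimentation. Here is a sketch under conventions I am inventing (head starts at the root, every node holds 0, and entries are (write, move, next)):

```python
def run_tree_machine(table, max_steps=10**6):
    """Simulate a 'tree Turing machine': the head sits on a node of an
    infinite binary tree of bits, all 0 initially.  table maps
    state -> (entry_on_0, entry_on_1); an entry is (write_bit, move,
    next_state) with move in {'L', 'R', 'U'}: left child, right child,
    or parent.  Moving 'U' from the root halts the machine.  Returns
    the step count on halting, or None if the budget runs out."""
    tape = {}                # node path string like 'LRL' -> bit
    path, state = '', 0      # '' is the root; start in state 0
    for step in range(1, max_steps + 1):
        write, move, nxt = table[state][tape.get(path, 0)]
        tape[path] = write
        state = nxt
        if move == 'U':
            if path == '':
                return step  # popped up past the top: halt
            path = path[:-1]
        else:
            path += move
    return None
```

With that in hand one could define the corresponding busy beaver value as the max over all halting n-state tables, exactly as for the linear tape.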
Comment #155 July 23rd, 2025 at 8:29 am
This is fascinating to me in a strange, niggly way, and I’ve had to go away and come back to this multiple times.
I hope you’ll forgive me if I expand on my thinking here. I don’t promise it will be helpful to anyone, but if there’s an error I’d like to hear it.
After some consideration, then I think what I find most curious is this:
I remember reading a description (almost certainly in a previous article here) that the way BB(5) was determined was by figuring out what each individual class of ‘interesting’ machine was doing. (The interesting ones for this purpose being those which didn’t seem to be inclined to stop however long one simulated them for.)
That is, each one is a program, and that is calculating something.
Once that is known, one can then figure out whether the algorithm will terminate.
Quite how small these programs are fascinates me.
Each state has:
2 options based on read value, each of which has:
– 1 bit write value
– 1 bit move left or right
– new state pointer.
There are 5 states. Plus halt as a target, so 6 total, although we only need a single halt. So, somewhere between 2 and 3 bits of encoding as a state pointer. I will round to 3 for a conservative (over)estimate.
so the total program for a 5-state Turing machine is (under):
5 states * 2 read-vals * (1+1+3) = 50 bits.
(for completeness, a 6-state Turing machine would then be under 60 bits)
I did a quick check and found this value given for the number of 5-state machines in a previous article: 16,679,880,978,201
Which is 0xF2B,9661,6F19 (in hexadecimal, i.e. a 44-bit number)
I think that discrepancy is okay – number of potential machines is not the same as program size. Not exactly.
But also rounding to three bits of pointer is a big jump, and depending on the assumptions you make about halt, that can reduce the number. (And as observed in that article, there are other symmetries which reduce the number of effectively distinct machines further.)
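For what it’s worth, the quoted count can be reproduced exactly under the usual convention that each of the 2n (state, read-bit) slots independently picks either one of 4n ordinary entries (2 write bits x 2 move directions x n successor states) or a single canonical halting entry, for (4n+1)^(2n) machines in total:

```python
def machine_count(n):
    # Each of the 2n (state, read-bit) slots has 4n + 1 choices:
    # 2 write bits x 2 move directions x n successor states, plus one
    # canonical halting entry (write and move are irrelevant on halt).
    return (4 * n + 1) ** (2 * n)

count5 = machine_count(5)    # 16,679,880,978,201, i.e. 21**10
bits5 = count5.bit_length()  # 44, matching the hexadecimal observation
```

So the information content is about 4.39 bits per slot rather than a whole number, which is exactly where the 44-vs-60 gap comes from.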
In any case, I think the article’s overall message is that the additional state of the 6-state machine allows the encoding of a program which can engage in a much deeper level of recursion.
To find a lower bound for BB(6), one doesn’t run the program; that is no longer feasible. Instead, one has to determine what an ‘interesting’ program is doing, and then show that it does actually halt, and how many steps that will take.
Comment #156 July 25th, 2025 at 4:45 pm
Am I the only one who finds these numbers somewhat terrifying? Contemplating them feels like floating in the middle of the ocean, knowing there are miles of water underneath you.
Graham’s number is just an abstraction, a fancy way of writing down an absurd number of multiplications. So what. But BB(6), or BB(7) – it describes the behavior of something you could write down on a post-it note, or make into a child’s toy, or simulate by hand. Somehow that tiny artifact can embark on a computation that will finish someday… but not in a trillion trillion trillion trillion lifetimes of the universe. It gives me chills.
Also, a question, and apologies if it was answered earlier – do we have any idea how to prove independence from ZFC more efficiently than by building machines that enumerate every theorem? Obviously we must not have a very good idea, but is there even a hint at what such a proof would look like?
Comment #157 July 28th, 2025 at 10:38 am
I happened to come across your blog entry “BusyBeaver(6)”, and from some of the comments I saw, I thought your readers (if any notice a comment this far down, written a month later) might be interested in the following way of trying to get a handle on the size of 2^^(2^^(2^^9)). Using base-10 logarithms and a scientific calculator, one can show that 2^^5 = 2.0035… x 10^19728. Therefore, 2^^5 written out as a base-10 numeral has 19729 digits. One can further show that 2^^6 = 10^10^19727.7804… has 10^19727.7804… = 6.03… x 10^19727 digits (at this point, it becomes pointless to add 1 to the power of 10 to get the number of digits), that 2^^7 = 10^10^10^19727.7804… has 10^10^19727.7804… digits (i.e. 2^^6 digits), etc. Continuing these calculations, it is clear (for greater clarity, use mathematical induction along with logarithms and a calculator) that for all larger integers n, the number of digits of 2^^(n+1) is essentially equal to 2^^n. Thus, incrementing the tetration exponent by 1 of a base-2 tetrated representation of an integer M gives an integer whose number of digits is M. Using this method to reach 2^^9, we begin with an integer that has 19729 digits (this is 2^^5), then pass to an integer with 2^^5 digits, then pass to an integer with 2^^6 digits, then pass to an integer with 2^^7 digits, and finally pass to an integer with 2^^8 digits. Let’s now try to get a handle on 2^^(2^^9). Let N @ 10^^n denote an exponential tower of n-many 10’s (evaluated top-down) in which the highest 10 is affixed with exponent N. For example, a googolplex is 10^10^100 = 100 @ 10^^2 = (10^2) @ 10^^2 = 2 @ 10^^3. I found this notation useful in some things I was writing back in 2002 — see “B. TETRATED-SCIENTIFIC NOTATION” at https://groups.google.com/g/sci.math/c/qBFWz3erMA0/m/t3Qsr-NiKdEJ. One can show the following for n > 5: 10^^(n-2) = 1 @ 10^^(n-2) = (10^10) @ 10^^(n-4) >> 19727.7804… @ 10^^(n-4) = 2^^n = 4.295078… @ 10^^(n-3) >> 1 @ 10^^(n-3) = 10^^(n-3).
Thus, an exponentiated tower of n-many 2’s is numerically between an exponentiated tower of (n-2)-many 10’s and an exponentiated tower of (n-3)-many 10’s. Moreover, for large values of n (e.g. n = 10^100, n = 2^^9, etc.), there is no “notational difference” between an exponentiated tower of n-many 2’s and an exponentiated tower of n-many 10’s (i.e. 2^^n = 10^^n when n is large enough that we don’t care about the differences between using n-2 and using n and using n-3), and so to deal with 2^^(2^^9), I’ll deal with the corresponding exponential tower of 10. Since 10^^(2^^9) is an exponentiated tower of (2^^9)-many 10’s, one way to indicate its size is that we begin with a 19729-digit integer, then pass to an integer with the previous integer’s number of digits, then pass to an integer with the previous integer’s number of digits, and so on, continuing this process of passing to an integer with the previous integer’s number of digits (2^^9)-many times. As for reaching 2^^(2^^(2^^9)) — an integer that is, for our purposes, essentially the same as the integer 10^^(2^^(2^^9)) — in the same way, we have to perform this process of passing to an integer with the previous integer’s number of digits 2^^(2^^9)-many times. The essays cited in my mathoverflow post at https://mathoverflow.net/a/74395/15780 may be of interest to some people here.
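The first few of these claims are easy to check directly, since 2^^5 = 2^65536 is still small enough for exact integer arithmetic (a sketch; `tetrate` is my own helper name):

```python
from math import log10

def tetrate(b, n):
    """b^^n: a right-associated power tower of n copies of b."""
    result = 1
    for _ in range(n):
        result = b ** result
    return result

t5 = tetrate(2, 5)            # 2**65536, still holdable as an exact int
digits_t5 = len(str(t5))      # the 19729-digit count claimed above

# The digit count of 2^^6 = 2**t5 is about t5 * log10(2), far too large
# to materialize; but ITS log10 is easy, and matches 19727.7804... above:
log10_of_t6_digits = 65536 * log10(2) + log10(log10(2))
```

From 2^^6 on, one can only chase logarithms of logarithms, which is precisely the “number of digits of the previous number” recursion described in the comment.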
Comment #158 July 30th, 2025 at 1:57 pm
(apologies if this was already asked- I couldn’t find it).
Is BB(6) > ACK(6)?
If not, then surely it is the case that BB(7) > ACK(7)?
Comment #159 July 31st, 2025 at 4:16 pm
everyone knows about the “schrodinger’s cat” superposition in QM.
now discover the “schrodinger’s majorana” superposition in QC, where IBM’s Majorana topological qubit results exist in a state of both repudiated and not repudiated!
you will also learn the difference between a quantum computer and a microwave oven. read all about it:
https://www.theregister.com/2025/07/31/microsoft_quantum_paper_science/?td=rt-3a
Comment #160 July 31st, 2025 at 4:54 pm
correction, the suspect Majorana topological qubit results are from Microsoft, not IBM. (that mistake shows that I am an oldster.)
Comment #161 July 31st, 2025 at 9:20 pm
William Gasarch #158: We don’t know yet whether BB(6) > Ackermann(6), although given the recent advances it seems plausible.
We do now know that BB(7) > Ackermann(7) (and even Ackermann(11), apparently).
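For reference, the function being compared is the two-argument Ackermann–Péter function; the single-argument Ackermann(n) in such comparisons is conventionally taken to be A(n, n). A memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann-Peter function: total, computable, and eventually
    faster-growing than every primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

This sketch is only practical for m up to 3 or so: A(4, 2) = 2^65536 − 3 already has 19729 decimal digits, and A(n, n) for n ≥ 6 is the kind of number the new BB bounds are being measured against.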
Comment #162 August 1st, 2025 at 4:01 am
[…] sixth Busy Beaver number, previously known to be greater than (^{15}10), has had its bounds improved – it’s really extremely […]
Comment #163 August 4th, 2025 at 6:27 am
[…] humongous. This blog post from Bill Gasarch outlines the consequences of the discovery, and this post on Scott Aaronson’s blog has more detail. And while we’re improving lower bounds on things, Gil Kalai has blogged about an improvement […]
Comment #164 August 20th, 2025 at 6:02 pm
This discussion reminds me of Graham’s number, and Knuth’s up-arrow notation for iterated tetration and pentation. To be honest this makes my head hurt.
https://en.wikipedia.org/wiki/Graham%27s_number
https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
Comment #165 September 17th, 2025 at 7:44 am
After some playing around with ChatGPT, the best “how large is this number” thing I came up with was “the number of zeroes in 2 pentated to 5 has a number of zeroes larger than the number of particles in the universe”