Archive for the ‘Announcements’ Category

Google’s Sycamore chip: no wormholes, no superfast classical simulation either

Friday, December 2nd, 2022

Update (Dec. 6): I’m having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I’m learning a huge amount from the talks and discussions here—and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I’ll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. [New: click here for video of my talk!]

But there’s one thing, relevant to this post, that I can’t let pass without comment. Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don’t blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment’s historic significance might strike me as perfectly valid inferences from what I’d read in the press. I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits—a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it’s now clear to me that, between

(1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and

(2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time,

something went terribly wrong—something that risks damaging trust in the scientific process itself. And I think it’s worth reflecting on what we can do to prevent it from happening again.


This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

News, social media, and my inbox have been abuzz with two claims about Google’s Sycamore quantum processor, the one that now has 72 superconducting qubits.

The first claim is that Sycamore created a wormhole (!)—a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit’s blog and Chad Orzel’s blog.

The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means.

On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can’t do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime?

I submit that neither claim is true. On the one hand, Sycamore did not “create a wormhole.” On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can’t do remains much the same as last week or last month!


Let’s start with the wormhole thing. I can’t really improve over how I put it in Dennis Overbye’s NYT piece:

“The most important thing I’d want New York Times readers to understand is this,” Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

More broadly, Overbye’s NYT piece explains with admirable clarity what this experiment did and didn’t do—leaving only the question “wait … if that’s all that’s going on here, then why is it being written up in the NYT??” This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the “QC creates a wormhole” framing.

Alright, but what’s the actual basis for the “QC creates a wormhole” claim, for those who don’t want to leave this blog to read about it? Well, the authors used 9 of Sycamore’s 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modelled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do—i.e., what they’d already classically calculated that it would do.
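
For the curious, here's roughly what "a crude simulation of SYK" means at the level of code. The following is a minimal sketch of my own, in no way the paper's actual construction: the number of Majorana fermions, the coupling normalization, everything here is just an illustrative assumption. The point is only that you build the SYK Hamiltonian out of Jordan-Wigner Majorana operators and then study it by brute force.

```python
# Hypothetical toy SYK Hamiltonian on a handful of qubits (illustration only)
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def majoranas(n_qubits):
    """2*n_qubits Majorana operators via the Jordan-Wigner transformation:
    chi_{2k} = Z...Z X I...I  and  chi_{2k+1} = Z...Z Y I...I."""
    chis = []
    for k in range(n_qubits):
        ops_x = [Z] * k + [X] + [I2] * (n_qubits - k - 1)
        ops_y = [Z] * k + [Y] + [I2] * (n_qubits - k - 1)
        chis.append(reduce(np.kron, ops_x))
        chis.append(reduce(np.kron, ops_y))
    return chis

n_qubits = 4                      # 8 Majoranas -> a 16x16 Hamiltonian
chi = majoranas(n_qubits)
N = len(chi)
rng = np.random.default_rng(0)

# H = sum_{i<j<k<l} J_ijkl chi_i chi_j chi_k chi_l, with Gaussian couplings
# of variance ~6/N^3 (one common normalization; treat it as an assumption)
H = np.zeros((2**n_qubits, 2**n_qubits), dtype=complex)
for i, j, k, l in combinations(range(N), 4):
    J = rng.normal(scale=np.sqrt(6.0 / N**3))
    H += J * (chi[i] @ chi[j] @ chi[k] @ chi[l])

print(np.linalg.eigvalsh(H)[:4])  # exact spectrum, computed in milliseconds
```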

So, the objection is obvious: if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim. Why should the standards be different just because we’re talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9 = 512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There’s some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we’d mostly learn about certain corrections that arise because the simulation was being done with “only” n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is “neither too large nor too small,” would surely be interesting to specialists, they’d have no obvious bearing on the prospects for creating real physical wormholes in our universe.
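
To make the "trivial to simulate" point concrete, here's a toy sketch, my own illustration rather than anything resembling the actual experiment's circuit: a 9-qubit pure state is an array of 512 complex numbers, and pushing it through a hundred random gates takes a laptop a blink of an eye.

```python
# Illustrative statevector simulation: 9 qubits = 2^9 = 512 amplitudes
import numpy as np

n = 9
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                       # start in |000000000>

def apply_1q(state, U, q):
    """Apply the 2x2 unitary U to qubit q of the statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(U, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

rng = np.random.default_rng(42)
for _ in range(100):                 # 100 random single-qubit gates
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    U, _ = np.linalg.qr(M)           # QR of a Gaussian matrix gives a unitary
    state = apply_1q(state, U, rng.integers(n))

print(state.size)                    # 512 complex numbers; that's all
```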

And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he’s explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I’d still prefer to call it a simulated wormhole for simulated people.

For completeness, here’s Harlow’s passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper.

“So I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Dr. Harlow wrote in an email. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would CERTAINLY be off the table.” Developing computers big enough to do so might take 10 or 15 years, he added.


Alright, let’s move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can’t provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you’re in the “anti-concentration regime” (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there’s a classical algorithm to approximately sample the quantum circuit’s output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here’s what’s crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear (for a back-of-the-envelope illustration, see the sketch right after the list below). That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits: a regime where

  1. you can still see explicitly that you’re exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but
  2. you also have a signal that (as it turned out) is large enough to see with heroic effort.
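
Here's the back-of-the-envelope calculation behind that tradeoff, as a sketch. The noise model, with overall fidelity decaying like exp(-ε × number of gates), is the standard heuristic, but the specific error rate and gate counts below are illustrative guesses rather than figures from any actual experiment.

```python
# Why the sampling signal dies if you scale qubits and depth together
# without error correction (all numbers illustrative)
import math

def estimated_signal(n_qubits, depth, eps=0.005):
    """Crude fidelity estimate: ~n*depth/2 two-qubit gates, each
    contributing a factor of (1 - eps) ~ exp(-eps)."""
    n_gates = n_qubits * depth // 2
    return math.exp(-eps * n_gates)

print(estimated_signal(53, 20))      # ~0.07: tiny but detectable signal
print(estimated_signal(500, 200))    # ~1e-109: no number of samples helps
```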

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I’m just worried that others aren’t reading their paper as carefully as they should be!

So then, what’s the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google’s or USTC’s actual Random Circuit Sampling experiments depended on that hope, but it would’ve been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.
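
For readers who've never seen it spelled out, here's a toy illustration of what the Linear Cross-Entropy Benchmark measures. In a real experiment the ideal probabilities come from a painstaking classical simulation of the actual circuit; here I just fake a Porter-Thomas-like distribution, so everything below is an illustrative assumption.

```python
# Toy LXEB: score = 2^n * E[p_ideal(x)] - 1, averaged over observed samples
import numpy as np

rng = np.random.default_rng(7)
n = 12
D = 2**n

# Fake "ideal" output distribution with Porter-Thomas-like statistics
p_ideal = rng.exponential(size=D)
p_ideal /= p_ideal.sum()

def lxeb(samples, p_ideal):
    """~1 for a perfect device sampling p_ideal, ~0 for uniform noise."""
    return len(p_ideal) * p_ideal[samples].mean() - 1.0

perfect = rng.choice(D, size=50_000, p=p_ideal)   # an ideal "device"
noise = rng.integers(D, size=50_000)              # a random-bit generator
print(lxeb(perfect, p_ideal))                     # ~1.0
print(lxeb(noise, p_ideal))                       # ~0.0
```

A spoofer in the sense of Gao et al. aims to produce samples whose score lands strictly between those two extremes, without doing anything like a full simulation of the circuit.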

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait … no … that’s not what I meant to write…


Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments “create their own reality” full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-‘n-supremacy post.

Dismantling Quantum Hype with Tim Nguyen

Tuesday, November 22nd, 2022

Happy Thanksgiving to my American readers! While I enjoy a family holiday-week vacation in exotic Dallas—and yes, I will follow up on my old JFK post by visiting Dealey Plaza—please enjoy the following Thanksgiving victuals:

I recently recorded a 3-hour (!) YouTube video with Timothy Nguyen, host of the Cartesian Cafe. Our episode is entitled Quantum Computing: Dismantling the Hype. In it, I teach a sort of extremely compressed version of my undergraduate Intro to Quantum Information Science course, unburdening myself about whatever Tim prompts me to explain: the basic rules of quantum information, quantum circuits, the quantum black-box model, the Deutsch-Jozsa algorithm, BQP and its relationship to classical complexity classes, and sampling-based quantum supremacy experiments. This is a lot more technical than an average podcast, a lot less technical than an actual course, and hopefully just right for some nonempty subset of readers.

Outside of his podcasting career, some of you might recognize Nguyen as the coauthor, with Theo Polya, of a rebuttal of “Geometric Unity.” This latter is the proposal by the financier, podcaster, and leading “Intellectual Dark Web” figure Eric Weinstein for a unified theory of particle physics. Now, I slightly know Weinstein, and have even found him fascinating, eloquent, and correct about various issues. So, in an addendum to the main video, Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

Enjoy!

WINNERS of the Scott Aaronson Grant for Advanced Precollege STEM Education!

Friday, November 18th, 2022

I’m thrilled to be able to interrupt your regular depressing programming for 100% happy news.

Some readers will remember that, back in September, I announced that an unnamed charitable foundation had asked my advice on how best to donate $250,000 for advanced precollege STEM education. So, just like the previous time I got such a request, from Jaan Tallinn’s Survival and Flourishing Fund, I decided to do a call for proposals on Shtetl-Optimized before passing along my recommendations.

I can now reveal that the generous foundation, this time around, was the Packard Foundation. Indeed, the idea and initial inquiries to me came directly from Dave Orr: the chair of the foundation, grandson of Hewlett-Packard cofounder David Packard, and (so I learned) longtime Shtetl-Optimized reader.

I can also now reveal the results. I was honored to get more than a dozen excellent applications. After carefully considering all of them, I passed along four finalists to the Packard Foundation, which preferred to award the entire allotment to a single program if possible. After more discussion and research, the Foundation then actually decided on two winners:

  • $225,000 for general support to PROMYS: the long-running, world-renowned summer math camp for high-school students, which (among other things) is in the process of launching a new branch in India. While I ended up at Canada/USA Mathcamp (which I supported in my first grant round) rather than PROMYS, I knew all about and admired PROMYS even back when I was the right age to attend it. I’m thrilled to be able to play a small role in its expansion.
  • $30,000 for general support to AddisCoder: the phenomenal program that introduces Ethiopian high-schoolers to programming and algorithms. AddisCoder was founded by UC Berkeley theoretical computer science professor and longtime friend-of-the-blog Jelani Nelson, and also received $30,000 in my first grant round. Jelani and his co-organizers will be pressing ahead with AddisCoder despite political conflict in Ethiopia including a recently-concluded civil war. I’m humbled if I can make even the tiniest difference.

Thanks so much to the Packard Foundation, and to Packard’s talented program officers, directors, and associates—especially Laura Sullivan, Jean Ries, and Prithi Trivedi—for their hard work to make this happen. Thanks so much also to everyone who applied. While I wish we could’ve funded everyone, I’ve learned a lot about programs to which I’d like to steer future support (other prospective benefactors: please email me!!), and to which I’d like to steer kids: my own, once they’re old enough, and other kids of my acquaintance.

I feel good that, in the tiny, underfunded world of accelerated STEM education, the $255,000 that Packard is donating will already make a difference. But of course, $255,000 is only a thousandth of $255 million, which is a thousandth of $255 billion. Perhaps I could earn the latter sort of sums, to donate to STEM education or any other cause, by (for example) starting my own cryptocurrency exchange. I hope my readers will forgive me for not having chosen that route, expected-utility-maximization arguments be damned.

Postdocs, matrix multiplication, and WSJ: yet more shorties

Friday, October 7th, 2022

I’m proud to say that Nick Hunter-Jones and Matteo Ippoliti—both of whom work at the interface between quantum information science and condensed-matter physics (Nick closer to the former and Matteo to the latter)—have joined the physics faculty at UT Austin this year. And Nick, Matteo, and I are jointly seeking postdocs to start in Fall 2023! Please check out our call for applications here. The deadline is December 1; you apply through AcademicJobsOnline rather than by emailing me as in past years.


The big news in AI and complexity theory this week was DeepMind’s AlphaTensor, and its automated discovery of new algorithms for matrix multiplication. (See here for the Nature paper.) More concretely, they’ve used AI to discover (among other things) an algorithm for multiplying 4×4 matrices, over finite fields of characteristic 2, using only 47 scalar multiplications. This beats the 49=7×7 that you’d get from Strassen’s algorithm. There are other improvements for other matrix dimensions, many of which work over fields of other characteristics.

Since I’ve seen confusion about the point on social media: this does not improve over the best known asymptotic exponent for matrix multiplication, which, over any field, still stands at the human-discovered 2.373 (meaning, we know how to multiply two N×N matrices in O(N^2.373) time, but not faster). But it does asymptotically improve over Strassen’s O(N^2.81) algorithm from 1968, conceivably even in a way that could have practical relevance for multiplying hundreds-by-hundreds or thousands-by-thousands matrices over F_2.
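
For concreteness, here's Strassen's classic seven-multiplication scheme, the 2×2 baseline that AlphaTensor's 47-multiplication 4×4 scheme improves on (I haven't transcribed the new scheme here; this sketch just shows what "an algorithm for multiplying matrices with fewer scalar multiplications" means). Applied recursively, 7 multiplications per halving is exactly what gives the O(N^2.81) running time, since log2(7) ≈ 2.81.

```python
# Strassen's 7-multiplication scheme for 2x2 block matrices
import numpy as np

def strassen_2x2(A, B):
    """Multiply 2x2 block matrices using 7 block multiplications (not 8)."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]
    m1 = (a + d) @ (e + h)
    m2 = (c + d) @ e
    m3 = a @ (f - h)
    m4 = d @ (g - e)
    m5 = (a + b) @ h
    m6 = (c - a) @ (e + f)
    m7 = (b - d) @ (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the naive 8-multiplication block product
rng = np.random.default_rng(0)
A = [[rng.normal(size=(8, 8)) for _ in range(2)] for _ in range(2)]
B = [[rng.normal(size=(8, 8)) for _ in range(2)] for _ in range(2)]
C = strassen_2x2(A, B)
C_naive = [[A[i][0] @ B[0][j] + A[i][1] @ B[1][j] for j in range(2)]
           for i in range(2)]
assert all(np.allclose(C[i][j], C_naive[i][j])
           for i in range(2) for j in range(2))
```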

Way back in 2007, I gave a talk at MIT CSAIL’s “Wild and Crazy Ideas Session,” where I explicitly proposed to use computer search to look for faster algorithms for 4×4 and 5×5 matrix multiplication. The response I got at the time was that it was hopeless, since the search space was already too huge. Of course, that was before the deep learning revolution.


This morning, the Wall Street Journal published an article by Karen Hao about competition between China and the US in quantum computing. Unfortunately paywalled, but includes the following passage:

Meanwhile, American academics say it’s gotten harder for Chinese students to obtain visas to conduct quantum research in the U.S. “It’s become common knowledge that when Chinese students or postdocs come to the U.S., they can’t say they’re doing quantum computing,” says Scott Aaronson, director of the Quantum Information Center at the University of Texas, Austin.

Two more shorties

Tuesday, October 4th, 2022

For anyone living under a rock with no access to nerd social media, Alain Aspect, John Clauser, and Anton Zeilinger have finally won the Nobel Prize in Physics, for their celebrated experiments that rubbed everyone’s faces in the reality of quantum entanglement (including Bell inequality violation and quantum teleportation). I don’t personally know Aspect or Clauser, but Zeilinger extremely graciously hosted me and my wife Dana when we visited Vienna in 2012, even bringing us to the symphony (he knows the director and has front-row seats), and somehow making me feel more cultured rather than less.

As usual, the recipe for winning the Nobel Prize in Physics is this:

(1) Do something where anyone who knows about it is like, “why haven’t they given the Nobel Prize in Physics for that yet?”

(2) Live long enough.

Huge congratulations to Aspect, Clauser, and Zeilinger!


Elham Kashefi, my quantum complexity theory colleague and treasured friend for more than 20 years, brought to my attention a Statement of Solidarity with Students in Iran from the International Academic Community. Of course I was happy to sign the statement, just like I was back in 2009 when brave Iranian students similarly risked their lives and freedom for women’s rights and other Enlightenment values against the theocracy. I urge you to sign the statement as well. If enough Shtetl-Optimized readers disapprove of their brutal repression, surely the mullahs will reconsider! More seriously though: if any readers can recommend a charity that’s actually making a difference in helping Iranians participate in the modern world, I’d be happy to do another of my matching donation drives.

Shorties!

Friday, September 30th, 2022

(1) Since I didn’t blog about this before: huge congratulations to David Deutsch, Charles Bennett, Gilles Brassard, and my former MIT colleague Peter Shor, and separately to Dan Spielman, for their well-deserved Breakthrough Prizes! Their contributions are all so epochal, so universally known to all of us in quantum information and theoretical computer science, that there’s little I can write to gild the lily, except to say how much I’ve learned by interacting with all five of them personally. I did enjoy this comment on the Breakthrough Prizes by someone on Twitter: “As long as that loudmouth Scott Aaronson keeps getting ignored, I’ll be happy.”

(2) My former UT colleague Ila Fiete brought to my attention an important scientists’ petition to the White House. The context is that the Biden administration has announced new rules requiring federally-funded research papers to be freely available to the public without delay. This is extremely welcome—in fact, I’ve advocated such a step since I first became aware of the scourge of predatory journals around 2004. But the looming danger is that publishers will just respond by leaning more heavily on the “author pays” model—i.e., hitting up authors or their institutions for thousands of dollars in page fees—and we’ll go from only the credentialed few being able to read papers that aren’t on preprint archives or the like, to only the credentialed few being able to publish them. The petition urges the White House to build, or fund the research community to build, an infrastructure that will make scientific publishing truly open to everyone. I’ve signed it, and I hope you’ll consider signing too.

(3) Bill Gasarch asked me to announce that he, my former MIT colleague Erik Demaine, and Mohammad Hajiaghayi have written a brand-new book entitled Computational Intractability: A Guide to Algorithmic Lower Bounds, and a free draft is available online. It looks excellent, like a Garey & Johnson for the 21st century. Bill and his coauthors are looking for feedback. I was happy to help them by advertising this—after all, it’s not as if Bill’s got his own complexity blog for such things!

(4) Back when Google was still a novelty—maybe 2000 or so—I had my best friend, the now-famous computer security researcher Alex Halderman, over for Rosh Hashanah dinner with my family. Alex and I were talking about how Google evaded the limitations of all the previous decades’ worth of information retrieval systems. One of my relatives, however, misheard “Google” as “kugel” (basically a dense block of noodles held together with egg), and so ended up passing the latter to Alex. “What is this?” Alex asked. Whereupon my uncle deadpanned, “it’s a noodle retrieval system.” Since then, every single Rosh Hashanah dinner, I think about querying the kugel to retrieve the noodles within, and how the desired search result is just the trivial “all of them.”

Win a $250,000 Scott Aaronson Grant for Advanced Precollege STEM Education!

Thursday, September 1st, 2022

Back in January, you might recall, Skype cofounder Jaan Tallinn’s Survival and Flourishing Fund (SFF) was kind enough to earmark $200,000 for me to donate to any charitable organizations of my choice. So I posted a call for proposals on this blog. You “applied” to my “foundation” by simply sending me an email, or leaving a comment on this blog, with a link to your organization’s website and a 1-paragraph explanation of what you wanted the grant for, and then answering any followup questions that I had.

After receiving about 20 awesome proposals in diverse areas, in the end I decided to split the allotment among organizations around the world doing fantastic, badly-needed work in math and science enrichment at the precollege level. These included Canada/USA Mathcamp, AddisCoder, a magnet school in Maine, a math circle in Oregon, a math enrichment program in Ghana, and four others. I chose to focus on advanced precollege STEM education both because I have some actual knowledge and experience there, and because I wanted to make a strong statement about an underfunded cause close to my heart that’s recently suffered unjust attacks.

To quote the immortal Carl Sagan, from shortly before his death:

[C]hildren with special abilities and skills need to be nourished and encouraged. They are a national treasure. Challenging programs for the “gifted” are sometimes decried as “elitism.” Why aren’t intensive practice sessions for varsity football, baseball, and basketball players and interschool competition deemed elitism? After all, only the most gifted athletes participate. There is a self-defeating double standard at work here, nationwide.

Anyway, the thank-you notes from the programs I selected were some of the most gratifying emails I’ve ever received.

But wait, it gets better! After reading about the Scott Aaronson Speculation Grants on this blog, representatives from a large, reputable family foundation contacted me to say that they wanted to be involved too. This foundation, which wishes to remain anonymous at this stage although not to the potential grant recipient, intends to make a single US$250,000 grant in the area of advanced precollege STEM education. They wanted my advice on where their grant should go.

Of course, I could’ve simply picked one of the same wonderful organizations that SFF and I helped in the first round. On reflection, though, I decided that it would be more on the up-and-up to issue a fresh call for proposals.

So: do you run a registered 501(c)(3) nonprofit dedicated to advanced precollege STEM education? If so, email me or leave a comment here by Friday, September 9, telling me a bit about what your organization does and what more it could do with an extra $250K. Include a rough budget, if that will help convince me that you can actually make productive use of that amount, that it won’t just sit in your bank account. Organizations that received a Scott Aaronson Speculation Grant the last time are welcome to reapply; newcomers are also welcome.

I’ll pass up to three finalists along to the funder, which will then make a final decision as to the recipient. The funder will be directly in touch with the potential grantee(s) and will proceed with its intake, review and due diligence process.

We expect to be able to announce a recipient on or around October 24. Can’t wait to see what people come up with!

Busy Beaver Updates: Now Even Busier

Tuesday, August 30th, 2022

Way back in the covid-filled summer of 2020, I wrote a survey article about the ridiculously-rapidly-growing Busy Beaver function. My survey then expanded to nearly twice its original length, with the ideas, observations, and open problems of commenters on this blog. Ever since, I’ve felt a sort of duty to blog later developments in BusyBeaverology as well. It’s like, I’ve built my dam, I’ve built my lodge, I’m here in the pond to stay!

So without further ado:

  • This May, Pavel Kropitz found a machine demonstrating that $$ BB(6) \ge 10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}}}}}}} $$ (15 times)—thereby blasting through his own 2010 record, that BB(6) ≥ 10^36,534. Or, for those tuning in from home: Kropitz constructed a 6-state, 2-symbol, 1-tape Turing machine that runs for at least the above number of steps, when started on an initially blank tape, and then halts (for a toy illustration of this kind of step-counting, see the sketch after this list). The machine was analyzed and verified by Pascal Michel, the modern keeper of Busy Beaver lore. In my 2020 survey, I’d relayed an open problem posed by my then 7-year-old daughter Lily: namely, what’s the first n such that BB(n) exceeds A(n), the nth value of the Ackermann function? All that’s been proven is that this n is at least 5 and at most 18. Kropitz and Michel’s discovery doesn’t settle the question—titanic though it is, the new lower bound on BB(6) is still less than A(6) (!!)—but in light of this result, I now strongly conjecture that the crossover happens at either n=6 or n=7. Huge congratulations to Pavel and Pascal!
  • Tristan Stérin and Damien Woods wrote to tell me about a new collaborative initiative they’ve launched called BB Challenge. With the participation of other leading figures in the neither-large-nor-sprawling Busy Beaver world, Tristan and Damien are aiming, not only to pin down the value of BB(5)—proving or disproving the longstanding conjecture that BB(5)=47,176,870—but to do so in a formally verified way, with none of the old ambiguities about which Turing machines have or haven’t been satisfactorily analyzed. In my survey article, I’d passed along a claim that, of all the 5-state machines, only 25 remained to be analyzed, to understand whether or not they run forever—the “default guess” being that they all do, but that proving it for some of them might require fearsomely difficult number theory. With their more formal and cautious approach, Tristan and Damien still count 1.5 million (!) holdout machines, but they hope to cut down that number extremely rapidly. If you’re feeling yourself successfully nerd-sniped, please join the quest and help them out!
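
For anyone who wants to play along at home, here's a toy version of the basic operation underlying all of this, a sketch of my own rather than anything the BB Challenge actually runs: simulate a Turing machine on an all-zero tape and count steps until it halts (or give up). The transition table below is the classic 2-state busy beaver champion, which halts after exactly 6 steps.

```python
# Toy Turing-machine runner; table is the classic BB(2) champion
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); 'H' means halt
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}

def run(machine, max_steps):
    tape = defaultdict(int)          # blank two-way-infinite tape of 0s
    state, head, steps = 'A', 0, 0
    while state != 'H' and steps < max_steps:
        write, move, state = machine[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps if state == 'H' else None   # None = didn't halt in time

print(run(bb2, 10**6))               # 6 == BB(2)
```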

A low-tech solution

Tuesday, July 19th, 2022

Thanks so much to everyone who offered help and support as this blog’s comment section endured the weirdest, most motivated and sophisticated troll attack in its 17-year history. For a week, a parade of self-assured commenters showed up to demand that I explain and defend my personal hygiene, private thoughts, sexual preferences, and behavior around female students (and, absurdly, to cajole me into taking my family on a specific Disney cruise ship). In many cases, the troll or trolls appropriated the names and email addresses of real academics, imitating them so convincingly that those academics’ closest colleagues told me they were confident it was really them. And when some trolls finally “outed” themselves, I had no way to know whether that was just another chapter in the trolling campaign. It was enough to precipitate an epistemic crisis, where one actively doubts the authenticity of just about every piece of text.

The irony isn’t lost on me that I’ve endured this just as I’m starting my year-long gig at OpenAI, to think, among other things, about the potential avenues for misuse of Large Language Models like GPT-3, and what theoretical computer science could contribute to mitigating them. To say this episode has given me a more vivid understanding of the risks would be an understatement.

But why didn’t I just block and ignore the trolls immediately? Why did I bother engaging?

At least a hundred people asked some variant of this question, and the answer is this. For most of my professional life, this blog has been my forum, where anyone in the world could show up to raise any issue they wanted, as if we were tunic-wearing philosophers in the Athenian agora. I prided myself on my refusal to take the coward’s way out and ignore anything—even, especially, severe personal criticism. I’d witnessed how Jon Stewart, let’s say, would night after night completely eviscerate George W. Bush, his policies and worldview and way of speaking and justifications and lies, and then Bush would just continue the next day, totally oblivious, never deigning to rebut any of it. And it became a core part of my identity that I’d never be like that. If anyone on earth had a narrative of me where I was an arrogant bigot, a clueless idiot, etc., I’d confront that narrative head-on and refute it—or if I couldn’t, I’d reinvent my whole life. What I’d never do is suffer anyone’s monstrous caricature of me to strut around the Internet unchallenged, as if conceding that only my academic prestige or tenure or power, rather than a reasoned rebuttal, could protect me from the harsh truths that the caricature revealed.

Over the years, of course, I carved out some exceptions: P=NP provers and quantum mechanics deniers enraged that I’d dismissed their world-changing insights. Raving antisemites. Their caricatures of me had no legs in any community I cared about. But if an attack carried the implied backing of the whole modern social-justice movement, of thousands of angry grad students on Twitter, of Slate and Salon and New York Times writers and Wikipedia editors and university DEI offices, then the coward’s way out was closed. The monstrous caricature then loomed directly over me; I could either parry his attacks or die.

With this stance, you might say, the astounding part is not that this blog’s “agora” model eventually broke down, but rather that it survived for so long! I started blogging in October 2005. It took until July 2022 for me to endure a full-scale “social/emotional denial of service attack” (not counting the comment-171 affair). Now that I have, though, it’s obvious even to me that the old way is no longer tenable.

So what’s the solution? Some of you liked the idea of requiring registration with real email addresses—but alas, when I tried to implement that, I found that WordPress’s registration system is a mess and I couldn’t see how to make it work. Others liked the idea of moving to Substack, but others actively hated it, and in any case, even if I moved, I’d still have to figure out a comment policy! Still others liked the idea of an army of volunteer moderators. At least ten people volunteered themselves.

On reflection, the following strikes me as most directly addressing the actual problem. I’m hereby establishing the Shtetl-Optimized Committee of Guardians, or SOCG (same acronym as the computational geometry conference 🙂 ). If you’re interested in joining, shoot me an email, or leave a comment on this post with your (real!) email address. I’ll accept members only if I know them in real life, personally or by reputation, or if they have an honorable history on this blog.

For now, the SOCG’s only job is this: whenever I get a comment that gives me a feeling of unease—because, e.g., it seems trollish or nasty or insincere, it asks a too-personal question, or it challenges me to rebut a hostile caricature of myself—I’ll email the comment to the SOCG and ask what to do. I commit to respecting the verdict of those SOCG members who respond, whenever a clear verdict exists. The verdict could be, e.g., “this seems fine,” “if you won’t be able to resist responding then don’t let this appear,” or “email the commenter first to confirm their identity.” And if I simply need reassurance that the commenter’s view of me is false, I’ll seek it from the SOCG before I seek it from the whole world.

Here’s what SOCG members can expect in return: I continue pouring my heart into this subscription-free, ad-free blog, and I credit you for making it possible—publicly if you’re comfortable with your name being listed, privately if not. I buy you a fancy lunch or dinner if we’re ever in the same town.

Eventually, we might move to a model where the SOCG members can log in to WordPress and directly moderate comments themselves. But let’s try it this way first and see if it works.

Choosing a new comment policy

Tuesday, July 12th, 2022

Update (July 13): I was honored to read this post by my friend Boaz Barak.

Update (July 14): By now, comments on this post allegedly from four CS professors — namely, Josh Alman, Aloni Cohen, Rana Hanocka, and Anna Farzindar — as well as from the graduate student “BA,” have been unmasked as from impersonator(s).

I’ve been the target of a motivated attack-troll (or multiple trolls, but I now believe just one) who knows about the CS community. This might be the single weirdest thing that’s happened to me in 17 years of blogging, surpassing even the legendary Ricoh printer episode of 2007. It obviously underscores the need for a new, stricter comment policy, which is what this whole post was about.


Yesterday and today, both my work and my enjoyment of the James Webb images were interrupted by an anonymous troll, who used the Shtetl-Optimized comment section to heap libelous abuse on me—derailing an anodyne quantum computing discussion to opine at length about how I’m a disgusting creep who surely, probably, maybe has lewd thoughts about his female students. Unwisely or not, I allowed it all to appear, and replied to all of it. I had a few reasons: I wanted to prove that I’m now strong enough to withstand bullying that might once have driven me to suicide. I wanted, frankly, many readers to come to my defense (thanks to those who did!). I at least wanted readers to see firsthand what I now regularly deal with: the emotional price of maintaining this blog. Most of all, I wanted my feminist, social-justice-supporting readers to either explicitly endorse or (hopefully) explicitly repudiate the unambiguous harassment that was now being gleefully committed in their name.

Then, though, the same commenter upped the ante further, by heaping misogynistic abuse on my wife Dana—while still, ludicrously and incongruously, cloaking themselves in the rhetoric of social justice. Yes: apparently the woke, feminist thing to do is now to rate female computer scientists on their looks.

Let me be blunt: I cannot continue to write Shtetl-Optimized while dealing with regular harassment of me and my family. At the same time, I’m also determined not to “surrender to the terrorists.” So, I’m weighing the following options:

  • Close comments except to commenters who provide a real identity—e.g., a full real name, a matching email address, a website.
  • Move to Substack, and then allow only commenters who’ve signed up.
  • Hire someone to pre-screen comments for me, and delete ones that are abusive or harassing (to me or others) before I even see them. (Any volunteers??)
  • Make the comment sections for readers only, eliminating any expectation that I’ll participate.

One thing that’s clear is that the status quo will not continue. I can’t “just delete” harassing or abusive comments, because the trolls have gotten too good at triggering me, and they will continue to weaponize my openness and my ethic of responding to all possible arguments against me.

So, regular readers: what do you prefer?