Archive for the ‘Rage Against Doofosity’ Category

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.

Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
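Both points can be made concrete in a few lines of code. Below is a hedged, toy sketch (not anyone’s production simulator): it stores all 2^n amplitudes of an n-qubit state explicitly on a classical computer, which is exactly why the classical simulation is possible but exponentially costly, and then draws the single random outcome that a measurement actually returns, with probabilities given by the Born rule.

```python
import random

def hadamard_all(n):
    """State after applying a Hadamard to each of n qubits, starting from |0...0>.
    We store all 2^n amplitudes explicitly -- exponential memory, but classical."""
    dim = 2 ** n
    amp = (1.0 / dim) ** 0.5          # each amplitude is 1/sqrt(2^n)
    return [complex(amp, 0.0)] * dim

def born_sample(state, rng=random):
    """Measure the state: you see ONE random outcome, with probability
    |amplitude|^2 -- not all 2^n 'parallel paths' at once."""
    probs = [abs(a) ** 2 for a in state]
    r, acc = rng.random(), 0.0
    for outcome, p in enumerate(probs):
        acc += p
        if r < acc:
            return outcome
    return len(state) - 1             # guard against float rounding

state = hadamard_all(3)               # 2^3 = 8 amplitudes, stored classically
print(born_sample(state))             # a single random outcome in 0..7
```

The exponential blowup in `hadamard_all` is the whole story of why quantum computers might win on efficiency, while `born_sample` is the whole story of why they’re so much weaker than “trying every path at once.”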

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)

There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, nor is there a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
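To spell out the first howler: even the most naive factoring algorithm anyone teaches, trial division, runs in ~√N steps, nowhere near e^N, and the “input size” that matters in cryptography is the number of bits of N, roughly log₂N. A minimal sketch (the value of N is just an illustrative example):

```python
import math

def trial_division(N):
    """Return the smallest nontrivial factor of N, or N itself if N is prime.
    Runs in O(sqrt(N)) steps -- slow, but vastly faster than e^N."""
    d = 2
    while d * d <= N:        # at most ~sqrt(N) iterations
        if N % d == 0:
            return d
        d += 1
    return N

N = 8051                     # = 83 * 97, a made-up toy example
print(trial_division(N))     # 83
print(math.isqrt(N))         # ~90: an upper bound on the divisions needed
print(N.bit_length())        # 13: the input's actual "size" is ~log2(N) bits
```

The serious classical algorithms (the Number Field Sieve and friends) do far better still, which is why any honest discussion of Shor’s algorithm has to compare against them, not against a strawman.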

Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.
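For what it’s worth, “looking at an actual quantum algorithm” isn’t even hard to do in miniature. Here’s a hedged, self-contained classical simulation of Grover’s algorithm (the marked index is an arbitrary made-up target; the oracle is just a phase flip on it), which makes the quadratic query count concrete:

```python
import math

def grover(n, marked):
    """Simulate Grover search over N = 2^n items; return (oracle calls, final state)."""
    N = 2 ** n
    state = [1.0 / math.sqrt(N)] * N              # uniform superposition
    iterations = round((math.pi / 4) * math.sqrt(N))
    for _ in range(iterations):
        state[marked] *= -1.0                     # oracle: flip marked item's sign
        mean = sum(state) / N                     # diffusion operator:
        state = [2.0 * mean - a for a in state]   #   reflect every amplitude about the mean
    return iterations, state

calls, state = grover(10, marked=537)
print(calls)                          # ~ (pi/4)*sqrt(1024) = 25 oracle calls
print(abs(state[537]) ** 2)           # success probability close to 1
```

Twenty-five oracle calls to find one item among 1024, versus ~512 classically on average: that’s the kind of concrete, checkable statement that separates plausible speculation from wishful thinking.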

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represents a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is, at least the English major knows that they’re supposed to call an expert or two, when writing about an enormously complicated subject of which they’re completely ignorant.

Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

Brief Update on Texan Tenure

Sunday, May 7th, 2023

Update (May 8): Some tentative good news! It looks like there’s now a compromise bill in the House that would preserve tenure, insisting only on the sort of post-tenure review that UT (like most universities) already has.

Update (May 9): Alas, it looks like the revised bill is not much better. See this thread from Keith Whittington of the Academic Freedom Alliance.

I blogged a few weeks ago about SB 18, a bill that would end tenure at Texas public universities, including UT Austin and Texas A&M. The bad news is that SB 18 passed the Texas Senate. The good news is that I’m told—I don’t know how reliably—that it has little chance of passing the House.

But it’s going to be discussed in the House tomorrow. Any Texas residents reading this can, and are strongly urged to, submit brief comments here. Please note that the deadline is tomorrow (Monday) morning.

I just submitted the comment below. Obviously, among the arguments that I genuinely believe, I made only those that I expect might have some purchase on a Texas Republican.

I’m a professor of computer science at UT Austin, specializing in quantum computing.  I am however writing this statement strictly in my capacity as a private citizen and Texas resident, not in my professional capacity.

Like the supporters of SB 18, I too see leftist ideological indoctrination on college campuses as a serious problem.  It’s something that I and many other moderates and classical liberals in academia have been pushing back on for years.

But my purpose in this comment is to explain why eliminating tenure at UT Austin and Texas A&M is NOT the solution — indeed, it would be the equivalent of treating a tumor by murdering the patient.

I’ve seen firsthand how already, just the *threat* that SB 18 might pass has seriously hampered our ability to recruit the best scientists and engineers to become faculty at UT Austin.  If this bill were actually to pass, I expect that the impact on our recruiting would be total and catastrophic.  It would effectively mean the end of UT Austin as one of the top public universities in the country.  Hundreds of scientists who were lured to Texas by UT’s excellence, including me and my wife, would start looking for jobs elsewhere — even those whose own tenure was “grandfathered in.”  They’d leave en masse for California and Massachusetts and anywhere else they could continue the lives they’d planned.

The reality is this: the sorts of scientists and engineers we’re talking about could typically make vastly higher incomes, in the high six figures or even seven figures, by working in private industry or forming their own startups.  Yet they choose to accept much lower salaries to spend their careers in academia.  Why?  Because of the promise of a certain way of life: one where they can speak freely as scholars and individuals without worrying about how it will affect their employment.  Tenure is a central part of that promise.  Remove it, and the value proposition collapses.

In some sense, the state of Texas (like nearly every other state) actually gets a bargain through tenure.  It couldn’t possibly afford to retain top-caliber scientists and engineers — working on medical breakthroughs, revolutionary advances in AI, and all the other stuff — if it DIDN’T offer tenure.

For this reason, I hope that even conservatives in the Texas House will see that we have a common interest here, in ensuring SB 18 never even makes it out of committee — for the sake of the future of innovation in Texas.  I’m open to other possible responses to the problem of political indoctrination on campus.

Will UT Austin and Texas A&M survive beyond this week?

Monday, April 17th, 2023

Update (April 20): Alas, the Texas Senate has approved SB 18. The survival of higher education in Texas now hinges on this bill not being taken up or passed in the House, or not being enforced as written (e.g., because UT’s existing post-tenure review system is judged to satisfy it).

This week, the Texas Senate will take up SB 18, a bill to ban the granting of tenure at all public universities in Texas, including UT Austin and Texas A&M. (Those of us who have tenure would retain it, for what little that’s worth.)

[Update: I’ve learned that, even if this bill passes the Senate, there’s a good chance that it will get watered down or die in the House, or found to be satisfied by UT’s existing system of post-tenure review. That’s the only reason why people in the know aren’t panicking even more than they are.]

I find it hard to imagine that SB 18 will actually pass both houses and be enforced as written, simply because it’s obvious that if it did, it would be the end of UT Austin and Texas A&M as leading research universities. More precisely, it would be the immediate end of our ability to recruit competitively, and the slightly slower end of our competitiveness, period, as faculty with options moved elsewhere. This is so because of the economics of faculty hiring. Particularly in STEM fields like computer science, those who become professors typically forgo vastly higher salaries in industry, not to mention equity in startup companies and so on. Why would we do such a nutty thing? Because we like a certain lifestyle. We’re willing to move several economic strata downward in return for jobs where (in principle) no one can fire us without cause, or tell us what we’re allowed to say or publish. The evidence from industry labs (Google, Facebook, Microsoft, etc.) suggests that, in competitive fields, for Texas to attract and retain top faculty without tenure would require paying them hundreds of thousands more per year. In that sense, tenure is a bargain for universities and the state. Of course the situation is a bit different for art history and English literature, but in any case SB 18 makes no distinction between fields.

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities, and SB 16, which would require the firing of any professor if they “compel or attempt to compel a student … to adopt a belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief.” (The language here seems sloppy to me: is liberal democracy “inherently superior” to Nazism? Would teaching students about the horrors of Nazism count as “attempting to compel them” to accept this superiority?)

Taken together, it’s clear that the goal is to hit back hard against “wokeness” in academia, and thereby satisfy the Republican base.

Here’s the thing: there really is an illiberal ideology that’s taken over parts of academia (not all of it)—an ideology that Tim Urban, in his wonderful recent book What’s Our Problem?, usefully terms “Social Justice Fundamentalism” or SJF, to distinguish it sharply from “Liberal Social Justice,” the ideology of (for example) the Civil Rights movement. Now, I’m on record as not a fan of the SJF ideology, to put it mildly, and the SJF ideology is on record as not a fan of me. In 2015, I was infamously dragged through the mud of Salon, The New Republic, Raw Story, and many other magazines and websites for a single blog comment criticizing a form of feminism that had contributed to making my life miserable, even while I proudly called myself a liberal feminist (and still do). More recently, wokesters have written to my department chair trying to get me disciplined or fired, for everything from my use of the now-verboten term “quantum supremacy,” to a reference to female breasts in a poem I wrote as a student that was still on my homepage. (These attempts thankfully went nowhere. Notwithstanding what you read, sanity retains many strongholds in academia.)

Anyway, despite all of this, the Texas Republicans have somehow succeeded in making me more afraid of them, purely on the level of professional survival, than I’ve ever been of the Social Justice Fundamentalists. In effect, the Republicans propose to solve the “problem of wokeness” by simply dropping thermonuclear weapons on all Texas public universities, thereby taking out me and my colleagues as collateral damage—regardless of our own views on wokeness or anything else, and regardless of what we’re doing for Texas’ scientific competitiveness.

I don’t expect that most of my readers, in or out of Texas, will need to be persuaded about any of this—nor am I expecting to change many minds on the other side. Mostly, I’m writing this post in the hope that some well-connected moderates here in Austin will link to it, and the post might thereby play a tiny role in helping Texas’ first-rate public universities live one more day. (And to any such moderates: yes, I’m happy to meet in person with you or your colleagues, if that would help!) Some posts are here on this blog for no better reason than, y’know, moral obligation.

Xavier Waintal responds (tl;dr Grover is still quadratically faster)

Thursday, March 23rd, 2023

This morning Xavier Waintal, coauthor of the new arXiv preprint “””refuting””” Grover’s algorithm, which I dismantled here yesterday, emailed me a two-paragraph response. He remarked that the “classy” thing for me to do would be to post the response on my blog, but: “I would totally understand if you did not want to be contradicted in your own zone of influence.”

Here is Waintal’s response, exactly as sent to me:

The elephant in the (quantum computing) room: opening the Pandora box of the quantum oracle

One of the problem we face in the field of quantum computing is a vast diversity of cultures between, say, complexity theorists on one hand and physicists on the other hand. The former define mathematical objects and consider any mathematical problem as legitimate. The hypothesis are never questioned, by definition. Physicists on the other hand spend their life questioning the hypothesis, wondering if they do apply to the real world. This dichotomy is particularly acute in the context of the emblematic Grover search algorithm, one of the cornerstone of quantum computing. Grover’s algorithm uses the concept of “oracle”, a black box function that one can call, but of which one is forbidden to see the source code. There are well known complexity theorems that show that in this context a quantum computer can solve the “search problem” faster than a classical computer.

But because we closed our eyes and decided not to look at the source code does not mean it does not exist. In, Miles Stoudenmire and I deconstruct the concept of oracle and show that as soon as we give the same input to both quantum and classical computers (the quantum circuit used to program the oracle on the actual quantum hardware) then the *generic* quantum advantage disappears. The charge of the proof is reversed: one must prove certain properties of the quantum circuit in order to possibly have a theoretical quantum advantage. More importantly – for the physicist that I am – our classical algorithm is very fast and we show that we can solve large instances of any search problem. This means that even for problems where *asymptotically* quantum computers are faster than classical ones, the crossing point where they become so is for astronomically large computing time, in the tens of millions of years. Who is willing to wait that long for the answer to a single question, even if the answer is 42?

The above explicitly confirms something that I realized immediately on reading the preprint, and that fully explains the tone of my response. Namely, Stoudenmire and Waintal’s beef isn’t merely with Grover’s algorithm, or even with the black-box model; it’s with the entire field of complexity theory. If they were right that complexity theorists never “questioned hypotheses” or wondered what did or didn’t apply to the real world, then complexity theory shouldn’t exist in CS departments at all—at most it should exist in pure math departments.

But a converse claim is also true. Namely, suppose it turned out that complexity theorists had already fully understood, for decades, all the elementary points Stoudenmire and Waintal were making about oracles versus explicit circuits. Suppose complexity theorists hadn’t actually been confused, at all, about under what sorts of circumstances the square-root speedup of Grover’s algorithm was (1) provable, (2) plausible but unproven, or (3) nonexistent. Suppose they’d also been intimately familiar with the phenomenon of asymptotically faster algorithms that get swamped in practice by unfavorable constants, and with the overhead of quantum error-correction. Suppose, indeed, that complexity theorists hadn’t merely understood all this stuff, but expressed it clearly and accurately where Stoudenmire and Waintal’s presentation was garbled and mixed with absurdities (e.g., the Grover problem “being classically solvable with a linear number of queries,” the Grover speedup not being “generic,” their being able to “solve large instances of any search problem” … does that include, for example, CircuitSAT? do they still not get the point about CircuitSAT?). Then Stoudenmire and Waintal’s whole objection would collapse.

Anyway, we don’t have to suppose! In the SciRate discussion of the preprint, a commenter named Bibek Pokharel helpfully digs up some undergraduate lecture notes from 2017 that are perfectly clear about what Stoudenmire and Waintal treat as revelations (though one could even go 20+ years earlier). The notes are focused here on Simon’s algorithm, but the discussion generalizes to any quantum black-box algorithm, including Grover’s:

The difficulty in claiming that we’re getting a quantum speedup [via Simon’s algorithm] is that, once we pin down the details of how we’re computing [the oracle function] f—so, for example, the actual matrix A [such that f(x)=Ax]—we then need to compare against classical algorithms that know those details as well. And as soon as we reveal the innards of the black box, the odds of an efficient classical solution become much higher! So for example, if we knew the matrix A, then we could solve Simon’s problem in classical polynomial time just by calculating A’s nullspace. More generally, it’s not clear whether anyone to this day has found a straightforward “application” of Simon’s algorithm, in the sense of a class of efficiently computable functions f that satisfy the Simon promise, and for which any classical algorithm plausibly needs exponential time to solve Simon’s problem, even if the algorithm is given the code for f.
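The nullspace computation the notes allude to really is that easy. Here’s a hedged toy sketch (the matrix A and secret s below are made up for illustration): given the rows of A over GF(2), Gaussian elimination recovers the Simon secret s in polynomial time, no oracle queries required.

```python
def gf2_nullspace_vector(rows, n):
    """Return a nonzero n-bit vector x (as an int bitmask) with
    parity(row & x) = 0 for every row, i.e. Ax = 0 over GF(2).
    Assumes rank(A) < n, as in Simon's problem, so such an x exists."""
    basis = {}                               # leading-bit position -> reduced row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]             # eliminate the leading bit
            else:
                basis[lead] = r              # new pivot row
                break
    free = [c for c in range(n) if c not in basis]
    x = 1 << free[0]                         # set one free variable to 1
    for lead in sorted(basis):               # back-substitute, low columns first
        low = basis[lead] & ((1 << lead) - 1)
        if bin(low & x).count("1") % 2:      # choose x_lead to make parity 0
            x |= 1 << lead
    return x

# Toy instance: rows of A as bitmasks over 3 variables. This A's nullspace
# is {000, 101}, so the hidden Simon secret is s = 0b101.
A = [0b101, 0b010]
print(bin(gf2_nullspace_vector(A, 3)))       # 0b101, recovered classically
```

The point, of course, is not that this code is clever, but that once f’s innards are visible, “exponential quantum advantage” claims have to survive comparisons like this one.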

In the same lecture notes, one can find the following discussion of Grover’s algorithm, and how its unconditional square-root speedup becomes conditional (albeit, still extremely plausible in many cases) as soon as the black box is instantiated by an explicit circuit:

For an NP-complete problem like CircuitSAT, we can be pretty confident that the Grover speedup is real, because no one has found any classical algorithm that’s even slightly better than brute force. On the other hand, for more “structured” NP-complete problems, we do know exponential-time algorithms that are faster than brute force. For example, 3SAT is solvable classically in about O(1.3^n) time. So then, the question becomes a subtle one of whether Grover’s algorithm can be combined with the best classical tricks that we know to achieve a polynomial speedup even compared to a classical algorithm that uses the same tricks. For many NP-complete problems the answer seems to be yes, but it need not be yes for all of them.

The notes in question were written by some random complexity theorist named Scot Aronsen (sp?). But if you don’t want to take it from that guy, then take it from (for example) the Google quantum chemist Ryan Babbush, again on the SciRate page:

It is well understood that applying Grover’s algorithm to 3-SAT in the standard way would not give a quadratic speedup over the best classical algorithm for 3-SAT in the worst case (and especially not on average). But there are problems for which Grover is expected to give a quadratic speedup over any classical algorithm in the worst case. For example, the problem “Circuit SAT” starts by me handing you a specification of a poly-size classical circuit with AND/OR/NOT gates, so it’s all explicit. Then you need to solve SAT on this circuit. Classically we strongly believe it will take time 2^n (this is even the basis of many conjectures in complexity theory, like the exponential time hypothesis), and quantumly we know it can be done with 2^{n/2}*poly(n) quantum gates using Grover and the explicitly given classical circuit. So while I think there are some very nice insights in this paper, the statement in the title “Grover’s Algorithm Offers No Quantum Advantage” seems untrue in a general theoretical sense. Of course, this is putting aside issues with the overheads of error-correction for quadratic speedups (a well understood practical matter that is resolved by going to large problem sizes that wouldn’t be available to the first fault-tolerant quantum computers). What am I missing?

More generally, over the past few days, as far as I can tell, every actual expert in quantum algorithms who’s looked at Stoudenmire and Waintal’s preprint—every one, whether complexity theorist or physicist or chemist—has reached essentially the same conclusions about it that I did. The one big difference is that many of the experts, who are undoubtedly better people than I am, extended a level of charity to Stoudenmire and Waintal (“well, this of course seems untrue, but here’s what it could have meant”) that Stoudenmire and Waintal themselves very conspicuously failed to extend to complexity theory.

Of course Grover’s algorithm offers a quantum advantage!

Wednesday, March 22nd, 2023

Unrelated Update: Huge congratulations to Ethernet inventor Bob Metcalfe, for winning UT Austin’s third Turing Award after Dijkstra and Emerson!

And also to mathematician Luis Caffarelli, for winning UT Austin’s third Abel Prize!

I was really, really hoping that I’d be able to avoid blogging about this new arXiv preprint, by E. M. Stoudenmire and Xavier Waintal:

Grover’s Algorithm Offers No Quantum Advantage

Grover’s algorithm is one of the primary algorithms offered as evidence that quantum computers can provide an advantage over classical computers. It involves an “oracle” (external quantum subroutine) which must be specified for a given application and whose internal structure is not part of the formal scaling of the quantum speedup guaranteed by the algorithm. Grover’s algorithm also requires exponentially many steps to succeed, raising the question of its implementation on near-term, non-error-corrected hardware and indeed even on error-corrected quantum computers. In this work, we construct a quantum inspired algorithm, executable on a classical computer, that performs Grover’s task in a linear number of call to the oracle – an exponentially smaller number than Grover’s algorithm – and demonstrate this algorithm explicitly for boolean satisfiability problems (3-SAT). Our finding implies that there is no a priori theoretical quantum speedup associated with Grover’s algorithm. We critically examine the possibility of a practical speedup, a possibility that depends on the nature of the quantum circuit associated with the oracle. We argue that the unfavorable scaling of the success probability of Grover’s algorithm, which in the presence of noise decays as the exponential of the exponential of the number of qubits, makes a practical speedup unrealistic even under extremely optimistic assumptions on both hardware quality and availability.

Alas, inquiries from journalists soon made it clear that silence on my part wasn’t an option.

So, desperately seeking an escape, this morning I asked GPT-4 to read the preprint and comment on it just like I would. Sadly, it turns out the technology isn’t quite ready to replace me at this blogging task. I suppose I should feel good: in every such instance, either I’m vindicated in all my recent screaming here about generative AI—what the naysayers call “glorified autocomplete”—being on the brink of remaking civilization, or else I still, for another few months at least, have a role to play on the Internet.

So, on to the preprint, as reviewed by the human Scott Aaronson. Yeah, it’s basically a tissue of confusions, a mishmash of the well-known and the mistaken. As they say, both novel and correct, but not in the same places.

The paper’s most eye-popping claim is that the Grover search problem—namely, finding an n-bit string x such that f(x)=1, given oracle access to a Boolean function f:{0,1}^n→{0,1}—is solvable classically, using a number of calls that’s only linear in n, or in many cases only constant (!!). Since this claim contradicts a well-known, easily provable lower bound—namely, that Ω(2^n) oracle calls are needed for classical brute-force searching—the authors must be using words in nonstandard ways, leaving only the question of how.

It turns out that, for their “quantum-inspired classical algorithm,” the authors assume you’re given, not merely an oracle for f, but the actual circuit to compute f. They then use that circuit in a non-oracular way to extract the marked item. In which case, I’d prefer to say that they’ve actually solved the Grover problem with zero queries—simply because they’ve entirely left the black-box setting where Grover’s algorithm is normally formulated!

What could possibly justify such a move? Well, the authors argue that sometimes one can use the actual circuit to do better classically than Grover’s algorithm would do quantumly, and therefore, they’ve shown that the Grover speedup is not “generic,” as the quantum algorithms people always say it is.

But this is pure wordplay around the meaning of “generic.” When we say that Grover’s algorithm achieves a “generic” square-root speedup, what we mean is that it solves the generic black-box search problem in O(2^(n/2)) queries, whereas any classical algorithm for that generic problem requires Ω(2^n) queries. We don’t mean that for every f, Grover achieves a quadratic speedup for searching that f, compared to the best classical algorithm that could be tailored to that f. Of course we don’t; that would be trivially false!
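The O(2^(n/2)) scaling is concrete enough to check numerically: each Grover iteration phase-flips the marked item’s amplitude, then reflects all amplitudes about their mean, and after roughly (π/4)·√N iterations the success probability approaches 1. A toy statevector simulation (classical, so itself exponential in n; written by me as an illustration, not taken from the preprint):

```python
import math

def grover_success_prob(n, marked, iterations):
    """Simulate Grover's algorithm on an n-qubit statevector of
    size 2**n; return the probability of measuring `marked`."""
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N          # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]        # oracle: phase-flip the marked item
        mean = sum(amp) / N               # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp[marked] ** 2

n = 8                                     # N = 256
best = round(math.pi / 4 * math.sqrt(2 ** n))
print(best, grover_success_prob(n, 3, best))  # ~13 iterations, probability near 1
```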

Remarkably, later in the paper, the authors seem to realize that they haven’t delivered the knockout blow against Grover’s algorithm that they’d hoped for, because they then turn around and argue that, well, even for those f’s where Grover does provide a quadratic speedup over the best (or best-known) classical algorithm, noise and decoherence could negate the advantage in practice, and solving that problem would require a fault-tolerant quantum computer, but fault-tolerance could require an enormous overhead, pushing a practical Grover speedup far into the future.

The response here boils down to “no duh.” Yes, if Grover’s algorithm can yield any practical advantage in the medium term, it will either be because we’ve discovered much cheaper ways to do quantum fault-tolerance, or else because we’ve discovered “NISQy” ways to exploit the Grover speedup, which avoid the need for full fault-tolerance—for example, via quantum annealing. The prospects are actually better for a medium-term advantage from Shor’s factoring algorithm, because of its exponential speedup. Hopefully everyone in quantum computing theory has realized all this for a long time.

Anyway, as you can see, by this point we’ve already conceded the principle of Grover’s algorithm, and are just haggling over the practicalities! Which brings us back to the authors’ original claim to have a principled argument against the Grover speedup, which (as I said) rests on a confusion over words.

Some people dread the day when GPT will replace them. In my case, for this task, I can’t wait.

Thanks to students Yuxuan Zhang (UT) and Alex Meiburg (UCSB) for discussions of the Stoudenmire-Waintal preprint that informed this post. Of course, I take sole blame for anything anyone dislikes about the post!

For a much more technical response—one that explains how this preprint’s detailed attempt to simulate Grover classically fails, rather than merely proving that it must fail—check out this comment by Alex Meiburg.

Visas for Chinese students: US shoots itself in the foot again

Tuesday, February 14th, 2023

Coming out of blog-hiatus for some important stuff, today, tomorrow, and the rest of the week.

Something distressing happened to me yesterday for the first time, but I fear not the last. We (UT Austin) admitted a PhD student from China who I know to be excellent, and who wanted to work with me. That student, alas, has had to decline our offer of admission, because he’s been denied a US visa under Section 212(a)(3)(A)(i), which “prohibits the issuance of a visa to anyone who seeks to enter the United States to violate or evade any law prohibiting the export from the United States of goods, technology, or sensitive information.” Quantum computing, you see, is now a “prohibited technology.”

This visa denial is actually one that the American embassy in Beijing only just now got around to issuing, from when the student applied for a US visa a year and a half ago, to come visit me for a semester as an undergrad. For context, the last time I had an undergrad from China visit me for a semester, back in 2016, the undergrad’s name was Lijie Chen. Lijie just finished his PhD at MIT under Ryan Williams and is now one of the superstars of theoretical computer science. Anyway, in Fall 2021 I got an inquiry from a Chinese student who bowled me over the same way Lijie had, so I invited him to spend a semester with my group in Austin. This time, alas, the student never heard back when he applied for a visa, and was therefore unable to come. He ended up doing an excellent research project with me anyway, working remotely from China, checking in by Zoom, and even participating in our research group meetings (which were on Zoom anyway because of the pandemic).

Anyway, for reasons too complicated to explain, this previous denial means that the student would almost certainly be denied for a new visa to come to the US to do a PhD in quantum computing. (Unless some immigration lawyer reading this can suggest a way out!) The student is not sure what he’s going to do next, but it might involve staying in China, or applying in Europe, or applying in the US again after a year but without mentioning the word “quantum.”

It should go without saying, to anyone reading this, that the student wants to do basic research in quantum complexity theory that’s extraordinarily unlikely to have any direct military or economic importance … just like my own research! 🙂 And it should also go without saying that, if the US really wanted to strike a blow against authoritarianism in Beijing, then it could hardly do better than to hand out visas to every Chinese STEM student and researcher who wanted to come here. Yes, some would return to China with their new skills, but a great many would choose to stay in the US … if we let them.

And I’ve pointed all this out to a Republican Congressman, and to people in the military and intelligence agencies, when they asked me “what else the US can do to win the quantum computing race against China?” And I’ll continue to say it to anyone who asks. The Congressman, incidentally, even said that he privately agreed with me, but that the issue was politically difficult. I wonder: is there anyone in power in the US, in either party, who doesn’t privately agree that opening the gates to China’s STEM talent would be a win/win proposition for the US … including for the US’s national security? If so, who are these people? Is this just a naked-emperor situation, where everyone in Congress fears to raise the issue because they fear backlash from someone else, but the someone else is actually thinking the same way?

And to any American who says, “yeah, but China totally deserves it, because of that spy balloon, and their threats against Taiwan, and all the spying they do with TikTok”—I mean, like, imagine if someone tried to get back at the US government for the Iraq War or for CIA psyops or whatever else by punishing you, by curtailing your academic dreams. It would make exactly as much sense.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield a speedup for any problem whatsoever, besides (as it were) the problem of reproducing its own pattern of errors. One possible recent exception is work by Sami Boulebnane and Ashley Montanaro.
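To underscore the “Shor” side of that inequality: Shor’s exponential speedup comes entirely from quantum period-finding; the rest is an easy classical reduction. Here’s a toy sketch of that reduction, with the order-finding step brute-forced—exactly the step a quantum computer does fast (code is mine, for illustration):

```python
from math import gcd

def factor_via_order_finding(N, a):
    """Shor's classical reduction: given a coprime to N, find the
    multiplicative order r of a mod N, then read off factors of N
    from gcd(a^(r/2) ± 1, N). Brute-forcing r takes exponential
    time; quantum period-finding does it in polynomial time."""
    assert gcd(a, N) == 1
    r, x = 1, a % N
    while x != 1:          # brute-force order finding (the hard part)
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None        # odd order: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None        # trivial square root of 1: retry
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_via_order_finding(15, 7))  # → (3, 5)
```

Nothing in Schnorr’s lattice-based approach, with or without QAOA stapled on, embodies this structure.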

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.

On form versus meaning

Sunday, April 24th, 2022

There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure – four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.

This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.

For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.

–GPT3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃

The demise of Scientific American: Guest post by Ashutosh Jogalekar

Monday, January 3rd, 2022

Scott’s foreword

One week ago, E. O. Wilson—the legendary naturalist and conservationist, and the man universally acknowledged to know more about ants than anyone else in human history—passed away at age 92. A mere three days later, Scientific American—or more precisely, the zombie clickbait rag that now flaunts that name—published a shameful hit-piece, smearing Wilson for his “racist ideas” without, incredibly, so much as a single quote from Wilson, or any other attempt to substantiate its libel (see also this response by Jerry Coyne). SciAm‘s Pravda-like attack included the following extraordinary sentence, which I thought worthy of Alan Sokal’s Social Text hoax:

The so-called normal distribution of statistics assumes that there are default humans who serve as the standard that the rest of us can be accurately measured against.

There are intellectually honest people who don’t know what the normal distribution is. There are no intellectually honest people who, not knowing what it is, figure that it must be something racist.

On Twitter, Laura Helmuth, the editor-in-chief now running SciAm into the ground, described her magazine’s calumny against Wilson as “insightful” (the replies, including from Richard Dawkins, are fun to read). I suppose it was as “insightful” as SciAm‘s disgraceful attack last year on Eric Lander, President Biden’s ultra-competent science advisor and a leader in the war on COVID, for … being a white male, which appears to have been E. O. Wilson’s crime as well. (Think I must be misrepresenting the “critique” of Lander? Read it!)

Anyway, in response to Scientific American‘s libel of Wilson, I wrote on my Facebook that I’ll no longer agree to write for or be interviewed by them (you can read my old stuff free of charge here or here), unless and until there’s a complete change of editorial direction. I encourage all other scientists to commit likewise, thereby making it common knowledge that the entity that now calls itself “Scientific American” bears the same relation to the legendary home of Martin Gardner as does a corpse to a living being. Fortunately, there are high-quality online venues (e.g., Quanta) that partly fill the role that Scientific American abdicated.

After reading my Facebook post, my friend Ashutosh Jogalekar was inspired to post an essay of his own. Ashutosh used to write regularly for Scientific American, until he was fired seven years ago over a column in which he advocated acknowledging Richard Feynman’s flaws, including his arrogance and casual sexism, but also understanding those flaws within the context of Feynman’s whole life, including the tragic death of his first wife Arlene. (Yes, that was really it! Read the piece!) Below, I’m sharing Ashutosh’s moving essay about E. O. Wilson with Ashutosh’s very generous permission. —Scott Aaronson

Guest Post by Ashutosh Jogalekar

As some know, I was “fired” from Scientific American in 2014 for three “controversial” posts (among 200 that I had written for the magazine). When I parted from the magazine I chalked up my departure to an unfortunate misunderstanding more than anything else. I still respected some of the writers at the publication, and while I wore my separation as a badge of honor and in retrospect realized its liberating utility in enabling me to greatly expand my topical range, I occasionally still felt bad and wished things had gone differently.

No more. Now the magazine has done me a great favor by allowing me to wipe the slate of my conscience clean. What happened seven years ago was not just a misunderstanding but clearly one of many first warning signs of a calamitous slide into a decidedly unscientific, irrational and ideology-ridden universe of woke extremism. Its logical culmination two days ago was an absolutely shameless, confused, fact-free and purely ideological hit job on someone who wasn’t just a great childhood hero of mine but a leading light of science, literary achievement, humanism and biodiversity. While Ed (E. O.) Wilson’s memory was barely getting cemented only days after his death, the magazine published an op-ed calling him a racist, a hit job endorsed and cited by the editor-in-chief as “insightful”. One of the first things I did after reading the piece was buy a few Wilson books that weren’t part of my collection.

Ed Wilson was one of the gentlest, most eloquent, most brilliant and most determined advocates for both human and natural preservation you could find. Under Southern charm lay hidden unyielding doggedness and immense stamina combined with a missionary zeal to communicate the wonders of science to both his fellow biologists and the general public. His autobiography, “Naturalist”, is perhaps the finest, most literary statement of the scientific life I have read; it was one of a half dozen books that completely transported me when I read it in college. In book after book of wide-ranging intellectual treats threading through a stunning diversity of disciplines, he sent out clarion calls for saving the planet, for enabling dialogue between the natural and the social sciences, for understanding each other better. In the face of unprecedented challenges to our fragile environment and continued barriers to interdisciplinary communication, this is work that likely will make him go down in history as one of the most important human beings who ever lived, easily of the same caliber and achievement as John Muir or Thoreau. Even in terms of achievement strictly defined by accolades – the National Medal of Science, the Crafoord Prize which recognizes fields excluded by the Nobel Prize, and not just one but two Pulitzer Prizes – few scientists from any field in the 20th century can hold a candle to Ed Wilson. My friend Richard Rhodes who knew Wilson for decades as a close and much-admired friend said that there wasn’t a racist bone in his body; Dick should know since he just came out with a first-rate biography of Wilson weeks before his passing.

The writer who wrote that train wreck is a professor of nursing at UCSF named Monica McLemore. That itself is a frightening fact and should tell everyone how much ignorance has spread itself in our highest institutions. She not only maligned and completely misrepresented Wilson but did not say a word about his decades-long, heroic effort to preserve the planet and our relationship with it; it was clear that she had little acquaintance with Wilson’s words since she did not cite any. It’s also worth noting the gaping moral blindness in her article which completely misses the most moral thing Wilson did – spend decades advocating for saving our planet and averting a catastrophe of extinction, climate change and divisiveness – and instead focuses completely on his non-existent immorality. This is a pattern that is consistently found among those urging “social justice” or “equity” or whatever else: somehow they seem to spend all their time talking about fictional, imagined immorality while missing the real, flesh-and-bones morality that is often the basis of someone’s entire life’s work.

In the end, the simple fact is that McLemore didn’t care about any of this. She didn’t care because she had a political agenda and the facts did not matter to her, even facts as basic as the definition of the normal distribution in statistics. For her, Wilson was some obscure white male scientist who was venerated, and that was reason enough for a supposed “takedown”. And the editor of Scientific American supported and lauded this ignorant, ideology-driven tirade.

Ironically, Wilson would have found this ideological hit job all too familiar. After he wrote his famous book Sociobiology in the 1970s, a volume in which, in a single chapter about human beings, he had the temerity to suggest that maybe, just maybe, human beings operate with the same mix of genes that other creatures do, the book was met by a disgraceful, below-the-belt, ideological response from Wilson’s far left colleagues Richard Lewontin and Stephen Jay Gould who hysterically compared his arguments to thinking that was well on its way down the slippery slope to that dark world where lay the Nazi gas chambers. The gas chamber analogy is about the only thing that’s missing from the recent hit job, but the depressing thing is that we are fighting the same battles in 2021 that Wilson fought forty years before, although turbocharged this time by armies of faithful zombies on social media. The sad thing is that Wilson is no longer around to defend himself, although I am not sure he would have bothered with a piece as shoddy as this one.

The complete intellectual destruction of a once-great science magazine is now clear as day. No more should Scientific American be regarded as a vehicle for sober scientific views and liberal causes but as a political magazine with clearly stated ideological biases and an aversion to facts, an instrument of a blinkered woke political worldview that brooks no dissent. Scott Aaronson has taken a principled stand and said that after this proverbial last straw on the camel’s back, he will no longer write for the magazine or do interviews for them. I applaud Scott’s decision, and with his expertise it’s a decision that actually matters. As far as I am concerned, I now mix smoldering fury at the article with immense relief: the last seven years have clearly shown that leaving Scientific American in 2014 was akin to leaving the Soviet Union in the 1930s just before Stalin appointed Lysenko head biologist. I could not have asked for a happier expulsion and now feel completely vindicated and free of any modicum of regret I might have felt.

To my few friends and colleagues who still write for the magazine and whose opinions I continue to respect, I really wish to ask: Why? Is writing for a magazine which has sacrificed facts and the liberal voice of real science at the altar of political ideology and make believe still worth it? What would it take for you to say no more? As Oscar Wilde would say, one mistake like this is a mistake, two seems more like carelessness; in the roster of the last few years, this is “mistake” 100+, signaling that it’s now officially approved policy. Do you think that being an insider will allow you to salvage the reputation of the magazine? If you think that way, you are no different from the one or two moderate Republicans who think they can still salvage the once-great party of Lincoln and Eisenhower. Both the GOP and Scientific American are beyond redemption from where I stand. Get out, start your own magazine or join another, one which actually respects liberal, diverse voices and scientific facts; let us applaud you for it. You deserve better, the world deserves better. And Ed Wilson’s memory sure as hell deserves better.

Update (from Scott): See here for the Hacker News thread about this post. I was amused by the conjunction of two themes: (1) people who were uncomfortable with my and Ashutosh’s expression of strong emotions, and (2) people who actually clicked through to the SciAm hit-piece, and then reported back to the others that the strong emotions were completely, 100% justified in this case.

My values, howled into the wind

Sunday, December 19th, 2021

I’m about to leave for a family vacation—our first such since before the pandemic, one planned and paid for literally the day before the news of Omicron broke. On the negative side, staring at the case-count graphs that are just now going vertical, I estimate a ~25% chance that at least one of us will get Omicron on this trip. On the positive side, I estimate a ~60% chance that in the next 6 months, at least one of us would’ve gotten Omicron or some other variant even without this trip—so maybe it’s just as well if we get it now, when we’re vaxxed to the maxx and school and university are out.

If, however, I do end this trip dead in an ICU, I wouldn’t want to do so without having clearly set out my values for posterity. So with that in mind: in the comments of my previous post, someone asked me why I identify as a liberal or a progressive, if I passionately support educational practices like tracking, ability grouping, acceleration, and (especially) encouraging kids to learn advanced math whenever they’re ready for it. (Indeed, that might be my single stablest political view, having been held, for recognizably similar reasons, since I was about 5.)

Incidentally, that previous post was guest-written by my colleagues Edith Cohen and Boaz Barak, and linked to an open letter that now has almost 1500 signatories. Our goal was, and is, to fight the imminent dumbing-down of precollege math education in the United States, spearheaded by the so-called “California Mathematics Framework.” In our joint efforts, we’ve been careful with every word—making sure to maintain the assent of our entire list of signatories, to attract broad support, to stay narrowly focused on the issue at hand, and to bend over backwards to concede as much as we could. Perhaps because of those cautions, we—amazingly—got some actual traction, reaching people in government (such as Rep. Ro Khanna, D – Silicon Valley) and technology leaders, and forcing the “no one’s allowed to take Algebra in 8th grade” faction to respond to us.

This was disorienting to me. On this blog, I’m used just to howling into the wind, having some agree, some disagree, some take to Twitter to denounce me, but in any case, having no effect of any kind on the real world.

So let me return to howling into the wind. And return to the question of what I “am” in ideology-space, which doesn’t have an obvious answer.

It’s like, what do you call someone who’s absolutely terrified about global warming, and who thinks the best response would’ve been (and actually, still is) a historic surge in nuclear energy, possibly with geoengineering to tide us over?

… who wants to end world hunger … and do it using GMO crops?

… who wants to smash systems of entrenched privilege in college admissions … and believes that the SAT and other standardized tests are the best tools ever invented for that purpose?

… who feels a personal distaste for free markets, for the triviality of what they so often elevate and the depth of what they let languish, but tolerates them because they’ve done more than anything else to lift up the world’s poor?

… who’s happiest when telling the truth for the cause of social justice … but who, if told to lie for the cause of social justice, will probably choose silence or even, if pushed hard enough, truth?

… who wants to legalize marijuana and psychedelics, and also legalize all the promising treatments currently languishing in FDA approval hell?

… who feels little attraction to the truth-claims of the world’s ancient religions, except insofar as they sometimes serve as prophylactics against newer and now even more virulent religions?

… who thinks the covid response of the CDC, FDA, and other authorities was a historic disgrace—not because it infringed on the personal liberties of antivaxxers or anything like that, but on the contrary, because it was weak, timid, bureaucratic, and slow, where it should’ve been like that of a general at war?

… who thinks the Nazi Holocaust was even worse than the mainstream holds it to be, because in addition to the staggering, one-lifetime-isn’t-enough-to-internalize-it human tragedy, the Holocaust also sent up into smoke whatever cultural process had just produced Einstein, von Neumann, Bohr, Szilard, Born, Meitner, Wigner, Haber, Pauli, Cantor, Hausdorff, Ulam, Tarski, Erdös, and Noether, and with it, one of the wellsprings of our technological civilization?

… who supports free speech, to the point of proudly tolerating views that really, actually disgust them at their workplace, university, or online forum?

… who believes in patriotism, the police, the rule of law, to the extent that they don’t understand why all the enablers of the January 6 insurrection, up to and including Trump, aren’t currently facing trial for treason against the United States?

… who’s (of course) disgusted to the core by Trump and everything he represents, but who’s also disgusted by the elite virtue-signalling hypocrisy that made the rise of a Trump-like backlash figure predictable?

… who not only supports abortion rights, but also looks forward to a near future when parents, if they choose, are free to use embryo selection to make their children happier, smarter, healthier, and free of life-crippling diseases (unless the “bioethicists” destroy that future, as a previous generation of Deep Thinkers destroyed our nuclear future)?

… who, when reading about the 1960s Sexual Revolution, instinctively sides with free-loving hippies and against the scolds … even if today’s scolds are themselves former hippies, or intellectual descendants thereof, who now clothe their denunciations of other people’s gross, creepy sexual desires in the garb of feminism and social justice?

What, finally, do you call someone whose image of an ideal world might include a young Black woman wearing a hijab, an old Orthodox man with black hat and sidecurls, a broad-shouldered white guy from the backwoods of Alabama, and a trans woman with purple hair, face tattoos and a nose ring … all of them standing in front of a blackboard and arguing about what would happen if Alice and Bob jumped into opposite ends of a wormhole?

Do you call such a person “liberal,” “progressive,” “center-left,” “centrist,” “Pinkerite,” “technocratic,” “neoliberal,” “libertarian-ish,” “classical liberal”? Why not simply call them “correct”? 🙂