An alarming trend in K-12 math education: a guest post and an open letter

December 3rd, 2021

Update (Dec. 5): Our open letter just made the WSJ, and has been tweeted by Matt Yglesias and others; hopefully more coverage will follow soon! See also Boaz Barak’s thread, and a supportive tweet from Rep. Ro Khanna (D-Silicon Valley). If you’re just arriving here, try TodayMag for an overview of some of the issues at stake.


Today, I’m turning over Shtetl-Optimized to an extremely important guest post by theoretical computer scientists Boaz Barak of Harvard and Edith Cohen of Google (cross-posted on the windows on theory blog). In addition to the post below, please read—and if relevant, consider signing—our open letter about math education in the US, which now has 746 signatories (up from just over 150 when this post first appeared), including Fields Medalists, Turing Award winners, and Nobel laureates. Finally, check out our fuller analysis of what the California Mathematics Framework is poised to do and why it’s such an urgent crisis for math education. I’m particularly grateful to my colleagues for their writing efforts, since I would never have been able to discuss what’s happening in such relatively measured words. –Scott Aaronson


Mathematical education at the K-12 level is critical for preparation for STEM careers. An ongoing challenge for the US K-12 system is to improve the preparation of students for advanced mathematics courses and to expand access and enrollment in these courses. As a Department of Education report puts it, “taking Algebra I before high school … can set students up for a strong foundation of STEM education and open the door for various college and career options.” The report states that while 80% of all students have access to Algebra I in middle school, only 24% enroll. This is also why the goal of Bob Moses’ Algebra Project is to ensure that “every child must master algebra, preferably by eighth grade, for algebra is the gateway to the college-prep curriculum, which in turn is the path to higher education.”

The most significant potential for growth is among African American and Latino students, among whom only 12% enroll in Algebra before high school. This untapped potential has longer-term implications for both society and individuals. For example, although African Americans and Latinos comprise 13% and 18% (respectively) of the overall US population, they account for only 4% and 11% of engineering degrees. There is also a gap in access by income: Calculus is offered in 92% of schools serving the top income quartile but in only 77% of schools serving the bottom quartile (as measured by the share of students eligible for free or reduced-price lunch). Thus minority and low-income students have less access to STEM jobs, which yield more than double the median salary of non-STEM jobs and are projected to grow at a 50% higher rate over the next decade.

Given these disparities, we applaud efforts such as the Algebra Project, the Calculus Project, and Bridge to Enter Advanced Mathematics that increase access to advanced mathematical education for underserved students. However, we are concerned by recent approaches, such as the proposed California Math Framework (CMF) revisions, that take the opposite direction.

See this document for a detailed analysis and critique of the CMF, but the bottom line is that rather than trying to elevate underserved students, such “reforms” reduce access and options for all students. In particular, the CMF encourages schools to stop offering Algebra I in middle school, while placing obstacles (such as doubling-up, compressed courses, or outside-of-school private courses) in the way of those who want to take advanced math in higher grades. When similar reforms were implemented in San Francisco, they resulted in an “inequitable patchwork scheme” of workarounds that affluent students could access but that their less privileged counterparts could not. The CMF also promotes trendy and shallow courses (such as a nearly math-free version of “data science”) as recommended alternatives to foundational mathematical courses such as Algebra and Calculus. These courses do not prepare students even for careers in data science itself!

As educators and practitioners, we have seen first-hand the value of solid foundations in mathematics for pursuing college-level STEM education and a productive STEM career. 

While the changes proposed by the CMF are well-intentioned, we believe that many of them are deeply misguided and will disproportionately harm under-resourced students. Adopting them would result in a student population that is less prepared to succeed in STEM and other quantitative 4-year degree programs in college. The CMF states that “many students, parents, and teachers encourage acceleration beginning in grade eight (or sooner) because of mistaken beliefs that Calculus is an important high school goal.” Students, parents, and teachers are not mistaken. Neither is the National Society of Black Engineers (NSBE), which in 2015 set a goal to double the number of African American students taking calculus by 2025. Calculus is not the only goal of K-12 math education, but it is important for students who wish to prepare for STEM in college and beyond.

We agree that calculus is not the “be-all and end-all” of high-school mathematics education. In particular, we encourage introducing options such as logic, probability, discrete mathematics, and algorithm design into the K-12 curriculum, as they can be valuable foundations for STEM education in college. However, when taught beyond a superficial level (which unfortunately is not the case in the CMF “data science” proposals), these topics are not any easier than calculus. They require the same foundations of logic, algebra, and functions, and fluency with numbers and calculations. Indeed, the career paths with the highest potential for growth require more and deeper mathematical preparation than ever before. Calculus and other mathematical foundations are not important because they are admission requirements for colleges, or because they are relics of the “Sputnik era”. They are important because they provide fundamental knowledge and ways of thinking that are necessary for success in these fast-growing and in-demand fields.

We also fully support incorporating introductory data analysis and coding skills into the K-12 curriculum (and there are some good approaches for doing so). But we strongly disagree with marketing such skills as a replacement for foundational skills in algebra and calculus when preparing for 4-year college degrees in STEM and other quantitative fields. These topics are important and build on basic math foundations, but they can no more replace those foundations than social media skills can replace foundational reading and writing.

Given the above, we, together with more than 150 scientists, educators, and practitioners in STEM, have signed an open letter expressing our concerns with such trends. The signatories include STEM faculty at public and private universities and practitioners from industry. They include educators with decades of experience teaching students at all levels, as well as researchers who have won the highest awards in their fields, including the Fields Medal and the Turing Award. Signers also include people deeply invested in high-school mathematics education, such as Adrian Mims (founder of The Calculus Project) and Jelani Nelson (UC Berkeley EECS professor and founder of AddisCoder), who have spearheaded projects to increase access for underserved populations.

We encourage you to read the letter, and if you are a US-based STEM professional or educator, consider signing it as well: https://bit.ly/mathedletter

Unfortunately, in recent years, debates on US education have become politicized. The letter is not affiliated with any political organization, and we believe that the issues we highlight transcend current political debates. After all, expanding access to mathematical education is both socially just and economically wise.


Note: The above guest post reflects the views of its authors, Boaz Barak and Edith Cohen. Any comments below by them, me, or other signatories reflect their own views, not necessarily those of the entire group. –SA

Two new talks and an interview

December 2nd, 2021
  1. A talk to UT Austin’s undergraduate math club (handwritten PDF notes) about Hao Huang’s proof of the Sensitivity Conjecture, and its implications for quantum query complexity and more. I’m still not satisfied that I’ve presented Huang’s beautiful proof as clearly and self-containedly as I possibly can, which probably just means I need to lecture on it a few more times.
  2. A Zoom talk at the QPQIS conference in Beijing (PowerPoint slides), setting out my most recent thoughts about Google’s and USTC’s quantum supremacy experiments and the continuing efforts to spoof them classically.
  3. An interview with me in Communications of the ACM, mostly about BosonSampling and the quantum lower bound for the collision problem.

Enjoy y’all!

The Acrobatics of BQP

November 19th, 2021

Just in case anyone is depressed this afternoon and needs something to cheer them up, my students William Kretschmer and DeVon Ingram and I have finally put out a new paper:

The Acrobatics of BQP

Abstract: We show that, in the black-box setting, the behavior of quantum polynomial-time (BQP) can be remarkably decoupled from that of classical complexity classes like NP. Specifically:

– There exists an oracle relative to which NP^BQP ⊄ BQP^PH, resolving a 2005 problem of Fortnow. Interpreted another way, we show that AC0 circuits cannot perform useful homomorphic encryption on instances of the Forrelation problem. As a corollary, there exists an oracle relative to which P = NP but BQP ≠ QCMA.

– Conversely, there exists an oracle relative to which BQP^NP ⊄ PH^BQP.

– Relative to a random oracle, PP = PostBQP is not contained in the “QMA hierarchy” QMA^QMA^QMA^…, and more generally PP ⊄ (MIP*)^(MIP*)^(MIP*)^… (!), despite the fact that MIP* = RE in the unrelativized world. This result shows that there is no black-box quantum analogue of Stockmeyer’s approximate counting algorithm.

– Relative to a random oracle, Σ_{k+1}^P ⊄ BQP^{Σ_k^P} for every k.

– There exists an oracle relative to which BQP = P^{#P} and yet PH is infinite. (By contrast, if NP ⊆ BPP, then PH collapses relative to all oracles.)

– There exists an oracle relative to which P = NP ≠ BQP = P^{#P}.

To achieve these results, we build on the 2018 achievement by Raz and Tal of an oracle relative to which BQP⊄PH, and associated results about the Forrelation problem. We also introduce new tools that might be of independent interest. These include a “quantum-aware” version of the random restriction method, a concentration theorem for the block sensitivity of AC0 circuits, and a (provable) analogue of the Aaronson-Ambainis Conjecture for sparse oracles.

Incidentally, particularly when I’ve worked on a project with students, I’m often tremendously excited and want to shout about it from the rooftops for the students’ sake … but then I also don’t want to use this blog to privilege my own papers “unfairly.” Can anyone suggest a principle that I should follow going forward?

Scott Aaronson, when reached for comment, said…

November 16th, 2021

About IBM’s new 127-qubit superconducting chip: As I told New Scientist, I look forward to seeing the actual details! As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely, what are the gate fidelities? How deep of a quantum circuit can they apply? How have they benchmarked the chip? Right now, all I have to go on is a stats page for the new chip, which reports its average CNOT error as 0.9388—in other words, close to 1, or terrible! (But see also a tweet by James Wootton, which explains that such numbers are often highly misleading when a new chip is first rolled out.) Does anyone here have more information? Update (11/17): As of this morning, the average CNOT error has been updated to 2%. Thanks to multiple commenters for letting me know!

About the new simulation of Google’s 53-qubit Sycamore chip in 5 minutes on a Sunway supercomputer (see also here): This is an exciting step forward on the classical validation of quantum supremacy experiments, and—ironically, what currently amounts to almost the same thing—on the classical spoofing of those experiments. Congratulations to the team in China that achieved this! But there are two crucial things to understand. First, “5 minutes” refers to the time needed to calculate a single amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of independent noisy samples, which is what Google’s Sycamore chip does in 3 minutes. For the latter task, more like a week still seems to be needed on the supercomputer. (I’m grateful to Chu Guo, a coauthor of the new work who spoke in UT Austin’s weekly quantum Zoom meeting, for clarifying this point.) Second, the Sunway supercomputer has parallel processing power equivalent to approximately ten million laptops. Thus, even if we agreed that Google no longer had quantum supremacy as measured by time, it would still have quantum supremacy as measured by carbon footprint! (And this despite the fact that the quantum computer itself requires a noisy, closet-sized dilution fridge.) Even so, for me the new work underscores the point that quantum supremacy is not yet a done deal. Over the next few years, I hope that Google and USTC, as well as any new entrants to this race (IBM? IonQ? Harvard? Rigetti?), will push forward with more qubits and, even more importantly, better gate fidelities leading to higher Linear Cross-Entropy scores. Meanwhile, we theorists should try to do our part by inventing new and better protocols with which to demonstrate near-term quantum supremacy—especially protocols for which the classical verification is easier.

About the new anti-woke University of Austin (UATX): In general, I’m extremely happy for people to experiment with new and different institutions, and of course I’m happy for more intellectual activity in my adopted city of Austin. And, as Shtetl-Optimized readers will know, I’m probably more sympathetic than most to the reality of the problem that UATX is trying to solve—living, as we do, in an era when one academic after another has been cancelled for ideas that a mere decade ago would’ve been considered unexceptional, moderate, center-left. Having said all that, I wish I could feel more optimistic about UATX’s prospects. I found its website heavy on free-speech rhetoric but frustratingly light on what the new university is actually going to do: what courses it will offer, who will teach them, where the campus will be, etc. etc. Arguably this is all excusable for a university still in ramp-up mode, but had I been in their shoes, I might have held off on the public launch until I had at least some sample content to offer. Certainly, the fact that Steven Pinker has quit UATX’s advisory board is a discouraging sign. If UATX asks me to get involved—to lecture there, to give them advice about their CS program, etc.—I’ll consider it as I would any other request. So far, though, they haven’t.

About the Association for Mathematical Research: Last month, some colleagues invited me to join a brand-new society called the Association for Mathematical Research. Many of the other founders (Joel Hass, Abigail Thompson, Colin Adams, Richard Borcherds, Jeff Cheeger, Pavel Etingof, Tom Hales, Jeff Lagarias, Mark Lackenby, Cliff Taubes, …) were brilliant mathematicians who I admired, they seemed like they could use a bit of theoretical computer science representation, there was no time commitment, maybe they’d eventually do something good, so I figured why not? Alas, to say that AMR has proved unpopular on Twitter would be an understatement: it’s received the same contemptuous reception that UATX has. The argument seems to be: starting a new mathematical society, even an avowedly diverse and apolitical one, is really just an implicit claim that the existing societies, like the Mathematical Association of America (MAA) and the American Mathematical Society (AMS), have been co-opted by woke true-believers. But that’s paranoid and insane! I mean, it’s not as if an AMS blog has called for the mass resignation of white male mathematicians to make room for the marginalized, or the boycott of Israeli universities, or the abolition of the criminal justice system (what to do about Kyle Rittenhouse though?). Still, even though claims of that sort of co-option are obviously far-out, rabid fantasies, yeah, I did decide to give a new organization the benefit of the doubt. AMR might well fail or languish in obscurity, just like UATX might. On the other hand, the barriers to making a positive difference for the intellectual world, the world I love, the world under constant threat from the self-certain ideologues of every side, do strike me as orders of magnitude smaller for a new professional society than they do for a new university.

Q2B 2021

November 1st, 2021

This is a quick post to let people know that the 2021 Q2B (Quantum 2 Business) conference will be this December 7-9 at the Santa Clara Convention Center. (Full disclosure: Q2B is hosted by QC Ware, Inc., to which I’m the scientific adviser.) Barring a dramatic rise in cases or the like, I’m planning to attend to do my Ask-Me-Anything session, in what’s become an annual tradition. Notably, this will be my first in-person conference, and in fact my first professional travel of any kind, since before covid shut down the US in late March 2020. I hope to see many of you there! And if you won’t be at Q2B, but you’ll be in the Bay Area and would like to meet otherwise, let me know and we’ll try to work something out.

An Orthodox rabbi and Steven Weinberg walk into an email exchange…

October 22nd, 2021

Ever since I posted my obituary for the great Steven Weinberg three months ago, I’ve gotten a steady trickle of emails—all of which I’ve appreciated enormously—from people who knew Steve, or were influenced by him, and who wanted to share their own thoughts and memories. Last week, I was contacted by one Moshe Katz, an Orthodox rabbi, who wanted to share a long email exchange that he’d had with Steve, about Steve’s reasons for rejecting his birth-religion of Judaism (along with every other religion). Even though Rabbi Katz, rather than Steve, does most of the talking in this exchange, and even though Steve mostly expresses the same views he’d expressed in many of his public writings, I knew immediately on seeing this exchange that it could be of broader interest—so I secured permission to share it here on Shtetl-Optimized, both from Rabbi Katz and from Steve’s widow Louise.

While longtime readers can probably guess what I think about most of the topics discussed, I’ll refrain from any editorial commentary in this post—but of course, feel free to share your own thoughts in the comments, and maybe I’ll join in. Mostly, reading this exchange reminded me that someone at some point should write a proper book-length biography of Steve, and someone should also curate and publish a selection of his correspondence, much like Perfectly Reasonable Deviations from the Beaten Track did for Richard Feynman. There must be a lot more gems to be mined.

Anyway, without further ado, here’s the exchange (10 pages, PDF).

Update (Nov. 2, 2021): By request, see here for some of my own thoughts.

Welcome to scottaaronson.blog !

October 21st, 2021

If you’ve visited Shtetl-Optimized lately — which, uh, I suppose you have — you may have noticed that your URL was redirected from www.scottaaronson.com/blog to scottaaronson.blog. That’s because Automattic, makers of WordPress.com, volunteered to move my blog there from Bluehost, free of charge. If all goes according to plan, you should notice faster loading times, less downtime, and hopefully nothing else different. Please let me know if you encounter any problems. And huge thanks to the WordPress.com Special Projects Team, especially Christopher Jones and Mark Drovdahl, for helping me out with this.

Gaussian BosonSampling, higher-order correlations, and spoofing: An update

October 10th, 2021

In my last post, I wrote (among other things) about an ongoing scientific debate between the group of Chaoyang Lu at USTC in China, which over the past year has been doing experiments that seek to demonstrate quantum supremacy via Gaussian BosonSampling; and the group of Sergio Boixo at Google, which had a recent paper on a polynomial-time classical algorithm to sample approximately from the same distributions.  I reported the facts as I understood them at the time.  Since then, though, a long call with the Google team gave me a new and different understanding, and I feel duty-bound to share that here.

A week ago, I considered it obvious that if, using a classical spoofer, you could beat the USTC experiment on a metric like total variation distance from the ideal distribution, then you would’ve completely destroyed USTC’s claim of quantum supremacy.  The reason I believed that, in turn, is a proposition that I hadn’t given a name but needs one, so let me call it Hypothesis H:

The only way a classical algorithm to spoof BosonSampling can possibly do well in total variation distance, is by correctly reproducing the high-order correlations (correlations among the occupation numbers of large numbers of modes) — because that’s where the complexity of BosonSampling lies (if it lies anywhere).

Hypothesis H had important downstream consequences.  Google’s algorithm, by the Google team’s own admission, does not reproduce the high-order correlations.  Furthermore, because of limitations on both samples and classical computation time, Google’s paper calculates the total variation distance from the ideal distribution only on the marginal distribution on roughly 14 out of 144 modes.  On that marginal distribution, Google’s algorithm does do better than the experiment in total variation distance.  Google presents a claimed extrapolation to the full 144 modes, but eyeballing the graphs, it was far from clear to me what would happen: like, maybe the spoofing algorithm would continue to win, but maybe the experiment would turn around and win; who knows?
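For readers who want the metric itself pinned down: the total variation distance between two distributions over the same outcomes is half the sum of the absolute differences of their probabilities. Here's a minimal sketch, with made-up toy numbers rather than anyone's actual data, just to show how the comparison works:

```python
import numpy as np

def total_variation_distance(p, q):
    """TVD between two probability vectors: (1/2) * sum_i |p_i - q_i|."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Toy marginal distributions over outcomes of a few modes (purely illustrative numbers).
ideal      = np.array([0.50, 0.25, 0.15, 0.10])
experiment = np.array([0.46, 0.27, 0.17, 0.10])
spoofer    = np.array([0.49, 0.25, 0.16, 0.10])

print(total_variation_distance(ideal, experiment))  # ~0.04
print(total_variation_distance(ideal, spoofer))     # ~0.01 -- "wins" on this metric
```

The subtlety in the actual comparison is that this quantity can only be computed directly on small marginals, which is why everything hinges on extrapolations to the full 144 modes.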

Chaoyang, meanwhile, made a clear prediction that the experiment would turn around and win, because of

  1. the experiment’s success in reproducing the high-order correlations,
  2. the admitted failure of Google’s algorithm in reproducing the high-order correlations, and
  3. the seeming impossibility of doing well on BosonSampling without reproducing the high-order correlations (Hypothesis H).

Given everything my experience told me about the central importance of high-order correlations for BosonSampling, I was inclined to agree with Chaoyang.

Now for the kicker: it seems that Hypothesis H is false.  A classical spoofer could beat a BosonSampling experiment on total variation distance from the ideal distribution, without even bothering to reproduce the high-order correlations correctly.

This is true because of a combination of two facts about the existing noisy BosonSampling experiments.  The first fact is that the contribution from the order-k correlations falls off like 1/exp(k).  The second fact is that, due to calibration errors and the like, the experiments already show significant deviations from the ideal distribution on the order-1 and order-2 correlations.

Put these facts together and what do you find?  Well, suppose your classical spoofing algorithm takes care to get the low-order contributions to the distribution exactly right.  Just for that reason alone, it could already win over a noisy BosonSampling experiment, as judged by benchmarks like total variation distance from the ideal distribution, or for that matter linear cross-entropy.  Yes, the experiment will beat the classical simulation on the higher-order correlations.  But because those higher-order correlations are exponentially attenuated anyway, they won’t be enough to make up the difference.  The experiment’s lack of perfection on the low-order correlations will swamp everything else.
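To make that arithmetic concrete, here's a toy back-of-the-envelope model, with numbers I've made up purely for illustration (they're not fit to either group's data): give the order-k correlations weight ~e^-k, let the experiment carry a 20% error at every order, and let the spoofer be perfect on orders 1 and 2 but completely wrong above that.

```python
import math

# Toy model: the order-k correlations contribute weight ~ exp(-k) to the overall
# discrepancy budget (a stand-in for something like variation distance).
# All numbers here are illustrative, not fit to any experiment.
weights = {k: math.exp(-k) for k in range(1, 20)}

def total_error(per_order_error):
    return sum(weights[k] * per_order_error(k) for k in weights)

# Experiment: ~20% error at every order (calibration noise hits the low orders too).
experiment = total_error(lambda k: 0.20)

# Spoofer: exact on orders 1 and 2, complete nonsense (100% error) above that.
spoofer = total_error(lambda k: 0.0 if k <= 2 else 1.0)

print(f"experiment: {experiment:.3f}")  # ~0.116
print(f"spoofer:    {spoofer:.3f}")     # ~0.079 -- smaller, despite garbage high orders
```

In this toy model the exponential attenuation does all the work: past order 2 or 3, there simply isn't enough weight left for the experiment's genuine advantage at high orders to overcome its low-order imperfections.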

Granted, I still don’t know for sure that this is what happens — that depends on whether I believe Sergio or Chaoyang about the extrapolation of the variation distance to the full 144 modes (my own eyeballs having failed to render a verdict!).  But I now see that it’s logically possible, maybe even plausible.

So, let’s imagine for the sake of argument that Google’s simulation wins on variation distance, even though the experiment wins on the high-order correlations.  In that case, what would be our verdict: would USTC have achieved quantum supremacy via BosonSampling, or not?

It’s clear what each side could say.

Google could say: by a metric that Scott Aaronson, the coinventor of BosonSampling, thought was perfectly adequate as late as last week — namely, total variation distance from the ideal distribution — we won.  We achieved lower variation distance than USTC’s experiment, and we did it using a fast classical algorithm.  End of discussion.  No moving the goalposts after the fact.

Google could even add: BosonSampling is a sampling task; it’s right there in the name!  The only purpose of any benchmark — whether Linear XEB or high-order correlation — is to give evidence about whether you are or aren’t sampling from a distribution close to the ideal one.  But that means that, if you accept that we are doing the latter better than the experiment, then there’s nothing more to argue about.

USTC could respond: even if Scott Aaronson is the coinventor of BosonSampling, he’s extremely far from an infallible oracle.  In the case at hand, his lack of appreciation for the sources of error in realistic experiments caused him to fixate inappropriately on variation distance as the success criterion.  If you want to see the quantum advantage in our system, you have to deliberately subtract off the low-order correlations and look at the high-order correlations.

USTC could add: from the very beginning, the whole point of quantum supremacy experiments was to demonstrate a clear speedup on some benchmark — we never particularly cared which one!  That horse is out of the barn as soon as we’re talking about quantum supremacy at all — something that the Google group, which itself reported the first quantum supremacy experiment in Fall 2019, again for a completely artificial benchmark, knows as well as anyone else.  (The Google team even has experience with adjusting benchmarks: when, for example, Pan and Zhang pointed out that Linear XEB as originally specified is pretty easy to spoof for random 2D circuits, the most cogent rejoinder was: OK, fine then, add an extra check that the returned samples are sufficiently different from one another, which kills Pan and Zhang’s spoofing strategy.) In that case, then, why isn’t a benchmark tailored to the high-order correlations as good as variation distance or linear cross-entropy or any other benchmark?

Both positions are reasonable and have merit — though I confess to somewhat greater sympathy for the one that appeals to my doofosity rather than my supposed infallibility!

OK, but suppose, again for the sake of argument, that we accepted the second position, and we said that USTC gets to declare quantum supremacy as long as its experiment does better than any known classical simulation at reproducing the high-order correlations.  We’d still face the question: does the USTC experiment, in fact, do better on that metric?  It would be awkward if, having won the right to change the rules in its favor, USTC still lost even under the new rules.

Sergio tells me that USTC directly reported experimental data only for up to order-7 correlations, and at least individually, the order-7 correlations are easy to reproduce on a laptop (although sampling in a way that reproduces the order-7 correlations might still be hard—a point that Chaoyang confirms, and where further research would be great). OK, but USTC also reported that their experiment seems to reproduce up to order-19 correlations. And order-19 correlations, the Google team agrees, are hard to sample consistently with on a classical computer by any currently known algorithm.

So then, why don’t we have direct data for the order-19 correlations?  The trouble is simply that it would’ve taken USTC an astronomical amount of computation time.  So instead, they relied on a statistical extrapolation from the observed strength of the lower-order correlations — there we go again with the extrapolations!  Of course, if we’re going to let Google rest its case on an extrapolation, then maybe it’s only sporting to let USTC do the same.

You might wonder: why didn’t we have to worry about any of this stuff with the other path to quantum supremacy, the one via random circuit sampling with superconducting qubits?  The reason is that, with random circuit sampling, all the correlations except the highest-order ones are completely trivial — or, to say it another way, the reduced state of any small number of output qubits is exponentially close to the maximally mixed state.  This is a real difference between BosonSampling and random circuit sampling—and even 5-6 years ago, we knew that this represented an advantage for random circuit sampling, although I now have a deeper appreciation for just how great of an advantage it is.  For it means that, with random circuit sampling, it’s easier to place a “sword in the stone”: to say, for example, here is the Linear XEB score achieved by the trivial classical algorithm that outputs random bits, and lo, our experiment achieves a higher score, and lo, we challenge anyone to invent a fast classical spoofing method that achieves a similarly high score.
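To make the "sword in the stone" concrete, here's roughly what a Linear XEB calculation looks like, in a minimal sketch. The toy below assumes you can compute the ideal output probabilities classically, which is exactly the part that becomes infeasible at supremacy scale; the 3-qubit "circuit" and its Dirichlet-random output distribution are stand-ins of my own, not Google's.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * (mean of p_ideal over the samples) - 1.
    Roughly 0 for uniformly random bitstrings, and order 1 for faithful sampling
    from a Porter-Thomas-like output distribution."""
    return (2 ** n_qubits) * np.mean([ideal_probs[x] for x in samples]) - 1

# Toy stand-in: n = 3 qubits and a random "output distribution" drawn from a flat
# Dirichlet (real experiments use 50+ qubits, where computing ideal_probs is the hard part).
n = 3
rng = np.random.default_rng(0)
ideal_probs = rng.dirichlet(np.ones(2 ** n))

faithful_samples = rng.choice(2 ** n, size=100_000, p=ideal_probs)  # "perfect device"
random_samples   = rng.integers(0, 2 ** n, size=100_000)            # trivial spoofer: random bits

print(linear_xeb(faithful_samples, ideal_probs, n))  # well above 0
print(linear_xeb(random_samples, ideal_probs, n))    # close to 0
```

The sword in the stone is then just the observation that the random-bits spoofer is pinned near zero, so any clearly positive score stands as a challenge to would-be classical spoofers.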

With BosonSampling, by contrast, we have various metrics with which to judge performance, but so far, for none of those metrics do we have a plausible hypothesis that says “here’s the best that any polynomial-time classical algorithm can possibly hope to do, and it’s completely plausible that even a noisy current or planned BosonSampling experiment can do better than that.”

In the end, then, I come back to the exact same three goals I would’ve recommended a week ago for the future of quantum supremacy experiments, but with all of them now even more acutely important than before:

  1. Experimentally, to increase the fidelity of the devices (with BosonSampling, for example, to observe a larger contribution from the high-order correlations) — a much more urgent goal, from the standpoint of evading classical spoofing algorithms, than further increasing the dimensionality of the Hilbert space.
  2. Theoretically, to design better ways to verify the results of sampling-based quantum supremacy experiments classically — ideally, even ways that could be applied via polynomial-time tests.
  3. For Gaussian BosonSampling in particular, to get a better understanding of the plausible limits of classical spoofing algorithms, and exactly how good a noisy device needs to be before it exceeds those limits.

Thanks so much to Sergio Boixo and Ben Villalonga for the conversation, and to Chaoyang Lu and Jelmer Renema for comments on this post. Needless to say, any remaining errors are my own.

The Physics Nobel, Gaussian BosonSampling, and Dorian Abbot

October 5th, 2021

1. Huge congratulations to the winners of this year’s Nobel Prize in Physics: Syukuro Manabe and Klaus Hasselmann for climate modelling, and separately, Giorgio Parisi for statistical physics. While I don’t know the others, I had the great honor to get to know Parisi three years ago, when he was chair of the committee that awarded me the Tomassoni-Chisesi Prize in Physics, and when I visited Parisi’s department at Sapienza University of Rome to give the prize lecture and collect the award. I remember Parisi’s kindness, a lot of good food, and a lot of discussion of the interplay between theoretical computer science and physics. Note that, while much of Parisi’s work is beyond my competence to comment on, in computer science he’s very well-known for survey propagation, an algorithm rooted in statistical physics that revolutionized the study of random 3SAT when it was introduced two decades ago.


2. Two weeks ago, a group at Google put out a paper with a new efficient classical algorithm to simulate the recent Gaussian BosonSampling experiments from USTC in China. They argued that this algorithm called into question USTC’s claim of BosonSampling-based quantum supremacy. Since then, I’ve been in contact with Sergio Boixo from Google, Chaoyang Lu from USTC, and Jelmer Renema, a Dutch BosonSampling expert and friend of the blog, to try to get to the bottom of this. Very briefly, the situation seems to be that Google’s new algorithm outperforms the USTC experiment on one particular metric: namely, total variation distance from the ideal marginal distribution, if (crucially) you look at only a subset of the optical modes, say 14 modes out of 144 total. Meanwhile, though, if you look at the kth-order correlations for large values of k, then the USTC experiment continues to win. With the experiment, the correlations fall off exponentially with k but still have a meaningful, detectable signal even for (say) k=19, whereas with Google’s spoofing algorithm, you choose the k that you want to spoof (say, 2 or 3), and then the correlations become nonsense for larger k.

Now, given that you were only ever supposed to see a quantum advantage from BosonSampling if you looked at the kth-order correlations for large values of k, and given that we already knew, from the work of Leonid Gurvits, that very small marginals in BosonSampling experiments would be easy to reproduce on a classical computer, my inclination is to say that USTC’s claim of BosonSampling-based quantum supremacy still stands. On the other hand, it’s true that, with BosonSampling especially, more so than with qubit-based random circuit sampling, we currently lack an adequate theoretical understanding of what the target should be. That is, which numerical metric should an experiment aim to maximize, and how well does it have to score on that metric before it’s plausibly outperforming any fast classical algorithm? One thing I feel confident about is that, whichever metric is chosen—Linear Cross-Entropy or whatever else—it needs to capture the kth-order correlations for large values of k. No metric that’s insensitive to those correlations is good enough.
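To give a rough operational feel for what a "kth-order correlation" is: with threshold detectors, each sample is a string of 0/1 clicks across the modes, and one crude probe of k-mode structure is the central moment of the clicks over a chosen subset of k modes. (The actual analyses use connected correlators / cumulants, which differ from this for k ≥ 3; the sketch below, with synthetic data of my own rather than experimental samples, is only meant to illustrate the idea.)

```python
import numpy as np

def k_mode_central_moment(samples, modes):
    """Crude probe of |modes|-order structure: E[ prod_{i in modes} (n_i - <n_i>) ],
    where samples[shot, mode] is a 0/1 click pattern."""
    sub = samples[:, modes].astype(float)
    return (sub - sub.mean(axis=0)).prod(axis=1).mean()

# Synthetic data: 200 modes, 100k shots. 'correlated' OR's each mode with a shared
# per-shot event, so many modes tend to click together; 'independent' matches the
# single-mode click rates but has no structure beyond them.
rng = np.random.default_rng(1)
shots, n_modes = 100_000, 200
shared = rng.random((shots, 1)) < 0.5
correlated  = (rng.random((shots, n_modes)) < 0.3) | shared
independent = rng.random((shots, n_modes)) < correlated.mean()

modes = [0, 1, 2, 3]                                 # probe a 4-mode correlation
print(k_mode_central_moment(correlated, modes))      # ~0.015: genuine multi-mode structure
print(k_mode_central_moment(independent, modes))     # ~0: nothing beyond single-mode rates
```

In the actual analyses k runs much higher (the detectable signal out to k≈19 mentioned above is the interesting regime), which is exactly where the spoofing algorithms fall down.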


3. Like many others, I was outraged and depressed that MIT uninvited Dorian Abbot (see also here), a geophysicist at the University of Chicago, who was slated to give the Carlson Lecture in the Department of Earth, Atmospheric, and Planetary Sciences about the atmospheres of extrasolar planets. The reason for the cancellation was that, totally unrelated to his scheduled lecture, Abbot had argued in Newsweek and elsewhere that Diversity, Equity, and Inclusion initiatives should aim for equality of opportunity rather than equality of outcomes, a Twitter-mob decided to go after him in retaliation, and they succeeded. It should go without saying that it’s perfectly reasonable to disagree with Abbot’s stance, to counterargue—if those very concepts haven’t gone the way of floppy disks. It should also go without saying that the MIT EAPS department chair is free to bow to social-media pressure, as he did, rather than standing on principle … just like I’m free to criticize him for it. To my mind, though, cancelling a scientific talk because of the speaker’s centrist (!) political views completely, 100% validates the right’s narrative about academia, that it’s become a fanatically intolerant echo chamber. To my fellow progressive academics, I beseech thee in the bowels of Bertrand Russell: why would you commit such an unforced error?

Yes, one can imagine views (e.g., open Nazism) so hateful that they might justify the cancellation of unrelated scientific lectures by people who hold those views, as many physicists after WWII refused to speak to Werner Heisenberg. But it seems obvious to me—as it would’ve been obvious to everyone else not long ago—that no matter where a reasonable person draws the line, Abbot’s views as he expressed them in Newsweek don’t come within a hundred miles of it. To be more explicit still: if Abbot’s views justify deplatforming him as a planetary scientist, then all my quantum computing and theoretical computer science lectures deserve to be cancelled too, for the many attempts I’ve made on this blog over the past 16 years to share my honest thoughts and life experiences, to write like a vulnerable human being rather than like a university press office. While I’m sure some sneerers gleefully embrace that implication, I ask everyone else to consider how deeply they believe in the idea of academic freedom at all—keeping in mind that such a commitment only ever gets tested when there’s a chance someone might denounce you for it.

Update: Princeton’s James Madison Program has volunteered to host Abbot’s Zoom talk in place of MIT. The talk is entitled “Climate and the Potential for Life on Other Planets.” Like probably hundreds of others who heard about this only because of the attempted cancellation, I plan to attend!

Unrelated Bonus Update: Here’s a neat YouTube video put together by the ACM about me as well as David Silver of AlphaGo and AlphaZero, on the occasion of our ACM Prizes in Computing.

“Is China Ahead in the Quantum Computing Race?”

September 26th, 2021

Please enjoy an hourlong panel discussion of that question on YouTube, featuring yours truly, my former MIT colleague Will Oliver, and political scientist and China scholar Elsa Kania. If you’re worried that the title sounds too sensationalistic, I hope my fellow panelists and I will pleasantly surprise you with our relative sobriety! Thanks so much to QC Ware for arranging the panel (full disclosure: I’m QC Ware’s scientific adviser).