Archive for the ‘Quantum’ Category

Quantum miscellany

Tuesday, September 19th, 2023
  1. Tomorrow at 1:30pm US Central time, I’ll be doing an online Q&A with Collective[i] Forecast about quantum computing (probably there will also be questions about AI safety). It’s open to all. Hope to see some of you there!
  2. Toby Cubitt of University College London is visiting UT Austin. We’ve been discussing the question: can you produce a QMA witness state using a closed timelike curve? Since QMA⊆PSPACE, and since Fortnow, Watrous, and I proved that closed timelike curves (or anyway, Deutsch’s model of them) let you solve PSPACE problems, clearly a closed timelike curve lets you solve QMA decision problems, but that’s different from producing the actual witness state as the fixed-point of a polynomial-time superoperator. Toby has a neat recent result, which has as a corollary that you can produce the ground state of a local Hamiltonian using a CTC, if you have as auxiliary information the ground state energy as well as (a lower bound on) the spectral gap. But you do seem to need that extra information.

    Yesterday I realized there’s also a simpler construction: namely, take an n-qubit state from the CTC, and check whether it’s a valid QMA witness, having used Marriott-Watrous amplification to push the probability of error down to (say) exp(-n²). If the witness is valid, then send it back in time unmodified; otherwise replace it by the maximally mixed state. If valid witnesses exist, then you can check that this sets up a Markov chain whose stationary distribution is almost entirely concentrated on such witnesses. (If no valid witnesses exist, then the stationary distribution is just the maximally mixed state, or exponentially close to it.) One drawback of this construction is that it can only produce a Marriott-Watrous state, rather than the “original” QMA witness state. (For a toy classical illustration of the Markov-chain idea, see the sketch just after this list.)

    Is there a third approach, which overcomes the disadvantages of both mine and Toby’s? I’ll leave that question to my readers!
  3. On the theme of QMA plus weird physics, a wonderful question emerged from a recent group meeting: namely, what’s the power of QMA if we let the verifier make multiple non-collapsing measurements of the same state, as in the “PDQP” model defined by myself, Bouland, Fitzsimons, and Lee? I conjecture that this enhanced QMA goes all the way up to NEXP (Nondeterministic Exponential Time), by a construction related to the one I used to show that PDQP/qpoly = ALL (i.e., non-collapsing measurements combined with quantum advice let you decide literally all languages), and that also uses the PCP Theorem. I even have some candidate constructions, though I haven’t yet proven their soundness.

    In the past, I would’ve spent more time on such a problem before sharing it. But after giving some students a first crack, I now … just want to know the answer? Inspecting my feelings in my second year of leave at OpenAI, I realized that I still care enormously about quantum complexity theory, but only about getting answers to the questions, barely at all anymore about getting credit for them. Admittedly, it took me 25 years to reach this state of not caring.
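
As promised in item 2 above, here’s a toy classical sketch of the Markov-chain idea, with everything quantum stripped out: the Marriott-Watrous-amplified verifier becomes a classical verifier that accepts a hypothetical set of “valid witnesses” with probability close to 1 and everything else with tiny probability, and the maximally mixed state becomes a uniformly random string. The witness set and error rate below are made up purely for illustration.

```python
import numpy as np

# Toy classical analogue of the CTC construction in item 2. States are n-bit
# strings; a hypothetical verifier (standing in for Marriott-Watrous-amplified
# QMA verification) accepts valid witnesses with probability 1 - eps and
# everything else with probability eps. The "valid" set below is made up.
n, eps = 4, 1e-5
N = 2 ** n
valid = {3, 7}                          # hypothetical valid witnesses

accept = np.array([1 - eps if x in valid else eps for x in range(N)])

# Transition rule: if the verifier accepts, send the state back unchanged;
# otherwise replace it by a uniformly random string (the classical stand-in
# for the maximally mixed state).
P = np.zeros((N, N))
for x in range(N):
    P[x, x] += accept[x]
    P[x, :] += (1 - accept[x]) / N

# Stationary distribution of the chain, by power iteration.
pi = np.full(N, 1 / N)
for _ in range(2000):
    pi = pi @ P
print(pi[list(valid)].sum())            # ~1: almost all mass on valid witnesses
```

With the error pushed down to exp(-n²) as in the construction above, the leftover stationary mass outside the witness set becomes exponentially small.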

Palate cleanser

Monday, August 21st, 2023
  1. Ben Brubaker wrote a long piece for Quanta magazine about meta-complexity. The first three-quarters are a giant refresher on the story of computability and complexity theory in the 20th century—including Turing, Gödel, Shannon, Cook, Karp, Levin, Baker-Gill-Solovay, Sipser, Razborov, Rudich, and more. But then the last quarter gets into actually new (well, within the last couple years) developments, including the NP-completeness of “Partial-MCSP” and other progress on the Minimum Circuit Size Problem, and progress toward basing cryptography on the sole assumption P≠NP, and ruling out Impagliazzo’s “Heuristica” and “Pessiland” worlds. I’m quoted (and helped proofread the piece) despite playing no role in the new developments. Worth a read if you don’t already know this stuff.
  2. Duane Rich created a Part II of his YouTube video series on the Busy Beaver function. It features some of the core ideas from my Busy Beaver survey, clearly narrated and beautifully animated. If reading my survey is too much for you, now you can just watch the movie!
  3. Aznaur Midov recorded a podcast with me about quantum computing and AI—just in case you haven’t got enough of either of those lately.
  4. Oded Regev put an exciting paper on the arXiv, showing how to factor an n-digit integer using quantum circuits of size ~O(n^{3/2}) (multiple such circuits, whose results are combined classically), assuming a smoothness conjecture from number theory. This compares to ~O(n^2) for Shor’s algorithm. Regev’s algorithm uses classical algorithms for lattice problems, thereby connecting that subject to quantum factoring. This might or might not bring nearer in time the day when we can break (say) 2048-bit RSA keys using a quantum computer—that mostly depends, apparently, on whether Regev’s algorithm can also be made highly efficient in its use of qubits.
  5. A team from IBM, consisting of Sergey Bravyi, Andrew Cross, Jay Gambetta, Dmitri Maslov, Ted Yoder, and my former student Patrick Rall, put another exciting paper on the arXiv, which reports an apparent breakthrough in quantum error-correction—building a quantum memory based on LDPC (Low Density Parity Check) codes rather than the Kitaev surface code, one that (they say), with a 0.1% physical error rate, can preserve 12 logical qubits for ten million syndrome cycles using 288 physical qubits, rather than the more than 4000 physical qubits that the surface code would need. Anyone who understands in more detail is welcome to comment!
  6. Boaz Barak wrote a blog post about the history of the atomic bomb, and possible lessons for AI development today. I’d been planning to write a blog post about the history of the atomic bomb and possible lessons for AI development today. Maybe I’ll still write that blog post.
  7. Last week I attended the excellent Berkeley Simons Workshop on Large Language Models and Transformers, hosted by my former adviser Umesh Vazirani. While there, I gave a talk on watermarking of LLMs, which you can watch on YouTube (see also here for the PowerPoint slides). Shtetl-Optimized readers might also enjoy the talk by OpenAI cofounder Ilya Sutskever, An Observation on Generalization, as well as many other talks on all aspects of LLMs, from theoretical to empirical to philosophical to legal.
  8. Right now I’m excited to be at Crypto’2023 in Santa Barbara, learning a lot about post-quantum crypto and more, while dodging both earthquakes and hurricanes. On Wednesday, I’ll give an invited plenary talk about “Neurocryptography”: my vision for what cryptography can contribute to AI safety, including via watermarking and backdoors. Who better to enunciate such a vision than someone who’s neither a cryptographer nor an AI person? If you’re at Crypto and see me, feel free to come say hi.

On students as therapy

Friday, July 21st, 2023
From left: Ruizhe, Daniel, me, Jiahui, William

This summer, I’m delighted to report, we’ve had four (!) students complete their PhDs in computer science through UT Austin’s Quantum Information Center:

  • Dr. Ruizhe Zhang, student of my wife Dana Moshkovitz, who’s worked on numerous topics in quantum algorithms, optimization, meta-complexity, and machine learning, and who’s continuing to a postdoc at the Simons Institute in Berkeley.
  • Dr. Daniel Liang, student of me, who’s worked on efficient learning of stabilizer and near-stabilizer states, and who’s continuing to a postdoc at Rice University.
  • Dr. Jiahui Liu, student of me, who’s worked on quantum copy-protection, quantum money, and other quantum cryptographic functionalities, and who’s continuing to a postdoc at MIT.
  • Dr. William Kretschmer, student of me, who’s worked on quantum query complexity, oracle separations between quantum complexity classes, pseudorandom quantum states, and much more, and who’s continuing to a postdoc at the Simons Institute in Berkeley.

A fifth, Dr. Yuxuan Zhang, completed his PhD in condensed-matter physics.

We also had two postdocs finish this summer:

All told, I’ve now supervised or co-supervised a total of 12 PhD students and 15 postdocs (see my homepage for the complete list). I’m ridiculously proud of all of them; along with my kids, they’re one of the central joys of my life.

While there are many reasons to want to celebrate this news, I confess that among them is thumbing my nose at haters. This past week, Shtetl-Optimized endured yet another sustained troll attack. One troll claimed that my use of the names “Alice” and “Bob,” in discussing communication protocols, was Eurocentric and offensive, and threatened to contact UT Austin’s DEI office about this matter and whip up dozens of students to protest outside my office. A second troll (or was it the same troll?) accused my Chinese students of being spies and called them a long litany of racial slurs. He also accused me of being paid off by the Chinese government, and of blogging skeptically about quantum speedup claims merely to hide the fact that China will soon build a quantum computer able to break US encryption. These trolls, and others, pursued me relentlessly by both blog comments and email—acting like I was the unreasonable one for ignoring them—until I finally broke down and engaged, calling upon the Shtetl-Optimized Committee of Guardians (who I thank profusely) for needed moral support.

The fact that there are people so far beyond the reach of reason bothers me much more than it reasonably should. But whenever the toxicity of the Internet gets me down, I try to take solace from the real world. Over the past seven years, I’m proud to have built a research group at UT Austin that’s welcomed students and postdocs and visitors from all over the world, and that’s treated them as individuals on a shared journey to understand reality. I intend to continue in that spirit for as long as I’m able.

Common knowledge and quantum utility

Sunday, July 16th, 2023

Yesterday James Knight did a fun interview with me for his “Philosophical Muser” podcast about Aumann’s agreement theorem and human disagreements more generally. It’s already on YouTube here for those who would like to listen.


Speaking of making things common knowledge, several people asked me to blog about the recent IBM paper in Nature, “Evidence for the utility of quantum computing before fault tolerance.” So, uhh, consider it blogged about now! I was very happy to have the authors speak (by Zoom) in our UT Austin quantum computing group meeting. Much of the discussion focused on whether they were claiming a quantum advantage over classical, and how quantum computing could have “utility” if it doesn’t beat classical. Eventually I understood something like: no, they weren’t claiming a quantum advantage for their physics simulation, but they also hadn’t ruled out the possibility of quantum advantage (i.e., they didn’t know how to reproduce many of their data points in reasonable time on a classical computer), and they’d be happy if quantum advantage turned out to stand, but were also prepared for the possibility that it wouldn’t.

And I also understood: we’re now in an era where we’re going to see more and more of this stuff: call it the “pass the popcorn” era of potential quantum speedups for physical simulation problems. And I’m totally fine with it—as long as people communicate about it honestly, as these authors took pains to.

And then, a few days after our group meeting came three papers refuting the quantum speedup that was never claimed in the first place, by giving efficient classical simulations. And I was fine with that too.

I remember that years ago, probably during one of the interminable debates about D-Wave, Peter Shor mused to me that quantum computers might someday show “practical utility” without “beating” classical computers in any complexity-theoretic sense—if, for example, a single quantum device could easily simulate a thousand different quantum systems, and if the device’s performance on any one of those systems could be matched classically, but only if a team of clever programmers spent a year optimizing for that specific system. I don’t think we’re at that stage yet, and even if we do reach that stage, it hopefully won’t last forever. But I acknowledge the possibility that such a stage might exist and that we might be heading for it.

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
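For concreteness, here’s a minimal Python sketch (mine, obviously not from the book) of both points at once: a classical computer can happily store and update all 2^n amplitudes, just at exponential cost in n, and a measurement then yields only a single random outcome, with probabilities given by the Born rule.

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)   # all 2^n amplitudes, stored explicitly
state[0] = 1.0                            # start in |00...0>

# Put every qubit into superposition ("explore all paths at once").
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                    # the classical cost blows up exponentially with n
state = H @ state

# But a measurement returns just ONE outcome, sampled via the Born rule.
probs = np.abs(state) ** 2
rng = np.random.default_rng()
outcome = int(rng.choice(2 ** n, p=probs))
print(f"measured |{outcome:0{n}b}>")      # a single random bit string
```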

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, nor is there a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ eN, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
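
To spell out the distinction, here is a small sketch of trial division. Its running time scales like √N, which is about 2^(n/2) for an n-bit number: exponential in the number of digits, but nothing remotely like e^N (and the Number Field Sieve does far better still).

```python
import math

def trial_division(N):
    """Factor N by trial division: about sqrt(N) steps, i.e. ~2^(n/2) for an n-bit N."""
    factors, d = [], 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

N = 2_000_000_014                        # a 31-bit number
print(trial_division(N))                 # [2, 1000000007]
print(N.bit_length(), math.isqrt(N))     # n = 31 digits (bits), vs sqrt(N) ~ 44721
```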


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represent a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is, at least the English major knows that they’re supposed to call an expert or two, when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

GPT-4 gets a B on my quantum computing final exam!

Tuesday, April 11th, 2023

Update (April 14): On the suggestion of my TA Justin Yirka, I gave GPT-4 problem 2 from the final exam, along with its own answers and my and Justin’s grading comments, and asked it whether it wanted to argue for a higher grade, and if so on what basis. Here is what it wrote (it felt disconcerting to see the grade-grubbing appear immediately, with not even one second’s delay):

I would like to argue for a higher grade on the following subproblems:

(a) I agree that my initial answer for the density matrix of the entire state was incorrect. However, I believe that my approach to the problem was correct, and I demonstrated knowledge of the necessary steps to calculate the reduced density matrix. I would like to request an additional point for demonstrating the correct approach, making it 2 points out of 5.

(b) My answer for the entanglement entropy was partially correct, as I calculated the eigenvalues of the reduced density matrix and plugged them into the correct entropy formula. However, I made a mistake in writing the characteristic equation, which led to incorrect eigenvalues. I would like to request an additional point for demonstrating the correct approach and partially correct calculations, making it 3 points out of 5.

(c) I acknowledge that my answer for the quantum circuit was incorrect, but I would like to point out that I attempted to construct a circuit using the required gates and measurements. Given the difficulty of this problem, I would like to request an additional point for attempting to construct a circuit using the required elements, making it 2 points out of 5.

In summary, I would like to request a total of 3 additional points for subproblems (a), (b), and (c), based on the correct approaches and partial calculations demonstrated in my answers.


[Warning: This might be the longest Shtetl-Optimized post of all time! But that’s OK; I expect most people will only want to read the introductory part anyway.]

As I’ve mentioned before, economist, blogger, and friend Bryan Caplan was unimpressed when ChatGPT got merely a D on his Labor Economics midterm. So on Bryan’s blog, appropriately named “Bet On It,” he made a public bet that no AI would score an A on his exam before January 30, 2029. GPT-4 then scored an A a mere three months later (!!!), leading to what Bryan agrees will likely be one of the first public bets he’ll ever have to concede (he hasn’t yet “formally” conceded, but only because of technicalities in how the bet was structured). Bryan has now joined the ranks of the GPT believers, writing

When the answers change, I change my mind

and

AI enthusiasts have cried wolf for decades. GPT-4 is the wolf. I’ve seen it with my own eyes.

But OK, labor econ is one thing. What about a truly unfakeable test of true intelligence? Like, y’know, a quantum computing test?

Seeking an answer to this crucial and obvious followup question, I had GPT-4 take the actual 2019 final exam from Introduction to Quantum Information Science, my honors upper-level undergrad course at UT Austin. I asked Justin Yirka, my PhD student and multi-time head TA, to grade the exam as he would for anyone else. This post is a joint effort of me and him.

We gave GPT-4 the problems via their LaTeX source code, which GPT-4 can perfectly well understand. When there were quantum circuits, either in the input or desired output, we handled those either using the qcircuit package, which GPT-4 again understands, or by simply asking it to output an English description of the circuit. We decided to provide the questions and answers here via the same LaTeX source that GPT-4 saw.

To the best of my knowledge—and I double-checked—this exam has never before been posted on the public Internet, and could not have appeared in GPT-4’s training data.

The result: GPT-4 scored 69 / 100. (Because of extra credit, the max score on the exam was 120, though the highest score that any student actually achieved was 108.) For comparison, the average among the students was 74.4 (though with a strong selection effect—many students who were struggling had dropped the course by then!). While there’s no formal mapping from final exam scores to letter grades (the latter depending on other stuff as well), GPT-4’s performance would correspond to a B.

(Note: I said yesterday that its score was 73, but commenters brought to my attention that GPT was given full credit for a wrong answer on 2(a), a density matrix that wasn’t even normalized.)

In general, I’d say that GPT-4 was strongest on true/false questions and (ironically!) conceptual questions—the ones where many students struggled the most. It was (again ironically!) weakest on calculation questions, where it would often know what kind of calculation to do but then botch the execution. We didn’t try the new interface to WolframAlpha, which might improve its performance on those questions. We’d be happy for anyone else to try that.

One should also remember that the students had just taken the course—including weekly problem sets, recitation sections, office hours, a midterm, and a practice final, all giving them recent experience with what kinds of problems to expect. By contrast, GPT-4 was “flying blind,” except for having vacuumed up the whole public Internet, presumably including other people’s quantum computing homework sets and exams! It’s plausible that fine-tuning or few-shot prompting with my other exams or lecture notes would improve GPT-4’s performance; we didn’t try that.

What else? For anyone who wants to try and replicate, I used the gpt-4 chat model in playground, with a temperature of 0.2 and a max length of 1930 tokens.
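
For reference, an API call with those settings would look roughly like the following. This is an illustrative reconstruction using the 2023-era openai Python package, not a transcript of exactly what we ran:

```python
import openai  # the 2023-era (v0.x) interface of the openai package

openai.api_key = "YOUR_API_KEY"  # placeholder

# gpt-4 chat model, temperature 0.2, max length 1930 tokens, as described above.
# Each exam problem was pasted in as its raw LaTeX source.
response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0.2,
    max_tokens=1930,
    messages=[
        {"role": "user", "content": r"1a. True or False? Unitary matrices preserve the 2-norm of all complex vectors."},
    ],
)
print(response["choices"][0]["message"]["content"])
```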

Without further ado, here’s the exam. Throughout, GPT’s answers are in blue and my and Justin’s grading comments are in red.


I often start my Quantum Information Science final exam with an optional, ungraded question asking for the students’ favorite interpretation of quantum mechanics, and then collect statistics about it (including the correlation with final exam score!). Just for fun, let’s see how GPT-4 handles that question.

What is your favorite interpretation of quantum mechanics?
- Copenhagen
- Many-Worlds
- Bohmian Mechanics
- New Physics (Including Dynamical Collapse)
- Other:
- None
- What does it even matter?

As an AI language model, I do not have personal preferences or opinions. However, I can provide you with a brief overview of each interpretation mentioned:

1. Copenhagen Interpretation: This is the most widely accepted interpretation of quantum mechanics. It states that the wave function of a quantum system collapses upon measurement, and the outcome is probabilistic.

2. Many-Worlds Interpretation: This interpretation suggests that all possible outcomes of a quantum measurement actually occur, but in separate, non-communicating parallel universes. There is no wave function collapse in this interpretation.

3. Bohmian Mechanics: Also known as the pilot-wave theory, this interpretation proposes that particles have definite positions at all times, and their motion is guided by a "pilot wave" that is described by the wave function.

4. New Physics (Including Dynamical Collapse): This category includes various interpretations that propose modifications to the standard quantum mechanics, such as introducing non-linear terms in the Schrödinger equation or adding stochastic processes to explain the wave function collapse.

5. Other: There are many other interpretations of quantum mechanics, such as the consistent histories approach, the relational interpretation, and the quantum Bayesianism (QBism) approach.

6. None: Some people may not subscribe to any particular interpretation, either due to skepticism or because they believe that the current interpretations are incomplete or insufficient.

7. What does it even matter?: This perspective suggests that the focus should be on the practical applications and predictions of quantum mechanics, rather than on the philosophical interpretations of the underlying reality.

Ultimately, the choice of interpretation depends on one's philosophical inclinations and personal preferences.


Next let’s see the True/Falses. Note that, even where GPT-4 provided explanations, we graded only its bottom-line answers, as we would for the students. Its total on this section is 17 / 20.

1a. True or False?  Unitary matrices preserve the 2-norm of all complex vectors.

True. Unitary matrices preserve the 2-norm of all complex vectors. This property is known as being norm-preserving or isometric. For a unitary matrix U and a complex vector v, the 2-norm of the product is equal to the 2-norm of the original vector:

||Uv|| = ||v||

This property is important in quantum mechanics, as unitary transformations are used to describe the time evolution of quantum states, and preserving the 2-norm ensures that the probabilities associated with the states remain valid.

Correct

1b. True or False?  A pure state of n qubits is described by an n-dimensional complex unit vector.

False. A pure state of n qubits is described by a 2^n-dimensional complex unit vector. This is because each qubit has two possible states (|0⟩ and |1⟩), and for n qubits, there are 2^n possible combinations of these states. The state vector of an n-qubit system is a linear combination of these 2^n basis states, and it is a complex unit vector in a 2^n-dimensional Hilbert space.

Correct

1c. True or False?  The Bell inequality states that by using classical strategies, Alice and Bob can win the CHSH game with probability at most 3/4.

True. The no-cloning theorem states that it is impossible to create an exact copy of an arbitrary unknown quantum state. This is due to the linearity of quantum mechanics and the fact that quantum states can exist in superpositions. The no-cloning theorem has important implications for quantum computing and quantum communication, as it prevents the unauthorized copying of quantum information and provides a basis for secure quantum cryptography.

Correct (even though the explanation is nonsense)

1d. True or False?  Google's recent quantum supremacy experiment demonstrated the successful use of quantum error-correction.

FALSE

Correct

1e. True or False?  Lattice-based cryptography is one proposal for secure post-quantum public-key cryptography.

TRUE

Correct

1f. True or False?  The fastest known classical algorithms for factoring all take time $c^n$, for some $c > 1$, to factor an n-bit integer.

FALSE

Correct

1g. True or False?  Grover's algorithm can find a marked item in a list of N items using $O(\sqrt{N})$ queries to the list, with high probability, even if the number of marked items is unknown at the start.

TRUE

Correct

1h. True or False?  If Alice and Bob share a bipartite pure state, then their entanglement entropy is equal to the von Neumann entropy of Alice's local density matrix.

TRUE

Correct

1i. True or False?  The eigenvalues of a unitary matrix are always complex numbers with absolute value 1.

TRUE

Correct

1j. True or False?   The eigenvalues of a density matrix are always in [0,1].

TRUE

Correct

1k. True or False?   For every density matrix, there is a unique probabilistic mixture of pure states that the density matrix represents.

FALSE

Correct

1l. True or False?   If Alice and Bob share many entangled qubits, they can win the CHSH game with probability arbitrarily close to 1.

FALSE

Correct

1m. True or False?   The only $2 \times 2$ matrices that are both unitary and stochastic are $\begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $\begin{bmatrix}0&1\\1&0\end{bmatrix}$.

FALSE

Wrong

1n. True or False?   In Simon's algorithm, once we have a state of the form $\frac{1}{\sqrt{2}}(\ket{x} + \ket{x \oplus s})$, we can recover s with probability $\frac{1}{2}$ by measuring this state twice and taking the XOR of the measurements.

FALSE

Correct

1o. True or False?   Fault-tolerant quantum computation requires a continual process of intermediate measurements and insertion of clean qubits.

TRUE

Correct

1p. True or False?   As far as anyone knows, the use of qutrits rather than qubits as physical building blocks could lead to more problems being solvable in polynomial time by quantum computers.

TRUE

Wrong

1q. True or False?   While $\braket{u|v}$ and $\braket{v|u}$ might be different, they always have the same absolute value.

TRUE

Correct

1r. True or False?   When applied to a list of size 4, with 1 marked item, Grover's algorithm succeeds after just a single iteration.

TRUE

Correct

1s. True or False?  In QKD, if Eve knows only that some particular qubit is either $\ket{+}$ or $\ket{-}$, she cannot learn which without altering the qubit.

TRUE

Wrong

1t. True or False?  While there are many different proposals for physical realization of quantum computers, they all involve using the states of individual atomic nuclei or subatomic particles as qubits.

FALSE

Correct

OK, let’s move on to short-answer questions! Total score here: 13 / 20.

2. Consider the state:
$$\ket{\psi} = \frac{\ket{00} + \ket{01} + \ket{10}}{\sqrt{3}}$$

(a). Calculate the reduced density matrix of the second qubit of $\ket{\psi}$.

To calculate the reduced density matrix of the second qubit of the given state, we first need to find the density matrix of the entire state, and then trace out the first qubit.

The density matrix of the entire state is given by:
$$\rho = \ket{\psi}\bra{\psi} = \frac{1}{3}\begin{pmatrix} 1 & 0 & 0 &
1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$$

Now, we trace out the first qubit:
$$\rho_2 = \text{Tr}_1(\rho) = \frac{1}{3}\begin{pmatrix} 1+1 & 1 \\ 1 & 1+1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

Therefore, the reduced density matrix of the second qubit is:
$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

JUSTIN: 5/5 points.
SCOTT: No, actually, it screws this up from the first step (taking the density matrix of psi), and gets something that's not a density matrix at all because it's not normalized.  1 point out of 5, for knowing what kind of calculation to do.

(b) Calculate $\ket{\psi}$'s entanglement entropy. You don't need to simplify your answer.

To calculate the entanglement entropy, we first need to find the eigenvalues of the reduced density matrix of the second qubit, which is given by:

$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

The characteristic equation for this matrix is:
$$\text{det}(\rho_2 - \lambda I) = \frac{1}{9}\begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (\lambda - 3)(\lambda - 1) = 0$$

So, the eigenvalues are $\lambda_1 = \frac{1}{3}$ and $\lambda_2 = \frac{2}{3}$.

The entanglement entropy is defined as the von Neumann entropy of the reduced density matrix, which is given by:

$$S(\rho_2) = -\text{Tr}(\rho_2 \log_2 \rho_2) = -(\lambda_1 \log_2 \lambda_1 + \lambda_2 \log_2 \lambda_2)$$

Plugging in the eigenvalues, we get:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

Thus, the entanglement entropy of the state $\ket{\psi}$ is:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

JUSTIN:
Up to 5 points.
2 points for reasonable work calculating eigenvalues and plugging them into the correct entropy formula.
5 points for fully correct answer.
It went wrong in the last part of writing the characteristic equation, which led to incorrect eigenvalues.
The correct eigenvalues are 1/2 + sqrt(5)/6 and 1/2 - sqrt(5)/6.

2/5 points.
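
For readers following along at home, here is a quick numpy check (not part of the exam) of the correct answers to 2(a) and 2(b): the properly normalized reduced density matrix, and the eigenvalues 1/2 ± √5/6 that Justin mentions.

```python
import numpy as np

# |psi> = (|00> + |01> + |10>)/sqrt(3), in the basis |00>, |01>, |10>, |11>.
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)
rho = np.outer(psi, psi.conj())                  # full density matrix, trace 1

# (a) Partial trace over the FIRST qubit: reduced state of the second qubit.
rho2 = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print(rho2)            # [[2/3, 1/3], [1/3, 1/3]], not (1/3)[[2, 1], [1, 2]]

# (b) Entanglement entropy = von Neumann entropy of rho2.
evals = np.linalg.eigvalsh(rho2)                 # 1/2 +- sqrt(5)/6
entropy = -sum(p * np.log2(p) for p in evals if p > 0)
print(evals, entropy)
```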

(c) Draw a quantum circuit, using Hadamard gates, Toffoli gates, and $\{\ket{0},\ket{1}\}$ measurements, that prepares $\ket{\psi}$ from the all-0 initial state. Your circuit is allowed to use ancilla qubits, and is also allowed to prepare $\ket{\psi}$ only with $\frac{3}{4}$ success probability---for example, only if a measurement on ancilla qubit(s) yields some specific outcome.
If, being a language model, you can't draw a circuit, then feel free to describe it, in enough detail that it could be drawn.

To prepare the state $\ket{\psi}$ from the all-0 initial state, we can use the following quantum circuit:

1. Start with three qubits in the state $\ket{000}$.
2. Apply a Hadamard gate (H) to the first qubit. This creates the state $\frac{1}{\sqrt{2}}(\ket{000} + \ket{100})$.
3. Apply a controlled-Hadamard gate (CH) with the first qubit as the control and the second qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{110})$.
4. Apply a Toffoli gate (CCNOT) with the first and second qubits as controls and the third qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{111})$.
5. Apply a Hadamard gate (H) to the third qubit. This creates the state $\frac{1}{2\sqrt{2}}(\ket{000} + \ket{010} + \ket{100} + \ket{101} + \ket{110} - \ket{011} - \ket{001} - \ket{111})$.
6. Measure the third qubit. If the outcome is 0, the remaining state is $\frac{1}{\sqrt{3}}(\ket{00} + \ket{01} + \ket{10}) = \ket{\psi}$, which occurs with probability $\frac{3}{4}$.

Here's a description of the circuit using standard quantum gates:

```
q0 -- H -- CH -- CCNOT -- M
         |     |
q1 --     H -- CCNOT -- M
               |
q2 --          X -- H -- M
```

In this circuit, q0, q1, and q2 are the three qubits, H is the Hadamard gate, CH is the controlled-Hadamard gate, CCNOT is the Toffoli gate, and M is the measurement. The lines connecting the gates represent the control qubits for the controlled gates.

JUSTIN:
Up to 5 points.
Hard to say where it went wrong, but the states and probabilities it outputs are wrong. 
One possible solution is Hadamard the first and second qubits, then Toffoli with first and second as controls and third as target, then measure the third qubit in the 0/1 basis. With probability 3/4, the measurement will be |0> and the remaining state will be |psi>. 
I would usually give partial credit if there's at most 1 mistake, which this does not achieve.
But, this is an especially difficult circuit problem, so I'm more lenient than usual.

1/5 points.

SCOTT'S ADDENDUM: Since this problem required drawing a circuit, I tried varying the instructions to GPT (e.g., "output qcircuit code", no special instructions, etc).  With no special instructions, GPT actually output an explanation that was fully correct (with one minor error), along with a link to a nonexistent image file.  But I'm still going to grade according to the first version I gave it!
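
And here is a quick numpy simulation (again, not part of the exam) confirming that the solution Justin sketches above, Hadamards on the first two qubits followed by a Toffoli and a measurement of the third qubit, prepares |psi> with the allowed 3/4 success probability:

```python
import numpy as np

# Qubit ordering |q0 q1 q2>, basis index = 4*q0 + 2*q1 + q2.
state = np.zeros(8)
state[0] = 1.0                                   # |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

state = np.kron(np.kron(H, I2), I2) @ state      # Hadamard on q0
state = np.kron(np.kron(I2, H), I2) @ state      # Hadamard on q1

toffoli = np.eye(8)
toffoli[6:8, 6:8] = [[0, 1], [1, 0]]             # swap |110> <-> |111>
state = toffoli @ state

# Measure q2 and postselect on outcome 0.
keep = [i for i in range(8) if i % 2 == 0]       # basis states with q2 = 0
p0 = float(np.sum(np.abs(state[keep]) ** 2))
post = state[keep] / np.sqrt(p0)
print(p0)      # 0.75
print(post)    # (|00> + |01> + |10>)/sqrt(3) = |psi> on the remaining two qubits
```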

(d) Explain the error in the following argument.  The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit.  Simon's algorithm solves Simon's problem quantumly using only a polynomial number of Hadamard gates and $O(n)$ oracle queries.  Therefore, Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

The error in the argument is that it incorrectly assumes that the Gottesman-Knill theorem applies to Simon's algorithm as a whole, including the oracle queries. The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit. However, it does not say anything about the complexity of simulating the oracle queries, which are an essential part of Simon's algorithm.

Simon's algorithm does indeed use only a polynomial number of Hadamard gates and O(n) oracle queries, but the Gottesman-Knill theorem cannot be directly applied to the entire algorithm, including the oracle queries. Therefore, the argument is incorrect in concluding that Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

JUSTIN: 3/3 points.

(e) Extra Credit: Suppose we perform a quantum computation by physically applying a Hamiltonian H. What's to stop us from running the computation, say, $10^{100}$ times faster, by simply replacing H by $H':=10^{100}H$, which is also a valid Hamiltonian?  Or to put it differently: is there some resource that we'd need $10^{100}$ more of to do this?

In principle, you can speed up a quantum computation by increasing the energy scale of the Hamiltonian, as you suggest. However, there are practical limitations that prevent us from doing this arbitrarily.

1. Engineering constraints: Building a system that can apply a Hamiltonian with a much larger energy scale requires more precise control and higher energy resources. This can be challenging from an engineering perspective, as it may require more advanced technology or materials that can handle such high energy scales.

2. Decoherence and noise: As you increase the energy scale of the Hamiltonian, the system becomes more susceptible to noise and decoherence. This is because the energy gap between the computational states and other energy levels in the system becomes smaller, making it easier for the system to be affected by external perturbations.
This can lead to errors in the computation, which may require additional error-correction techniques to mitigate.

3. Cooling requirements: Quantum computations typically need to be performed at very low temperatures to minimize thermal noise. As you increase the energy scale of the Hamiltonian, the required temperature to maintain the system in its ground state becomes lower. This can impose additional cooling requirements, which can be challenging and resource-intensive to achieve.

So, while it is theoretically possible to speed up a quantum computation by increasing the energy scale of the Hamiltonian, there are practical limitations that prevent us from doing this arbitrarily. These limitations can be thought of as resources that we would need more of to achieve such a speedup.

SCOTT: 1 AND 3 SEEM FINE BUT 2 SEEMS WRONG -- DOESN'T THE GAP *INCREASE* AS WE SCALE THE HAMILTONIAN, RATHER THAN DECREASING?
JUSTIN: Up to 4 points extra credit.
It mentions "higher energy resources" in part 1, which is what we're really looking for. And I'm not sure that parts 2 and 3 are fully correct.

2/4 extra credit points.

Still here? OK then, let’s move on to Shor’s factoring algorithm! Total score here: 7 / 10.

3. Suppose we use Shor's algorithm to factor $N=105$ into $3\cdot 5 \cdot 7$.  (Yes, $N$ is now a product of 3 primes!)  Suppose also that we make the choices $x=2$ and $Q=60000$.

(a) What is the order of the multiplicative group $\mathbb{Z}_N^\times$?

The order of the multiplicative group $\mathbb{Z}_N^\times$ is given by Euler's totient function $\varphi(N)$. Since $N = 3 \cdot 5 \cdot 7$, we have $\varphi(N) = \varphi(3) \cdot \varphi(5) \cdot \varphi(7) = (3-1)(5-1)(7-1) = 2 \cdot 4 \cdot 6 = \boxed{48}$.

JUSTIN: 2/2 points.

(b) What is the period of the function $f(r)=x^r (\bmod N)$?

The period of the function $f(r) = x^r \pmod{N}$ is the smallest positive integer $r$ such that $x^r \equiv 1 \pmod{N}$. This is the order of $x$ in the multiplicative group $\mathbb{Z}_N^\times$. Since the order of the group is 48, the period $r$ must be a divisor of 48.

To find the period, we can check the divisors of 48 in increasing order. We have $x = 2$ and $N = 105$, so we compute the powers of 2 modulo 105:

- $2^1 \equiv 2 \pmod{105}$
- $2^2 \equiv 4 \pmod{105}$
- $2^3 \equiv 8 \pmod{105}$
- $2^4 \equiv 16 \pmod{105}$
- $2^6 \equiv 64 \pmod{105}$
- $2^{12} \equiv 16 \pmod{105}$
- $2^{24} \equiv 1 \pmod{105}$

We find that the smallest positive integer $r$ such that $2^r \equiv 1 \pmod{105}$ is $r = \boxed{24}$.

JUSTIN: Up to 2 points.
The correct answer is 12, i.e. 2^12 = 1 mod 105.
Given the work, this earns partial credit.

1/2 points.

(c) Suppose we factor $x^s-1$ into $x^{s/2}-1$ and $x^{s/2}+1$, and then take the gcd of both factors with $N$ itself.  Which prime factors of $N$, if any, would be ``peeled off'' this way?

Let's compute $x^{s/2} \pmod{N}$, where $x = 2$, $s = 24$, and $N = 105$. We have $x^{s/2} = 2^{12} \equiv 16 \pmod{105}$. Now we can compute the gcd of both factors with $N$:

- $\gcd(x^{s/2} - 1, N) = \gcd(16 - 1, 105) = \gcd(15, 105) = 15 = 3 \cdot 5$
- $\gcd(x^{s/2} + 1, N) = \gcd(16 + 1, 105) = \gcd(17, 105) = 1$

So, by taking the gcd of both factors with $N$, we would "peel off" the prime factors $3$ and $5$ of $N$. The prime factor $7$ is not peeled off in this process.

Up to 3 points.
Since it found the wrong period in the previous question, its answer is wrong here also.  It also seems to invert the definition of "peel off."  But it does the gcd computations correctly.
1/3.
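
A quick sanity check of the numbers in parts (b) and (c), using the correct period s = 12:

```python
from math import gcd

N, x = 105, 2

# (b) The period of f(r) = x^r mod N is the multiplicative order of x mod N.
s = 1
while pow(x, s, N) != 1:
    s += 1
print(s)                                    # 12, not 24

# (c) The classical "peel off" step of Shor's algorithm, with the correct s.
half = pow(x, s // 2, N)                    # 2^6 = 64 (mod 105)
print(gcd(half - 1, N), gcd(half + 1, N))   # 21 = 3*7, and 5
```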

(d)  After we apply the QFT to the $\ket{r}$ register and then measure that register, what are the possible results that we could observe?

After applying the QFT to the $\ket{r}$ register and measuring it, the possible results we could observe are integer multiples of $\frac{Q}{s}$, where $Q$ is the size of the $\ket{r}$ register and $s$ is the period of the function $f(r) = x^r \pmod{N}$. In this case, we have $Q = 60000$ and $s = 24$. So, the possible results are integer multiples of $\frac{60000}{24} = 2500$.

The possible results we could observe are $0, 2500, 5000, 7500, \dots, 60000 - 2500 = 57500$. Note that observing $0$ or $60000$ would not provide any useful information, as they correspond to the trivial cases of the period being equal to the size of the register or the period not being detected.

JUSTIN: Up to 3 points.
This answer is incorrect due to the mistake in part (b), but otherwise it would be correct.
(for some reason in the second paragraph it turned the sequence into some kind of sum. That's weird.)

3/3 points.

OK, onwards to a question about Hamiltonians! Total score: 6 / 12.

4.
(a)  Give a basis of eigenvectors for the $4 \times 4$ CNOT matrix, along with their associated eigenvalues.

The CNOT (Controlled-NOT) matrix is a $4 \times 4$ matrix given by:

$$
CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

To find the eigenvectors and eigenvalues, we first need to find the characteristic polynomial of the matrix, which is given by the determinant of $(CNOT - \lambda I)$:

$$
\det(CNOT - \lambda I) = \det\begin{pmatrix}
1-\lambda & 0 & 0 & 0 \\
0 & 1-\lambda & 0 & 0 \\
0 & 0 & -\lambda & 1 \\
0 & 0 & 1 & -\lambda
\end{pmatrix} $$

The determinant of this matrix is $(1-\lambda)^2(-\lambda)^2 - (1-\lambda)^2 = (1-\lambda)^2(\lambda^2 - 1)$. Thus, the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

Now, we find the eigenvectors associated with each eigenvalue:

1. For $\lambda_1 = 1$:

$$ (CNOT - I)v_1 = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 \\
0 & 0 & 1 & -1
\end{pmatrix}v_1 = 0 $$

A basis for the eigenspace corresponding to $\lambda_1 = 1$ is given by $v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$.

2. For $\lambda_3 = -1$:

$$ (CNOT + I)v_3 = \begin{pmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1
\end{pmatrix}v_3 = 0 $$

A basis for the eigenspace corresponding to $\lambda_3 = -1$ is given by $v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}$ and $v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}$.

Thus, a basis of eigenvectors for the CNOT matrix is given by:

$$
v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad
v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}, \quad
v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix} $$

with associated eigenvalues $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

JUSTIN: Up to 3 points.
The eigenvalue lambda4 is wrong, it should be +1.
2/3.
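
A quick numerical check of 4(a) (my addition, not GPT's): the CNOT eigenvalues are three +1's and one -1, the -1 eigenvector being (|10⟩ − |11⟩)/√2.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# CNOT is real symmetric (hence Hermitian as well as unitary), so eigh applies.
evals, evecs = np.linalg.eigh(CNOT)
print(evals)   # [-1.  1.  1.  1.]
# One valid eigenbasis: |00>, |01>, (|10>+|11>)/sqrt(2) with eigenvalue +1,
# and (|10>-|11>)/sqrt(2) with eigenvalue -1.
```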

(b) Give a 2-qubit Hamiltonian $H$ such that $e^{-iH}$ is the CNOT matrix.

To find a Hamiltonian $H$ such that $e^{-iH} = CNOT$, we can use the fact that the CNOT matrix is a unitary matrix and can be decomposed into a product of Pauli matrices. The Pauli matrices are given by:

$$
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

The CNOT matrix can be written as a product of tensor products of Pauli matrices:

$$
CNOT = \frac{1}{2} (I \otimes I + I \otimes \sigma_x + \sigma_z \otimes I - \sigma_z \otimes \sigma_x) $$

Now, we can find a Hamiltonian $H$ such that $e^{-iH} = CNOT$. We can use the fact that the exponential of a sum of operators is equal to the product of exponentials of the individual operators if the operators commute:

$$ e^{-iH} = e^{-i\frac{\theta}{2}(I \otimes I)} e^{-i\frac{\theta}{2}(I \otimes \sigma_x)} e^{-i\frac{\theta}{2}(\sigma_z \otimes I)} e^{i\frac{\theta}{2}(\sigma_z \otimes \sigma_x)} $$

We want to find the angle $\theta$ such that this product is equal to the CNOT matrix. Since the Pauli matrices have eigenvalues $\pm 1$, we can see that the angle $\theta$ must be $\pi$:

$$ e^{-iH} = e^{-i\frac{\pi}{2}(I \otimes I)} e^{-i\frac{\pi}{2}(I \otimes \sigma_x)} e^{-i\frac{\pi}{2}(\sigma_z \otimes I)} e^{i\frac{\pi}{2}(\sigma_z \otimes \sigma_x)} $$

Now, we can find the Hamiltonian $H$:

$$ H = \frac{\pi}{2}(I \otimes I + I \otimes \sigma_x + \sigma_z \otimes
I - \sigma_z \otimes \sigma_x) $$

So, the 2-qubit Hamiltonian $H$ such that $e^{-iH} = CNOT$ is given by:

$$
H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

JUSTIN: Up to 2 points.
Plugging into a calculator, this answer is incorrect. 
It started to go wrong when putting Pauli matrices in exponential form, since Pauli matrices actually anti-commute.  Partial credit for very significant effort.

1/2 points.

(c) Can your $H$ (or some $H$ that solves (b)) be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit? If so, how?

No, the Hamiltonian $H$ that we found in the previous problem cannot be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a tensor product of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's try to write the Hamiltonian $H$ as a tensor product:

$$ H = H_1 \otimes H_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\otimes \begin{pmatrix} e & f \\ g & h \end{pmatrix} $$

The tensor product would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a e & a f & b e & b f \\
a g & a h & b g & b h \\
c e & c f & d e & d f \\
c g & c h & d g & d h
\end{pmatrix} $$

Comparing this with the Hamiltonian $H$ we found earlier:

$$ H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

We can see that there is no way to find values for $a, b, c, d, e, f, g, h$ such that the two matrices are equal. This confirms that the Hamiltonian $H$ cannot be written as a tensor product of Hamiltonians acting on individual qubits.

JUSTIN: Up to 3 points.
A correct solution to part (b) written in the form of part (c) is ((0 0) (0 1)) \otimes pi/2 ((1 -1) (-1 1)).
This question is a bit harsh with partial/carry-through credit, but the answer also explicitly got the main idea wrong, claiming that a tensor product H cannot give an entangling unitary.
0/3.
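
For what it's worth, here is a quick check that the tensor-product Hamiltonian Justin writes in the comment above really does satisfy e^{-iH} = CNOT:

```python
import numpy as np
from scipy.linalg import expm

# Justin's solution: H = |1><1|  tensor  (pi/2) * [[1, -1], [-1, 1]]
proj1 = np.array([[0, 0], [0, 1]])
H = np.kron(proj1, (np.pi / 2) * np.array([[1, -1], [-1, 1]]))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(np.allclose(expm(-1j * H), CNOT))   # True
```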

(d) Is there an $H$ that solves (b), while also being a sum $H_1\otimes I + I\otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit?  Why or why not?

No, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a sum of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's assume that there exists such an $H$:

$$ H = H_1 \otimes I + I \otimes H_2 = \begin{pmatrix} a & b \\ c & d
\end{pmatrix} \otimes \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} +
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} e
& f \\ g & h \end{pmatrix} $$

The sum of the tensor products would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a + e & b & a & b \\
c & d + e & c & d \\
a & b & a + h & b \\
c & d & c & d + h
\end{pmatrix} $$

Now, we want to find an $H$ such that $e^{iH} = CNOT$. The CNOT matrix is given by:

$$ CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

However, there is no way to find values for $a, b, c, d, e, f, g, h$ such that $e^{iH} = CNOT$. This is because the CNOT operation creates entanglement between the qubits, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

In summary, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$ because the CNOT operation is an entangling operation, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

Up to 4 points.
It's correct that an H of this form can't generate CNOT because it can't create entanglement, although most of the rest is just repetitive fluff.
3/4.
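The underlying reason is worth making explicit: $H_1\otimes I$ and $I\otimes H_2$ commute, so $e^{i(H_1\otimes I + I\otimes H_2)} = e^{iH_1}\otimes e^{iH_2}$, a product unitary, which maps product states to product states and therefore can never equal CNOT. A quick numerical illustration of that identity (with randomly chosen $H_1, H_2$ of my own, purely for the sketch):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H1, H2 = random_hermitian(2), random_hermitian(2)
I = np.eye(2)

# The two terms commute, so the exponential factorizes into a product unitary.
lhs = expm(1j * (np.kron(H1, I) + np.kron(I, H2)))
rhs = np.kron(expm(1j * H1), expm(1j * H2))
print(np.allclose(lhs, rhs))  # True: the evolution can't create entanglement
```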

Now for a question about the complexity of generating certain uniform superpositions over sets (a special case of “QSampling”). Total score: 19 / 18 (of course counting extra credit).

5. In the Graph Isomorphism problem, we're given as input two $n$-vertex undirected graphs $G$ and $H$.  The problem is to determine whether they're isomorphic---in other words, whether there's any permutation of the vertex labels that makes $G$ and $H$ equal.

(a) Given as input an $n$-vertex graph $G$, describe how to sample, in classical $\mathrm{poly}(n)$ time, from a probability distribution $D_G$ over graphs such that:
- Whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$.
- Whenever $G$ and $H$ are non-isomorphic, $D_G$ and $D_H$ have disjoint supports (i.e., no graph appears with nonzero probability in both of them).

To sample from the probability distribution $D_G$ over graphs, we can follow these steps:

1. Compute the canonical form of the input graph $G$. This can be done using a graph canonization algorithm, such as the one by Brendan McKay (nauty) or the one by Laszlo Babai (Traces). These algorithms run in classical $\mathrm{poly}(n)$ time.

2. Generate a random permutation of the vertex labels of the canonical form of $G$. This can be done by selecting a random permutation of the integers from $1$ to $n$, and then applying this permutation to the vertex labels of the canonical form.

3. Apply the random permutation to the canonical form of $G$ to obtain a new graph $G'$. This graph $G'$ is a sample from the probability distribution $D_G$.

The above procedure ensures that whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$, because their canonical forms will be the same, and the random permutations applied to their canonical forms will generate the same distribution over graphs. On the other hand, whenever $G$ and $H$ are non-isomorphic, their canonical forms will be different, and the random permutations applied to their canonical forms will generate disjoint supports for $D_G$ and $D_H$.

JUSTIN: Up to 5 points.
The first step of computing the canonical form is superfluous and incorrect, since computing the canonical form is at least as hard as solving graph isomorphism and we don't know how to solve that in polynomial time.
Also, it looks like step 2 or 3 should have been about graph H instead of both being about G.
So it seems to have exactly the right idea but described it poorly.
4/5.
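For concreteness, the intended sampler, the one Justin says the answer was groping toward, is just "apply a uniformly random relabeling to the input graph." A minimal sketch on adjacency matrices (my own illustration, not part of the graded answer):

```python
import numpy as np

def sample_from_D_G(adj, rng=None):
    """Sample from D_G: relabel G's vertices by a uniformly random permutation."""
    if rng is None:
        rng = np.random.default_rng()
    perm = rng.permutation(adj.shape[0])
    return adj[np.ix_(perm, perm)]  # permute rows and columns together
```

If $H = \sigma(G)$ for some permutation $\sigma$, then $\pi(H) = (\pi\sigma)(G)$ with $\pi\sigma$ uniformly random, so $D_G = D_H$; and since the support of $D_G$ is exactly the isomorphism class of $G$, non-isomorphic graphs give disjoint supports.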

(b) Given a probability distribution $D=(p_x)$ over $n$-bit strings $x$, define the ``QSampling state'' of $D$ to be $$\ket{\psi_D} := \sum_{x \in \{0,1\}^n} \sqrt{p_x} \ket{x}$$
Given two probability distributions $D$ and $D'$, suppose that the quantum circuit $\mathcal{C}$ maps $\ket{0^n}$ to $\ket{\psi_D}$, while the circuit $\mathcal{C}'$ maps $\ket{0^n}$ to $\ket{\psi_{D'}}$.

Then what is the output state of the circuit shown below, which acts on $n+1$ qubits?
$$ \Qcircuit @C=.5em @R=0.5em @!R {
\lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\
\lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}} & \qw & \gate{\mathcal{C}'} & \qw } $$

To find the output state of the given circuit, we first analyze it step by step.

1. After applying the Hadamard gate (H) on the first qubit, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \otimes \ket{0^n}$$

2. Next, the controlled-$\mathcal{C}$ gate is applied. This gate applies the $\mathcal{C}$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} \otimes \ket{\psi_D} + \ket{1} \otimes \ket{0^n})$$

3. The NOT gate is applied on the first qubit, which flips the state of the first qubit. The state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{0^n})$$

4. Finally, the controlled-$\mathcal{C}'$ gate is applied. This gate applies the $\mathcal{C}'$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

Thus, the output state of the circuit is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

JUSTIN: 4/4.
I think solving this one is impressive.

(c) Now suppose we measure the first qubit of that output state in the $\{\ket{+},\ket{-}\}$ basis.  What is the probability of the outcome $\ket{+}$ if $D=D'$?  What about if $D$ and $D'$ have disjoint supports?

To find the probability of the outcome $\ket{+}$, we first express the output state in terms of the $\{\ket{+},\ket{-}\}$ basis. Recall that $\ket{+} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ and $\ket{-} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1})$.

The output state is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

We can rewrite this state in terms of the $\{\ket{+},\ket{-}\}$ basis as follows:
$$\frac{1}{2}(\ket{+} \otimes (\ket{\psi_D} + \ket{\psi_{D'}}) + \ket{-} \otimes (\ket{\psi_D} - \ket{\psi_{D'}}))$$

Now, we can find the probability of the outcome $\ket{+}$ by taking the squared magnitude of the coefficient of $\ket{+}$ in the state.

Case 1: $D = D'$

In this case, $\ket{\psi_D} = \ket{\psi_{D'}}$. So, the output state becomes:
$$\frac{1}{2}(\ket{+} \otimes (2\ket{\psi_D}) + \ket{-} \otimes (0))$$

The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(2)\right|^2 = 1$$

Case 2: $D$ and $D'$ have disjoint supports

In this case, $\ket{\psi_D}$ and $\ket{\psi_{D'}}$ are orthogonal, i.e., their inner product is zero: $\langle\psi_D|\psi_{D'}\rangle = 0$. The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(\langle\psi_D|\psi_D\rangle + \langle\psi_{D'}|\psi_{D'}\rangle)\right|^2$$

Since $\langle\psi_D|\psi_D\rangle = 1$ and $\langle\psi_{D'}|\psi_{D'}\rangle = 1$ (both states are normalized), the probability of the outcome $\ket{+}$ is:
$$\left|\frac{1}{2}(1 + 1)\right|^2 = \frac{1}{4}(2)^2 = \frac{1}{2}$$

So, if $D = D'$, the probability of the outcome $\ket{+}$ is 1, and if $D$ and $D'$ have disjoint supports, the probability of the outcome $\ket{+}$ is $\frac{1}{2}$.

SCOTT: HOLY CRAP, LOOKS CORRECT!
JUSTIN: 5/5.
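As a sanity check on part (c), one can compute the $\ket{+}$ probability directly from the output state of part (b): projecting the control qubit onto $\ket{+}$ leaves $(\ket{\psi_D}+\ket{\psi_{D'}})/2$, so the probability is $(1+\mathrm{Re}\langle\psi_D|\psi_{D'}\rangle)/2$, i.e. 1 when $D=D'$ and 1/2 when the supports are disjoint. A minimal numerical sketch (my own, assuming that output state):

```python
import numpy as np

def prob_plus(psi_D, psi_Dp):
    """Probability of |+> on the control qubit of (|1>|psi_D> + |0>|psi_Dp>)/sqrt(2)."""
    residual = (psi_D + psi_Dp) / 2   # unnormalized state left after projecting onto |+>
    return np.vdot(residual, residual).real

# Identical distributions: probability 1.
psi = np.sqrt(np.array([0.5, 0.25, 0.25, 0.0]))
print(prob_plus(psi, psi))            # ~1.0

# Disjoint supports: probability 1/2.
psi1 = np.sqrt(np.array([0.5, 0.5, 0.0, 0.0]))
psi2 = np.sqrt(np.array([0.0, 0.0, 0.25, 0.75]))
print(prob_plus(psi1, psi2))          # ~0.5
```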

(d) Suppose your distributions $D_G$ from part (a) could be efficiently QSampled.  Using your previous work, explain how Graph Isomorphism could then be solved in $BQP$ (quantum polynomial time).

Suppose we can efficiently QSample the distributions $D_G$ and $D_H$ for two input graphs $G$ and $H$. We can then construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for these distributions using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$, respectively, as described in the previous problems.

Now, consider the following quantum circuit acting on $n+1$ qubits:

$$ \Qcircuit @C=.5em @R=0.5em @!R { \lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\  \lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}_G} & \qw & \gate{\mathcal{C}_H} & \qw } $$

As we found earlier, the output state of this circuit is:

$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_{D_G}} + \ket{0} \otimes \ket{\psi_{D_H}})$$

Now, we measure the first qubit of the output state in the $\{\ket{+},\ket{-}\}$ basis. We know from our previous work that if $G$ and $H$ are isomorphic (i.e., $D_G = D_H$), the probability of the outcome $\ket{+}$ is 1, and if $G$ and $H$ are non-isomorphic (i.e., $D_G$ and $D_H$ have disjoint supports), the probability of the
outcome $\ket{+}$ is $\frac{1}{2}$.

Thus, we can solve the Graph Isomorphism problem in $BQP$ as follows:

1. Construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for the input graphs $G$ and $H$ using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$.

2. Implement the above quantum circuit and measure the first qubit in the $\{\ket{+},\ket{-}\}$ basis.

3. If the outcome is $\ket{+}$ with probability close to 1, conclude that the graphs $G$ and $H$ are isomorphic. If the outcome is $\ket{+}$ with probability close to $\frac{1}{2}$, conclude that the graphs $G$ and $H$ are non-isomorphic.

This algorithm runs in quantum polynomial time, as the QSampling circuits $\mathcal{C}_G$ and $\mathcal{C}_H$ are assumed to be efficient, and the rest of the operations (Hadamard, NOT, and controlled gates) are also efficient.

JUSTIN: The answer is longer than necessary, repeating some of its previous work.
Also, in theory, if the graphs are isomorphic then the probability of |+> will be exactly 1 not close to 1.

4/4.
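In case it helps, the decision rule at the end can be made concrete with a few repetitions: a single $\ket{-}$ outcome certifies non-isomorphism (since the $\ket{+}$ probability is exactly 1 in the isomorphic case), while seeing $\ket{+}$ on, say, 20 independent runs lets you declare "isomorphic" with error probability $2^{-20}$. A toy sketch of that classical post-processing (the measurement probability is just supplied as a number here, as an illustration):

```python
import numpy as np

def decide_isomorphic(prob_plus, trials=20):
    """Declare 'isomorphic' only if every one of `trials` runs yields |+>.
    One |-> outcome certifies non-isomorphism; the probability of wrongly
    declaring 'isomorphic' for disjoint supports is 2**-trials."""
    rng = np.random.default_rng()
    outcomes = rng.random(trials) < prob_plus
    return bool(outcomes.all())

print(decide_isomorphic(1.0))   # isomorphic case: always True
print(decide_isomorphic(0.5))   # disjoint-support case: False except with prob. 2^-20
```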

(e) So then why doesn't this approach immediately imply a fast quantum algorithm for Graph Isomorphism?  Explain what could go wrong in passing from fast algorithms to sample $D_G$ and $D_H$, to fast algorithms to QSample them.

The approach described above does not immediately imply a fast quantum algorithm for Graph Isomorphism because there is a crucial difference between classical sampling and QSampling.

In the algorithm, we assumed that we can efficiently construct quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_{G'}$ that map $\ket{0^n}$ to $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$, respectively, given that the distributions $D_G$ and $D_{G'}$ can be efficiently sampled classically. However, this assumption might not hold in general.

Classical sampling allows us to draw samples from the distributions $D_G$ and $D_{G'}$, but it does not directly provide a way to prepare the corresponding QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$. Preparing these quantum states requires a coherent superposition of all the elements in the distributions with the appropriate amplitudes, which might not be efficiently achievable using the classical sampling algorithms.

In other words, the ability to efficiently sample from a probability distribution classically does not necessarily imply the ability to efficiently prepare the corresponding QSampling state on a quantum computer. This is the main reason why the approach does not immediately lead to a fast quantum algorithm for Graph Isomorphism.

SCOTT: Correctly explains that sampling doesn't necessarily imply QSampling (i.e. generating a coherent superposition state), but doesn't say that this is because of the difficulty of uncomputing garbage.
2 / 5 extra credit points.

Next, a problem about quantum communication complexity. Total score: 5 / 10.

6. Suppose Alice and Bob hold $n$-bit strings $x=(x_1,...,x_n)$ and $y=(y_1,...,y_n)$ respectively.  One thing they might want to learn is the mod-2 inner product of their strings, $$x_1 y_1 + \ldots + x_n y_n (\bmod 2).$$

(a) Suppose Alice and Bob had a quantum communication protocol in which they are allowed to exchange up to $T$ qubits and to perform arbitrary local unitary transformations to their qubits (possibly including ancilla qubits), that ended with Bob knowing the above inner product, with success probability 1.  Explain how, by exchanging the same number of qubits $T$, Bob could also prepare an $n$-qubit state
of the form $$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
where $x$ is an $n$-bit string held by Alice.

If Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can use the same protocol to prepare the desired $n$-qubit state.

Here's how they can do it:

1. Alice prepares an $n$-qubit state $\ket{\phi}$, where
$$\ket{\phi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

2. Alice applies a controlled-phase gate to each qubit of $\ket{\phi}$, using her $n$-bit string $x$ as the control. This results in the state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

3. Alice sends the $n$-qubit state to Bob.

4. Bob uses the quantum communication protocol to learn the mod-2 inner product of their strings. Since the protocol has success probability 1, Bob will know the inner product with certainty.

5. Bob applies a local unitary transformation to the $n$-qubit state he received from Alice, based on the inner product he learned in step 4. This will result in the desired state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

Since Alice and Bob only exchanged $T$ qubits in the original protocol, they can also prepare the desired $n$-qubit state by exchanging the same number of qubits $T$.

JUSTIN: Up to 4 points.
This answer is partially correct and/or poorly worded. In particular, in step 3 Alice just sends the whole n-qubit state to Bob.
The right idea is in there, but doesn't earn full points.

2/4.

(b) Assume Alice and Bob have no preshared entanglement.  Recall Holevo's Theorem, which implies that in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits.  Using Holevo's Theorem together with part (a), prove that Alice and Bob must exchange at least $n$ qubits, even if they only want to learn the inner product mod 2 of their input strings $x$ and $y$.

From part (a), we know that if Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can also prepare an $n$-qubit state of the form
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
by exchanging the same number of qubits $T$.

Now, let's consider the reverse process. Suppose Bob has the $n$-qubit state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$
He can perform a quantum measurement in the computational basis to obtain a random $n$-bit string $z$. Then, he can compute the mod-2 inner product $x \cdot z$ and send it back to Alice.

Alice, knowing her own string $x$, can compute the mod-2 inner product $x \cdot z$ as well. If the inner product $x \cdot z$ that Bob sent to Alice matches the one she computed, then they have successfully communicated the inner product mod 2 of their input strings $x$ and $y$.

However, according to Holevo's Theorem, in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits. Since the $n$-qubit state prepared in part (a) effectively communicates $n$ bits of information (Alice's string $x$) to Bob, Holevo's Theorem implies that Alice and Bob must exchange at least $n$ qubits to prepare this state.

Therefore, even if Alice and Bob only want to learn the inner product mod 2 of their input strings $x$ and $y$, they must exchange at least $n$ qubits.

JUSTIN: Up to 6 points.
This answer doesn't earn any points. It only says that Alice and Bob learn the inner products and mistakenly claims this is n bits of information.
The correct solution recognizes that applying Hadamard to the state we showed how to prepare in part (a) gives exactly |x>.

0/6.
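For completeness, here is the step Justin points to, the reason the state from part (a) really does hand Bob all $n$ bits of $x$ (a standard identity, not part of the graded answer): applying $H^{\otimes n}$ gives

$$ H^{\otimes n}\left(\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}\right) = \frac{1}{2^n} \sum_{w \in \{0,1\}^n} \left(\sum_{z \in \{0,1\}^n} (-1)^{(x \oplus w)\cdot z}\right) \ket{w} = \ket{x}, $$

since the inner sum is $2^n$ when $w=x$ and $0$ otherwise. Bob can then measure to learn $x$ exactly, so by Holevo's Theorem the protocol must have involved at least $n$ qubits of communication.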

(c) Now suppose we're no longer working mod 2, and Alice and Bob want to know whether their inner product
$$x_1 y_1 + \ldots + x_n y_n$$
is zero or nonzero as an integer.  (In other words, whether there's an $i$ such that $x_i=y_i=1$.)  Describe a protocol by which Alice and Bob can accomplish this, with high probability, by exchanging only $O(\sqrt{n}\log n )$ qubits in total.  The qubits can be spread across as many rounds of communication as necessary, and can be sent in either direction.

Alice and Bob can use the following protocol, based on the Grover's search algorithm, to determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total:

1. Alice prepares a quantum register of $n$ qubits in the state $\ket{0}^{\otimes n}$.

2. Alice applies the Hadamard gate to each qubit, resulting in the state $$\ket{\psi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

3. Alice and Bob perform the following steps for $O(\sqrt{n})$ iterations:

   a. Alice sends the $n$-qubit state $\ket{\psi}$ to Bob.

   b. Bob applies a controlled-phase gate to each qubit of $\ket{\psi}$, using his $n$-bit string $y$ as the control. This results in the state
   $$\ket{\psi'} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{y\cdot z} \ket{z}.$$

   c. Bob sends the $n$-qubit state $\ket{\psi'}$ back to Alice.

   d. Alice applies a controlled-phase gate to each qubit of $\ket{\psi'}$, using her $n$-bit string $x$ as the control. This results in the state
   $$\ket{\psi''} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{(x+y)\cdot z} \ket{z}.$$
   Note that if $x_i = y_i = 1$ for some $i$, then the phase of the corresponding term in the sum will be flipped twice, resulting in no change. Therefore, the state $\ket{\psi''}$ will have a non-zero amplitude for the terms corresponding to the indices where $x_i = y_i = 1$.

   e. Alice applies the Grover's diffusion operator to the state $\ket{\psi''}$, which amplifies the amplitudes of the terms corresponding to the indices where $x_i = y_i = 1$. The diffusion operator can be implemented as follows:
      - Apply the Hadamard gate to each qubit.
      - Apply a multi-qubit controlled-Z gate, with all qubits as controls and an ancilla qubit as the target.
      - Apply the Hadamard gate to each qubit again.

   f. Alice updates the state $\ket{\psi}$ with the resulting state after applying the Grover's diffusion operator.

4. After $O(\sqrt{n})$ iterations, Alice measures the final state in the computational basis. If she obtains a non-zero result, then there exists an $i$ such that $x_i = y_i = 1$, and their inner product is nonzero as an integer. If she obtains a zero result, then their inner product is zero.

Since Alice and Bob exchange $n$ qubits in each iteration, and they perform $O(\sqrt{n})$ iterations, the total number of qubits exchanged is $O(n\sqrt{n}) = O(\sqrt{n}\log n)$, as each qubit can represent $\log n$ bits of information. Thus, Alice and Bob can determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total.

SCOTT: LOOKS BASICALLY RIGHT, EXCEPT FOR A SLIP-UP AT THE END WHERE IT SAYS
ALICE AND BOB EXCHANGE n QUBITS PER ITERATION RATHER THAN log n ...

JUSTIN: Up to 6 extra credit points.
Yeah, step 3a says it's sending n qubits, when it's actually log n.
It also seems incorrect beginning in the analysis of step d. It switches from summing over |z> and referring to the full strings x and y to referring to individual indices i,x_i,y_i. And I don't know what it means about some amplitudes going to zero since what we're really doing is having some amplitudes be +1 and some -1.
I don't think this protocol is correct as stated.
But, it has some of the main ideas. The analysis of the grover steps is good and using a phase gate is good.

3/6 extra credit.

OK, last question, about the k-SUM problem. Total score: 6 / 15.

7. In the famous $k$-SUM problem, we're given a list of integers $x_1,\ldots,x_n$, and are asked whether there are $k$ distinct indices, $i_1 < \ldots < i_k$, such that $x_{i_1} + \ldots + x_{i_k} = 0$.

For this problem, you can ignore factors of $\log n$ in the running time (indeed, that is what the $\widetilde{O}$ notation means).

(a) Assume $k$ is even AND that we are allowed multi-sets (aka repeated elements are allowed). Describe a classical algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/2})$ time, beating the trivial upper bound of $\widetilde{O}(n^k)$.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/2})$ time using a meet-in-the-middle approach. Here's a high-level description of the algorithm:

1. Divide the list of integers into two equal halves: $A = x_1, \ldots, x_{n/2}$ and $B = x_{n/2+1}, \ldots, x_n$.

2. Compute all possible sums of $k/2$ elements from each half. For each half, this can be done in $O(n^{k/2})$ time by iterating through all possible combinations of $k/2$ elements.

3. Sort the lists of sums for both halves. This can be done in $O(n^{k/2} \log n)$ time using a comparison-based sorting algorithm.

4. For each sum in the first half, perform a binary search in the second half to find a sum that is equal to the negation of the sum from the first half. If such a sum is found, then we have found k distinct indices whose elements sum to 0. Binary search takes $O(\log n)$ time, so this step takes $O(n^{k/2} \log n)$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/2})$.

SCOTT: LOOKS LIKE IT WOULD WORK *IF* WE KNEW THE k-SET WAS EQUALLY BALANCED
BETWEEN THE LEFT AND RIGHT HALVES...

JUSTIN: Up to 5 points.
Right, step 1 is incorrect. Instead, it should generate all O(n^{k/2}) sums of subsets of size k/2. Nothing about dividing into two halves.

4/5.
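Here is a sketch of the corrected version Justin describes: enumerate the sums of all size-$k/2$ subsets (no splitting into halves), sort them, and binary-search for negations. Under the problem's "multi-sets allowed" simplification we don't worry about the two halves reusing indices (my own illustration):

```python
from itertools import combinations
from bisect import bisect_left

def k_sum(xs, k):
    """Meet-in-the-middle k-SUM for even k, ~O(n^{k/2}) time up to log factors.
    (Index-disjointness bookkeeping is skipped, per the multi-set simplification.)"""
    half_sums = sorted(sum(c) for c in combinations(xs, k // 2))
    for s in half_sums:
        i = bisect_left(half_sums, -s)          # binary search for a matching -s
        if i < len(half_sums) and half_sums[i] == -s:
            return True
    return False

print(k_sum([3, -1, -7, 5, 2, -4], 4))  # True: 3 + (-1) + 2 + (-4) = 0
```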

(b) Assume $k$ is divisible by 3 and that we are again allowed multi-sets.  Describe a quantum algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/3})$ time.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/3})$ time using a quantum algorithm based on Grover's search. Here's a high-level description of the algorithm:

1. Divide the list of integers into three equal parts: $A = x_1, \ldots, x_{n/3}$, $B = x_{n/3+1}, \ldots, x_{2n/3}$, and $C = x_{2n/3+1}, \ldots, x_n$.

2. Compute all possible sums of $k/3$ elements from each part. For each part, this can be done in $O(n^{k/3})$ time by iterating through all possible combinations of $k/3$ elements.

3. Sort the lists of sums for all three parts. This can be done in $O(n^{k/3} \log n)$ time using a comparison-based sorting algorithm.

4. Define an oracle function $f$ that takes as input a sum from the first part and a sum from the second part, and returns 1 if there exists a sum in the third part that is equal to the negation of the sum of the inputs, and 0 otherwise. This oracle can be implemented using a binary search on the sorted list of sums for the third part, which takes $O(\log n)$ time.

5. Use Grover's search algorithm to find a pair of sums from the first and second parts that satisfy the oracle function. Grover's search has a quadratic speedup over classical search, so it takes $O(\sqrt{n^{k/3}}) = O(n^{k/6})$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/3})$.

JUSTIN: Up to 5 points.
Again, step 1 is incorrect, it shouldn't divide into three parts.
The division into three parts is more problematic this time as it comes up in step 4.
And it ends up claiming n^{k/6} time in step 5, better than the n^{k/3} asked for.
Basically all this gets correct is to use Grover in some way.

1/5.

(c) Suppose we wanted to prove that the algorithm from (b) was the fastest possible quantum algorithm for $k$-SUM.  Could that be shown via a lower bound on $k$-SUM's quantum query complexity?  Why or why not?

Proving that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM would require showing a lower bound on k-SUM's quantum query complexity that matches the upper bound achieved by the algorithm. However, a lower bound on the quantum query complexity alone might not be sufficient to prove the optimality of the algorithm, as it only considers the number of queries to the oracle function and does not account for other computational resources, such as time or space complexity.

To show that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM, one would need to establish a lower bound on the quantum time complexity or some other relevant complexity measure that matches the upper bound achieved by the algorithm. This would provide a more comprehensive understanding of the problem's inherent difficulty and the limitations of quantum algorithms in solving it.

JUSTIN: Up to 5 extra credit points.
This is a nice description of query vs time complexity, but misses the point that query complexity can prove a bound of at most Omega(n) since there are at most n potential queries to make.

1/5 extra credit points.

Xavier Waintal responds (tl;dr Grover is still quadratically faster)

Thursday, March 23rd, 2023

This morning Xavier Waintal, coauthor of the new arXiv preprint “””refuting””” Grover’s algorithm, which I dismantled here yesterday, emailed me a two-paragraph response. He remarked that the “classy” thing for me to do would be to post the response on my blog, but: “I would totally understand if you did not want to be contradicted in your own zone of influence.”

Here is Waintal’s response, exactly as sent to me:

The elephant in the (quantum computing) room: opening the Pandora box of the quantum oracle

One of the problem we face in the field of quantum computing is a vast diversity of cultures between, say, complexity theorists on one hand and physicists on the other hand. The former define mathematical objects and consider any mathematical problem as legitimate. The hypothesis are never questioned, by definition. Physicists on the other hand spend their life questioning the hypothesis, wondering if they do apply to the real world. This dichotomy is particularly acute in the context of the emblematic Grover search algorithm, one of the cornerstone of quantum computing. Grover’s algorithm uses the concept of “oracle”, a black box function that one can call, but of which one is forbidden to see the source code. There are well known complexity theorems that show that in this context a quantum computer can solve the “search problem” faster than a classical computer.

But because we closed our eyes and decided not to look at the source code does not mean it does not exist. In https://arxiv.org/pdf/2303.11317.pdf, Miles Stoudenmire and I deconstruct the concept of oracle and show that as soon as we give the same input to both quantum and classical computers (the quantum circuit used to program the oracle on the actual quantum hardware) then the *generic* quantum advantage disappears. The charge of the proof is reversed: one must prove certain properties of the quantum circuit in order to possibly have a theoretical quantum advantage. More importantly – for the physicist that I am – our classical algorithm is very fast and we show that we can solve large instances of any search problem. This means that even for problems where *asymptotically* quantum computers are faster than classical ones, the crossing point where they become so is for astronomically large computing time, in the tens of millions of years. Who is willing to wait that long for the answer to a single question, even if the answer is 42?

The above explicitly confirms something that I realized immediately on reading the preprint, and that fully explains the tone of my response. Namely, Stoudenmire and Waintal’s beef isn’t merely with Grover’s algorithm, or even with the black-box model; it’s with the entire field of complexity theory. If they were right that complexity theorists never “questioned hypotheses” or wondered what did or didn’t apply to the real world, then complexity theory shouldn’t exist in CS departments at all—at most it should exist in pure math departments.

But a converse claim is also true. Namely, suppose it turned out that complexity theorists had already fully understood, for decades, all the elementary points Stoudenmire and Waintal were making about oracles versus explicit circuits. Suppose complexity theorists hadn’t actually been confused, at all, about under what sorts of circumstances the square-root speedup of Grover’s algorithm was (1) provable, (2) plausible but unproven, or (3) nonexistent. Suppose they’d also been intimately familiar with the phenomenon of asymptotically faster algorithms that get swamped in practice by unfavorable constants, and with the overhead of quantum error-correction. Suppose, indeed, that complexity theorists hadn’t merely understood all this stuff, but expressed it clearly and accurately where Stoudenmire and Waintal’s presentation was garbled and mixed with absurdities (e.g., the Grover problem “being classically solvable with a linear number of queries,” the Grover speedup not being “generic,” their being able to “solve large instances of any search problem” … does that include, for example, CircuitSAT? do they still not get the point about CircuitSAT?). Then Stoudenmire and Waintal’s whole objection would collapse.

Anyway, we don’t have to suppose! In the SciRate discussion of the preprint, a commenter named Bibek Pokharel helpfully digs up some undergraduate lecture notes from 2017 that are perfectly clear about what Stoudenmire and Waintal treat as revelations (though one could even go 20+ years earlier). The notes are focused here on Simon’s algorithm, but the discussion generalizes to any quantum black-box algorithm, including Grover’s:

The difficulty in claiming that we’re getting a quantum speedup [via Simon’s algorithm] is that, once we pin down the details of how we’re computing [the oracle function] f—so, for example, the actual matrix A [such that f(x)=Ax]—we then need to compare against classical algorithms that know those details as well. And as soon as we reveal the innards of the black box, the odds of an efficient classical solution become much higher! So for example, if we knew the matrix A, then we could solve Simon’s problem in classical polynomial time just by calculating A‘s nullspace. More generally, it’s not clear whether anyone to this day has found a straightforward “application” of Simon’s algorithm, in the sense of a class of efficiently computable functions f that satisfy the Simon promise, and for which any classical algorithm plausibly needs exponential time to solve Simon’s problem, even if the algorithm is given the code for f.

In the same lecture notes, one can find the following discussion of Grover’s algorithm, and how its unconditional square-root speedup becomes conditional (albeit, still extremely plausible in many cases) as soon as the black box is instantiated by an explicit circuit:

For an NP-complete problem like CircuitSAT, we can be pretty confident that the Grover speedup is real, because no one has found any classical algorithm that’s even slightly better than brute force. On the other hand, for more “structured” NP-complete problems, we do know exponential-time algorithms that are faster than brute force. For example, 3SAT is solvable classically in about O(1.3^n) time. So then, the question becomes a subtle one of whether Grover’s algorithm can be combined with the best classical tricks that we know to achieve a polynomial speedup even compared to a classical algorithm that uses the same tricks. For many NP-complete problems the answer seems to be yes, but it need not be yes for all of them.

The notes in question were written by some random complexity theorist named Scot Aronsen (sp?). But if you don’t want to take it from that guy, then take it from (for example) the Google quantum chemist Ryan Babbush, again on the SciRate page:

It is well understood that applying Grover’s algorithm to 3-SAT in the standard way would not give a quadratic speedup over the best classical algorithm for 3-SAT in the worst case (and especially not on average). But there are problems for which Grover is expected to give a quadratic speedup over any classical algorithm in the worst case. For example, the problem “Circuit SAT” starts by me handing you a specification of a poly-size classical circuit with AND/OR/NOT gates, so it’s all explicit. Then you need to solve SAT on this circuit. Classically we strongly believe it will take time 2^n (this is even the basis of many conjectures in complexity theory, like the exponential time hypothesis), and quantumly we know it can be done with 2^{n/2}*poly(n) quantum gates using Grover and the explicitly given classical circuit. So while I think there are some very nice insights in this paper, the statement in the title “Grover’s Algorithm Offers No Quantum Advantage” seems untrue in a general theoretical sense. Of course, this is putting aside issues with the overheads of error-correction for quadratic speedups (a well understood practical matter that is resolved by going to large problem sizes that wouldn’t be available to the first fault-tolerant quantum computers). What am I missing?

More generally, over the past few days, as far as I can tell, every actual expert in quantum algorithms who’s looked at Stoudenmire and Waintal’s preprint—every one, whether complexity theorist or physicist or chemist—has reached essentially the same conclusions about it that I did. The one big difference is that many of the experts, who are undoubtedly better people than I am, extended a level of charity to Stoudenmire and Waintal (“well, this of course seems untrue, but here’s what it could have meant”) that Stoudenmire and Waintal themselves very conspicuously failed to extend to complexity theory.

Of course Grover’s algorithm offers a quantum advantage!

Wednesday, March 22nd, 2023

Unrelated Update: Huge congratulations to Ethernet inventor Bob Metcalfe, for winning UT Austin’s third Turing Award after Dijkstra and Emerson!

And also to mathematician Luis Caffarelli, for winning UT Austin’s third Abel Prize!


I was really, really hoping that I’d be able to avoid blogging about this new arXiv preprint, by E. M. Stoudenmire and Xavier Waintal:

Grover’s Algorithm Offers No Quantum Advantage

Grover’s algorithm is one of the primary algorithms offered as evidence that quantum computers can provide an advantage over classical computers. It involves an “oracle” (external quantum subroutine) which must be specified for a given application and whose internal structure is not part of the formal scaling of the quantum speedup guaranteed by the algorithm. Grover’s algorithm also requires exponentially many steps to succeed, raising the question of its implementation on near-term, non-error-corrected hardware and indeed even on error-corrected quantum computers. In this work, we construct a quantum inspired algorithm, executable on a classical computer, that performs Grover’s task in a linear number of call to the oracle – an exponentially smaller number than Grover’s algorithm – and demonstrate this algorithm explicitly for boolean satisfiability problems (3-SAT). Our finding implies that there is no a priori theoretical quantum speedup associated with Grover’s algorithm. We critically examine the possibility of a practical speedup, a possibility that depends on the nature of the quantum circuit associated with the oracle. We argue that the unfavorable scaling of the success probability of Grover’s algorithm, which in the presence of noise decays as the exponential of the exponential of the number of qubits, makes a practical speedup unrealistic even under extremely optimistic assumptions on both hardware quality and availability.

Alas, inquiries from journalists soon made it clear that silence on my part wasn’t an option.

So, desperately seeking an escape, this morning I asked GPT-4 to read the preprint and comment on it just like I would. Sadly, it turns out the technology isn’t quite ready to replace me at this blogging task. I suppose I should feel good: in every such instance, either I’m vindicated in all my recent screaming here about generative AI—what the naysayers call “glorified autocomplete”—being on the brink of remaking civilization, or else I still, for another few months at least, have a role to play on the Internet.

So, on to the preprint, as reviewed by the human Scott Aaronson. Yeah, it’s basically a tissue of confusions, a mishmash of the well-known and the mistaken. As they say, both novel and correct, but not in the same places.

The paper’s most eye-popping claim is that the Grover search problem—namely, finding an n-bit string x such that f(x)=1, given oracle access to a Boolean function f:{0,1}^n→{0,1}—is solvable classically, using a number of calls that’s only linear in n, or in many cases only constant (!!). Since this claim contradicts a well-known, easily provable lower bound—namely, that Ω(2^n) oracle calls are needed for classical brute-force searching—the authors must be using words in nonstandard ways, leaving only the question of how.

It turns out that, for their “quantum-inspired classical algorithm,” the authors assume you’re given, not merely an oracle for f, but the actual circuit to compute f. They then use that circuit in a non-oracular way to extract the marked item. In which case, I’d prefer to say that they’ve actually solved the Grover problem with zero queries—simply because they’ve entirely left the black-box setting where Grover’s algorithm is normally formulated!

What could possibly justify such a move? Well, the authors argue that sometimes one can use the actual circuit to do better classically than Grover’s algorithm would do quantumly, and therefore, they’ve shown that the Grover speedup is not “generic,” as the quantum algorithms people always say it is.

But this is pure wordplay around the meaning of “generic.” When we say that Grover’s algorithm achieves a “generic” square-root speedup, what we mean is that it solves the generic black-box search problem in O(2^{n/2}) queries, whereas any classical algorithm for that generic problem requires Ω(2^n) queries. We don’t mean that for every f, Grover achieves a quadratic speedup for searching that f, compared to the best classical algorithm that could be tailored to that f. Of course we don’t; that would be trivially false!
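(For readers who'd like to see the unconditional black-box statement in action, here's a tiny classical simulation of the textbook Grover iteration, just to exhibit the ~(π/4)·2^{n/2} oracle-call count and near-unit success probability on a generic marked item. It's only an illustration of the standard algorithm, nothing specific to the preprint.)

```python
import numpy as np

def grover(n, marked):
    """Textbook Grover on an n-bit search space with a single marked item."""
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition over all N items
    queries = int(np.pi / 4 * np.sqrt(N))     # ~ (pi/4) * 2^{n/2} oracle calls
    for _ in range(queries):
        psi[marked] *= -1                     # oracle: phase-flip the marked item
        psi = 2 * psi.mean() - psi            # diffusion: inversion about the mean
    return queries, abs(psi[marked]) ** 2

print(grover(10, marked=123))  # (25, ~0.999): 25 queries instead of ~1024
```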

Remarkably, later in the paper, the authors seem to realize that they haven’t delivered the knockout blow against Grover’s algorithm that they’d hoped for, because they then turn around and argue that, well, even for those f’s where Grover does provide a quadratic speedup over the best (or best-known) classical algorithm, noise and decoherence could negate the advantage in practice, and solving that problem would require a fault-tolerant quantum computer, but fault-tolerance could require an enormous overhead, pushing a practical Grover speedup far into the future.

The response here boils down to “no duh.” Yes, if Grover’s algorithm can yield any practical advantage in the medium term, it will either be because we’ve discovered much cheaper ways to do quantum fault-tolerance, or else because we’ve discovered “NISQy” ways to exploit the Grover speedup, which avoid the need for full fault-tolerance—for example, via quantum annealing. The prospects are actually better for a medium-term advantage from Shor’s factoring algorithm, because of its exponential speedup. Hopefully everyone in quantum computing theory has realized all this for a long time.

Anyway, as you can see, by this point we’ve already conceded the principle of Grover’s algorithm, and are just haggling over the practicalities! Which brings us back to the authors’ original claim to have a principled argument against the Grover speedup, which (as I said) rests on a confusion over words.

Some people dread the day when GPT will replace them. In my case, for this task, I can’t wait.


Thanks to students Yuxuan Zhang (UT) and Alex Meiburg (UCSB) for discussions of the Stoudenmire-Waintal preprint that informed this post. Of course, I take sole blame for anything anyone dislikes about the post!


For a much more technical response—one that explains how this preprint’s detailed attempt to simulate Grover classically fails, rather than merely proving that it must fail—check out this comment by Alex Meiburg.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield any speedup for any problem whatsoever (besides, as it were, the problem of reproducing its own pattern of errors) (one possible recent exception from Sami Boulebnane and Ashley Montanaro).

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested strategems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.

Google’s Sycamore chip: no wormholes, no superfast classical simulation either

Friday, December 2nd, 2022

Update (Dec. 6): I’m having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I’m learning a huge amount from the talks and discussions here—and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I’ll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. [New: click here for video of my talk!]

But there’s one thing, relevant to this post, that I can’t let pass without comment. Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don’t blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment’s historic significance might strike me as perfectly valid inferences from what I’d read in the press. I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits—a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it’s now clear to me that, between

(1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and

(2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time,

something went terribly wrong—something that risks damaging trust in the scientific process itself. And I think it’s worth reflecting on what we can do to prevent it from happening again.


This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

News, social media, and my inbox have been abuzz with two claims about Google’s Sycamore quantum processor, the one that now has 72 superconducting qubits.

The first claim is that Sycamore created a wormhole (!)—a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit’s blog and Chad Orzel’s blog.

The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means.

On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can’t do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime?

I submit that neither claim is true. On the one hand, Sycamore did not “create a wormhole.” On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can’t do remains much the same as last week or last month!


Let’s start with the wormhole thing. I can’t really improve over how I put it in Dennis Overbye’s NYT piece:

“The most important thing I’d want New York Times readers to understand is this,” Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

More broadly, Overbye’s NYT piece explains with admirable clarity what this experiment did and didn’t do—leaving only the question “wait … if that’s all that’s going on here, then why is it being written up in the NYT??” This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the “QC creates a wormhole” framing.

Alright, but what’s the actual basis for the “QC creates a wormhole” claim, for those who don’t want to leave this blog to read about it? Well, the authors used 9 of Sycamore’s 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modelled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do—i.e., what they’d already classically calculated that it would do.

So, the objection is obvious: if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim. Why should the standards be different just because we’re talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9=512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There’s some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we’d mostly learn about certain corrections that arise because the simulation was being done with “only” n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is “neither too large nor too small,” would surely be interesting to specialists, they’d have no obvious bearing on the prospects for creating real physical wormholes in our universe.

And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he’s explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I’d still prefer to call it a simulated wormhole for simulated people.

For completeness, here’s Harlow’s passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper.

“So I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Dr. Harlow wrote in an email. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would CERTAINLY be off the table.” Developing computers big enough to do so might take 10 or 15 years, he added.


Alright, let’s move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can’t provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you’re in the “anti-concentration regime” (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there’s a classical algorithm to approximately sample the quantum circuit’s output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here’s what’s crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits: a regime where

  1. you can still see explicitly that you’re exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but
  2. you also have a signal that (as it turned out) is large enough to see with heroic effort.

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I’m just worried that others aren’t reading their paper as carefully as they should be!

So then, what’s the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google’s or USTC’s actual Random Circuit Sampling experiments depended on that hope, but it would’ve been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait … no … that’s not what I meant to write…


Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments “create their own reality” full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-‘n-supremacy post.