## Another axe swung at the Sycamore

So there’s an interesting new paper on the arXiv by Feng Pan and Pan Zhang, entitled “Simulating the Sycamore supremacy circuits.” It’s about a new tensor contraction strategy for classically simulating Google’s 53-qubit quantum supremacy experiment from Fall 2019. Using their approach, and using just 60 GPUs running for a few days, the authors say they managed to generate a million *correlated* 53-bit strings—meaning, strings that all agree on a specific subset of 20 or so bits—that achieve a high linear cross-entropy score.

Alas, I haven’t had time this weekend to write a “proper” blog post about this, but several people have by now emailed to ask my opinion, so I thought I’d share the brief response I sent to a journalist.

This does look like a significant advance on simulating Sycamore-like random quantum circuits! Since it’s based on tensor networks, you don’t need the literally largest supercomputer on the planet filling up tens of petabytes of hard disk space with amplitudes, as in the brute-force strategy proposed by IBM. Pan and Zhang’s strategy seems most similar to the strategy previously proposed by Alibaba, with the key difference being that the new approach generates millions of correlated samples rather than just one.

I guess my main thoughts for now are:

- Once you knew about this particular attack, you could evade it and get back to where we were before by switching to a more sophisticated verification test — namely, one where you not only computed a Linear XEB score for the observed samples, you *also* made sure that the samples didn’t share too many bits in common. (Strangely, though, the paper never mentions this point.)
- The other response, of course, would just be to redo random circuit sampling with a slightly bigger quantum computer, like the ~70-qubit devices that Google, IBM, and others are now building!
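To make the first response concrete, here is a toy sketch of what an augmented verification test might look like. This is my own illustration, not Google’s actual verification code, and the `suspiciously_correlated` heuristic and its threshold are assumptions for demonstration purposes:

```python
# Toy sketch of an augmented verification test: a Linear XEB score
# plus a check that the samples don't share too many bits in common.
# (Illustration only; not Google's actual verification pipeline.)
import random

def linear_xeb(samples, ideal_probs, n):
    """F_XEB = 2^n * (mean ideal probability of the samples) - 1."""
    return 2**n * sum(ideal_probs[s] for s in samples) / len(samples) - 1

def suspiciously_correlated(samples, n, threshold=0.75, trials=2000):
    """Flag sample sets where random pairs agree on far more than the
    ~n/2 bits that independent draws would typically share."""
    rng = random.Random(0)  # fixed seed, for a deterministic illustration
    for _ in range(trials):
        a, b = rng.sample(samples, 2)
        agreement = sum(x == y for x, y in zip(a, b)) / n
        if agreement >= threshold:
            return True
    return False

# Spoofed samples that all agree on a fixed prefix get flagged:
spoofed = ["110" + f"{i:02b}" for i in range(4)]  # share first 3 of 5 bits
print(suspiciously_correlated(spoofed, n=5, threshold=0.6))  # True
```

In this sketch a spoofer who fixes a subset of bits passes the XEB check but fails the correlation check, which is the point of the proposed two-part test.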

Anyway, very happy for thoughts from anyone who knows more.

Comment #1 March 8th, 2021 at 8:23 am

It sounds a bit like an arms race? Keep building larger to have a changing leader of the month? Where does this stop?

Sounds like a silly way to settle anything, especially given that even had there been an untoppled supremacist for many years, it would not guarantee anything about the year after that.

Isn’t there a more elegant way to settle this supremacy debate? And if there isn’t, what’s the point in even settling it?

Indeed there are supremacy battles in other domains (e.g. the largest ship or biggest wind turbine or biggest dam), but usually there’s an underlying metric (e.g. efficiency) that’s actually of interest and improving.

This race sounds like a fool’s errand.

Comment #2 March 8th, 2021 at 8:27 am

Why would this be considered a valid spoof? If the vectors are so obviously correlated, then in what sense do they sample the desired probability distribution?

Comment #3 March 8th, 2021 at 8:53 am

maline #2: Yeah, if you just stared at the samples returned by this paper’s algorithm, it would be obvious that they share many bits in common and were therefore astronomically unlikely to be independent draws from the output distribution. It’s just that the Linear XEB test that Google was using didn’t happen to include a check for that! My point in the post is that once you understood this, you could add the “do the samples share too many bits in common?” check, and according to this paper’s own data, that would return you to almost exactly the situation we were in before the paper came out.

Comment #4 March 8th, 2021 at 9:07 am

Rahul #1: No, I believe what you’re seeing is merely a transitional era, like Deep Blue and its predecessor versus Kasparov in 1996-97 before computers became overwhelmingly dominant at chess.

Unfortunately, the horse-race aspect will be exacerbated for a while by a limitation of the current quantum supremacy experiments, that verifying the results with a classical computer is essentially as hard as spoofing the results. Once you can do Shor’s algorithm — or anything that, like factoring, is easy to verify classically — with thousands of logical qubits, it will be straightforward to do something that might not be classically simulable within the entire size and lifetime of the universe. To put it differently, this is not a horse race, it’s a race between a horse and a nuclear-propelled car that’s only just starting to work. 🙂

Thus, what I and others really care about here is not whether quantum or classical is ahead in any given month (who cares?), but the broader question of whether the fundamental building blocks and ideas of scalable QC are being successfully demonstrated, and whether there’s any evidence for the sort of correlated errors that Gil Kalai posits as a way to kill scalable QC (so far, there isn’t).

Comment #5 March 8th, 2021 at 9:17 am

maline #2: Think of it in the context of an attack on Scott’s client-certified random number generation (CCRNG) proposal. If such a service actually existed, various clients would be implemented and out in the world with hard coded rules.

This paper would represent a way for a malicious classical attacker to fool a CCRNG client unless that client was performing certain statistical checks. So this result would be akin to a zero day exploit, a serious but fixable problem, as opposed to a destruction of the underlying idea. A reasonable analogy would be the discovery that RSA requires padding to be secure.

Comment #6 March 8th, 2021 at 9:35 am

Craig #5: Thanks! Yeah, the analogy to applied crypto here is an excellent one and extends surprisingly far. For example, when people discover attacks on deployed cryptosystems, very often they merely reduce an exponential to a smaller exponential — which means that the attacks could be thwarted *either* by patching them up specifically as they’re noticed, *or* by the “universal solvent” of just choosing a larger key size. And exactly the same is true here: you could modify the Linear XEB test to thwart this clever attack, but you could also thwart it by just going to a larger number of qubits.

Comment #7 March 8th, 2021 at 9:40 am

Scott #2:

Well, the Sycamore output vectors were published, right? So if anyone cares, it should be trivial to do that check now.

But “while we’re at it”, I guess we would want to check for a whole raft of possible correlations, not just “bits in common”. In which case, designing a good “independence test” without knowing the distribution itself becomes an interesting problem.

Comment #8 March 8th, 2021 at 12:05 pm

Even if a series of these quantum algorithms each leads to someone turning up with a classical equivalent, it’s interesting the way the quantum schemes anticipate the classical ones. Is there such a concept as BQP-completeness and do these sampling problems achieve it?

Comment #9 March 8th, 2021 at 12:07 pm

Oh and meanwhile:

https://www.wired.com/story/microsoft-retracts-disputed-quantum-computing-paper/

Comment #10 March 8th, 2021 at 12:32 pm

asdf #8: It’s not known whether there are any languages complete for BQP, but there are many examples of promise problems complete for PromiseBQP (the promise-problem version of BQP). However, what Google’s supremacy experiment solves is yet another thing, a *sampling* problem. Whether the sampling problem is “BQP-complete” would depend on exactly how you defined it, but I definitely don’t think the task of producing samples that get a large Linear XEB score, given a random quantum circuit as input, is BQP-complete in any interesting sense.

Comment #11 March 8th, 2021 at 1:16 pm

“Axe to the Sycamore”–I see what you did there.

You should write more books. If and when you have the time. You could start with a collection of blog essays.

Comment #12 March 8th, 2021 at 1:30 pm

“The other response, of course, would just be to redo random circuit sampling with a slightly bigger quantum computer, like the ~70-qubit devices that Google, IBM, and others are now building!”

Do we know how the actual cost of those projects is scaling up (or down) with the number of qubits?

If adding an extra qubit currently requires doubling the investment, I’m not sure the QC hype will be enough to sustain this until some radical new breakthrough is achieved. It would make QC similar to nuclear fusion: we know it’s theoretically feasible, but the cost of developing it to scale makes progress really slow.

On the other hand, if the same teams manage to double the number of qubits for more or less the same cost, every year (like Moore’s law, basically), then that would be great news.

Comment #13 March 8th, 2021 at 4:04 pm

JimV #11:

Detecting interference,

Cryogenics for maintaining coherence,

Birds tweeting from the Sycamore tree,

Dream of solving BQP.

Comment #14 March 8th, 2021 at 4:11 pm

Fred #12, nuclear fusion works perfectly well as long as the reactor is big enough. The technique that works is to build a star and then live in orbit around it. Come to think of it, we are already doing that. (Credit to someone on Hacker News for putting it that way).

Comment #15 March 8th, 2021 at 4:26 pm

This paper (doi.org/10.1103/PhysRevX.10.041038) makes an interesting and much more general argument. If I understand correctly, due to their finite per-gate fidelity (and therefore a total fidelity decreasing exponentially with circuit depth), the NISQ-type quantum computers feasible today and in the intermediate future only ever explore a very small corner of their Hilbert space. Their ability to generate entanglement is thus limited, making them generically vulnerable to attacks based on tensor-network techniques.

Or so the authors claim. Any expert wishing to comment?

Comment #16 March 8th, 2021 at 4:28 pm

fred #12: Google, IBM, and others are already scaling to 70+ qubits. The one big issue is that, *until you do error correction* — a gigantic “until” — the strength of the signal goes down exponentially with the circuit depth. Aside from that well-known issue, there’s absolutely no reason why the cost of scaling should be exponential in the number of qubits, and that hasn’t been people’s experience either.

Comment #17 March 8th, 2021 at 8:26 pm

Comment #5: where is the client-certified random number generation (CCRNG) proposal? I didn’t find it… a link, please.

Comment #18 March 8th, 2021 at 9:14 pm

“Supremacy” is racist, or something, in Clown World.

https://www.nature.com/articles/d41586-019-03781-0

Comment #19 March 9th, 2021 at 8:38 am

Meow #17:

where is the client-certified random number generation (CCRNG) proposal ? I didn’t find it… a link please

Alas, I still haven’t written my paper about this! In the meantime, though, I have these slides.

Comment #20 March 9th, 2021 at 8:42 am

Scott #16

thanks, error correction is an interesting but quite complex topic. From the wiki:

“The syndrome measurement tells us as much as possible about the error that has happened, but nothing at all about the value that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer, which would prevent it from being used to convey quantum information.”

Given all the limitations of QM (no-cloning, irreversibility of measurement, etc.), it would be really fascinating for nature to make it possible for us to do error correction by only having to add a polynomial number of extra elements.

Comment #21 March 9th, 2021 at 9:44 am

@Meow #17: Scott describes the CCRNG idea 46m into this talk: https://youtu.be/LW-pX4Lsvz4?t=2770 . He doesn’t call it “client certified,” just “certified.” I call it “client certified” to disambiguate from other quantum randomness protocols using anti-commuting measurements or Bell tests that have also been described as “certified” but that really have quite a different character.

Comment #22 March 9th, 2021 at 10:05 am

asdf

“Fred #12, nuclear fusion works perfectly well as long as the reactor is big enough.”

The devil is in the details – the difficulty in tokamaks is that controlling the dynamics of plasma+fields is really complex. So is finding a good method for extracting useful energy.

Then there’s also the issue of secondary radiations that could degrade the machine over time (and potentially even make it as radioactive as a classic fission plant).

And the goal is that it has to be done in a way that’s financially viable too (i.e. cost less than alternative energy sources), unlike building something like the LHC or current QC toys.

So, it’s not clear at this moment whether QC tech will be able to scale up while at the same time offering some form of intermediate ROI.

For nuclear fusion, the end goal is clear – almost unlimited power from almost nothing. There isn’t such a clear win for QC.

Imo the perception problem with QC is that theoretical computer scientists keep contrasting them to classical computers by directly comparing n bits to n qubits, but, from an engineering point of view, adding one more qubit is far more difficult than adding one more bit.

It’s in fact so easy to scale up classical computers that what’s limiting them now are limits like the speed of light and minimum size of elements and heat dissipation.

QCs will also pretty much be very specialized machines (at least for a long time), while classical computers are truly universal.

Comment #23 March 9th, 2021 at 3:24 pm

fred #22,

The distinction between logical qubits and physical qubits is relevant here though.

Logical qubits have a cost in terms of physical qubits, depending on connectivity and error rate (which is ultimately dependent on circuit depth).

AFAIK it’s this explosion of physical qubits per logical qubit that makes it difficult to scale a QC.

i.e. it’s the collective cost rather than the individual cost of adding a qubit that’s expensive.

It reminds me of how classical CPUs resort to branch prediction in order to achieve higher performance.

In the sense that it’s not just about adding one more clock cycle anymore, and you may need to redesign the chip to make it work.

Comment #24 March 9th, 2021 at 3:44 pm

fred #12 and Scott #16: There’s certainly a wide gulf between “number of qubits grows exponentially with incremental cost” and “incremental cost grows exponentially with number of qubits.” Scott, care to take a guess at what the growth rate of the cost-per-qubit or qubits-per-dollar function will end up looking like in the next few decades? Or even which of these functions will be O(1)?

Comment #25 March 9th, 2021 at 5:25 pm

Dan Staley #24:

Scott, care to take a guess at what the growth rate of the cost-per-qubit or qubits-per dollar function will end up looking like in the next few decades? Or even which of these functions will be O(1)?

No 🙂

Comment #26 March 10th, 2021 at 12:02 am

It is difficult to reconcile various statements and requirements of the NIST PQ Crypto competition.

Grover’s algorithm isn’t a threat to symmetric crypto. [1] It is desired that a PQ implementation specify an option for security against a depth of Qubit simulated logical gates greater than if the current black budget was devoted to purchasing NAND gates. [2]

I could make inferences, but whatever inferences I could make would be extremely silly.

On another note, the results of the competition are spectacular: the performance of some of the algorithms is massively superior to RSA-4096 for encryption, which does take several million cycles. It does seem there will be the obvious question in the future, “Why didn’t we switch to this cryptosystem sooner?” Which might never be answered in an assuring way.

1. “…it is quite likely that Grover’s algorithm will provide little or no advantage in attacking AES, and AES 128 will remain secure for decades to come. Furthermore, even if quantum computers turn out to be much less expensive than anticipated, the known difficulty of parallelizing Grover’s algorithm suggests that both AES 192 and AES 256 will still be safe for a very long time.” https://csrc.nist.gov/Projects/post-quantum-cryptography/faqs

2. “NIST suggests an approach where quantum attacks are restricted to a fixed running time, or circuit depth. … to no more than 2^96 logical gates (the approximate number of gates that atomic scale qubits with speed of light propagation times could perform in a millennium). ” https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/call-for-proposals-final-dec-2016.pdf

Comment #27 March 10th, 2021 at 12:13 am

Scott #16:

do you have a reference for the exponential decrease of the signal with circuit depth unless you use error correction? I never read this so directly stated and it seems a key issue for me to understand better. I always read (also from you) that “we need error correction” but now maybe I start to understand why. Thanks!

Comment #28 March 10th, 2021 at 12:23 am

LK2 #27: It’s such a basic, inescapable, and well-known fact about NISQ quantum computing that I’m having trouble coming up with a “reference”! Like, yes, if you chain together t events that each have independent success probability p, the probability that they all succeed does indeed go down like p^t. This has certainly been discussed many times on this blog, and surely also in the Google quantum supremacy paper. And yes, it’s why you ultimately need error correction.

Comment #29 March 10th, 2021 at 2:27 am

LK2 #27

To give you an idea, current QCs would struggle to run just a few AND gates reliably.

Depending on gate set, something like 10 AND gates might be a stretch.

I mean, I don’t know that for a fact since I’ve only played with simulators, but it’s definitely less than 50 and probably less than 25. At 75% fidelity is it more than 10?

BTW, I actually like the AND gate as a quantum *unit of depth*, for various reasons.

E.g. “this QC has 72 qubits and a max depth of 10 AND gates” (at some agreed-upon sensible fidelity).

I expect that we’ll see QCs with 1024 qubits before QCs that can do 128 AND gates.

That’s because 128 AND gates at good fidelity probably requires at least 1024 qubits for error correction alone, and who’s not going to announce a 1024-qubit QC that can run 10 AND gates first?

Seriously.
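The numbers being debated in this exchange all come down to the p^t decay Scott mentioned. A minimal sketch, with illustrative gate fidelities and counts of my own choosing rather than any real device’s spec:

```python
# Sketch of the exponential fidelity decay with circuit depth:
# chaining t gates that each succeed independently with probability p
# leaves a signal of roughly p**t.
# (Illustrative numbers only, not any real device's specification.)

def circuit_fidelity(p: float, t: int) -> float:
    """Overall signal after t gates of per-gate fidelity p."""
    return p ** t

# How fast the signal shrinks for a hypothetical 99.5%-fidelity gate set:
for t in (10, 100, 500, 1000):
    print(t, circuit_fidelity(0.995, t))
```

Since extracting a cross-entropy signal of size F takes on the order of 1/F^2 samples, the signal remains measurable for a while but the required sample count blows up quickly with depth, which is why everyone keeps saying error correction is ultimately needed.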

Comment #30 March 10th, 2021 at 4:22 am

Job #29: Google’s Sycamore chip can do 2-qubit gates with ~99% fidelity. You could probably extract a signal from a circuit with over 200 simulated Toffoli (reversible AND) gates as long as they were arranged into at most 20 or so layers. Doesn’t change your point but just sayin’!

Comment #31 March 10th, 2021 at 5:26 am

Not using the Toffoli decomposition I was using. 🙂

It was something like 40 basic gates per Toffoli (because of the T_dagger gate), but it depends on the gateset.

In any case, I don’t think there is a QC that, given a circuit with 10 AND gates could guarantee 75% fidelity.

I’m only guessing, but I don’t think it’s too far off.

Comment #32 March 10th, 2021 at 8:14 am

Dan Staley #24

I loved your question, took me a few rereads to realize that it’s perfectly correct and in fact achieves an above average information (qustionation?) per letter ratio.

Comment #33 March 10th, 2021 at 9:51 am

I think most Shtetl-Optimized participants agree that QS (Quantum Supremacy) is an unfortunate term. A couple days ago I read (forgot where, anyone know?) that QS is not only offensive, but also a lousy description of the concept. The author furthermore pointed out that “Quantum Advantage” was a much better term.

I agree. Anyone out there have an even better term? If there is agreement on the best term for this concept, on to the next paragraph.

While there is a natural reluctance to entertain the notion of attempting to change a term once it is in reasonably wide use (think: “fool’s errand”), this situation might be different, because a lot of the people involved in QS issues read or are aware of Shtetl-Optimized. If agreement on the best replacement term (Quantum Advantage, or whatever) can be reached, I encourage everyone to start using it and lead by example.

Comment #34 March 10th, 2021 at 10:57 am

Job #23

“AFAIK it’s this explosion of physical qubits per logical qubit that makes it difficult to scale a QC.”

I think it’s even simpler than that:

With classical computers, you can *approximately* double the power by simply networking two computers together. I.e. the state of each machine can be easily encoded, transmitted, and merged/manipulated.

But, with quantum computers, there’s just no way to combine two of them while preserving the QM behavior. I.e. the wave functions of two separate macro systems in some state of complex coherence just can’t be merged.

Comment #35 March 10th, 2021 at 11:11 am

Raoul Ohio #33: We’ve already been over this ad nauseam on this blog. The issue with the term “quantum advantage” is that it’s already used to refer to all sorts of quantum advantages that are not specifically quantum computational supremacy (i.e., beating all currently-available classical computers on earth running all known algorithms on some benchmark task). That said, I don’t mind if others use “quantum advantage” and I sometimes use it myself, just as long as the meaning is clear.

Comment #36 March 10th, 2021 at 11:15 am

fred #34:

But, with quantum computers, there’s just no way to combine two of them while preserving the QM behavior.

To whatever extent that means anything, it’s just false. If I have 2000 qubits, of course I’ll *choose to do different things* than if I merely had two non-interacting sets of 1000 qubits each—but there *are* things I can choose to do that involve creating an entangled state in the full 2^{2000}-dimensional Hilbert space.

Comment #37 March 10th, 2021 at 1:25 pm

Scott #36

All I meant to say is that Quantum Computers, as actual physical machines (with internals and I/O), just aren’t “additive”:

if I give you two Quantum Computers with 1000 logical qubits each (i.e. each can keep those 1000 qubits in a coherent state), there’s no obvious minimally intrusive modification or way to connect them to combine the two into a *single* QC that has the power of 2000 connected logical qubits.

In other words, once Google and IBM each have a 70 qubit device, it’s not like they can then get together and hack some way to interface them to create a total system with 140 qubit device (or even anything above 70), right?

Unlike classical computers which are “modular” (and easily scalable), because we can easily manipulate and transmit part of their state classically. That’s why distributed classical computing works.

Comment #38 March 10th, 2021 at 2:01 pm

fred #37: I still don’t see any deep distinction here. Even to get two classical computers to work together as one unit is famously a nontrivial undertaking! With two quantum computers, it’s even *more* nontrivial—for one thing, it would certainly help immensely if you had a quantum communications channel (able to send qubits reliably from one machine to the other). But there’s nothing impossible about any of this.

Comment #39 March 10th, 2021 at 2:38 pm

Scott #38

For classical computers, it’s fundamentally a consequence of the fact that, in analog electronics, you can simply connect different modules by doing impedance matching, so that the next stage has almost no effect on the previous one. It means you can design and build each module separately, and then connect them, creating an overall system arbitrarily big and complex.

The same applies to digital electronics (it’s still really all analog electronics, with impedance matching, etc).

Practically, that’s why, to double the amount of RAM in your computer, you just open it, plug in an extra stick of RAM, close it, and you’re done.

For a QC, it’s fundamentally harder to add more qubits, because you have to preserve quantum coherence. There’s no such thing as “impedance matching”. You need the full system to stay in coherence. QC is fundamentally about keeping macro systems in a pure quantum superposition. And the bigger the macro system, the harder it is, no? Of course that’s hard to quantify at the moment since we don’t even have logical qubits.

Comment #40 March 10th, 2021 at 2:43 pm

In other words, for classical computers, Moore’s law is telling us that their growth has been exponential. I.e. it shows us that scaling classical computers is very easy (scaling can mean doubling the number of transistors on a chip, or networking more and more machines physically, as in AWS, etc).

Again, that’s a consequence of impedance matching, and the ease to transmit classical state without worrying too much about noise.

Do you ever expect that rate of improvement of QC capacity (in number of qubits) will eventually be exponential too?

Comment #41 March 10th, 2021 at 4:09 pm

Scott #36

> I’ll choose to do different things than if I merely had two non-interacting sets of 1000 qubits each—but there are things I can choose to do that involve creating an entangled state in the full 2^{2000}-dimensional Hilbert space.

Are you saying you can create an entangled state between non-interacting qubits?

How is that possible? I thought entanglement always arose from physical interactions where certain quantities are constrained by the laws of physics to be conserved across both particles (or sets of particles).

Comment #42 March 10th, 2021 at 4:33 pm

For scale on the networking question, in https://arxiv.org/abs/1905.09749 I estimated that being able to establish entanglement between computers at a rate of 150 qubits per second was sufficient to usefully distribute the quantum factoring computation. You’d probably do this with a noisy physical channel and entanglement distillation, which AFAIK is less expensive than the magic state distillation you’d already be doing.

Comment #43 March 10th, 2021 at 4:37 pm

Gerard #39: Yes, you obviously need interaction to create entanglement—but you can haz interaction! Either by just directly sending qubits from one device to the other, or by indirect methods like quantum teleportation or entanglement swapping.

Comment #44 March 10th, 2021 at 5:27 pm

Hi Scott,

My question is more related to the amount of computational power and the speed of light. I apologize if this is not the right forum.

Sometimes, we all forget that we are moving constantly (or not) at the speed of light through the universe, and that fact is what determines the ‘NOW’ effect. We come up with words for future and past as part of our human existence, but in terms of physics, once an event has happened after the light of the universe has touched it, it’s easy to do standard physics with our standard models for events in the past. But for quantum mechanics, we forget that the light has not reached yet some of our assumptions, states, and therefore the observation has not occurred yet. As soon as the light of the universe reaches the quantum states, the NOW is already gone. What I’m trying to say is that to maintain a constant state level of dataflow, we must pick a state closer to the speed of light, I assume this would be the wavelength, and maintain it, we must glide. We need to stay as close as possible in the NOW state to perform our computations and maintain the model inline, in sync with the readings from the light.

With that said, I think there is a direct correlation between E=mc^{2} and the quantum world… the closer we get to the NOW moment, or speed of light, the c^{2} should be canceling out… with obvious output for mass and energy. Some kind of tuning knob for our computer…

E = mc^{2} / ge^{2}

Where ge^{2} is the gliding effect, being 1 in any past events, and a variable value with a max at the speed of light for quantum calculations.

+ [prediction-foo]
+ [prediction-bar]
+ [prediction-foobar]
+ + + + + + + + + + + + + NOW - - - - -> FUTURE
(time, speed of light)

Thanks for your time 🙂

Matias

Comment #45 March 10th, 2021 at 5:45 pm

Scott #41: As a quantum amateur trying to follow along here, am I understanding what you’re saying correctly?

It sounds like, if you connect quantum computers only via traditional networks, you can’t get entanglement or speedups, while traditional computers can. But I think you’re saying that’s not a fair comparison to make, since traditional networks were specifically designed for traditional computers. If you’re going to talk about networking quantum computers, it’s only fair to assume you’re also working on developing ways to share quantum information, just as traditional networks share traditional bits.

Am I in the right ballpark?

Comment #46 March 10th, 2021 at 5:52 pm

Dan Staley #43: Well, you’re on the right continent. 🙂 To get the full benefit of interaction between multiple QCs, you’d want *either* a quantum network, *or else* a classical network enhanced by quantum teleportation, which can simulate a quantum network. Anyway, this is not the question of fundamental interest that people here are making it out to be — you could always just build a bigger QC!

Comment #47 March 10th, 2021 at 7:56 pm

“you could always just build a bigger QC!”

Except that the same principles that make “networking” easy or hard apply just as well to the different parts inside a computer (CPU/GPU/RAM/buses), to the different parts within each chip (cores and memory caches within a CPU/GPU), even down to the level of the logical gates and transistors.

https://www.notebookcheck.net/fileadmin/_migrated/pics/prozdiemap_01.gif

Comment #48 March 10th, 2021 at 8:52 pm

fred #45: Right, but another way to say it is that talk of “networking multiple QCs” kind of misses the point—you might as well talk about integrating more and more qubits within a single QC. Which can be done, and which is exactly what the experimentalists are now doing. These are not thoughts that never occurred to people!

Comment #49 March 10th, 2021 at 11:36 pm

Congratulations to Feng Pan and Pan Zhang for what looks like an important and surprising breakthrough. And also to the Alibaba group for the earlier paper.

Here is a technical matter I am puzzled about: the paper claims the ability to compute precisely the amplitudes for a large number of bitstrings. (Apparently computing the amplitudes is an even more difficult computational task than sampling.) But then, it is not clear to me where the upper bound of 0.739 comes from. If you have the precise amplitudes it seems that you can sample (by the Metropolis algorithm) with close to perfect fidelity. (And, if you wish, you can get an F_XEB score larger than 1.)

I also wonder if the new method gives a quick classical way to compute amplitudes of the Google experiment in the 12-35 qubit regime.

Comment #50 March 11th, 2021 at 1:54 am

Looking at the paper, the claim does sound quite impressive: after selecting 32 of the 53 output bits to be fixed at 0, they explicitly and precisely calculated the quantum amplitudes for all of the 2^21 remaining output vectors! They claim they can do this for any desired values of the fixed bits, and also that the method should work for a “general” random circuit — although the fixed bits are chosen in a sophisticated way that depends heavily on the details of the particular circuit.

This explains their absurdly high linear cross-entropy score: with two million explicit probabilities in hand, they simply selected the million most probable outputs (in their subspace). If they wanted, they could have chosen one million copies of the single most probable vector, and gotten an XEB score of about 15 million. I guess that would look too suspicious…

I think it would have been better to randomly select one million vectors using their (reduced) probability distribution. This would give them a much lower score, but they still would almost certainly beat Google’s score of 0.002.

In that sense, they did in fact succeed at the sampling task: their reduced probability distribution, despite having support only on a tiny subspace, is “closer” to the ideal distribution than Google’s was. Assuming, of course, that we define “closeness” by linear cross-entropy rather than by a more principled metric like KL.
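The cherry-picking effect described above can be seen in a toy distribution. The probabilities below are hypothetical numbers of my own, not the paper’s actual data; the point is only that selecting the top-k most probable strings inflates Linear XEB well beyond what an honest ideal sampler would score:

```python
# Toy example: Linear XEB rewards picking the most probable strings,
# not just sampling the distribution honestly.
# (Hypothetical 3-bit distribution; not data from the paper.)
probs = {f"{i:03b}": p for i, p in enumerate(
    [0.30, 0.20, 0.15, 0.12, 0.10, 0.06, 0.04, 0.03])}  # sums to 1
n = 3

def xeb(samples):
    """F_XEB = 2^n * (mean ideal probability of the samples) - 1."""
    return 2**n * sum(probs[s] for s in samples) / len(samples) - 1

top2 = sorted(probs, key=probs.get, reverse=True)[:2]   # cherry-picked
print(xeb(top2))                                        # 1.0

# Expected XEB of an honest ideal sampler: E[p] = sum of p^2 over outputs
honest = 2**n * sum(p * p for p in probs.values()) - 1
print(round(honest, 3))                                 # 0.464
```

So on this toy distribution, the cherry-picker scores 1.0 while the honest ideal sampler is expected to score about 0.464, mirroring how selecting the million most probable outputs beats genuine sampling.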

Comment #51 March 11th, 2021 at 2:43 am

Scott #28 and Job #29:

Thank you very much!

I feel a bit dumb after reading Scott’s probability explanation 😉

Now I “just” have to understand error correction better and see “how much” it cures the exponential loss.

Also very interesting: Job’s estimates.

Thanks again.

Comment #52 March 11th, 2021 at 6:19 am

Scott #46

Of course I’m aware that Google and IBM are building bigger sets of interacting qubits.

I was merely trying to build a different intuition for why scaling up a QC is difficult, one that doesn’t require thinking about “many, many physical qubits are required to realize one logical qubit”:

if we had two perfect QCs as black boxes (each set of qubits would have to be kept very isolated from its environment, by definition), it’s not at all obvious how to merge them while maintaining coherence.

In contrast, to build a new record-setting classical supercomputer, no one is creating a single ever-bigger giant CPU… they’re doing this instead:

https://scx2.b-cdn.net/gfx/news/hires/2012/introducingm.jpg

I.e., mainly assembling off-the-shelf components. There are many reasons for it (it’s cheap, it’s easy, it works).

Comment #53 March 11th, 2021 at 6:22 am

Raoul #33

“Quantum Supremacy is an unfortunate term. Quantum Advantage was a much better term.”

On the other hand, “Quantum Supremacy” sounds way better than “Quantum Advantage” when you’re trying to get as big a research budget as possible!

Comment #54 March 11th, 2021 at 6:37 am

ghjk #15: That paper by the Waintal group is an excellent summary of these issues. Furthermore, it very systematically describes how to simulate quantum circuits if only a certain fidelity is required (that is, by truncating the lowest Schmidt coefficients of some two-qubit gates).
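For anyone curious, the Schmidt-coefficient trick can be seen in a few lines (a generic illustration, not the Waintal group’s code): reshape a two-qubit gate so that each qubit’s in/out indices are grouped together, and the singular values of the result are the gate’s Schmidt coefficients; approximate simulation truncates the smallest ones.

```python
import numpy as np

# Schmidt decomposition of a two-qubit gate: group the (out, in) indices per
# qubit, then the singular values of the reshaped matrix are the gate's
# Schmidt coefficients.
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)
U = CZ.reshape(2, 2, 2, 2)                  # axes: (out1, out2, in1, in2)
U = U.transpose(0, 2, 1, 3).reshape(4, 4)   # rows: (out1, in1); cols: (out2, in2)
schmidt = np.linalg.svd(U, compute_uv=False)
# CZ has Schmidt rank 2: coefficients (sqrt(2), sqrt(2), 0, 0), so nothing can
# be truncated here; for gates whose coefficients are unequal, dropping the
# smallest ones is what trades fidelity for simulation cost.
```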

However, in a sense, these things have been well known and mentioned every now and then by the community already. For instance, the exponentially small part of the Hilbert space that can be reached by such finite quantum circuit gate sets, the expected exponential decrease in fidelity (see also Scott #16 and below), and also the trick with the Schmidt coefficients are already explained well enough in the supplementary material of the quantum supremacy paper (see Section X.D). Furthermore, the Google team say that they were not able to use this to simulate the supremacy circuits and/or spoof the test. In this sense, the present paper by Pan & Zhang is original.

Comment #55 March 11th, 2021 at 8:07 am

Responding to myself 🙂

This is explained just before the discussion part of the paper.

The crucial thing is that the amplitudes of 2^21 strings are computed, and that the probabilities of those 2^21 strings are distributed close to Porter-Thomas (i.e., exponentially).

If you take samples from them, you can indeed get samples with F_XEB anywhere between -1 and about 15. Picking the highest 10^6 strings from the 2^21 gets you 0.739 (so this value has no special meaning).

Probably, by using Metropolis sampling, you can get samples (smaller ones, unless you enlarge the 2^21 to 2^25, say) with F_XEB close to 1, and with a size-biased distribution (the distribution of probabilities of the sampled strings) that fits the theoretical size-biased distribution.

And you can start with the same 2^21 strings and also use Metropolis sampling to get a sample of size 10^6 with the correct distribution of probabilities, at the cost of somewhat smaller fidelity.
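The Metropolis idea sketched above might look like this in code (my own illustrative stand-in: i.i.d. exponentials for the computed probabilities, and 2^18 rather than 2^21 strings to keep the demo fast):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 2**18                               # stand-in for the 2^21 strings in the subspace
p = rng.exponential(scale=1.0, size=M)  # Porter-Thomas stand-in, units of 1/2^53

def metropolis(p, n_steps, rng):
    """Chain with uniform proposals whose stationary distribution is prop. to p."""
    idx = int(rng.integers(len(p)))
    out = np.empty(n_steps, dtype=np.int64)
    for t in range(n_steps):
        prop = int(rng.integers(len(p)))
        if rng.random() * p[idx] < p[prop]:  # accept w.p. min(1, p[prop]/p[idx])
            idx = prop
        out[t] = idx
    return out

samples = metropolis(p, 200_000, rng)[50_000:]  # discard burn-in
f_xeb = p[samples].mean() - 1.0                 # lands close to 1
```

Sampling proportionally to p makes the expected probability of a drawn string twice the Porter-Thomas mean, hence F_XEB close to 1, whereas the top-10^6 trick gives about 0.74.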

Comment #56 March 11th, 2021 at 8:16 am

LK2 #49: Quantum error-correction, once you’d gotten it to work, would completely and totally cure the exponential loss. Now do you see why it’s such a big deal? 🙂

Comment #57 March 11th, 2021 at 9:18 am

This sounds like an interesting paper! (Although, at heart, it seems to be just a really efficient implementation of a tensor network method.) I’m interested in further progress in classical simulation of quantum computers.

—

In general, I have been increasingly disappointed in you. Once an excellent quantum computing researcher, you have now fallen to the status of “washed-up has-been” (yourself) and “arguable elite” (Robin Hanson). Fortunately, I have a number of suggestions for things you could do to alleviate this malaise! Without further ado, here is my list:

1. You could just do more quantum complexity research. Go find an interesting research question (or look at the ones here or here), and publish a few papers on what you’ve discovered.

2. You could join some quantum computing startup/business and have fun working with actual qubits and implementing the quantum algorithms you know and love.

3. You could continue your forcing series and explain the biggest breakthrough of metamathematics* since Gödel.

4. You could write one of your consistently-excellent explainers on the Quantum Yang-Mills theory existence & mass gap problem. (I think that it is much easier than it appears, but mathematicians haven’t been studying it enough because the physics terminology is too scary. In particular, I suspect that it is secretly yet another PDE regularity problem, but for an infinite family of linear differential equations.)

5. You could collaborate with me (and possibly a bunch of other people) to formally review all of the 116 claims to have resolved \(P\stackrel{?}{=}NP\) listed here at the standards of modern mathematics.

6. You could teach yourself a formal language for mathematical proofs (like Coq, Isabelle, Lean, or Metamath) and painstakingly formalize all of the mathematical research you’ve done over the years.

7. Finally, if all else fails, you could leave this blog and your professorship behind forever, and as Curtis Yarvin once suggested (in a now-deleted Gray Mirror post), launch a Substack to pontificate about your utterly standard Gray Tribe political takes.

*By the way, here’s a hot take about it: Gödel’s Completeness, First Incompleteness, and Second Incompleteness Theorems should be collectively rebranded as the “Fundamental Theorem of Metamathematics”.

Comment #58 March 11th, 2021 at 9:27 am

duck_master #57: Wonderful, thank you, and I’ll take this under advisement. Would you volunteer to babysit my kids and answer most of my emails while I work my way through your list? 😀

Incidentally, while my political views might be utterly standard ones for idealistic STEM nerds who still believe in the original, pre-woke ideals of the Enlightenment, hate posturing and cant regardless of the source, vote straight-ticket Democrat, and want to make the world better, what’s interesting is that none of that has stopped me from being floridly denounced!

Comment #59 March 15th, 2021 at 4:22 am

Sorry, I thought I had already posted this. What would a professor look for in an older student that a younger student cannot provide?

Comment #60 March 15th, 2021 at 8:28 am

duck_master #57:

As long as no one’s watching him, Scott is actually doing all those things at once.

Comment #61 May 24th, 2021 at 4:45 pm

[…] I blogged about this here. It was a clever paper, which showed that if you focus narrowly on spoofing the linear […]

Comment #62 September 18th, 2021 at 10:22 pm

[…] (source, 2021) (try singing it in your […]

Comment #63 December 15th, 2021 at 4:35 pm

This debate reminds me of the loophole-free Bell inequality violations. Experimental results keep being challenged on the basis of different, ever more paranoid loopholes, until one or two experiments become good enough to close the most important ones, while at the same time being useful, let’s say, for entanglement distribution over long distances, at least in principle.

Here, instead of loopholes, people keep challenging the supremacy results with metrics, which is fine up to the point where those metrics become too contrived to be taken seriously by experimentalists, as when some far-fetched classical algorithm turns out to be equally or more efficient than a quantum circuit implementation under such a metric. This, I think, is how the argument over who gets the recognition for quantum supremacy will eventually settle.

This was a great post. I wish you had the time to expand it, particularly the parts about measuring “too many bits in common” and IBM’s brute-force approach.