The Google Willow thing
Yesterday I arrived in Santa Clara for the Q2B (Quantum 2 Business) conference, which starts this morning, and where I’ll be speaking Thursday on “Quantum Algorithms in 2024: How Should We Feel?” and also closing the conference via an Ask-Us-Anything session with John Preskill. (If you’re at Q2B, reader, come and say hi!)
And to coincide with Q2B, yesterday Google’s Quantum group officially announced “Willow,” its new 105-qubit superconducting chip with which it’s demonstrated an error-corrected surface code qubit as well as a new, bigger quantum supremacy experiment based on Random Circuit Sampling. I was lucky to be able to attend Google’s announcement ceremony yesterday afternoon at the Computer History Museum in Mountain View, where friend-of-the-blog-for-decades Dave Bacon and other Google quantum people explained exactly what was done and took questions (the technical level was surprisingly high for this sort of event). I was also lucky to get a personal briefing last week from Google’s Sergio Boixo on what happened.
Meanwhile, yesterday Sundar Pichai tweeted about Willow, and Elon Musk replied “Wow.” It cannot be denied that those are both things that happened.
Anyway, all day yesterday I read comments on Twitter, Hacker News, etc. complaining that, since there wasn’t yet a post on Shtetl-Optimized, how could anyone possibly know what to think of this?? For 20 years I’ve been trying to teach the world how to fish in Hilbert space, but (sigh) I suppose I’ll just hand out some more fish. So, here are my comments:
- Yes, this is great. Yes, it’s a real milestone for the field. To be clear: for anyone who’s been following experimental quantum computing these past five years (say, since Google’s original quantum supremacy milestone in 2019), there’s no particular shock here. Since 2019, Google has roughly doubled the number of qubits on its chip and, more importantly, increased the qubits’ coherence time by a factor of 5. Meanwhile, their 2-qubit gate fidelity is now roughly 99.7% (for controlled-Z gates) or 99.85% (for “iswap” gates), compared to ~99.5% in 2019. They then did the more impressive demonstrations that predictably become possible with more and better qubits. And yet, even if the progress is broadly in line with what most of us expected, it’s still of course immensely gratifying to see everything actually work! Huge congratulations to everyone on the Google team for a well-deserved success.
- I already blogged about this!!! Specifically, I blogged about Google’s fault-tolerance milestone when its preprint appeared on the arXiv back in August. To clarify, what we’re all talking about now is the same basic technical advance that Google already reported in August, except now with the PR blitz from Sundar Pichai on down, a Nature paper, an official name for the chip (“Willow”), and a bunch of additional details about it.
- Scientifically, the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter. So, this is a very important threshold that’s now been crossed. As Dave Bacon put it to me, “eddies are now forming”—or, to switch metaphors, after 30 years we’re now finally tickling the tail of the dragon of quantum fault-tolerance, the dragon that (once fully awoken) will let logical qubits be preserved and acted on for basically arbitrary amounts of time, allowing scalable quantum computation.
- Having said that, Sergio Boixo tells me that Google will only consider itself to have created a “true” fault-tolerant qubit, once it can do fault-tolerant two-qubit gates with an error of ~10^-6 (and thus, on the order of a million fault-tolerant operations before suffering a single error). We’re still some ways from that milestone: after all, in this experiment Google created only a single encoded qubit, and didn’t even try to do encoded operations on it, let alone on multiple encoded qubits. But all in good time. Please don’t ask me to predict how long, though empirically, the time from one major experimental QC milestone to the next now seems to be measured in years, which are longer than weeks but shorter than decades.
- Google has also announced a new quantum supremacy experiment on its 105-qubit chip, based on Random Circuit Sampling with 40 layers of gates. Notably, they say that, if you use the best currently-known simulation algorithms (based on Johnnie Gray’s optimized tensor network contraction), as well as an exascale supercomputer, their new experiment would take ~300 million years to simulate classically if memory is not an issue, or ~10^25 years if memory is an issue (note that a mere ~10^10 years have elapsed since the Big Bang). Probably some people have come here expecting me to debunk those numbers, but as far as I know they’re entirely correct, with the caveats stated. Naturally it’s possible that better classical simulation methods will be discovered, but meanwhile the experiments themselves will also rapidly improve.
- Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!! (For example, by computing the “Linear Cross-Entropy” score of the outputs.) For this reason, all validation of Google’s new supremacy experiment is indirect, based on extrapolations from smaller circuits, ones for which a classical computer can feasibly check the results. To be clear, I personally see no reason to doubt those extrapolations. But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.
- In his remarks yesterday, Google Quantum AI leader Hartmut Neven talked about David Deutsch’s argument, way back in the 1990s, that quantum computers should force us to accept the reality of the Everettian multiverse, since “where else could the computation have happened, if it wasn’t being farmed out to parallel universes?” And naturally there was lots of debate about that on Hacker News and so forth. Let me confine myself here to saying that, in my view, the new experiment doesn’t add anything new to this old debate. It’s yet another confirmation of the predictions of quantum mechanics. What those predictions mean for our understanding of reality can continue to be argued as it’s been since the 1920s.
- Cade Metz did a piece about Google’s announcement for the New York Times. Alas, when Cade reached out to me for comment, I decided that it would be too awkward, after what Cade did to my friend Scott Alexander almost four years ago. I talked to several other journalists, such as Adrian Cho for Science.
- No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits, or for Google versus its many competitors in experimental QC. And, I mean, it’s not bad for Google or for superconducting QC! These past couple of years I’d sometimes commented that, since Google’s 2019 announcement of quantum supremacy via superconducting qubits, the trapped-ion and neutral-atom approaches had seemed to be pulling ahead, with spectacular results from Quantinuum (trapped-ion) and QuEra (neutral atoms) among others. One could think of Willow as Google’s reply, putting the ball in its competitors’ courts to likewise demonstrate better logical qubit lifetime with increasing code size (or, better yet, full operations on logical qubits exceeding that threshold, without resorting to postselection). The great advantage of trapped-ion qubits continues to be that you can move the qubits around (and also, their two-qubit gate fidelities seem somewhat ahead of superconducting ones). But to compensate, superconducting qubits have the advantage that the gates are a thousand times faster, which makes it feasible to do experiments that require collecting millions of samples.
- Of course the big question, the one on everyone’s lips, was always how quantum computing skeptic Gil Kalai was going to respond. But we need not wonder! On his blog, Gil writes: “We did not study yet these particular claims by Google Quantum AI but my general conclusion apply to them ‘Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.’ ” Most of Gil’s post is devoted to re-analyzing data from Google’s 2019 quantum supremacy experiment, which Gil continues to believe can’t possibly have done what was claimed. Gil’s problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results. I predict that Gil, and others who take it as axiomatic that scalable quantum computing is impossible, will continue to have their work cut out for them in this new world.
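To make the surface-code result above concrete, here is a minimal numerical sketch of the textbook below-threshold scaling law, under which each increase of the code distance d by 2 divides the logical error rate by a fixed suppression factor Lambda. The distance-3 rate and Lambda = 2 below are illustrative assumptions on my part, not Google's measured values:

```python
# Minimal sketch of below-threshold error suppression in a surface code:
# once physical errors are below threshold, each increase of the code
# distance d by 2 multiplies the logical error rate down by a factor Lambda.
# The distance-3 rate and Lambda = 2 are illustrative, not measured, values.

def logical_error_per_cycle(d, eps_3=3e-3, Lambda=2.0):
    """Logical error rate per cycle at distance d, extrapolated from
    an assumed distance-3 rate eps_3."""
    return eps_3 / Lambda ** ((d - 3) / 2)

for d in (3, 5, 7, 9, 11):
    rate = logical_error_per_cycle(d)
    print(f"d={d:2d} ({d}x{d} patch): logical error/cycle ~ {rate:.2e}")
```

The qualitative point is the direction of the trend: above threshold the same formula would run in reverse, with bigger codes living for less time, which is what every experiment before this one saw.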
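And to spell out why verifying the supremacy experiment is as hard as simulating it: computing the Linear Cross-Entropy score of a batch of samples requires the ideal output probabilities p(x), which is itself a full classical simulation of the circuit. Here is a toy pure-Python version at 3 qubits; the circuit is a made-up "random circuit" for illustration, not Google's actual gate set:

```python
import cmath
import math
import random

# Toy linear cross-entropy benchmark (XEB) on 3 qubits. The key point: to
# score the samples at all, you need the ideal probabilities p(x), i.e. a
# full classical simulation of the circuit. At 3 qubits that's trivial; at
# 67 qubits it's the very computation estimated to take ~10^25 years.

n = 3
dim = 2 ** n

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a dense statevector."""
    new = [0j] * dim
    for i in range(dim):
        b = (i >> q) & 1
        i0 = i & ~(1 << q)      # index with qubit q set to 0
        i1 = i0 | (1 << q)      # index with qubit q set to 1
        new[i] = gate[b][0] * state[i0] + gate[b][1] * state[i1]
    return new

def apply_cz(state, q1, q2):
    """Controlled-Z: negate amplitudes where both qubits are 1."""
    return [-a if (i >> q1) & 1 and (i >> q2) & 1 else a
            for i, a in enumerate(state)]

random.seed(0)
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]

state = [0j] * dim
state[0] = 1 + 0j
for _ in range(4):                       # four layers of a "random circuit"
    for q in range(n):
        state = apply_1q(state, H, q)
        phi = random.uniform(0, 2 * math.pi)
        state = apply_1q(state, [[1, 0], [0, cmath.exp(1j * phi)]], q)
    state = apply_cz(state, 0, 1)
    state = apply_cz(state, 1, 2)

probs = [abs(a) ** 2 for a in state]
xeb_exact = dim * sum(p * p for p in probs) - 1   # ideal XEB of this circuit

# Score samples drawn from the ideal device, and from pure noise.
ideal = random.choices(range(dim), weights=probs, k=20000)
noise = random.choices(range(dim), k=20000)
xeb = dim * sum(probs[x] for x in ideal) / len(ideal) - 1
xeb_noise = dim * sum(probs[x] for x in noise) / len(noise) - 1

print(f"ideal samples: XEB = {xeb:+.3f} (target {xeb_exact:+.3f})")
print(f"uniform noise: XEB = {xeb_noise:+.3f} (target +0.000)")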
Update: Here’s Sabine Hossenfelder’s take. I don’t think she and I disagree about any of the actual facts; she just decided to frame things much more negatively. Ironically, I guess 20 years of covering hyped, dishonestly-presented non-milestones in quantum computing has inclined me to be pretty positive when a group puts in this much work, demonstrates a real milestone, and talks about it without obvious falsehoods!
Comment #1 December 10th, 2024 at 10:53 am
For a non Quantum guy, thanks for the explanation and references. We are living through great change but humanity through our acts of violence confounds me. Keep up the excellent work.
Comment #2 December 10th, 2024 at 10:55 am
[…] what it’s worth, Google has decided to equate “fault-tolerance” with a logical error rate below 1e-6. It’s all a bit […]
Comment #3 December 10th, 2024 at 11:12 am
Hi Scott,
Sorry that this is off-topic, but I couldn’t resist asking your opinion about dramatic recent global events.
The Assad regime in Syria collapsed suddenly, in only a span of fourteen days or so, after a civil war that lasted fourteen *years.* I don’t think anyone expected this, even the rebels themselves—especially because the conflict was frozen since 2020!
So I have two questions for you about this—one straightforward, and one hypothetical.
1. How do you feel about Assad? Do you think the collapse of his regime is a good thing for Syria? Do you think he was truly a brutal ruthless dictator, or was he somewhat misunderstood, the atrocities exaggerated? Do you think the rebels are better for Syria?
2. I recently learned that Bashar’s son, Hafez, recently got his PhD in algebraic number theory in Moscow. By all accounts he’s a smart mathematician. This evokes an interesting hypothetical: if Hafez al-Assad was pursuing a PhD at UT Austin in theoretical computer science or quantum information, would you work with him? Would you treat him differently than other students? What does your moral compass say about working with a student whose father is a ruthless dictator?
Comment #4 December 10th, 2024 at 11:43 am
Cyrus #3: Here’s what I wrote on Facebook:
Good riddance to Syria’s murderous dictator, and best wishes to its new, possibly slightly more reform-minded murderous dictators
Assad was a monster. I’m cautiously hopeful that the Syrian people finally have the chance to choose a better path, but of course I’m acutely aware of the long history of revolutions in the Middle East (and elsewhere) that replaced despotisms by even worse ones. In any case, I’m of course gratified that Iran’s terror axis has been weakened by the regime change in Syria.
If Hafez al-Assad disavowed the murderous ideology of his father (as, for example, the “Son of Hamas” famously did), then there’d be no issue whatsoever about doing math with him; it would even be a privilege. Collaborating with someone who believes you deserve to die as a Zionist infidel would of course present some practical safety problems. I guess I’m lucky that that issue hasn’t arisen even once in 27 years of research, including with at least four Iranians.
Comment #5 December 10th, 2024 at 11:48 am
Do you know why Google performed the random circuit sampling with only 67 qubits when the chip has 105 qubits?
(I’m getting the 67 figure from https://www.nature.com/articles/s41586-024-07998-6, which is what Google’s blog post links to)
Did they attempt RCS with 105 qubits and fail?
I still have in mind my threshold number of qubits at 76, because Avogadro’s constant is ~10^23 ≈ 2^76, and if you exceed this threshold then it means that it’s impossible for the quantum state to be encoded classically in some magical way by the particles in the system, and that the quantum system really does use an exponential Hilbert space from nature instead of encoding all the possible states of the Hilbert space in some classical way.
I’d also be more convinced if they did some test such as running some unitary and then its inverse and testing the error rate of that, because XEB can’t be verified and can only be extrapolated.
Comment #6 December 10th, 2024 at 11:56 am
Hi Scott,
You wrote: “besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results.”
Scott, are you sure about IBM? One of my reasons (besides the “skeptillions” 🙂 ) not to take the Google claims too seriously is that the largest IBM random circuit sampling experiment I am aware of is for 7 qubits, and I am not aware of IBM RCS experiments with good results. (This is item “D” in my post, and if I am wrong I will gladly stand corrected.)
Comment #7 December 10th, 2024 at 12:07 pm
Thanks for your reply.
I really disagree about one thing, though. I highly doubt that Hafez (or Bashar, for that matter) would view you as a Zionist infidel. Bashar is not an ideologue and not particularly religious. Before 2011, most Western leaders saw him as a pro-Western secular dictator. He briefly flirted with normalizing relations with Israel before 2011. Of course he’s come to rely on Iranian military support—but not because he’s particularly sympathetic to Iranian Islamist ideology, but just because nobody else would support him.
Wikileaks released a trove of Assad emails during the revolution. It’s clear that he’s not particularly religious (possibly even an atheist), nor is he ideological. He very likely doesn’t give a shit about Israel and Palestine one way or the other. What he cares about most is keeping his family wealthy and in power—and in particular, preserving his family’s lavish lifestyle—the palaces, the fancy cars, and especially funding his wife’s extravagant spending habits (jewelry and art and designer clothes etc).
To be honest, a secular autocrat who mostly cares about his family’s power and lifestyle would be much safer for Israel than a failed state ruled by Islamist jihadists. You’re aware that al-Jolani used to be leader of al Qaeda in Iraq? That he still has a $10 million State Department bounty on his head for terrorism?
Comment #8 December 10th, 2024 at 12:19 pm
IBM’s Heron chip has been out for about a year, with ostensibly similar error rates and more qubits (but a different topology). What are the main advantages of Willow that allowed them to run error-correction codes on this chip? Could Heron have run error correction and achieved this? Can the error-correction code implemented on Willow be ported to Heron?
When a logical gate gets applied to a logical qubit, does it essentially just get “mixed in” to the error-correction step? Are there topological requirements when you want to apply a two-logical-qubit gate?
Comment #9 December 10th, 2024 at 12:20 pm
Some physicist #5: Sorry, I don’t know the answer to that, nor do I know if they did the inversion test (but note that doubling the circuit depth would square their signal-to-noise ratio). I guess we’ll wait for the paper on their new RCS experiment (or I can try asking them at Q2B).
Comment #10 December 10th, 2024 at 12:23 pm
Gil Kalai #6: IBM never uses the term “quantum supremacy.” Instead they talk about the “quantum volume” of their current device. But if you look at the definition of their “quantum volume” metric, it’s directly based on Random Circuit Sampling.
Comment #11 December 10th, 2024 at 12:25 pm
Christopher Greene #8: Excellent question! I hope someone from IBM and/or Google will chime in about why IBM didn’t demonstrate the same benchmark.
Comment #12 December 10th, 2024 at 12:38 pm
[…] is a nice very positive blog post over SO about the new developments where Scott wrote: “besides the new and more inarguable Google result, […]
Comment #13 December 10th, 2024 at 1:07 pm
Scott, IBM do use the term “random circuit sampling,” and have a paper where they run an RCS experiment with six qubits. In addition, indeed, “quantum volume” is a closely related notion, and IBM’s record for quantum volume is 2^9. Both these results are considerably weaker than Google’s assertions from 2019 (not to speak of newer papers).
As I said in my post, from what I know there is a large gap between IBM’s and Google’s RCS abilities (and if you have more information I will gladly correct myself), and, in the context of other concerns about Google’s methodology and Google’s very fantastic claims, this gives, in my opinion, a reason to doubt Google’s assertions.
Also, are you sure that QuEra have reported Random Circuit Sampling experiments with good results? Can you give a link or a reference?
Comment #14 December 10th, 2024 at 1:30 pm
Some physicist #5: the manuscript you quote was run on a Sycamore device, not a Willow device; hence the smaller number of qubits in that experiment. The announcement made yesterday was about a new RCS experiment run on the Willow device.
For your question about inverting the unitary, we actually do it in Figure 4b of the manuscript you quote. We called it the Loschmidt experiment in the figure/text.
Comment #15 December 10th, 2024 at 2:01 pm
Gil Kalai #13: Yes, IBM talks about RCS, but never about “quantum supremacy.”
QuEra did a 48-qubit IQP circuit. That was then classically simulated, but if you just want to see that the circuit fidelity degrades like the gate fidelity to the power of the number of gates—which of course has always been your central point of contention—then that already suffices. For that statement, my point stands: you’re no longer arguing against one experiment, but against a half-dozen separate experiments.
Comment #16 December 10th, 2024 at 2:07 pm
Pop media is buzzing about a revolutionary new quantum machine from Google that runs a gazillion times faster than a regular desktop by executing programs in parallel universes. The truth appears to be substantially more subtle.
As far as I can tell (and I might be way off-base here), RCS is a demonstration that your quantum computer runs test case quantum circuits the way you’d expect a quantum computer to run them, to a level of detail that would be infeasible to fake with a classical computer. It’s a specialized problem without any direct utility outside of the benchmark. To frame it as a car analogy: we’ve proved the engine is running, but that doesn’t demonstrate that the vehicle will get us anywhere.
Something about the narrow, contrived nature of the RCS problem makes “supremacy” feel like a stretch, despite technically fitting the definition. Am I wrong to want a more bombastic result out of a QC experiment before I’m ready to declare the age of quantum computing is upon us?
Comment #17 December 10th, 2024 at 2:20 pm
triceratops #16: I’ve pretty much given up on trying to calibrate a single message. Different people need different messages depending on what misconception they have.
If someone thinks we’re about to get personal QCs that will speed up everything we do, they need to be told that “the age of QC” is not upon us (and indeed, might never be).
If, on the other hand, someone thinks QC is all a scam or a misconception, and quantum error-correction can never work in the real world, they need to be told that “the age of QC” is now upon us.
Comment #18 December 10th, 2024 at 2:29 pm
One thing I like about Assad is that he defies popular expectations of who a dictator should be. What’s a Middle Eastern dictator in the popular imagination? He’s tall and strong, square-jawed and handsome. He’s had experience as a soldier. He’s a fighter. He’s socially intelligent and charismatic.
Assad is none of those things. Only a couple years before assuming power, he was a mild-mannered ophthalmologist in London. His brother was the strong, charismatic one—and his brother’s death paved the way for him to become dictator. People who knew Bashar in his London days remembered him as a quiet, socially awkward, “geeky I.T. guy”—a nerd. He’s gawky and chinless and awkward. He’s soft-spoken. He’s not charismatic or conventionally attractive. He spent his formative years studying. He can’t fight; he doesn’t even play sports.
So yes, he’s a “geek”—and a fearsome dictator. Surely it’s good to show young male geeks an example of someone who achieved fearsome power despite all those impediments? Isn’t it good for representation etc. that he showed the world that you don’t have to be good looking or charismatic to be a powerful ruler? That you can even be a nerd and do all those things?
Comment #19 December 10th, 2024 at 2:43 pm
noob question: how many qubits would it take to implement Shor’s factorization for browser-standard 2048-bit keys?
Comment #20 December 10th, 2024 at 3:31 pm
Okay, so if this is officially Shtetl-Optimized-Approved, then I can be excited.
As someone who’s only watched from afar for a while now, and who remembers your skepticism toward D-Wave back in the day, I’m curious about D-Wave’s role in all this, since they were acquired by Google. Does this mean they actually had something back then after all? Or did it end up being more of an “acquihire” and this is from a different tech tree, so to speak?
Comment #21 December 10th, 2024 at 4:07 pm
Cyrus #18: I know you’re trolling me, but I feel like you kind of turn in your geek card at the point when you’re slaughtering 500,000 people because they challenged your absolute rule.
Comment #22 December 10th, 2024 at 4:10 pm
RubeRad #19:
noob question: how many qubits would it take to implement Shor’s factorization for browser-standard 2048-bit keys?
At least 4000 logical qubits. With known error-correcting codes, that means easily more than a million physical qubits (more or less depending on the gate fidelity).
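To see roughly where "easily more than a million" comes from, here is a back-of-envelope sketch using textbook surface-code approximations: a distance-d patch costs about 2d^2 physical qubits, and the logical error rate per cycle scales like A·(p/p_th)^((d+1)/2). The prefactor, threshold, cycle count, and error budget below are all rough assumptions of mine, and routing plus magic-state factories would add a large further overhead on top:

```python
import math  # (not strictly needed; kept for readers who extend the sketch)

# Back-of-envelope surface-code overhead for factoring a 2048-bit key.
# Textbook approximations, not a real resource estimate:
#   - logical error per code cycle ~ A * (p / p_th)^((d+1)/2)
#   - a distance-d surface-code patch uses ~ 2*d^2 physical qubits
# All constants (A, p_th, cycle count, budget) are illustrative assumptions.

A, p_th = 0.1, 1e-2           # fit prefactor and error threshold (assumed)
n_logical = 4_000             # logical qubits for 2048-bit Shor (figure above)
logical_cycles = 1e9          # very rough count of code cycles for the run
budget = 0.01                 # acceptable probability of overall failure

def min_distance(p):
    """Smallest odd code distance meeting the error budget at physical error p."""
    target = budget / (n_logical * logical_cycles)   # per-qubit, per-cycle
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

for p in (1e-3, 1e-4):
    d = min_distance(p)
    phys = n_logical * 2 * d * d
    print(f"p={p:.0e}: distance d={d}, ~{phys/1e6:.1f}M physical qubits "
          f"(excluding routing and magic-state factories)")
```

Note how sensitive the total is to the physical gate fidelity: improving p by one order of magnitude roughly halves the required distance and cuts the physical-qubit count several-fold, which is the "more or less depending on the gate fidelity" above.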
Comment #23 December 10th, 2024 at 4:12 pm
Scott,
The two central points of contention presented in my post regarding the Google 2019 paper are the following: 1) The calibration process appears to exhibit an undocumented and methodologically flawed global optimization process. 2) The circuit fidelity degrades like the gate fidelity to the power of the number of gates *in a statistically unreasonable way*. The first point may well be relevant to Google’s 2023/2024 advantage paper and to the new septillion paper.
(These specific contention points do not seem to be directly relevant to Google’s error-correction claims, nor to the QuEra IQP experiment.)
You missed the point about IBM. I don’t mind whether they use the term quantum supremacy or not. From what I know, the quality of RCS samples produced by IBM is considerably lower than that produced by Google, and in my opinion this likely follows from the methodologically flawed procedures that Google uses, related to contentions 1) and 2).
Comment #24 December 10th, 2024 at 4:13 pm
Gabe Durazo #20: D-Wave was not acquired by Google. I have no idea where you heard that. (Are you confusing it with Google having bought a D-Wave machine, years ago?) D-Wave is still around but had no role whatsoever in Willow as far as I know.
Comment #25 December 10th, 2024 at 4:18 pm
Link for Random Circuit Sampling does not work for me.
Comment #26 December 10th, 2024 at 4:40 pm
QC-related: there is an Error Correction Zoo: https://errorcorrectionzoo.org/ and Greg K is a vet 🙂 I think it’s very new?
So you can explore the “Quantum kingdom” and go to Kitaev and then read about complexity details and boundaries (that I don’t understand much :))
I blogged about it here: https://fikisipi.substack.com/p/the-error-correction-zoo-and-googles
Comment #27 December 10th, 2024 at 5:06 pm
> Scott #22: 2048-bit key […] At least 4000 logical qubits
Amazingly, this is no longer true! https://eprint.iacr.org/2024/222 shows how to factor RSA2048 with 1730 logical qubits (more generally: n/2 + o(n) logical qubits).
They use residue arithmetic to move the dominant spatial cost away from the mod N accumulator over to the phase estimation register (because the residue arithmetic breaks qubit recycling). They then use Ekera+Hastad’s technique to reduce the size of the phase estimation register ( https://eprint.iacr.org/2017/077 ).
It’s somewhat fortunate the residue arithmetic breaks qubit recycling. Otherwise the space required would have been logarithmic in n, implying a polynomial time classical factoring algorithm by simulating the circuit.
Comment #28 December 10th, 2024 at 5:09 pm
@Cyrus
In order to stay in power, Bashar al-Assad killed — with help from Russia, Hezbollah, and various Shi’a proxy militias from Iraq, Afghanistan and Pakistan recruited by the Iranian IRGC — upwards of *500,000* people. A good number of them were killed in *extermination centers* that even had crematoria.
As members of the Syrian opposition would tell anyone who listened, when the Assadists said “Assad, or we burn the country,” they meant it literally.
Given those facts — which are not in dispute — does it really matter that Assad is not a religious man?
Comment #29 December 10th, 2024 at 5:51 pm
Hey Scott.
You seem to allude to P=NP with regards to the solution possibly taking as much time to verify with classical computing.
It is my understanding that P≠NP; it’s just that it’s one of those things that is hard to prove, like the Collatz conjecture. Of course, in a rigorous mathematical discipline like physics, I understand the approach of being 100% correct and not being content with intuitive approximations.
My specific question however is:
Is there anything specific to quantum computing that would change our estimation that P≠NP? Additionally, is there anything specific to quantum computing that would make the practical implications of a mathematical proof any more impactful than with classical computing?
Comment #30 December 10th, 2024 at 6:08 pm
Craig Gidney #27: Really cool, thanks!
Comment #31 December 10th, 2024 at 6:11 pm
Tom #29: Simulating random quantum circuits and verifying their outputs aren’t believed to be NP problems, so P vs NP isn’t directly relevant, except insofar as it also involves the distinction between finding and verifying.
No, I don’t think quantum computing does anything to change our basic beliefs about P vs NP (or at least mine). It of course leads to many new questions, like whether NP is in BQP (quantum polynomial time), and whether NP=QMA (QMA, or Quantum Merlin Arthur, being a quantum generalization of NP), but that’s different.
Comment #32 December 10th, 2024 at 7:38 pm
Can the Grover speedup be demonstrated and verified by chips like this for some well-known problems where the lower bound is proven for the classical algorithm? At least then QC could be demonstrably proven on a small-scale problem, where hype can clearly be separated from skepticism?
Comment #33 December 10th, 2024 at 8:17 pm
Preskill is not saying anything very different from Sabine either, except that he thinks we will get there one day.
https://www.nytimes.com/2024/12/09/technology/google-quantum-computing.html
Comment #34 December 10th, 2024 at 8:39 pm
She (Sabine Hossenfelder) says “Estimates say that we will need about 1 million qubits for practically useful applications and we’re still about 1 million qubits away from that.”
My passwords are safe.
Comment #35 December 10th, 2024 at 9:15 pm
Thank you, Scott, for explaining these results in a way that we non-experts can understand, and for putting them in context.
This blog is my go to place for serious take on advances in Quantum Computing. I wish you many joyful years of blogging and public education.
Comment #36 December 10th, 2024 at 9:35 pm
HasH #34:
She (Sabine Hossenfelder) says “Estimates say that we will need about 1 million qubits for practically useful applications and we’re still about 1 million qubits away from that.”
That sentence leapt out at me as an especially poor way of thinking. It’s like, has she never heard of exponentials? When COVID started spreading in Wuhan, imagine someone saying: “eh, this will be something to worry about in practice when there are a million infections, and we’re still about 1 million infections away from that.” 🙂
Comment #37 December 10th, 2024 at 9:37 pm
Also thanks to Gil for carefully criticizing the results. That is a very important function of academia; it contributes to the people working on building quantum computers doing a better job.
If at some point Gil gets convinced by the results, then it is on pretty solid grounds.
We need more people like Gil who respectfully challenge and question every bit in good spirit.
Comment #38 December 10th, 2024 at 9:38 pm
Prasanna #32:
Can Grover speedup be demonstrated and verified by chips like this for some well known problems where the lower bound is proven for the classical algorithm ?
The short answer is no. Firstly because the Grover speedup is only proven when you’re searching an actual physical database—but for that you need a qRAM (i.e., a quantumly addressable memory), which no one currently knows how to build. And secondly because for a deep circuit like Grover, you almost certainly need fault-tolerance—but then you need to be at an enormous scale, before the √N speedup of Grover wins out in practice over the large constant overhead from fault-tolerance.
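To put rough numbers on the fault-tolerance overhead point, here is a toy crossover calculation. The classical and quantum operation rates are invented for illustration (a classical machine checking 10^9 candidates per second versus a fault-tolerant QC managing one Grover iteration per millisecond, since surface-code cycles, decoding, and magic states are slow), not measurements:

```python
import math

# Toy crossover estimate for Grover search vs classical brute force.
# All rates are illustrative assumptions, not measured numbers.

classical_rate = 1e9        # candidate checks per second (assumed)
grover_rate = 1e3           # fault-tolerant Grover iterations/sec (assumed)

def times(N):
    """Expected classical search time vs Grover's (pi/4)*sqrt(N) iterations."""
    t_classical = (N / 2) / classical_rate
    t_grover = (math.pi / 4) * math.sqrt(N) / grover_rate
    return t_classical, t_grover

for log2N in (40, 50, 60, 70):
    tc, tg = times(2 ** log2N)
    winner = "Grover" if tg < tc else "classical"
    print(f"N=2^{log2N}: classical {tc:.2e}s vs Grover {tg:.2e}s -> {winner}")
```

With these (assumed) rates the quadratic speedup only starts to pay off somewhere beyond N ~ 2^40, which is the sense in which the scale must be enormous before Grover wins in practice.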
Comment #39 December 10th, 2024 at 10:16 pm
Gil Kalai #23: So we’re perfectly clear, from my perspective your position has become like that of Saddam Hussein’s information minister, who repeatedly went on TV to explain how Iraq was winning the war even as American tanks rolled into Baghdad. I.e., you are writing to us from an increasingly remote parallel universe.
The smooth exponential falloff of circuit fidelity with the number of gates has by now been seen in separate experiments from Google, IBM, Quantinuum, QuEra, USTC, and probably others I’m forgetting right now. Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same.
And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!
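For concreteness, the "simplest model" prediction can be checked in a few lines: f = 0.997 roughly matches Willow's CZ fidelity, while the Monte Carlo itself is purely illustrative:

```python
import random

# Sketch of the "simplest model" behind the smooth exponential falloff: if
# each of g gates independently avoids an error with probability f, the whole
# circuit runs error-free with probability f^g. That is an exponential in g
# with no cliff at any particular depth. f = 0.997 roughly matches Willow's
# CZ fidelity; the trial counts and depths are illustrative.

random.seed(1)
f = 0.997

def empirical_fidelity(g, trials=20_000):
    """Monte Carlo estimate: fraction of runs in which no gate errored."""
    ok = sum(all(random.random() < f for _ in range(g)) for _ in range(trials))
    return ok / trials

sim = {g: empirical_fidelity(g) for g in (100, 500)}
for g, est in sim.items():
    print(f"{g:3d} gates: predicted f^g = {f ** g:.4f}, simulated {est:.4f}")
```

On a log scale the predicted curve is a straight line with slope log(f), which is exactly the pattern the various groups' experiments have reported.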
Comment #40 December 11th, 2024 at 12:31 am
> […] years, which are longer than weeks but shorter than decades.
Just want to say that this is the rigorous mathematical analysis I come to Shtetl-Optimized for.
🙂
(Great post, like many other non-experts I was waiting for your take on the situation!)
Comment #41 December 11th, 2024 at 1:49 am
“This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, …”
This statement struck me as kind of a non-sequitur. Is there really a superposition regime, somewhere beyond the double-slit experiment, that lends more “credence” to MWI?
Comment #42 December 11th, 2024 at 1:56 am
Scott, I don’t understand your #6 argument.
Yes, we could not run a supercomputer for 10^25 years, but we can run one for about a month without relying on indirect verification or waiting for efficient experiment designs.
Do we have proof that this chip is 10^6 times stronger than a supercomputer?
In my opinion, that would be more valuable than 10^30-factor claims, for which we have only indirect evidence.
Comment #43 December 11th, 2024 at 3:35 am
What are the complexity barriers to solving random circuit sampling? Would Monte Carlo methods return worse fidelity than observed in the experiment?
If BQP=P, would that have any other implications in complexity theory?
If nature does it somehow, what are the odds that it’s really in P? That there’s some algorithm we’re missing?
Comment #44 December 11th, 2024 at 3:49 am
I meant if
BQP=BPP, as obviously derandomization would be an implication.
Comment #45 December 11th, 2024 at 6:11 am
Riley #41: Yeah, it’s a non-sequitur. If you’re going to accept the philosophical case for Many-Worlds, you should presumably have already accepted it the moment you accepted the reality of quantum mechanics itself (eg from the double-slit experiment) — the actual demonstration of a large quantum computation should’ve been completely superfluous to you. Conversely, if you were a Bohmian or a quantum Bayesian or whatever else, you should’ve made exactly the same prediction for Google’s experiment and therefore had no update. Updating on the experiment is a matter of psychology rather than logic.
Comment #46 December 11th, 2024 at 6:23 am
Evyatar #42: Let me try one more time. The specific computation that Google reported with Willow, the one that (as far as we know) would take 10^25 years to classically spoof, would also (for the same reasons) plausibly take 10^25 years to classically verify. Random circuit sampling is not like factoring, for which it’s easy to verify the answer however hard it might be to find it.
Obviously Google also tested Willow on smaller problems, ones for which a classical computer can verify the answer in reasonable time, and for those smaller problems things checked out. So the question is whether Willow somehow stops working correctly just where we can no longer check its work. I say “presumably not, but it would still be good to test more directly, as we hope to do in the future with factoring or other classically verifiable problems.”
Asking “how many times stronger is this QC than a classical computer” is a wrong question. For certain specific problems, we expect QCs will be 10^1000 times faster or more, even while for other problems they won’t be faster at all.
Comment #47 December 11th, 2024 at 6:39 am
Scott #46: Could you please elaborate on this smaller problem scale, which was fully verified by a classical computer (I assume a supercomputer)? I didn’t find any data in the references or in the 2019 claims.
Just to clarify: the discussion is only about RCS, and only about feasible compute time (not asymptotic scaling).
To be precise: if a supercomputer can generate RCS samples with a given fidelity in 1 month at a given scale, how fast can this chip do it for the same fidelity and scale? Was that verified?
Comment #48 December 11th, 2024 at 8:32 am
FYI, the “Random Circuit Sampling” link is broken.
Scott Comment #36:
The uncertainty still present in the field seems like an especially poor justification for expecting exponential advancement. You wouldn’t have proposed Moore’s law until integrated circuits had actually scaled to something remotely useful, and unlike QC, integrated circuits were immediately useful. Her skeptical position is well warranted IMO.
Comment #49 December 11th, 2024 at 9:00 am
Alright, fixed the link.
Comment #50 December 11th, 2024 at 9:09 am
Sandro #48: Saying “we’re still roughly a million qubits away from a million qubits,” while arithmetically correct, is a way to dismiss the enormity of the progress that’s actually been made in learning how to control these things. If you look at the 2-qubit gate fidelities, those went from 50% to 90% to 99% to now 99.9%, which means (because of the threshold theorem) that the experimentalists are now “most of the way” to being able to error-correct and scale. The fact that they’ve repeatedly been ~doubling the number of qubits they work with is merely one reflection of that.
No serious person expected 105 physical qubits to be useful in themselves. The usefulness or uselessness of QC is a separate question from whether people will be able to scale it, except insofar as it affects their motivation to do so.
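A number like 99.9% may not sound much better than 99%, but the threshold theorem makes the difference dramatic. As a rough illustration, using the generic textbook scaling p_L ~ (p/p_th)^((d+1)/2) with an assumed 1% threshold (illustrative constants, not Google’s measured numbers):

```python
# Generic surface-code heuristic: p_L ≈ (p / p_th) ** ((d + 1) // 2),
# with an assumed threshold p_th = 1e-2 (illustrative, not Google's numbers).
p_th = 1e-2

def logical_error(p, d):
    """Rough logical error rate for physical error rate p and code distance d."""
    return (p / p_th) ** ((d + 1) // 2)

# Physical error 5e-3 (99.5% fidelity, ~2019) vs 1e-3 (99.9% fidelity, now),
# at code distances 3, 5, 7 (i.e., the 3x3 -> 7x7 surface-code progression).
for p in (5e-3, 1e-3):
    for d in (3, 5, 7):
        print(f"p={p:g}, d={d}: p_L ~ {logical_error(p, d):.1e}")
```

In this toy model, at p = 1e-3 each step up in code distance suppresses the logical error by another factor of 10, whereas at p = 5e-3 the suppression per step is only 2x: that is why pushing fidelity from 99% toward 99.9% is "most of the way" to scaling.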
Comment #51 December 11th, 2024 at 9:16 am
This is the kind of stuff that makes me feel like we are no longer in the theory phase of becoming a Kardashev Type II civilization, and it also makes me feel a bit insignificant next to the progress!
Comment #52 December 11th, 2024 at 9:19 am
Evyatar #47: The chips generate a single sample for RCS in something like 40 microseconds. The samples can then be classically verified by computing a Linear Cross-Entropy Benchmark (LXEB). This takes 2^n time if there are n qubits, or somewhat less (depending on the circuit depth) if you use tensor network contraction algorithms. So, if n is less than 30 or so, you can just do the verification on a laptop; if less than 40-50, on a supercomputer. If n is larger than that, then you’re relying on extrapolations from circuits that are easier to classically simulate.
To get a valid statistical signal, you need ~1/b^2 samples, where b is the LXEB score. In current experiments, b is about 0.001 or 0.002, which translates into a few million samples, but those can still be collected in a few minutes on a superconducting device. Those few minutes are then what you compare against the time that a supercomputer would need for tensor network contraction.
You can read Google’s 2019 paper for the details of how this all works. The paper on their RCS experiments with Willow isn’t out yet, but there seems to have been no change to the basic protocol.
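For concreteness, the arithmetic in the last two comments can be sketched in a few lines, using only the figures quoted above (40 µs per sample, b ≈ 0.001, and brute-force 2^n verification):

```python
# Sanity-check of the sample-count and timing arithmetic quoted above.
time_per_sample = 40e-6   # seconds per RCS sample on the chip
b = 0.001                 # typical LXEB score in current experiments

samples_needed = 1 / b**2                       # ~1/b^2 for a valid signal
collection_time = samples_needed * time_per_sample

print(f"samples needed:  ~{samples_needed:.0e}")          # 1e+06
print(f"collection time: ~{collection_time:.0f} seconds")  # 40 seconds

# Brute-force classical verification of LXEB scales like 2^n:
for n in (30, 40, 53, 103):
    print(f"n={n:3d}: ~{2**n:.1e} amplitudes per circuit")
```

The 2^n column makes the verification cliff vivid: ~10^9 amplitudes at n=30 (a laptop), ~10^16 at n=53 (a supercomputer, barely), ~10^31 at n=103 (hopeless without extrapolation).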
Comment #53 December 11th, 2024 at 9:24 am
That sentence leapt out at me as an especially poor way of thinking. It’s like, has she never heard of exponentials? When COVID started spreading in Wuhan, imagine someone saying: “eh, this will be something to worry about in practice when there are a million infections, and we’re still about 1 million infections away from that.”
Well, by the same token since it took about 3 months for COVID to go from 1 case to 1 million cases, one might extrapolate that in 3 more months there would be 1 trillion people infected with COVID, and that didn’t happen either.
That’s the issue with exponentials — in real-life cases, they are uncommon and, when they do crop up, they don’t remain exponential forever, or even very long.
So as a heuristic, it is generally a good bet to say ‘this trend will not proceed exponentially into the future,’ even when the trend has proceeded exponentially up until the moment you make the prediction!
Comment #54 December 11th, 2024 at 9:33 am
Gilad #43:
What are the complexity barriers to solving random circuit sampling? Would Monte Carlo methods return worse fidelity than observed in the experiment?
See my paper with Sam Gunn for a hardness result for spoofing RCS. This relies on the assumption that it’s hard to estimate a particular amplitude in a random quantum circuit with better-than-trivial variance. We don’t currently know how to get hardness for “practical / noisy” RCS, as validated for example via the Linear Cross-Entropy Benchmark, from an assumption that isn’t itself about quantum circuits.
By contrast, if you ask about sampling in classical polynomial time from the exact probability distribution that the quantum circuit samples from—well, that would imply P^#P = BPP^NP, and hence that the polynomial hierarchy collapses. See the BosonSampling paper for example for how this goes. The only problem here is that it’s infeasible to check exactly what distribution is being sampled.
If BQP=P, would that have any other implications in complexity theory?
Well, it would mean that factoring was in P. 😀
Again, though, the situation is much better if you ask about exact sampling problems rather than just decision problems: there, a collapse of quantum and classical polynomial time would imply that PH collapses.
If nature does it somehow, what are the odds that it’s really in P? That there’s some algorithm we’re missing?
I’m willing to stick my neck out and conjecture that PH does not collapse, and therefore that QCs can efficiently sample certain probability distributions that are hard to sample classically.
The case for BPP≠BQP (i.e., for a separation in decision problems), or even PromiseBPP≠PromiseBQP, is less overwhelming, but gun to my head, my guess is still that these complexity classes are different.
Comment #55 December 11th, 2024 at 9:45 am
Scott P. #53: The real lesson here is the danger of blindly extrapolating numbers, with no understanding of the underlying dynamics and what boundary conditions would apply to them.
COVID cases did grow exponentially until they’d reached a large fraction of all the humans on earth—at which point, of course, they stopped growing.
Floating point operations per second per dollar have grown exponentially from the beginning of the computer age all the way to today. They’ll stop growing exponentially at some point, presumably when hard limits on manufacturing and energy are reached.
Likewise, if you know what’s now happening in experimental QC, you see that there’s no barrier whatsoever to exponential scaling in the number of high-performing qubits, and every reason to expect that over the next few years. Yes, the scaling will presumably stop well before the entire galaxy is converted into a quantum computer. 😀
Comment #56 December 11th, 2024 at 10:00 am
Coming at this from a completely ignorant background, it does seem that using the Moore’s law analogy to predict progress in quantum computing is misleading. The major difference is that each generation of integrated circuit chips was earning the company huge profits that could be put into investment in the next generation. With zero profit from quantum computing chips, it seems unreasonable to expect that companies will continue to devote the additional money required to reach a chip that actually is profitable. Do you know, eg, what increase in Google spending was required to go from the 2019 chip to Willow? That would give one some idea of how to extrapolate the investment required for the profitable chip.
Comment #57 December 11th, 2024 at 10:44 am
I don’t think IBM has put any serious effort into random circuit sampling, though other researchers with access to their systems could. IBM’s messaging focuses on advantage and utility, so the lack of verifiability that Scott mentioned leads them to focus on other applications. I don’t know RCS that well, but if connectivity is important, it might not be surprising if IBM’s systems don’t work as well as Google’s for it: IBM’s systems have similar error rates, but their device topology has less connectivity between qubits (two or three connections per qubit, versus 4 for Google).
Comment #58 December 11th, 2024 at 10:59 am
@Scott P #53: Making a slightly different, but compatible, point from Scott’s response:
The crucial thing is not the idea that exponential growth will definitely continue or even that it will probably continue. The issue is how to think quantitatively about whether and when a process will reach a particular point.
For example, if I want to know if and when the gross world product will reach 1 quadrillion dollars in today’s units, I might observe that the gross world product is currently about $100 trillion and grows at a rate of about 5% per year, so that if current trends continue it would reach that value in about 50 years. I would then have to consider how likely it is that current trends will continue for 50 years, which is a hard problem. But if I say that in the entire history of the world we’ve only gotten 10% of the way there, or that we’re still 900 trillion dollars away, then I won’t have even started to tackle the problem.
Similarly with other things that have exhibited exponential growth in the past: To understand their future I should first see what happens if exponential growth continues and then consider what might break the exponential growth trend in the relevant time frame and how. But this requires thinking exponentially, for example, measuring progress so far on a log scale.
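As a concrete check of the growth arithmetic above (the 5%-per-year and 10x figures are the ones in the example; the exact answer is ~47 years, i.e. “about 50”):

```python
import math

# Years for the gross world product to grow 10x (from ~$100T to $1Q)
# at 5% annual growth, as in the example above.
years = math.log(10) / math.log(1.05)
print(f"{years:.1f} years")  # 47.2 years
```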
Comment #59 December 11th, 2024 at 11:22 am
I don’t think she was dismissing anything. She literally said it “was super impressive from a scientific perspective”, but that we’re a million qubits away from any real world impact.
You said Google doubled the number of qubits since 2019. If that counts as a 5-year doubling time, then we’re arguably 50 years away from any real-world impact. Unless you think the improvements in fidelity are going to shorten the doubling time to less than a year, we arguably won’t see any real-world impact for at least 10-15 years.
This is good scientific progress, but QC is still overhyped IMO.
Comment #60 December 11th, 2024 at 11:25 am
[…] 5. Thread on quantum computing. And Scott Aaronson on Google Willow. […]
Comment #61 December 11th, 2024 at 11:46 am
What operating temperature was the fault tolerance developed for? What does it look like practically?
Comment #62 December 11th, 2024 at 11:51 am
Sandro #59: If QC will have real-world impact but “only in 10-15 years,” I’m going to mark that as a success!
The older I get, the more the years seem to fly by like weeks or days. Back in 2009, one could say “enough with all this AI hype and fearmongering! It will surely be 10-15 years at least before we need to worry about this…” And now here we are, 15 years later, in the AI future that was promised or warned about.
Comment #63 December 11th, 2024 at 12:13 pm
Scott #52: If I look at the supplementary data for the Nature paper, I see there are only <5-hour direct simulations (Table VII p46, Fig S50 p58).
Did we run n~40 simulations and verify them directly?
Comment #64 December 11th, 2024 at 12:55 pm
WRT the “can QC ever do anything useful” debate: according to
https://protos.com/googles-quantum-computer-could-break-bitcoin-in-two-ways/
Bitcoin prices dropped 3% right after the announcement and have stayed down.
Comment #65 December 11th, 2024 at 1:11 pm
Nit: you should refer to Harvard, not QuEra, as the entity for the logical-qubit experiment. The work was done on Harvard’s machine, predominantly by Harvard researchers.
Comment #66 December 11th, 2024 at 1:21 pm
Sandro #59:
> I don’t think she was dismissing anything. She literally said it “was super impressive from a scientific perspective”, but that we’re a million qubits away from any real world impact.
In which case, isn’t she really making an engineering argument, not a scientific one? And in that case, isn’t it… pretty misleading to present it as coming from a science authority, without disclaiming (as our host often does) that she’s not necessarily an expert on the manufacturing and economic issues relevant to the remaining task? I’m not saying that she shouldn’t have an opinion, just that one should point out when one is pontificating outside one’s area of expertise.
Put another way, how is this different from an engineer making a prototype, and then doing a bunch of work to make a production-ready prototype, and having their manager say that the new prototype is a great engineering achievement but is still about a million units away from completing a million-unit production run? It would either be a facile argument, or it betrays a misunderstanding of how products are designed and manufactured.
Given the click-bait nature of internet video production, I’m inclined towards the former theory as I can’t seriously claim that she has no understanding of the issues, but… that still doesn’t leave a great impression.
Comment #67 December 11th, 2024 at 8:45 pm
Hello Scott, I wanted to know your opinion on the latest work of Jacob Barandes, which claims to recover quantum behaviour from stochastic classical systems that are ‘indivisible’.
Here are some links to his work
https://www.jacobbarandes.com/
https://shared.jacobbarandes.com/videos/2024-11-16-psa2024-deflationary-account-qm-complex-numbers
Comment #68 December 11th, 2024 at 9:08 pm
harsha #67: We had Jacob give a talk about this work at UT Austin. It looks intriguing but so far, every question I ask about it has an answer that I don’t understand, never seeming to bottom out in something I do understand. It’s like a classical picture underlying QM except not really because that obviously doesn’t work, so then it’s something else? When this picture is used to say something new about any problem I’ve previously cared about, that’s what will force me to spend however long it takes to understand it. Sorry I can’t be more helpful right now. Anyone who’s studied these papers is welcome to chime in.
Comment #69 December 11th, 2024 at 10:56 pm
Scott #55: I’m sorry to pile on, but respectfully, I think that Sabine’s point is at least defensible here. I’m not sure that I agree with your claim that “if you know what’s now happening in experimental QC, you see that there’s no barrier whatsoever to exponential scaling in the number of high-performing qubits, and every reason to expect that over the next few years.” There may not be any fundamental scientific barriers, but there may well be economic and engineering barriers (as David #56 said).
With covid, there was a good first-principles reason to expect exponential growth: it’s trivial to come up with a quantitative model that leads to exponential spread (namely, “each infected person infects R_0 > 1 other people”), and it’s easy to see from the basic biology of infection that this model is plausibly reasonably accurate.
QC is not like that; there’s no simple model (that I can see) that necessarily predicts exponential growth in qubit counts. I don’t think we even have enough historical data yet to confidently say that the previous growth has been exponential, rather than (say) a power law or something.
The best motivation seems to be a rough analogy with Moore’s law. But Moore’s law is not nearly as easy to explain as the exponential spread of covid. The fundamental driver of Moore’s law has been the exponentially decreasing cost per transistor, which is a very complicated techno-economic phenomenon. As David #56 said, the large new economic value unlocked at each new halving of transistor size presumably contributed a lot to that – but we have no evidence that doubling the qubit count will provide any economic value before we reach meaningful fault-tolerance. So I would say that we cannot confidently predict at this point that exponential growth over time in future qubit counts is more likely than ~linear growth.
Comment #70 December 11th, 2024 at 11:27 pm
Ted #69: I suppose the simplest answer would be “wanna bet?” 🙂
Certainly I’d put $100 on there existing a 500-qubit system with 99.9% fidelity programmable 2-qubit gates by the end of 2027. (Much better than that is plausible, but I’m a risk-averse gambler! 🙂 )
My model cares much more about 2-qubit gate fidelity than the number of qubits. We’ve understood for a while that once the 2-qubit fidelity has reached three or four 9’s, you’re basically ready to scale up.
Comment #71 December 11th, 2024 at 11:41 pm
Everyone: I emailed Sergio Boixo my personal summary of the questions for the Google team about the Willow RCS experiment that have arisen on this thread. Sergio replied and very graciously gave me permission to share. Here are my questions and his answers:
Q1: When will a paper about it be out?
A: There will be a paper with these latest results, but I don’t have a date.
Q2: How many qubits did it use, the full 100 or a smaller number?
A: We used 103 qubits, and multiple depths. We quote complexity cost for depth 40 XEB cycles, where the fidelity is ~0.1%.
Q3: Did you try running a circuit on a subset of (say) 30 or 40 Willow qubits, for which you could directly verify the LXEB score? (I assume you did but I can’t point people to anyplace where you said it!)
A: We got the LXEB score in three different ways, and they all agree: a) 3 patches b) 4 patches c) digital error model. We did this at multiple depths.
Q4: Did you try running a random circuit and followed by its inverse to check fidelity to the all-0 state?
A: We did that in our recent phase transition paper, Fig. 4b. That worked fine. The data for that paper is public.
Comment #72 December 12th, 2024 at 5:16 am
Scott, thank you very much for your efforts and patience.
I hope the paper will directly answer Q3 (#71) and show direct verification at larger scales (~40 qubits).
Comment #73 December 12th, 2024 at 7:44 am
The current state of AI and QC is intriguing and paradoxical, in the sense that the two seem to be diverging in what they can achieve.
1. The AI world is already talking about trillion-dollar models (GPUs in the millions), with energy generation as the ultimate limit that will slow down progress.
2. QC promises exponential speedups with efficient energy usage, though it is currently nowhere in sight for contributing to the above-mentioned problems of AI.
3. Maybe AI will first contribute to building scalable QCs, and they in turn will help make progress in AI, though it’s far from clear how.
4. QC has an abundance of theoretical progress, whereas AI is exclusively empirical.
It’s as if Nature has some quixotic way of revealing itself, in the same way GR and QM ended up being incompatible…
Comment #74 December 12th, 2024 at 9:54 am
Scott #70
Is there a limitation to maintaining 2-qubit fidelity at 99.99% as you scale up? Or is it an engineering issue of interconnecting, say, 500-qubit processors?
Comment #75 December 12th, 2024 at 10:09 am
Hi everybody,
1) I found the analogy in #39 offensive and inappropriate.
2) As I said many times, I don’t take it as axiomatic that scalable quantum computing is impossible. Rather, I take the question of the possibility of scalable quantum computing as one of the greatest scientific problems of our time.
3) The question today is if Google’s current fantastic claim of “septillion years beyond classic” advances us in our quest for a scientific answer. Of course, we need to wait for the paper and data but based on our five-year study of the 2019 Google experiment I see serious reasons to doubt it.
4) Regarding our claim that the fitness of the digital prediction (Formula (77)) and the fidelity estimations are unreasonable, Scott wrote: “And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”
Scott, our concern is not with the exponential falloff. It is with the actual deviations of Formula (77)’s predictions (the “digital prediction”) from the reported fidelities. These deviations are statistically unreasonable (too small). The Google team provided a statistical explanation for this agreement based on three premises. These premises are unreasonable as well, and they contradict various other experimental findings. My post gets into a few more details, and our papers get into it in much further detail. I will gladly explain and discuss the technical statistical reasons why the deviations are statistically unreasonable.
5) “Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same”
Scott, do you have a reference or link to this claim that the exponential falloff pattern is the same? Of course, one way (that I always suggested) to study the concern regarding the “too good to be true” a priori prediction in Google’s experiment is to compare with IBM quantum computers.
Comment #76 December 12th, 2024 at 10:11 am
Regarding scale up: I agree w Scott. The fabrication technology to make useful logical qubits has now been demonstrated in the superconducting platform. We also know how to make very large numbers (millions) of qubits – the issues now are about wiring, tuning, and control at scale. These are hard engineering problems, but there is no obvious reason to think they are insoluble or will take five decades to solve.
Comment #77 December 12th, 2024 at 10:38 am
[…] a huge marketing ploy. Comments are available with slides, posts and videos from John Preskill, Scott Aaronson, Brian Lenahan, Bob Sutor, Alan Ho from Qolab, Michel Kurek, and Sabine Hossenfelder, and more are […]
Comment #78 December 12th, 2024 at 2:19 pm
Scott Comment #62:
It would be, but that’s a super-unrealistic timeline, contingent not only on super-unrealistic doubling times, but also on there not being any roadblocks to scaling up, which is still a big assumption IMO.
cwillu #66:
There’s plenty more science to do before the engineering of scaling for applications really begins.
It’s not a production-ready prototype; that’s the point. On a super-optimistic timeline, with the optimistic assumption of not running into any unexpected issues, we’re still at least 10 years away from that IMO. Time will tell.
Comment #79 December 12th, 2024 at 5:56 pm
Regarding Jacob’s ideas: has he ever discussed how a phenomenon like quantum tunneling can be explained? I have heard him say that several quantum features, such as wave-particle duality, interference, etc., can be recovered in his account.
If we assume his explanation is closer to the bone, does that ultimately mean QC is impossible? AFAIU, a complex wave function needs to exist in a meaningful sense in order to gain a speedup on QC-friendly problems.
Comment #80 December 13th, 2024 at 1:49 am
[…] The Google Willow Thing 738 by Bootvis | 448 comments on Hacker News. […]
Comment #81 December 13th, 2024 at 7:20 am
Angelo #79: Discussing “a phenomenon like quantum tunneling” is not the problem with Barandes’ formulation (or “ideas,” if you prefer). Part of the real problem is that he would essentially use the normal Hilbert space formalism (or something closely related) for any calculation that came up during such a discussion.
The problem I see is that what he does would be better placed in the context of “quantum reconstructions” rather than “quantum interpretations.” Then one could also better judge which of his ideas are new, and where his constructions or proofs would need a bit more thoroughness to really give you what you “want” from a nice reconstruction. (I am especially thinking about continuity properties of his constructions, and about how the rate of change transfers over to the quantum side from the probabilistic side. You could “translate” this into questions like whether his constructions can be made Lipschitz continuous, or something similar.)
But of course, there is also a psychological aspect to this: initially, he focused on the formal and mathematical side of things. But at some point he switched to wanting to “explain everything” from his constructions. That is the point at which he went for a “quantum interpretation,” and left everybody who wanted to understand what he means confused (see for example Scott #68 above).
Comment #82 December 13th, 2024 at 7:50 am
Angelo #79: As far as I know, Jacob doesn’t make any empirical prediction that differs from the predictions of ordinary QM, so QC still presumably has to work. As gentzen #81 suggested, he’s proposing either a reconstruction or an interpretation or something in between.
Relatedly, though, he doesn’t need to explain tunneling or any other specific quantum phenomenon. If he could “explain” the basic rules of QM from his starting postulates (the claim, which people can debate), everything else follows from those rules in already-known ways.
Comment #83 December 13th, 2024 at 1:44 pm
Scott:
“the dragon that (once fully awoken) will let logical qubits be preserved and acted on for basically arbitrary amounts of time”
Does this mean that we would create a sort of meta quantum system that’s perfectly isolated from the world?
I.e., from a many-worlds point of view, this meta quantum system would evolve identically in all the branches it exists in? A perfectly isolated bubble. So even though individual qubits diverge, the logical state stays the same.
That’s what classical computers do relative to their digital state (same macro/meta logical state across many branches).
Comment #84 December 13th, 2024 at 2:20 pm
Regarding scaling, here’s a simple question:
If I have 4 qubits, each at the corner of a large room and perfectly isolated from the environment (and from one another), what is required to “link” those 4 qubits to create what we refer to as “a quantum computer with 4 qubits”?
How does the complexity of the required “linkage” grow with the number of qubits?
Comment #85 December 13th, 2024 at 2:24 pm
fred #83: Yes.
Comment #86 December 13th, 2024 at 2:26 pm
Fred #84: The “linkages” are called 2-qubit gates. The complexity depends entirely on how many of them you need to apply to get the n-qubit unitary you want. That’s the whole subject matter of quantum algorithms.
Comment #87 December 14th, 2024 at 12:42 am
Scott #31 #54 – apologies for using this opportunity to ask a tangential question from quantum complexity theory.
Twelve years ago, these were your comments on whether BQP is in NP…
https://physics.stackexchange.com/a/34637/1486
From the perspective of 2024, are there any important updates to what you said then?
Comment #88 December 14th, 2024 at 6:42 am
The Willow chip’s accomplishments indeed have no immediate impact on real-world problems. Sabine is right. The RCS task is designed as a proof of concept for quantum computational power and does not apply to practical applications. Yes, Willow builds directly on the 2019 Sycamore experiment: it scales up the same type of computation (RCS) and demonstrates improvements in qubit count and coherence, but does not fundamentally change the task. True, Sycamore’s supremacy claims have been challenged by advances in classical simulation methods. So yes, Sabine’s caution about overhyping quantum computing is a reasonable critique, given the field’s history.
However, Willow represents significant technical achievements: improved qubit coherence and gate fidelity, longer-lasting logical qubits in error-correcting surface codes, crossing a critical threshold for fault tolerance, and a larger-scale quantum supremacy experiment that is harder to simulate classically. (As described in this blog post, Willow has 105 qubits compared to Sycamore’s 53, with better coherence times (a 5x improvement) and gate fidelities up to 99.85% for “iswap” gates. Unlike Sycamore, Willow demonstrates progress in error correction by scaling surface codes from 3×3 to 7×7 and observing longer lifetimes for logical qubits.)
These advancements are essential steps toward the ultimate goal of fault-tolerant quantum computing. Willow’s results validate essential predictions about quantum error correction and scalability that are necessary for future breakthroughs. There are limitations in verifying RCS results directly at this scale, but Google’s experimental design and extrapolations are credible, given the consistency with smaller, verifiable circuits.
So, to sum up, it seems to me that beauty is in the eye of the beholder. Sabine’s skepticism is more aligned with reality if the focus is on practicality and immediate utility. However, if the focus is on scientific progress, then this post by Scott is right to celebrate Willow as a milestone. The skeptics frame Willow as more of the same. I suppose Gil Kalai will agree with me: Gil will likely argue that his criticisms of Google’s Sycamore chip apply equally to the new Willow chip, given his consistent skepticism about Sycamore. Since Willow’s new quantum supremacy demonstration is an upscaled version of RCS, he will likely argue that the same methodological and extrapolation issues he raised for Sycamore are also present here. However, the skeptics downplay the significance of incremental improvements by framing the small steps within the broader challenges that quantum computing still faces, and dismissing progress in small steps underestimates its importance in solving the very challenges the skeptics highlight. So, who is right? Gil, what do you think?
Comment #89 December 14th, 2024 at 8:39 am
Mitchell Porter #87: I like what I wrote back then! 🙂
By far the biggest update is that, in 2018, Raz and Tal succeeded in giving an oracle relative to which BQP is not in PH, by proving a version of my conjecture from 2009 that “Forrelation” is not in PH. (And as a consequence, one gets an oracle where P=NP≠BQP.) So now, in the black-box setting, we have the strongest evidence you could possibly hope for of BQP going beyond NP. Unfortunately, still today no one has succeeded at “instantiating” the Forrelation oracle by any real-world problem, so it’s still conceivable that BQP⊆NP in the unrelativized world. Pretty much everything else is as I said 12 years ago.
Comment #90 December 14th, 2024 at 9:16 am
Scott/others
Can someone explain what exactly is the *output* of the Google RCS?
Like you’ve created a uniform superposition of the qubits… but then what is the actual thing you output? Or is it just the superposition itself?
Comment #91 December 14th, 2024 at 12:40 pm
Ariel #90: It’s not a uniform superposition—that’s the whole point! It’s a nonuniform superposition, and the output is simply a long list of independent samples from the corresponding nonuniform distribution over n-bit strings.
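(For concreteness, here is a toy sketch of what “the output” looks like. This is not Google’s circuit; as an assumption for illustration, it substitutes a Haar-random unitary on 10 qubits for the actual random circuit, then draws bitstring samples from the resulting nonuniform distribution.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                  # number of qubits; Hilbert-space dimension D = 2^n
D = 2 ** n

# Haar-random unitary via QR decomposition of a complex Gaussian matrix
Z = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q, R = np.linalg.qr(Z)
d = np.diag(R)
U = Q * (d / np.abs(d))  # fix column phases to get the Haar measure

# Output state on input |0...0>: the measured distribution over bitstrings
amps = U[:, 0]
probs = np.abs(amps) ** 2

# The distribution is far from uniform (Porter-Thomas statistics):
# the largest probability is typically ~ln(D) times the uniform value 1/D
print(probs.max() * D)

# "The output" of the experiment is just a list of sampled n-bit strings
samples = rng.choice(D, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

Repeating the sampling step many times gives the long list of independent samples described above; the classical hardness lies in reproducing that nonuniform distribution.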
Comment #92 December 14th, 2024 at 2:55 pm
Okay,
How do you feel about Israel immediately stealing more land from the Golan heights, at the moment the Arab socialist is thrown out? Is this not utterly shameful behavior?
Comment #93 December 14th, 2024 at 4:25 pm
Scant Consequence #92: I think that the new rulers of Syria should grab this golden opportunity to sign a peace treaty with Israel—something Syria has never done before, neither in 1948 nor in 1967 nor in 1973. As part of a peace deal, Syria could probably get back the entire Golan. (One difficulty, though, is that many of the Druze living in the border region apparently want to be ruled by Israel rather than Syria … and can you blame them?)
Comment #94 December 15th, 2024 at 7:50 am
Scott #86
so in the case of Shor’s algorithm, how many such 2-qubit gates does it take as a function of n?
Comment #95 December 15th, 2024 at 8:54 am
fred #94: To factor an n-digit number, ~O(n²) with the original Shor’s algorithm, ~O(n^{3/2}) after the recent improvement by Oded Regev (the ~ suppressing logarithmic factors). The constant in the O will in practice be dominated by the error-correction.
Comment #96 December 15th, 2024 at 3:14 pm
Sabine makes a significant claim at the end of her X entry. To wit:
“Also, it’s been a recurring story that we have seen numerous times in the past years, that claims of quantum “utility” or quantum “advantage” or quantum “supremacy” or whatever you want to call it later evaporate because some other group finds a clever way to do it on a conventional computer after all.”
There is no justification given. If true, this claim is significant in my opinion. Any comments?
Comment #97 December 15th, 2024 at 4:52 pm
Bob Y #96: Yes, that happened several times, and you can even read about it in detail on this blog! But it’s crucial to add: the best classical algorithm for simulating general quantum circuits, Johnnie Gray’s optimized tensor network contraction method, has now been stable since 2021, and that’s what Google is comparing against.
Comment #98 December 15th, 2024 at 5:02 pm
Scott #82:
“Relatedly, though, he doesn’t need to explain tunneling or any other specific quantum phenomenon.”
I think he does need to address Bell’s theorem. I looked at a video and a couple of papers, and still can’t tell exactly what he’s saying about that. Sometimes he doubts the assumptions behind Bell’s theorem and goes down the path of superdeterminism. Other times he says his theory is non-local but only as much as QM is. So far I think this theory is ambiguous about the most important thing it needs to explain.
Comment #99 December 16th, 2024 at 6:56 am
A question from a non-expert: if known, has anyone tried to perform factoring on Willow? If yes, how big a number could it factor (as the widest known possible application of a quantum chip)? In other words, what number of “practical” qubits for factorisation do these physical qubits correspond to? For a non-expert it is quite hard to grasp what random circuit sampling really is. Also, supposing the same chip architecture, how big would the chip have to be to break the current RSA algorithms? On a side note, when we start getting there (as said above, in 10 to 50 years maybe, if everything goes well), it could lead to something similar to a nuclear arms race, in the sense of “store now, decrypt later” or secrecy about achieving it; whoever possesses the ability would probably not be so eager to give it away. So maybe any thoughts on the philosophy of the big companies like Google and others in the field on this? And also thoughts on governments trying to interfere with it?
Comment #100 December 16th, 2024 at 7:04 am
H #98 “I think he does need to address Bell’s theorem. I looked at a video and a couple of papers, and still can’t tell exactly what he’s saying about that.”
He talks about Bell’s theorem in https://arxiv.org/abs/2402.16935
My comment was
His talks/videos are what caused me to recommend caution: “there is also a psychological aspect to this”
https://youtu.be/6EJh1-8Z-YQ?t=462
I am not convinced that the Renou et al. paper is as bad as Barandes seems to believe, and I guess I am not alone with this opinion.
And at 17:05 he claims: “By the way, changing orthonormal basis on the Hilbert space literally is carrying out a certain kind of canonical transformation, those transformations that mix up the X and Y I was talking about in this picture, so we see a clear linkage between these two symmetries”. Now it is certainly true that linear canonical transformations on the classical-Hamiltonian side have a corresponding change of basis on the Hilbert-space side. But the other direction is far from obvious, and would require a reference to a place where it was proven, should it really turn out to be true. I am sceptical, because the set of basis changes in Hilbert space “feels” bigger to me than the set of linear canonical transformations of a Hamiltonian system.
Comment #101 December 16th, 2024 at 7:46 am
Filip #99: Running Shor’s algorithm on Willow, the biggest number you can factor will probably be bigger than 15 but still very small (2-3 digits). To run Shor’s algorithm at any significant scale, we’ve known for decades that you’ll need error-correction, which (as currently understood) induces a massive blowup, on the order of at least hundreds of physical qubits per logical qubit. That’s exactly why Google and others are now racing to demonstrate the building blocks of error-correction (like the surface code), as well as whatever are the most impressive demos they can do without error-correction (but these tend to look more like RCS and quantum simulation than factoring).
Comment #102 December 16th, 2024 at 8:56 am
the NYTimes wrote
“Google said its quantum computer, based on a computer chip called Willow, needed less than five minutes to perform a mathematical calculation that one of the world’s most powerful supercomputers could not complete in 10 septillion years”
The keyword here is “a calculation”, and no one ever mentions that Google’s Quantum “Computer” can’t even compute 99.9999999999% of what a 1980s 48K ZX Spectrum could.
Of course I don’t mean that QCs need to do that, but almost everyone who reads this news thinks the article is referring to an actual “computer” when it’s referring to a chip that’s highly specialized to do one thing only, one thing that can’t be verified and has zero practical utility.
Of course it’s not surprising that Google is turning a highly specialized technical feat into something it’s not, because they’re a publicly traded company…
If Google were in charge of LIGO they would imply it’s so sensitive to gravity that it could be used to pinpoint the exact location of Kim Jong Un just by sensing his walking pattern.
This deliberate marketing BS is already affecting lots of real engineers day to day in other publicly traded companies that follow all those hype bandwagons (ie all of them).
At my job we’re already dealing with all the LLM BS (we have to take surveys to show how enthusiastic we are about coding AI making our productivity so much higher… cough), but now the CEO is already bringing up QC in company-wide townhalls: “we don’t understand quite yet how quantum computing can be used by our business, but we’re looking into it”… If only Google’s chip error rate were as high as the percentage of employees who are totally clueless about quantum tech (at the exec level it’s 100%)… and the handful of employees who do understand have to keep their mouths shut because they’d be instantly flagged as not “engaged”.
Comment #103 December 16th, 2024 at 9:04 am
fred #102: If we agree that the statement is literally correct (relative to the best currently known classical algorithms, yadda yadda), then the question is whether it misleads by omission.
I don’t think it’s misleading to anyone who’s been following QC, and who understands the magnitude of the chasm between “a calculation” and “all calculations.”
I agree that someone who hasn’t followed QC and doesn’t understand that chasm could easily be misled.
My solution, for the past 20+ years, has been to try to educate people about this stuff! 🙂
Comment #104 December 16th, 2024 at 9:27 am
Scott #103
it’s all understood and I certainly know you’ve been fighting the good fight!
I’ve also spent time trying to explain to friends and family what it really means. There is a truly great accomplishment in there of course.
At some level the hype problem is not specific to QC either, except that the complexity of the topic is making it orders of magnitude worse… at least with AI anyone can go ask Google’s overhyped AI to draw them a picture of George Washington and see the results for themselves…lol
Comment #105 December 16th, 2024 at 9:25 pm
@gentzen #100:
The set of no signaling correlations is a very wild place, among other things it has PR boxes at its boundary that can in fact allow information to be transmitted faster than light under a certain limit (see this entertaining “little” paper: arXiv:1407.8530). Maybe we can use “diluted mixtures” of PR boxes to simulate quantum correlations, but then you have to explain why the probabilities are restricted to screening off all that nonlocality and leaving just enough for QM (without just saying QM told you to, would you jump off a bridge if….)
Comment #106 December 16th, 2024 at 10:01 pm
Scott #101 (and also the hive mind): As Smolin et al. pointed out in https://www.nature.com/articles/nature12290, the original demonstrations of Shor’s algorithm on noisy QCs “cheated” a bit by precompiling the algorithm in a non-scalable way. My understanding is that, instead of initially choosing a random base a as in the “textbook” Shor’s algorithm, they deliberately made a particular choice of a that led to a function a^x mod N that happened to have a very short period r, which allowed the QFT to be done using a very small number of noisy qubits. But of course, this is kind of cheating, because you can only reliably choose a base a that leads to a very short period if you already know the factors of N.
Do you think that today’s noisy QCs could factor even a two- or three-digit number without this precompilation cheat? (I.e. with a significant probability of success for a generic base a?) I guess maybe for a two-digit number you could just repeat the whole experiment for many different choices of a until you happen to get lucky – but then aren’t you really just doing a really convoluted and expensive version of trial division?
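(To make the “precompilation cheat” concrete, here is a purely classical, illustrative sketch of the number theory involved: the quantum part of Shor’s algorithm finds the period r of a^x mod N, and the factors then fall out via gcd’s. Choosing a base with a tiny period, as the early demos did, collapses the hard part.)

```python
from math import gcd

def period(a, N):
    """Multiplicative order of a mod N -- the quantity the quantum
    part of Shor's algorithm is supposed to find."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_period(a, N):
    """Classical post-processing: recover factors of N from the period."""
    r = period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None               # unlucky base: retry with another a
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

# The "precompiled" demos picked a base with a tiny period:
print(period(4, 15))              # period 2 -- needs almost no qubits
# A generic base has a longer period, hence a bigger quantum circuit:
print(period(7, 15))              # period 4
print(factor_from_period(7, 15))  # (3, 5)
```

The point of the cheat is visible in the first line: knowing in advance that a=4 has period 2 mod 15 lets the quantum circuit be shrunk dramatically, but finding such a base in general presupposes knowing the factors.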
Comment #107 December 17th, 2024 at 3:52 am
Fred #102:
The question, IMHO, is whether quantum+classical > classical.
20 years ago, I assumed the answer was No (for practical quantum chips, not theoretical ones).
Now we have claims (which should be verified) that it’s true.
If it’s true, even if it has no practical value, I think it’s very inspiring. Kind of like breaking the Church-Turing hypothesis (not exactly, because that relies on the asymptotic case).
Comment #108 December 17th, 2024 at 4:53 am
H #105: Are you familiar with Yakir Aharonov’s “Two-State Vector Formalism” and the related notion of “weak measurement”? If you read this entertaining “little” paper: arXiv:1407.8530 carefully, you “might notice” that it is based on an insider joke about “weak measurements”:
Rohrlich (and Popescu) introduced PR boxes, so he certainly knows that there is no requirement that PR boxes must allow for those type of “weak” measurements required for the argument in this paper.
I googled for a definition of PR boxes, and found one in https://arxiv.org/abs/quant-ph/0506180 :
The box is characterized by the joint probability \( P(a_1 a_2 | x_1 x_2) \) of obtaining the output pair \( (a_1, a_2) \) given the input pair \( (x_1, x_2) \). Compatibility with special relativity requires that these joint probabilities satisfy the no-signaling conditions
$$ \sum_{a_2}P(a_1 a_2 | x_1 x_2)=\sum_{a_2}P(a_1 a_2 | x_1 x'_2)=P(a_1 | x_1)$$ for all \(a_1, x_1, x_2, x'_2\), as well as a similar set of conditions obtained by summing over the first observer’s outputs.
If the randomness between “experiments”/“trials” is independent, there won’t be any way for correlations to show up between results of \(a_1\) (and \(a'_1\)) for \( x_1 = a \) and \( x_1 = a' \). (I replaced the b from the paper by a here, to keep consistency with the formulas above.)
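(For readers who want to see the no-signaling conditions in action, here is a small sketch that checks them for the standard PR box, whose outputs satisfy a1 XOR a2 = x1 AND x2. The marginals obey the conditions above, yet the CHSH value is 4, above the quantum Tsirelson bound of 2√2.)

```python
import itertools

# PR box: P(a1, a2 | x1, x2) = 1/2 if a1 XOR a2 = x1 AND x2, else 0
def P(a1, a2, x1, x2):
    return 0.5 if (a1 ^ a2) == (x1 & x2) else 0.0

# No-signaling check: Alice's marginal P(a1 | x1) must not depend on x2
for a1, x1 in itertools.product((0, 1), repeat=2):
    m0 = sum(P(a1, a2, x1, 0) for a2 in (0, 1))
    m1 = sum(P(a1, a2, x1, 1) for a2 in (0, 1))
    assert m0 == m1 == 0.5

# Correlator E(x1, x2) with outcomes 0/1 mapped to +1/-1
def E(x1, x2):
    return sum((-1) ** (a1 ^ a2) * P(a1, a2, x1, x2)
               for a1 in (0, 1) for a2 in (0, 1))

chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(chsh)   # 4.0: maximal CHSH violation, yet perfectly no-signaling
```

This is exactly the “wild place” referred to above: the box is no-signaling by construction, while its correlations exceed anything quantum mechanics allows.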
Comment #109 December 17th, 2024 at 5:01 am
The google movie on the page you link to (officially announcing Willow) begins:
“The systems behind our universe are quantum mechanical. They shift and change with the problems they’re tasked to solve. Exploring a vast array of options all at once.”
Are we to understand that the systems behind our universe “solve hard problems instantly by just trying all solutions in parallel”?
Comment #110 December 17th, 2024 at 8:07 am
ben #109: I listened to Google’s technical talks but didn’t watch their movie. From what you say, it sounds better for my mental health if I continue not watching it.
Comment #111 December 17th, 2024 at 11:18 am
Did you notice the strange behaviour of ChatGPT (4o or even o1) on names like Jonathan Turley or David Mayer? Besides this very annoying and hard-to-understand censorship, 4o gives a crashing message (while at least o1 “explains” that this is against their policy), leaving the user with the feeling that basic problems have not been debugged, and the AI skeptics with a new source of fears.
Comment #112 December 17th, 2024 at 2:59 pm
@gentzen 108:
‘Are you familiar with Yakir Aharonov’s “Two-State Vector Formalism” and the related notion of “weak measurement”?’
A little bit, but not enough. I don’t think the argument in Rohrlich’s paper depends too much on weak measurements, just on the existence of a classical limit (but I could be wrong here). I don’t know what that insider joke is!!!
But I also don’t think we need Rohrlich or weak measurements: PR boxes and other “super-quantum” no-signaling correlations are grossly non-local (not in the signaling sense, obviously, but in the classical or even quantum sense). They can involve weird correlations between measurement settings and outcomes, which only by having a conspirational symmetry do not lead to signaling (Rohrlich’s argument notwithstanding, though it would make the point even stronger if it were true).
Comment #113 December 17th, 2024 at 4:46 pm
Scott,
Since you mentioned Scott Alexander, noticed that he had this post earlier this year https://www.astralcodexten.com/p/practically-a-book-review-rootclaim
Also, saw that you had done a review of Alina Chan’s book Viral nearly 3 years ago. Many of those things that you describe from her book don’t actually stand up to Peter Miller’s scrutiny in the above debate such as the deliberate concealing of the furin cleavage, the Mojiang miners’ incident, whether RaTG13 was concealed etc. Some of these issues are also discussed in Jane Qiu’s profile as described here: https://x.com/janeqiuchina/status/1491445332089384960
Sorry for the OT, but I figured you were not checking old posts. I’d recommend these two presentations by Peter Miller for which there are youtube links also in Scott’s article:
Epidemiology: https://ermsta.com/r/pm_day1_updated_presentation.pdf
Viral genetics: https://ermsta.com/r/pm_day2_updated_presentation.pdf
Comment #114 December 18th, 2024 at 2:20 pm
Why specifically 105 qubits? Yes, I know, 105 = 3*5*7 and has many interesting properties, e.g. A000217(14) = 105 and A013594(2) = 105, but was there a particular reason for Willow, e.g., as you say “as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, …”, is that related?
Another question: when will a quantum chip be able to factor an arbitrary 8-bit nonsquare semiprime? There are 26 test cases to choose from: 129, 133, 141, 143, 145, 155, 159, 161, 177, 183, 185, 187, 201, 203, 205, 209, 213, 215, 217, 219, 221, 235, 237, 247, 249, 253.
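(For what it’s worth, that list of 26 checks out. A quick sketch reproducing it, restricting to odd prime factors since even semiprimes are trivially factorable:)

```python
def is_prime(n):
    """Trial-division primality test, fine for this tiny range."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Odd primes small enough that p*q can still fit in 8 bits
primes = [p for p in range(3, 128) if is_prime(p)]

semis = sorted(p * q
               for i, p in enumerate(primes)
               for q in primes[i + 1:]      # p < q: distinct factors, so nonsquare
               if 128 < p * q < 256)        # 8-bit: top bit set

print(len(semis))   # 26
print(semis)        # 129, 133, 141, ..., 253
```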
Comment #115 December 18th, 2024 at 6:14 pm
H #112: “But I also don’t think we need Rohrlich or weak measurements: PR-boxes … are grossly non-local (…). They can involve weird correlations between measurement settings and outcomes, which only by having a conspirational symmetry do not lead to signaling (…).”
I basically agree (except for the “conspirational symmetry” part). Barandes’ formulation cannot even talk about those weird correlations, or only have empty talk about them without any consequences. In Barandes’ formulation, the outcomes only have consequences for splitting events. So there is stuff missing, somehow. (Because there is some “locality” structure in his formulation, one could try to define “local” splitting events, and thereby give meaning and consequences to “more” outcomes. But even then, there would still be stuff missing.) This would be no problem in the context of quantum reconstructions, but that is not how Barandes presents his work.
Let me now try to clarify the insider joke about “weak measurements” and my disagreement about “conspirational symmetry”. Rohrlich defines:
$$B=\frac{b_1+b_2+\ldots+b_N}{N}\quad,\quad B’=\frac{b’_1+b’_2+\ldots+b’_N}{N}$$ Note that here \(b_i\) and \(b’_i\) both appear at the same time. However, you can always only measure one of them. This is why he invokes “weak measurements”. Actually valid classical limits like
$$B=\frac{b_1+b_3+\ldots+b_{2N-1}}{N}\quad,\quad B’=\frac{b’_2+b’_4+\ldots+b’_{2N}}{N}$$ are not enough for his argument. Maybe his argument was really extremely nice, except for this minor flaw. So he used the occasion where invoking such “weak measurements” was OK, where everybody could decide for himself whether he wanted to regard it as a joke, or wanted to take it seriously.
Comment #116 December 18th, 2024 at 11:18 pm
gentzen #115: The way I read this previously is that even though you can’t measure each b_i and b’_i simultaneously, you can measure B and B’ simultaneously if N is large enough, in the same way you can measure the average magnetization of a macroscopic sample in both the Z and X directions simultaneously even though you can’t do that for any individual atom in the sample. Is this wrong?
Comment #117 December 19th, 2024 at 6:40 pm
H #116: Yes, I would say this is wrong, in the sense that Rohrlich relies more on weak measurements than this comparison suggests. And weak measurements are a bit more tricky than such a simple classical measurement of average magnetization.
What is “classical” about his way to measure B and B’ (for sufficiently large N) is that he assumes that he can repeat it extremely often. But that b_i and b’_i appear simultaneously in both B and B’ is more justified by the existence of weak measurements (in QM) than by the existence of classical limits. Also note that he references (only) [13] when he starts to talk about the classical limit, and [13] consists of two references for weak measurements (and nothing else).
Comment #118 December 20th, 2024 at 8:24 am
gentzen #117: Thanks but I’m not sure I agree. Weak measurements are fine (people have been doing them for decades), weak values are different and they’re the controversial ones. As for Rohrlich, I think if his argument is wrong then it’s for a different reason than incorrect use of weak measurements.
Comment #119 December 21st, 2024 at 2:40 pm
A few comments on the side of classical advantage and classical spoofing:
1) First let me remark that there is a new Random Circuit Sampling paper on the arXiv (2412.11924), from Dec. 17, by USTC: they used 83 of the 105 qubits of Zuchongzhi and claim that it would take a classical supercomputer 6 billion years to replicate the task with tensor network contraction.
2) One technical point regarding the move from fidelity estimates to computational advantage is the need to take into consideration the algorithm by Gao et al. in the paper “Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage” (and also a subsequent paper by Aharonov et al., “A polynomial-time classical algorithm for noisy random circuit sampling”). Using Gao et al.’s idea one may achieve samples with some reasonable XEB score for a 100-qubit experiment (say) from computations based on two patches of 50 qubits. These works may be relevant both to the Google 10 septillion claim and the (modest) Zuchongzhi 6 billion claim.
On the asymptotic side, there is perhaps tension between the renewed double exponential advantage claims and the assertions of polynomial-time classical algorithms.
Two additional supplements to this comment:
a) Here we can use a rule of thumb suggested by the Google team itself, that the computational hardness scales proportionally to the fidelity. (So if Gao et al.’s algorithm gives, in a million years, a sample of fidelity equal to (Google fidelity / 1000), this suggests (but does not prove) the existence of a classical algorithm that takes a billion years for samples with Google fidelity.)
b) It is plausible that Gao et al.’s algorithm, which uses an approximation of the original circuits by two patches, can be modified to “elided circuits” (two patches with a few additional interconnecting edges) with higher fidelity and higher running time.
3) There is an interesting paper by Bensten et al.: On the complexity of sampling from shallow Brownian circuits (arXiv:2411.04169). The authors showed that classical XEB spoofers like those of Gao et al. do not actually work for certain architectures (like all-to-all), even at shallow depths. So, classical spoofing à la Gao et al. with these models is still an open question.
4) Let me emphasize that while “classical spoofing” is an important question, in my judgement there are reasons to doubt the methodology behind Google’s fidelity claims (even for 12 qubit circuits and across the board) and therefore also the extrapolation argument. This is summarized in my post (and detailed in our papers).
Comment #120 December 22nd, 2024 at 1:15 am
Gil Kalai #119: Can you summarize, succinctly and at a high level, what you think the flaw in their methodology is and why? Do you also think this new paper suffers from a similar flaw? What about the paper from Quantinuum using a different qubit platform and all-to-all random circuits?
If we are seeing multiple different groups and two different qubit technologies making quantum advantage claims, I assume the flaw must be in the problem setup, as it seems unlikely multiple independent groups are committing some experimental methodological flaw.
Comment #121 December 22nd, 2024 at 4:56 am
[…] Commenting the announcement, Scott Aaronson wrote that while not revolutionary, this evolutionary step crowns 30-year long efforts towards fault-tolerance in quantum computing and crosses an important threshold allowing to foresee a moment when "logical qubits [will] be preserved and acted on for basically arbitrary amounts of time, allowing scalable quantum computation". […]
Comment #122 December 23rd, 2024 at 6:12 am
An efficient verification scheme does exist: https://iopscience.iop.org/article/10.1088/1367-2630/ab4fd6
It has been tried on RCS (up to six qubits and depth twenty): https://journals.aps.org/pra/abstract/10.1103/PhysRevA.104.042603
In 2021, larger circuits were too noisy to be verified. Maybe the hardware has improved to have another go.
Comment #123 December 23rd, 2024 at 8:18 am
I wouldn’t be surprised if the classical time can be cut down by orders of magnitude through some NP-hard massaging of the problem, given the observation that most NP-hard puzzles aren’t that hard in practice (finding a truly hard instance is itself quite hard).
Comment #124 December 23rd, 2024 at 8:18 am
[…] Scott Aaronson responds to Google Willow’s advances in quantum computing. Basically, yes it’s a cool advance, but don’t get overexcited yet. […]
Comment #125 December 23rd, 2024 at 11:03 am
Animesh #122: I just scrolled through your linked paper, seeking an answer to the obvious question of “what’s the catch?” I wish the paper was written in a way to simply answer that question for me, so I didn’t have to try to figure the answer out myself! 🙂
But the catch seems to be that you only test for particular types of errors, rather than providing a fully black-box way to verify that the input/output mapping of a given circuit is doing something classically intractable, correct?
Comment #126 December 26th, 2024 at 1:41 pm
Hi still confused (#120),
Let me try to answer your first question:
Two main concerns regarding Google’s 2019 supremacy paper are:
1) The calibration process appears to exhibit an undocumented and methodologically flawed global optimization process. This invalidates the direct and indirect fidelity claims.
2) Over hundreds of experimental circuits, the product of the individual gate fidelities of a circuit (referred to as the “digital error model” fidelity prediction) matches the actual XEB experimental fidelity in a statistically unreasonable way. This finding gives an indirect indication for a flawed optimization process such as described in the first item.
There is various statistical and non-statistical evidence that support these concerns and there are also additional concerns regarding the quality and reliability of the experimental data.
Regarding the calibration process, the Google paper and team made two assertions:
a) The calibration process (consisting of adjusting the definition of the gates) was based on running 1-qubit and 2-qubit circuits. (We call this a “local” process, in contrast to a “global” process based on optimization with respect to full-scale circuits.)
b) The experiments consist of two phases with a clear separation between them: 1) calibration and 2) running the desired circuits for a particular experiment.
There is evidence that both assertions a) and b) are invalid, and for assertion a) the evidence applies already to 12- and 14-qubit circuits.
Regarding the prediction based on the digital error model: There is various supporting statistical evidence for the assertion that the match between the “digital predictions” and the actual XEB-fidelities is “too good to be true” and reflects a methodologically incorrect optimization. For example, this is supported by a detailed analysis of the fidelities of “patch” circuits which are simplified versions of the main experimental circuits.
Another methodological problem with the 2019 experiment is that the Google team did not provide some crucial data needed for scrutinizing their experiment: In more than five years, the Google team did not provide the individual 2-gate fidelities for their experiment. (This seems unreasonable). In addition, neither the calibration programs nor the input for those programs were provided by the Google team and this means that this ingredient of the experiment is under a commercial secrecy curtain.
My post (linked in Scott’s original post) includes more details and links to five papers where these and other issues are studied. As I said, these concerns apply already to Google experiments for circuits with 12 and 14 qubits – for those I did not have any prior beliefs that they could not possibly be done, but our study indicates that they have not been properly done.
In any case, my post is quite short, you are welcome to read it.
To your second and third questions: these specific points are not directly relevant to the Google error correcting papers, the QuEra IQP experiment, and Quantinuum’s paper. They appear to be directly relevant to other supremacy experiments by Google (and they may shed doubt on other experimental claims by the Google quantum AI team in other directions). The Google error correcting papers, the QuEra IQP experiment, and Quantinuum’s paper need to be scrutinized individually. We did not study the data for Google’s quantum error correction paper and for Quantinuum’s papers. We made some preliminary study of the QuEra IQP experiment for 12 logical qubits (and some findings are reported in our fourth joint paper). In all cases we share (and discuss) our findings with the researchers before we publish them.
Comment #127 December 28th, 2024 at 6:49 pm
Hey Scott,
How do you feel about this recent comment by Ramaswamy, which has been hugely controversial among MAGA adherents?
A culture that celebrates the prom queen over the math olympiad champ, or the jock over the valedictorian, will not produce the best engineers. Trump’s election hopefully marks the beginning of a new golden era in America, but only if our culture fully wakes up. A culture that once again prioritizes nerdiness over conformity.
Some MAGA people are very enthusiastically supporting what he’s saying, while other MAGA people are pissed off that an Indian foreigner is attacking American culture and classic Americana like high school football.
Comment #128 December 30th, 2024 at 2:15 am
This just in: You can get a Quantum Computers to work by using Magic, and you can get Magic from the Top Quark!
https://scitechdaily.com/magic-particles-the-large-hadron-colliders-quantum-computing-breakthrough/
I suspected the Top quark would be good for something
Comment #129 December 30th, 2024 at 11:53 am
Sorry if this has been covered in the long thread above, but it’s annoying that this RCS benchmark can’t be verified classically/efficiently.
Is there a better type of problem out there? Something that we know how to solve quantumly, and verify classically? Like if Quantum were an oracle for an NP problem that (by definition) we could easily verify?
I’m guessing we don’t know of such a problem, or we’d already be using it instead of RCS. But is such a problem provably non-existent, for some quantum-complexity reason?
Comment #130 December 30th, 2024 at 5:46 pm
Kyle #127 I venture to guess that Scott might say he answered your question in the year 2000 with this piece of short humorous fiction: https://scottaaronson.com/writings/athletes.html
Comment #131 December 30th, 2024 at 10:02 pm
Gil Kalai #126: Thanks. So is your assertion that the projected XEB fidelity is not to be trusted? But we’ve seen those fidelities verified directly…? https://arxiv.org/abs/2212.04749
Comment #132 December 31st, 2024 at 10:28 am
gentzen #117: I had some time during the holiday to look into this and found a paper that answered all my questions: https://arxiv.org/abs/1407.8122.
In short, yes, Rohrlich’s argument is correct, but unfortunately it does not seem to work in the general setting (more than 2 parties, more than 2 measurements, etc.). Moreover, we know no such argument could ever work in general, because of https://arxiv.org/abs/1403.4621, which shows that constraints such as no-signaling, non-trivial communication complexity, information causality, macroscopic locality, and local orthogonality all fail to single out quantum theory in the general setting.
Comment #133 January 2nd, 2025 at 5:18 pm
H #132: It is fine if you decide for yourself that Rohrlich’s argument convinces you. What you wrote in #118 is good enough for that: “Weak measurements are fine …, weak values are different and they’re the controversial ones”. Gisin’s paper is nice too, and certainly better than what you did in #112, namely to copy&paste a tongue-in-cheek argument from Rohrlich’s paper as if it would be a serious argument: “They can involve weird correlations between measurement settings and outcomes, which only by having a conspirational symmetry do not lead to signaling”. (The difference is that #118 are your own words, and you know what they mean, while #112 caused me to try to spell out what Rohrlich is actually arguing for.)
“But unfortunately it does not seem to work in the general setting”: Rohrlich (and Gisin) just want to get the Tsirelson bound out of their argument. Rohrlich’s claim that he wants more also feels tongue-in-cheek to me. But maybe I am wrong in that respect, and he really means it.
Comment #134 January 3rd, 2025 at 10:19 am
Scott #125: Our noise assumptions are listed as N1 and N2 on page 2 of the 2019 NJP paper, or para 2, Sec. II of the PRA. Loosely speaking, we assume that noise anywhere in the circuit is CPTP, and rely on the empirical observation that single-qubit gates are of far better quality than all other components. Thus, running a ‘trap’ circuit on the same hardware, with only the single-qubit gates replaced (giving a Clifford computation), bounds the correctness (in total variation distance) of the output.
Our scheme does not verify correctness against adversarial noise if that is what you are after. We take a ‘subtle but not malicious’ approach here.
Comment #135 January 3rd, 2025 at 12:17 pm
Kyle #127
The problem is that with H-1B visas, US workers (which includes US-born workers and ex-H-1B naturalized workers!) are screwed, because corporations want as many H-1Bs as possible: they’re cheaper, and their visas are chained to a particular corporation. And without H-1B visas, US workers are screwed too, because corporations just open offices outside the USA (aka “centers of excellence” in Mumbai, Chennai, etc.).
Ideally the H-1B shouldn’t exist, and would be replaced by just giving green cards right away, and equal pay, to highly qualified foreign workers.
Comment #136 January 5th, 2025 at 10:10 am
Meanwhile in Germany, I was unsure whether to laugh or cry when I read an article in one of our big newspapers, whose headline and abstract DeepL-translate as: “Everything everywhere at the same time – When will the present overtake science fiction? Google’s new quantum chip is so fast that its inventors have only one explanation – it computes in several universes in parallel.”
https://www.sueddeutsche.de/kultur/google-quantencomputer-willow-science-fiction-li.3167712
I wrote them a detailed email, linking among other things to your blog. Thanks for your efforts in science communication, and happy new year in spite of everything.
Comment #137 January 6th, 2025 at 11:41 am
gentzen #133:
‘certainly better than what you did in #112, namely to copy&paste a tongue-in-cheek argument from Rohrlich‘s paper as if it would be a serious argument: „They can involve weird correlations between measurement settings and outcomes, which only by having a conspirational symmetry do not lead to signaling“.’
Well, those also are my own words!
The point being: if you look at PR correlations, you may ‘naively’ guess that it should be possible to use the box to send messages. However, on closer look you realize that the probabilities are fine-tuned precisely to prevent any messaging (that’s what I meant by conspirational symmetry). So intuitively I think of it as a device that could signal but chooses not to, whereas classical and quantum devices are constrained not to signal by construction.
Comment #138 January 9th, 2025 at 4:50 am
H #137: Thanks for the clarification. I had confused your words with the following passage:
While reading the paper, I had classified that talk about the PR box being rotten into the same category as his earlier tongue-in-cheek remarks like: “What a disappointment! It should not be so easy to disprove such a lovely conjecture!” You have to admit, the inventor of the PR box calling it rotten in an oral talk has entertainment value.
However, I see now that he first explained the absence of any uncertainty principle for the PR box in a certain scenario, and then argued that in its absence it is arbitrary (“fig leaf”/“cheap fix”) to insist on complementarity.
Your clarification of what you meant by “conspirational symmetry” is interesting in its own right. What you meant was “non-generic” (at least that is how I interpret your clarification), but you tried to use tongue-in-cheek language similar to Rohrlich’s to express this. The funny part is that symmetry is one reason why sometimes “non-generic” situations cannot be excluded from consideration. (Embeddings are a more trivial reason. Another non-trivial reason is boundary crossings between different regimes, causing those low-order degeneracies studied in René Thom’s catastrophe theory.)
Comment #139 January 21st, 2025 at 6:18 pm
Hello Scott,
Feel free to leave this off the public blog. And of course to ignore it. But maybe you would like to read my take on the quantum computation debate between you and Gil Kalai :-).
https://1drv.ms/b/c/e9219f20aa3a561b/EXr-Fk91xH9InIl9cabagr8B_7vRgt8R36sFMF_EiiwIiQ?e=2owdtA