Quantum supremacy, now with BosonSampling

Update (12/5): The Google team, along with Gil Kalai, have raised questions about whether the results of the new BosonSampling experiment might be easier to spoof classically than the USTC team thought they were, because of a crucial difference between BosonSampling and qubit-based random circuit sampling. Namely, with random circuit sampling, the marginal distribution over any k output qubits (for small k) is exponentially close to the uniform distribution. With BosonSampling, by contrast, the marginal distribution over k output modes is distinguishable from uniform, as Arkhipov and I noted in a 2013 followup paper. On the one hand, these easily-detected nonuniformities provide a quick, useful sanity check for whether BosonSampling is being done correctly. On the other hand, they might also give classical spoofing algorithms more of a toehold. The question is whether, by spoofing the k-mode marginals, a classical algorithm could also achieve scores on the relevant “HOG” (Heavy Output Generation) benchmark that are comparable to what the USTC team reported.

One way or the other, this question should be resolvable by looking at the data that’s already been collected, and we’re trying now to get to the bottom of it. And having failed to flag this potential issue when I reviewed the paper, I felt a moral obligation at least to let my readers know about it as soon as I did. If nothing else, this is an answer to those who claim this stuff is all obvious. Please pardon the science underway!
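To make that sanity check concrete: given a list of experimental samples, with one click/no-click bit per output mode, the k=2 marginals are just pairwise frequency tables. Here’s a minimal Python sketch, purely illustrative, that computes them and reports the largest deviation from a uniform baseline; in a real analysis you’d compare against the ideal (or expected) marginals rather than against uniform, and you’d use far more samples, but the shape of the check is the same.

```python
from collections import Counter
from itertools import combinations, product

def two_mode_marginals(samples, num_modes):
    """Empirical joint click distribution for every pair of output modes.
    `samples` is a list of 0/1 tuples of length `num_modes` (no click / click)."""
    marginals = {}
    total = len(samples)
    for i, j in combinations(range(num_modes), 2):
        counts = Counter((s[i], s[j]) for s in samples)
        marginals[(i, j)] = {ab: counts.get(ab, 0) / total
                             for ab in product((0, 1), repeat=2)}
    return marginals

def max_deviation_from_uniform(marginals):
    """Largest total-variation distance of any 2-mode marginal from uniform (1/4 each)."""
    return max(0.5 * sum(abs(p - 0.25) for p in dist.values())
               for dist in marginals.values())

# toy usage with made-up data: 4 modes, a handful of samples
fake_samples = [(1, 0, 1, 0), (0, 1, 1, 0), (1, 1, 0, 0), (1, 0, 0, 1)]
print(max_deviation_from_uniform(two_mode_marginals(fake_samples, 4)))
```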


A group led by Jianwei Pan and Chao-Yang Lu, based mainly at USTC in Hefei, China, announced today that it achieved BosonSampling with 40-70 detected photons—up to and beyond the limit where a classical supercomputer could feasibly verify the results. (Technically, they achieved a variant called Gaussian BosonSampling: a generalization of what I called Scattershot BosonSampling in a 2013 post on this blog.)

For more, see also Emily Conover’s piece in Science News, or Daniel Garisto’s in Scientific American, both of which I consulted on. (Full disclosure: I was one of the reviewers for the Pan group’s Science paper, and will be writing the Perspective article to accompany it.)

The new result follows the announcement of 14-photon BosonSampling by the same group a year ago. It represents the second time quantum supremacy has been reported, following Google’s celebrated announcement from last year, and the first time it’s been done using photonics rather than superconducting qubits.

As the co-inventor of BosonSampling (with Alex Arkhipov), obviously I’m gratified about this.

For anyone who regards it as boring or obvious, here and here is Gil Kalai, on this blog, telling me why BosonSampling would never scale beyond 8-10 photons. (He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.) Here’s Kalai making a similar prediction, on the impossibility of quantum supremacy by BosonSampling or any other means, in his plenary address to the International Congress of Mathematicians two years ago.

Even if we set aside the quantum computing skeptics, many colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right. Furthermore, even if 50-photon BosonSampling was possible, after Google reached the supremacy milestone first with superconducting qubits, it wasn’t clear if anyone would still bother. Even when I learned a year ago about the USTC group’s intention to go for it, I was skeptical, figuring I’d believe it when I saw it.

Obviously the new result isn’t dispositive. Nevertheless, as someone whose intellectual origins are close to pure math, it’s strange and exciting to find myself in a field where, once in a while, the world itself gets to weigh in on a theoretical disagreement.

Since excitement is best when paired with accurate understanding, please help yourself to the following FAQ, which I might add more to over the next couple days.

What is BosonSampling? You must be new here! Briefly, it’s a proposal for achieving quantum supremacy by simply passing identical, non-interacting photons through an array of beamsplitters, and then measuring where they end up. For more: in increasing order of difficulty, here’s an MIT News article from back in 2011, here’s the Wikipedia page, here are my PowerPoint slides, here are my lecture notes from Rio de Janeiro, and here’s my original paper with Arkhipov.
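If a toy calculation helps: here’s a minimal sketch, nothing like the real experiment’s scale, of how the ideal distribution is defined. With n single photons entering the first n modes of an m-mode interferometer described by a Haar-random unitary U, each collision-free outcome S (a choice of which n detectors click) has probability proportional to |Per(U_S)|^2, where U_S is the n×n submatrix of U given by the clicked rows and the first n columns. The permanent below is computed by brute force, so keep n tiny.

```python
import numpy as np
from itertools import combinations, permutations

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def permanent(a):
    """Brute-force permanent; only sensible for very small n."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def boson_sampling_distribution(n, m, rng):
    """Ideal collision-free distribution for n photons entering modes 1..n of an m-mode interferometer."""
    u = haar_unitary(m, rng)
    probs = {}
    for out in combinations(range(m), n):       # which n detectors click
        sub = u[np.ix_(out, range(n))]          # n x n submatrix: clicked rows, first n columns
        probs[out] = abs(permanent(sub)) ** 2
    total = sum(probs.values())                 # renormalize over collision-free outcomes only
    return {out: p / total for out, p in probs.items()}

rng = np.random.default_rng(0)
dist = boson_sampling_distribution(n=3, m=6, rng=rng)
print(sorted(dist.items(), key=lambda kv: -kv[1])[:3])   # the three likeliest outcomes
```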

What is quantum supremacy? Roughly, the use of a programmable or configurable quantum computer to solve some well-defined computational problem much faster than we know how to solve it with any existing classical computer. “Quantum supremacy,” a term coined by John Preskill in 2012, does not mean useful QC, or scalable QC, or fault-tolerant QC, all of which remain outstanding challenges. For more, see my Supreme Quantum Supremacy FAQ, or (e.g.) my recent Lytle Lecture for the University of Washington.

If Google already announced quantum supremacy a year ago, what’s the point of this new experiment? To me, at least, quantum supremacy seems important enough to do at least twice! Also, as I said, this represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage compared to Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.
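Where do those numbers come from? For Sycamore, 53 qubits give 2^53 possible output bitstrings; for the USTC photonic experiment, assuming click/no-click (threshold) detectors on 100 output modes, there are 2^100 possible detection patterns:

```python
# Sycamore: 53 qubits, so 2^53 output bitstrings (~9 quadrillion)
print(f"{2**53:.3e}")    # 9.007e+15

# USTC photonic experiment (assuming threshold detectors on 100 output modes): 2^100 patterns
print(f"{2**100:.3e}")   # 1.268e+30, i.e. ~10^30
```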

Is BosonSampling a form of universal quantum computing? No, we don’t even think it can simulate universal classical computing! It’s designed for exactly one task: namely, demonstrating quantum supremacy and refuting Gil Kalai. It might have some other applications besides that, but if so, they’ll be icing on the cake. This is in contrast to Google’s Sycamore processor, which in principle is a universal quantum computer, just with a severe limit on the number of qubits (53) and how many layers of gates one can apply to them (about 20).

Is BosonSampling at least a step toward universal quantum computing? I think so! In 2000, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Are there any applications of BosonSampling? We don’t know yet. There are proposals in the literature to apply BosonSampling to vibronic spectra in quantum chemistry, finding dense subgraphs, and other problems, but I’m not yet sure whether these proposals will yield real speedups over the best we can do with classical computers, for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where BosonSampling almost certainly does yield exponential speedups, but which are rarely the thing practitioners directly care about). [See this comment for further discussion of the issues regarding dense subgraphs.] In a completely different direction, one could try to use BosonSampling to generate cryptographically certified random bits, along the lines of my proposal from 2018, much like one could with qubit-based quantum circuits.

How hard is it to simulate BosonSampling on a classical computer? As far as we know today, the difficulty of simulating a “generic” BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons. It might be easier than that, particularly when noise and imperfections are taken into account; and at any rate it might be easier to spoof the statistical tests that one applies to verify the outputs. I and others managed to give some theoretical evidence against those possibilities, but just like with Google’s experiment, it’s conceivable that some future breakthrough will change the outlook and remove the case for quantum supremacy.
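For a sense of where the ~2^n comes from: each output probability is the squared absolute value of an n×n permanent, and the best known exact algorithms for the permanent, such as Ryser’s formula, need on the order of 2^n·n arithmetic operations. Here’s a minimal, untuned sketch of Ryser’s formula; real simulations use Gray-code orderings, supercomputer-scale parallelism, and cleverer sampling strategies, but the exponential bottleneck is the same.

```python
import numpy as np
from itertools import combinations

def ryser_permanent(a):
    """Permanent via Ryser's inclusion-exclusion formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of (-1)^{|S|} * prod_i sum_{j in S} a[i, j].
    As written this is O(2^n * n^2); Gray-code variants bring it down to O(2^n * n)."""
    n = a.shape[0]
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

rng = np.random.default_rng(1)
a = (rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))) / np.sqrt(2)
print(ryser_permanent(a))
```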

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

Also: when Covid first started, and facemasks were plentiful in China but almost impossible to get in the US, Chao-Yang Lu, one of the leaders of the new work and my sometime correspondent on the theory of BosonSampling, decided to mail me a box of 200 masks (I didn’t ask for it). I don’t think that influenced my later review, but it was appreciated nonetheless.

Huge congratulations to the whole team for their accomplishment!

132 Responses to “Quantum supremacy, now with BosonSampling”

  1. Sniffnoy Says:

    Wow! How on earth did they get all the way up from 14 to 40…?

  2. Scott Says:

    Sniffnoy #1: I mean, it’s less than a factor of 3… 😀

  3. James Gallagher Says:

    Very nice feat of engineering and lovely demonstration of the linearity of quantum mechanics at macroscopic scales.

  4. Juan Miguel Arrazola Says:

    Hi Scott, thanks for sharing your perspective!

    It may be worth clarifying for readers that this experiment was possible due to the proposal of *Gaussian* boson sampling, which uses squeezed light sources to generate photons. This was ultimately the counter to the skeptics: don’t use single photon sources!

    I’m curious: what are your thoughts on the claim of quantum advantage with a device that is not programmable? Does it make sense to talk about a Haar-random unitary in this case?

    Also, did you find the benchmark against classical simulations convincing? I’m personally confident that a careful analysis of state-of-the-art simulation techniques will significantly reduce their quoted time for classical simulation.

    Thanks!

  5. Scott Says:

    James Gallagher #3: Well, 50 photons is not really “macroscopic”! 🙂

  6. Scott Says:

    Juan Miguel Arrazola #4: Thanks! Added a note about that to the beginning of the post.

  7. James Gallagher Says:

    Scott #5

    It’s a big step from 2 entangled photons, even across the Canary Islands.

    It was (almost) all done at room-temperature too.

    More evidence that exact linear Quantum Mechanics describes the world at all scales

  8. Isaac Says:

    Hey Scott, I was wondering whether you have any insight into how much more difficult this Boson Sampler was to build than the past 14 photon Sampler by the same group. Difficulty might be measured in cost, work-hours, energy consumption, etc. Thanks!

  9. Scott Says:

    Isaac #8: A lot more difficult, probably? 🙂 But I don’t know how the cost or work-hours compared — maybe we could get Chao-Yang Lu or another member of the group to address that question here…

    [UPDATE: In comment #29 below, Chaoyang writes “How much more? Very.”]

  10. Job Says:

    When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

    Maybe Amazon’s “requester-pays” model for cloud storage (where it’s the community that pays for the file-transfer fees) would be a good fit here. At least when the costs get up to these numbers.

    Basically, you’d publish the verified experimental results up to 30 or so, then separately post the 30+ blindly and allow others to verify or reject.

    Posting experimental results blindly is a risk/reward thing, it takes a lot of confidence to do that.

    I actually like the fact that they were only going to do verification up to 26-30, since 30+ would still be well within the range where many others would be able to contest their results via simulation.

    Ultimately, it works as a verifiable prediction, which would actually strengthen their result. But i might feel differently as a referee.

  11. Scott Says:

    Job #10: If anyone wants to contest it in that way, they still do have experimental results a good deal beyond what they managed to verify themselves!

  12. Oleg S. Says:

    My sincere congratulations, Scott!

    What do you think about this https://advances.sciencemag.org/content/6/23/eaax1950.full possible application of boson sampling?

  13. Job Says:

    BTW, what do you make of DeepMind’s protein-folding success, in the context of quantum computing?

    It’s interesting to see AI produce results in an area (simulation) that QCs would ultimately be expected to dominate.

    For example, it’s not clear to me whether this means that:
    1. Protein folding is actually classically easy.
    2. DeepMind is producing heuristics that are sufficiently accurate for all instances we care about.
    3. We’ll still need QCs to obtain accurate solutions for many of the instances that we do care about.

    How much hype are we talking about here?

  14. Andres Says:

    I’m just waiting for a company like “Xanadu” to use this research to falsely claim they can solve hard problems using GBS.

  15. Scott Says:

    Job #13: Those are good questions! It seems clear that the new AlphaFold results represent some kind of breakthrough. And to whatever extent protein folding has now been “solved,” that would imply that whatever quantum effects are relevant in proteins, they can either be ignored or else treated with efficient classical heuristics to get acceptably accurate results. So, that would remove one potential application of QC that people sometimes talked about. OK, but has protein folding actually been “solved”? I’m not sure! What does anyone else think—especially any experts who happen to be reading this?

  16. Scott Says:

    Oleg S. #12: Thanks! While I was vaguely familiar with the idea of using Gaussian BosonSampling to sample dense subgraphs, I hadn’t seen the particular paper you linked to. So, I just looked through it, and that led me to also look through the earlier Xanadu papers that it’s based on more carefully than I had before.

    To summarize what I took away, the key point is this. Given a graph G and an even integer k, it seems that there’s a fast way, using Gaussian BosonSampling, to sample a random subgraph in G with k vertices, in such a way that a given subgraph appears with probability proportional to the square of the number of perfect matchings in that subgraph. (In other words, to the square of its “Hafnian,” the function that counts perfect matchings.)

    Is that useful? Well, I’m not sure! I certainly can’t prove that it isn’t.

    But I can tell you what makes me nervous reading these works. I’m nervous about how atheoretical they are. I’m nervous about how quickly they jump to numerical simulations and commercial applications, without pausing to ask questions like:

    So how hard would it be to sample from this same Hafnian^2 distribution using a classical computer? Assuming, let’s say, that we’re only interested in the adjacency matrix of a graph—so in particular, a nonnegative matrix, one that gives rise to Hafnians with no cancellations or interference? Could we do it using Markov Chain Monte Carlo, for example?

    And then there’s a second question: assuming this is a classically hard sampling task, does it (as these papers claim) actually help in solving the NP-complete problem of finding dense subgraphs? It’s important to understand just how revolutionary this would be if true! It would yield one of the only known examples of a quantum speedup for an NP-complete problem that was totally unrelated to the Grover speedup. But the high stakes here ought to prompt reflection: does the Hafnian^2 distribution really give us a starting point for local search that’s better than any starting point that we could’ve gotten from a classical heuristic? (Comparing the Hafnian^2 starting point against a random starting point doesn’t cut it.)

    Anyway, those are the questions that I’d like to see explicitly addressed, even if they can’t be given unconditional answers! And again, if anyone reading this wants to take a stab at them, I look forward to learning something!
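    For concreteness, here’s a toy illustration of the quantity being sampled (my own brute-force code, not anything from the Xanadu papers): for a 0/1 adjacency matrix the Hafnian is exactly the number of perfect matchings, and GBS would weight a k-vertex subgraph by the square of that number.

```python
import numpy as np

def hafnian_bruteforce(a):
    """Hafnian of a symmetric matrix with an even number of rows, by summing over all
    perfect matchings. For a 0/1 adjacency matrix this is the number of perfect matchings.
    Exponential time; only for tiny graphs."""
    n = a.shape[0]
    if n == 0:
        return 1
    total = 0
    for idx in range(1, n):            # match vertex 0 with vertex idx, then recurse
        if a[0, idx] == 0:
            continue
        rest = [k for k in range(1, n) if k != idx]
        total += a[0, idx] * hafnian_bruteforce(a[np.ix_(rest, rest)])
    return total

# 4-cycle: two perfect matchings, so its Hafnian is 2 and its GBS weight would be 2^2 = 4
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(hafnian_bruteforce(c4))   # 2
```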

  17. Anonymous Says:

    Has Gil Kalai admitted that his predictions have been falsified?

  18. Scott Says:

    Anonymous #17: He hasn’t. See here for his latest volley, where—I’m omitting many subtleties—he takes the position that Google’s result is too incredible to believe, so he’s simply going to proceed on the assumption that something must invalidate it even though he doesn’t know what.

  19. Matthias Says:

    Hi Scott,

    have you seen this PRX-article talking about a much faster simulation method of Google’s Sycamore chip using Matrix product states? What do you think about it and its implications on Google’s quantum supremacy claim?
    I don’t know much about Boson Sampling but it looks like no entanglement between the photons is employed in this case, i.e. this simulation method would not work in that case…

  20. Scott Says:

    Matthias #19: Yes, I’ve seen that paper. It’s important to understand that Sycamore did not use Controlled-Z’s as its 2-qubit gates, but something more complicated, which had the effect of roughly squaring the amount of time that MPS and tensor-network methods need to simulate the chip, compared to if Controlled-Z’s were used. Indeed, that’s precisely why the more complicated 2-qubit gate was chosen!

    Once this is taken into account, my impression is that MPS and tensor-network methods, like the ones in this paper, could be competitive with the brute-force Schrödinger simulation that the IBM group proposed, but that they wouldn’t do much better than that. Happy to be proven wrong though!

  21. Michael Marthaler Says:

    Scott #20: The major point of the PRX-article by Xavier Waintal is that they try to model quantum computers with finite error probability. I think this is an interesting approach and indeed Tensor Networks could be the way to simulate relatively large noisy quantum computers. But I wonder if the noise model they use is correct.

  22. Matthias Says:

    They also simulate the more complicated gate in the last part and reach a fidelity of 95% on a single core within hours. To quote them:
    “Extrapolations from our results suggest that such a parallel implementation should be able to reach fidelities in the 98%–99% range with a few hundred cores and a few terabytes of memory.”
    Although they don’t say how much time they estimate this calculation to take, this sounds substantially less than IBM’s proposed brute-force technique?

  23. Matthias Says:

    Michael Marthaler #21: I don’t think they want to correctly model the physics of the quantum computer. Their goal is to just obtain results of the same quality (fidelity). If the same fidelity is reached it does not matter how the results are obtained.

  24. Bogdan Says:

    You write “the difficulty of simulating a generic BosonSampling experiment increases roughly like 2^n”, and “directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm”.

    Is it unavoidable in BosonSampling that verification scales as simulation? Is there any statistical test to verify the result, which no one knows how to fool classically, and which works in poly(n) time?

    Without such a test, I have a problem. For n up to 40, where we can verify the result, we can also simulate the sample, so this is not quantum supremacy. For n near 70, we cannot verify that the result is correct, so there is nothing to talk about.

    Having a poly(n) test which I could apply on my laptop for n=70 would be much more convincing.

  25. Gil Kalai Says:

    This is very interesting news. Congratulations to Jianwei Pan and his group! Clearly we will have to look carefully at the results, the data, and the interpretation.

    A few quick remarks.

    a) Regarding the question of whether Pan’s result demonstrates quantum advantage, a relevant paper is my 2014 paper with Guy Kindler, Gaussian Noise Sensitivity and BosonSampling (I am sure Scott is familiar with our work). Unlike the case of random circuits, to achieve a probability distribution with correlation k/n with the noiseless distribution one needs only a degree-k polynomial approximation, computable in polynomial time. From Scott’s post and the paper it is not clear to me how close the noisy distribution is to the noiseless distribution. (But you need much higher fidelity levels here compared to what the Google team claims.)

    At the time, I looked at the paper about the 14-photon system that Scott mentioned. The assertion from the abstract, that the results are validated against uniform samplers with a confidence level of 99.9%, could also be achieved by a probability distribution that truncates the noiseless distribution at its degree-1 and degree-2 Hermite-Fourier coefficients. (Indeed I planned then to email Jianwei Pan and ask him, but later forgot about it.)

    b) In our 2015 discussions that Scott referred to in the post, we were talking about the ability to create a Boson Sampling state which is epsilon-close to the ideal state for a small value of epsilon. I predicted that you would face serious difficulties already for 10 bosons (with 20 modes, say, or even 8 bosons with 16 modes). As I said, I looked at the time at the paper about the 14-photon system that Scott mentioned, and it did not look like it refuted my prediction. Again, I’d love to know the details regarding overall fidelity, and Jianwei Pan’s take on items a) and b).

    c) Boson sampling (of course, not under this name, and as a tool rather than in the context of sampling complexity) was proposed in a paper by Lidror Troyansky and Naftali Tishby in 1996.

    d) It is nice to see Pan using the term “advantage” rather than “supremacy”.

    e) Regarding my view (of course, always in need to be reconsidered in view of new experimental successes), my paper that Scott linked to in #18 is certainly a good place to learn about it (but not Scott’s brief description). Another source is my recent Harvard colloquium lecture Statistical, mathematical, and computational aspects of noisy intermediate-scale quantum computers. This lecture also addressed issues that are relevant to NISQ experiments in general.

  26. Vanessa Says:

    I’m confused about the definition of “quantum supremacy”. In your FAQ you write:

    “Q12. Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?

    Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.”

    But, boson sampling is *not* a fully programmable device. In this very post you write “we don’t even think it can simulate universal classical computing”. So, in what sense is boson sampling quantum supremacy, while just any hard-to-simulate quantum physical phenomenon is *not* quantum supremacy?

  27. fulis Says:

    Isaac #8: the experiments are actually very similar; the 14-photon demonstration used 20 input modes from a single quantum dot photon source that was actively de-multiplexed. Here the number of physical input modes is 25 instead of 20, and the number of output modes is 100; to be honest they’re both ridiculous numbers in terms of how much lab work has to go into setting them up, but if you can do 60 then 100 is not that much harder. The second difference is that the 25 sources, in this case single-mode squeezers, need to be phase-stable with respect to each other. The electronics side of that is not tremendously difficult, but it does mean they need yet another 50 optical paths to manage. Because these sources need a lot of optical power they also pump them differently, and use chirped-pulse amplification to increase their initial laser power.

    Much like how photon loss exponentially kills your count rates as you increase the number of photons, the number of possible failure points in your experiment also exponentially decreases the fraction of time that your set-up is actually running. This demonstration probably has 3-4 times more failure points, so it’s a question of how reliable they’re able to make everything (what is the exponential scaling). In my opinion, the previous 14(20)-photon demonstration was probably a bigger leap, because that’s where they solved the major technical problems with interfering so many modes.

  28. anonymous Says:

    “He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.” – if you refer to the post on Boaz Barak’s blog, it was not Gil Kalai, but me (a casual reader of math books). So I think this should be corrected.

  29. Chaoyang Lu Says:

    Isaac Says:
    Comment #8 December 3rd, 2020 at 5:33 pm
    Hey Scott, I was wondering whether you have any insight into how much more difficult this Boson Sampler was to build than the past 14 photon Sampler by the same group. Difficulty might be measured in cost, work-hours, energy consumption, etc. Thanks!

    I am in the right position to answer this one. It is all about experimental details.

    How much more? Very.

    A whole new laser system with sufficient power and transform-limited pulses (indistinguishability very sensitive to chirp; additional use of transformable mirror is crucial).

    PPKTP crystals with frequency uncorrelated and enough squeezing.

    pump power high enough to have high brightness, low enough to avoid self-phase modulation.

    pump waist large enough to have high collection efficiency, small enough to have enough counts.

    25 beams with 2 m free space and 20 m optical fiber phase locking.

    overcome both fast and slow fluctuations.

    3D interferometer with self-stabilization, many details studied from LISA and LIGO.

    3D interferometer also for reference beam for active locking.

    3D interferometer with 2 orders of magnitude higher precision than those in quantum dot single photons because of the shorter coherent time.

    100+ detectors, coincidence modules, data collection and analysis system.

    characterization and validation of a 10^30 dimensional system…

    social distancing during COVID…

    and more…

  30. Chaoyang Lu Says:

    We (USTC) actually don’t have to pay that money. Our collaborator in Wuxi pays for that. Actually, since it is now wintertime, they recycle the cooling water from the supercomputer to heat up the residents’ houses nearby. In the beginning, we only used 5% of their computational resources (free to use anytime). After “the” review report, I called Professor Lin Gan and asked for 90%! They agreed, delayed all the other tasks in the queue, and gave our task priority.

  31. Abdullah Khalid Says:

    Congrats Scott! From a computational complexity point of view, are there any deficiencies in Gaussian BosonSampling that make it a less preferable system for demonstrating quantum supremacy, compared to single-photon BosonSampling?

  32. Sergio Boixo Says:

    Nice experiment! But how sure are we that a similar sampling is classically hard? Figures 3G and 3H show that the experimental probabilities are higher than thermal states and random bitstrings. But we know the ideal two point correlations, which are not hard to compute (Figure 3F). Thermal states and uniform sampling don’t have the correct two point correlations. If we sample classically using the known expected value per mode and two point correlations (which is easy to try), do we get a lower probability than the experiment?

  33. Scott Says:

    Sergio Boixo #32: That’s an excellent question. Given that he’s reading this thread, I’m going to defer to Chaoyang as to whether he tried that and if so what was the result.
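    For concreteness, the kind of classical mock-up Sergio describes might look something like the crude sketch below: threshold correlated Gaussians to get binary click patterns whose per-mode means and pairwise correlations roughly track prescribed targets (a “dichotomized Gaussian”). This is only an illustration; calibrating the Gaussian covariance so the binary correlations come out exactly right takes more work, and whether such a sampler achieves a competitive HOG score is precisely the open question.

```python
import numpy as np
from statistics import NormalDist

def correlated_click_sampler(means, cov, num_samples, rng):
    """Crude 'dichotomized Gaussian' sampler: draw correlated Gaussians (cov should have
    unit diagonal) and threshold each coordinate so that mode i clicks with probability
    means[i]. Pairwise correlations only roughly track `cov`; no Gaussian-to-binary
    calibration is attempted here."""
    means = np.asarray(means, dtype=float)
    thresholds = np.array([NormalDist().inv_cdf(1.0 - p) for p in means])
    g = rng.multivariate_normal(np.zeros(len(means)), cov, size=num_samples)
    return (g > thresholds).astype(int)

# toy usage: 4 modes, mean click probability 0.3, mild positive correlations
rng = np.random.default_rng(0)
cov = 0.2 * np.ones((4, 4)) + 0.8 * np.eye(4)
samples = correlated_click_sampler([0.3] * 4, cov, num_samples=10_000, rng=rng)
print(samples.mean(axis=0))           # ~0.3 per mode
print(np.corrcoef(samples.T)[0, 1])   # positive, though not exactly 0.2
```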

  34. J Says:

    Scott #0

    Congrats on everyone having helped this achievement!

    Job #13

    As for alphafold, the one thing that could be described as hype is “problem solved”. It’s definitely an “Alexnet time” for this field, but Alexnet did not end the field of computer vision. In a way, it’s more of a beginning than of an end.

    > 1. Protein folding is actually classically easy.

    Actually it’s never been clear that it could be otherwise, at least if we restrict to naturally occurring proteins (natural selection found it!). The only reason to doubt this was the long, enduring blah in our capability to predict folding, even after throwing a lot of computations and decades of human work. Of course it’s easier to say that in retrospect. Some might even joke that we’re talking about O(1) here.

    > 2. DeepMind is producing heuristics that are sufficiently accurate for all instances we care about.

    That depends on what counts as an instance we care about. If some very obfuscated foldings can reach a state of interest, it might not exist in biology but still we would like to know about it. In the same vein, synthetic biology can include artificial sequences that might fold differently.

    >3. We’ll still need QCs to obtain accurate solutions for many of the instances that we do care about.

    We can expect it’ll soon be ruled out for every naturally occurring foldings, but it’d still be possible that QCs can help us find some artificial instances we’ll care about.

  35. William Gasarch Says:

    I have had the following conversation with my crypto class.

    BILL: I think that for the near term a bigger threat to RSA is some very clever and deep number theory being used to get a better (though not in P) classical factoring algorithm, which will force Alice and Bob to up their parameters, and not QC factoring.

    STUDENT: But didn’t Google show that Quantum computers can do things classical cannot, and hence they can factor?

    BILL: Google (and now others) have shown that Boson Sampling (which I won’t explain to you, though you can read Scott’s blog) IS a problem that can be done fast with a real QC that seems to NOT be able to be done fast classically. That’s really exciting for science, but it’s not making factoring any faster.

    STUDENT: But if they are building QC’s to do this, can they then be converted or used to factor?

    BILL: All this effort may lead to practical applications of QC later, but that seems a long ways off.

    BILL TO SCOTT: Is it a long ways off? I am thinking perhaps it’s sooner than I think, just as Boson Sampling happened sooner than I thought it would.

    PARADOX: `it will happen sooner than I think’ OH, then I do think it will happen sooner so it won’t… (NOTE TO SELF- do a blog on such statements)

  36. Jelmer Renema Says:

    @Sergio Boixo:

    I don’t know about the particular test in this paper, but there are other tests which would efficiently distinguish the output of such a classical sampler from that of a true boson sampler.

  37. Supremazia quantistica utilizzando la fotonica: cosa significa battere i computer tradizionali | NUTesla | The Informant Says:

    […] More information about the study carried out by the Chinese academics is available in this article. […]

  38. Virat Tara Says:

    Does this disprove the extended Church-Turing thesis?

  39. Scott Says:

    Virat Tara #38: Along with Google’s result from last year, I would say that it puts the ball firmly in the court of anyone who still believes the Extended Church-Turing Thesis, to explain these experiments away one way or another.

  40. Scott Says:

    Abdullah Khalid #31:

      From a computational complexity point of view, are there any deficiencies in Gaussian BosonSampling that make it a less preferable system for demonstrating quantum supremacy, compared to single-photon BosonSampling?

    That’s an excellent question. From my reading of the paper on GBS, it looks like you can formulate a “Hafnian-of-Gaussians Conjecture,” directly analogous to the Permanent-of-Gaussians Conjecture that Arkhipov and I formulated, and then show that GBS is hard to simulate classically, assuming
    (1) the Hafnian-of-Gaussians Conjecture,
    (2) anticoncentration of Hafnians (basically certain to be true), and
    (3) the non-collapse of the polynomial hierarchy.

    This wouldn’t mean that the hardness of approximate BosonSampling (the original kind) would logically imply the hardness of approximate GBS; it would just mean that the two situations were parallel.

    It would be nice to see this worked out in more detail, though, particularly since the GBS paper has mistakes in complexity theory (e.g., they keep saying that a problem is “#P” when they mean to say that the problem is classically hard).

    Informally, any computation of a permanent can be embedded into a computation of a Hafnian. Also, GBS generalizes Scattershot BosonSampling, for which we have a formal reduction showing that it’s hard to simulate classically assuming that ordinary BosonSampling is. These facts suggest that GBS should be at least as hard to simulate as ordinary BosonSampling, unless there’s something screwy about the distribution over instances.
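    For anyone who wants to check that embedding numerically, the standard identity is Haf([[0, A], [A^T, 0]]) = Per(A). Here’s a brute-force sanity check (my own toy implementations, exponential-time and only for tiny matrices):

```python
import numpy as np
from itertools import permutations

def permanent(a):
    """Brute-force permanent over all permutations."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def hafnian(b):
    """Sum over perfect matchings; b is symmetric with an even number of rows."""
    n = b.shape[0]
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):              # match vertex 0 with vertex j, recurse on the rest
        if b[0, j] == 0:
            continue
        rest = [k for k in range(1, n) if k != j]
        total += b[0, j] * hafnian(b[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4))
block = np.block([[np.zeros((4, 4)), a], [a.T, np.zeros((4, 4))]])
print(np.isclose(hafnian(block), permanent(a)))   # True: Haf([[0, A], [A^T, 0]]) = Per(A)
```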

  41. David Sholl Says:

    Congratulations to the USTC group on what sounds like a truly impressive achievement.

    In the early 1990s I taught English for a summer in Hefei and (long story) got to visit the Physics dept at USTC one day. At that time the entire department had something like one very out of date PC for computing and similar levels of equipment for experiments. The transformation of Chinese science since then has been stunning.

  42. So what does Friday bring us? - Marginal REVOLUTION Says:

    […] hum! Here is the FT article, via John-March Russell here is extensive Scott Aaronson commentary, with many useful links.  What is it — Sunday that is the day of […]

  43. Scott Says:

    anonymous #28:

      if you refer to the post on Boaz Barak’s blog, it was not Gil Kalai, but me (a casual reader of math books). So I think this should be corrected.

    No, it was Gil, on this blog.

  44. Scott Says:

    Vanessa #26:

      But, boson sampling is *not* a fully programmable device. In this very post you write “we don’t even think it can simulate universal classical computing”. So, in what sense is boson sampling quantum supremacy, while just any hard-to-simulate quantum physical phenomenon is *not* quantum supremacy?

    It’s a fair question. Here’s what I would say: on-the-fly programmability (along with computational universality, an even stronger property) were clear advantages of the Google experiment over the USTC one. And if someone later achieves BosonSampling-based quantum supremacy, but with a setup where the beamsplitters can be reconfigured on the fly — which is certainly technologically possible, just harder — that will represent a clear advance over what USTC did.

    All the same, the USTC experiment has the following property: while the beamsplitters are in one particular randomly-chosen configuration, there are exponentially many other configurations that could have been chosen just as easily. And, crucially, we know the exact probability distribution from which the configuration was drawn, and we have some reason to think that it gives rise to a hard set of instances. This seems to me like a fundamentally different situation from a single, fixed molecule (or even a collection of molecules) that we don’t know how to simulate classically. If you like, it’s some intermediate stage between programmability and non-programmability — what we might call “programmability at the firmware level.”

  45. Scott Says:

    Bogdan #24:

      Is it unavoidable in BosonSampling that verification scales as simulation? Is there any statistical test to verify the result, which no one knows how to fool classically, and which works in poly(n) time?

    That’s one of the biggest open problems in this whole area, one that I’ve called attention to in my papers and talks for a decade! Bremner and Shepherd had a proposal for how to do this in the Commuting Hamiltonians model in 2008, but alas, their proposal was shown to be spoofable a year ago by Greg Kahanamoku-Meyer. For BosonSampling, Hoi Nguyen and I proved a no-go theorem for one of the simplest ways you could imagine doing the verification (namely, concentrating a 1/poly(n) fraction of the amplitude on a single output), but many other possibilities remain.

    If you’re just collecting samples and then applying a test like HOG or Linear XEB, without changing the circuit to get more control over the outputs, then alas, the very same sorts of arguments that tell you that spoofing the experiments should be hard, also tell you that verifying them should be hard.

    Unless and until the verification problem gets solved, with sampling-based quantum supremacy experiments in the NISQ era, we’ll inherently be limited to making statements like: “yes, a classical computer did the same calculation in order to verify the result, but it needed orders of magnitude more time and other resources to do so. And we continued our experiment past the point where classical simulations with the largest supercomputers on the planet ran out of steam, and did a bunch of sanity checks to see that nothing was changing on the quantum side. And when classical computers become more powerful in the future, people can go back and verify our archived results, in order to confirm directly that we were well into the quantum supremacy regime, relative to the classical computers of the time.”
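    To spell out what “applying a test like HOG or Linear XEB” amounts to: the scoring itself is trivial; the expensive part is classically computing the ideal probabilities of the observed samples, which is exactly the ~2^n verification cost discussed above. A schematic sketch (the function names are mine; the published experiments use their own variants and statistics):

```python
import numpy as np

def linear_xeb(sample_probs, hilbert_dim):
    """Linear cross-entropy benchmark: D * (mean ideal probability of the samples) - 1.
    Roughly 1 for ideal noiseless sampling of a random circuit, 0 for a uniform sampler."""
    return hilbert_dim * float(np.mean(sample_probs)) - 1.0

def hog_score(sample_probs, median_prob):
    """Heavy Output Generation: fraction of samples whose ideal probability exceeds the
    median of the ideal distribution. A uniform sampler scores ~1/2; an ideal random-circuit
    sampler scores ~(1 + ln 2)/2 ~ 0.85. The target value differs for BosonSampling-type
    distributions."""
    return float(np.mean(np.asarray(sample_probs) > median_prob))
```

    Both scores degrade toward their uniform-sampler values (0 and 1/2 respectively) as noise increases, which is why passing them at a nontrivial level is taken as evidence, though not proof, that something classically hard is going on.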

  46. Davide Says:

    Scott #44

    First of all, congratulations to the team who achieved this stunning result, and to you and your coauthor of the seminal paper.

    I agree that this is an important step toward the universal BosonSampling-based quantum computer. Everything must work with fixed beamsplitters before it can work with reconfigurable ones. I disagree that this is “programmable” at the firmware level. Yes, they could change the beamsplitter configuration, but that is not akin to “firmware programming”: any firmware programming is Turing-complete (it may not be reconfigurable, or may be very slow to reconfigure, or have other limitations, but it could do anything computable).

    If my understanding of this result is correct, a better classical parallel would be like I could only build AND gates and nothing else (and manually put them in a configuration, not changing them programmatically). That is nothing useful, and until I can build a NOT, that can’t become a universal computer. Yet, if nobody else has done anything similar before, it would be a milepost toward the goal.

  47. Job Says:

    J #34,

    Actually it’s never been clear that it could be otherwise, at least if we restrict to naturally occurring proteins (natural selection found it!). The only reason to doubt this was the long, enduring blah in our capability to predict folding, even after throwing a lot of computations and decades of human work. Of course it’s easier to say that in retrospect. Some might even joke that we’re talking about O(1) here.

    Natural selection does take a brute-force approach though.
    Plus, the energy constraints in protein folding do make it look like a hard optimization problem.

    Even with AlphaFold I can believe that the generic protein-folding problem is not in P.

    I’m actually fairly confused right now about the true complexity of this problem, having seen some talks where motion-planning techniques (a PSPACE-complete problem) are discussed in the context of protein-folding.

    For example, how efficiently can we verify a solution to a protein-folding problem instance?
    Is it as difficult as solving the instance to begin with? Will AlphaFold act as both a solver and a verifier?

    What other problems in P does protein-folding resemble the most?

  48. Devin Smith Says:

    @Isaac #8: So, at a minimum there’s twice as much of everything, and quadratically more paths to align. Assuming that you can cheat a bit at the path part via being clever, something like 4x as hard.

    On the other hand, this experiment, by dint of being “Gaussian” is much more sensitive to phase noise, which is to say everything had to be actively and passively stabilised to a much greater precision than in the previous experiment. Something like 800x as precise, give or take an order of magnitude. I believe that the previous experiment had no active locks, while this one has 25. Setting up even one lock is a nontrivial process, though I assume (and don’t actually know anyone that’s done anything like this many) that it gets easier with iteration.

    They claimed that the previous experiment took “a month” to align all the input and output couplers, but it’s not clear to me how many people’s months that was/how hard they were working.

    In terms of running costs, this experiment was much faster, which is the zeroth-order answer to the question. Up to details, the cost of any multiphoton experiment in wall-power is fixed per unit time.

  49. Boffins from China push quantum computing envelope for 'supremacy' in emerging photon field - DUK News Says:

    […] quantum supremacy experiment last year and also surpasses it: As Aaronson stated Thursday in a blog post, this is the first time the advantage of quantum computing has been demonstrated using photonics […]

  50. Scott Says:

    Davide #46: At this point the analogy kind of breaks down, because while BosonSampling isn’t universal for quantum or even classical computing, it’s (we think) already beyond what a classical computer can easily simulate. It’s not like only having AND gates. Maybe it’s like only having a Digi-Comp II?

    Also, we definitely need some word that’s like “programmable,” but without the connotation of classical or quantum Turing-universality. Why not just say “universal” when we mean the latter?

  51. Boffins from China push quantum computing envelope for ‘supremacy’ in emerging photon field - World Tech Valley Says:

    […] quantum supremacy experiment last year and also surpasses it: As Aaronson stated Thursday in a blog post, this is the first time the advantage of quantum computing has been demonstrated using photonics […]

  52. Job Says:

    Also, we definitely need some word that’s like “programmable,” but without the connotation of classical or quantum Turing-universality. Why not just say “universal” when we mean the latter?

    I’m ok with “configurable” as a replacement for “programmable” assuming the configuration space is sufficiently large, and unconstrained, to be classically hard in terms of simulation.

    I agree that universality is less of a requirement for Boson Sampling.
    That’s actually one of its advantages, right: it’s expected to be hard independently of universality?
    Whereas sampling from random circuits using a non-universal gateset is not necessarily difficult for a classical machine?

    As long as nature isn’t assembling a problem from a solution, retaining the solution, then surfacing it later on…

    Because it would absolutely do that, and I don’t think that’s being too paranoid.

  53. Henning Says:

    Congrats!

    And thanks for the laugh! Huge Chavez is really impressively busy for a dead guy.

  54. Scott Says:

    Matthias #22:

      Although they don’t say how much time they estimate this calculation to take, this sounds substantially less than IBM’s proposed brute-force technique?

    You’re right, it does sound substantially less.

    Your comment caused me to reread an email exchange that I had in March of this year with Xavier Waintal, one of the authors of the work. In that exchange, I asked Xavier why he and his coauthors didn’t just say explicitly that they think they can refute Google’s quantum supremacy claim, since they seemed to keep hinting in that direction. In response, Xavier said it was because they hadn’t yet shown that their algorithm would actually scale to 98.6% fidelity or whatever—that there wasn’t some phase transition (for example) that would keep it from getting to that point—and they weren’t prepared to say anything about Google’s experiment until they’d shown that. As I said, that was in March, and I haven’t heard any more since. I’d be happy to hear some news!

    Of course, if Google’s original supremacy claim eventually succumbs to progress in classical simulation algorithms, while the BosonSampling supremacy claim (with, e.g., its billions of times larger state space) stands, that will be an interesting turn of events! The onus will then be on Google to reclaim its throne with a new experiment. 🙂

  55. Scott Says:

    Gil Kalai #25: Thanks for being a sport and commenting here as always! Let me address the history of BosonSampling, and then maybe circle back to some other points later.

    As far as I know, Troyansky and Tishby were indeed the first to make the connection, in print, between bosons and #P-hardness (though interestingly, Les Valiant told me that he and others discussed the connection even in the early 80s!). I myself first heard about the connection from Avi Wigderson, who in turn heard it from Troyansky-Tishby. I’ve always credited TT for this.

    Of course, the connection between bosons and the permanent goes all the way back to Caianiello in the 1950s, while Valiant proved in 1979 that the permanent is #P-hard, so in some sense it was a question of putting the two things together.

    From the very beginning, Arkhipov and I stressed the “sampling” part of BosonSampling (we put it right in the name! 🙂 ), not as a detail but as the essential new feature. It turned out to be the key to getting a classically hard problem. Even today, it’s something that people trying to find applications of BosonSampling keep going astray by ignoring.

    What Troyansky and Tishby showed, by contrast, was how to use expectation values of measurements on bosons to estimate the permanent of a given matrix. But they then pointed out, astutely, that getting small enough variance in the estimator to solve a #P-complete problem would require exponentially many repetitions of the experiment, so that there would be no advantage over classical brute force. By the end of their short, lovely paper, they seem to say that they’re at a dead end.

  56. Scott Says:

    Gil Kalai #25: Now, on the more immediate question of noise in the USTC experiment. My basic thinking was that, if the noise was large enough for your and Kindler’s simulation algorithm to be relevant, then it should also be large enough that their observed samples wouldn’t pass the HOG test. But they did the experiment, and their samples did pass the HOG test, as far as they were able to check (as I said, up to n=40).

    Am I wrong about your and Kindler’s result? If I am, then the field should be open for anyone to explicitly refute the new supremacy claim, by implementing your algorithm and demonstrating its use to efficiently spoof the HOG scores that they report in the paper. Are you conjecturing that that can/will be done?

  57. Lou Scheffer Says:

    Job #13 speculates that perhaps “DeepMind is producing heuristics that are sufficiently accurate for all instances we care about.” This is exceedingly unlikely. Biology uses only 22 of the many possible amino acids. It is possible to create proteins using any of these amino acids, not just the ones that biology uses. These proteins can already be created in the lab, and there is work in progress to create them biologically by repurposing some of the redundant codons in the genetic code. See “Expanded genetic code” in Wikipedia as a starting point for more information.

    Understanding how these proteins fold is certainly something we care about (in practice they may make better catalysts or reagents, in theory they might tell us about how life might develop elsewhere). But DeepMind seems unlikely to do well on this task, since there is almost no data on which it might train. In contrast a quantum computer might be able to tell us how a protein containing yet-unseen amino acids might fold.

  58. Ben Cordier Says:

    Absolutely love that last tidbit on your $400,000 referee report.

  59. Ben Cordier Says:

    Job #13 and Scott # 15.

    I doubt it’s classically easy and (putting on my computational biologist hat) I highly doubt we can really say it’s ‘solved.’

    Some considerations from my perspective: 1. The model will only be as good as the data it’s trained on. I suspect that the data used is insufficient to fully account for the space of protein structures in human biology, let alone the greater space of possibly useful protein structures (e.g. designed proteins that could be medically useful or enzymes, which serve as reaction catalysts). That said, it does seem their model is approximating the necessary first principles to some extent, so I could also see this not being as big of an issue as it would seem. 2. I don’t believe it’s sufficiently accurate for all the instances we care about, in fact, the desirable resolution I’ve heard is 0.3 Angstrom, and I believe their most accurate simulation had a resolution around 0.9 Angstrom (very close, but shy of what we need!)

    That said, their improvement in the CASP benchmark is astonishing and I do think that if greater resolution is possible then we’ll likely see it next year (and probably in subsequent years). Finally, while the computation required to train their model was on the smaller side as far as state-of-the-art ML training runs go, I imagine extending this to quaternary structures (e.g. a ligand bound to a protein receptor) will be very challenging. The 2020s are certainly going to be an exciting time in this space.

    Note: May be biased, I have an active interest in applying QC to this kind of simulation.

  60. ZephirAWT Says:

    Why do physicists believe that the computational power of classical computers (which is limited by physical laws, namely by the uncertainty principle) can be beaten by quantum computing, which is obviously based on the same principles? Quantum computing is fast but fuzzy, and achieving the same reliability and precision that classical computers provide would require repeating the operation many times, which would wipe out the advantage in speed.

  61. Scott Says:

    ZephirAWT #60: Have you tried actually studying this subject?

    Learn, for example, how Shor’s algorithm would exploit interference among amplitudes to get a reliable, precise answer (namely, the prime factors of a given integer N) in ~O(log^2 N) steps, whereas the fastest currently-known classical algorithms need exponentially more steps. Here’s my popular description, and here are my undergraduate lecture notes.

    Or learn about the very experiments that we’re discussing right now—experiments that are no longer hypothetical, but actually done!

  62. Job Says:

    Of course, if Google’s original supremacy claim eventually succumbs to progress in classical simulation algorithms, while the BosonSampling supremacy claim (with, e.g., its billions of times larger state space) stands, that will be an interesting turn of events! The onus will then be on Google to reclaim its throne with a new experiment.

    That’s not how I would proceed.

    If BosonSampling is proven to be more successful at separating quantum and classical devices, then just let it own that space, it has a better shot at keeping classical machines at bay in the short term.

    Isn’t that the BosonSampling tradeoff: do less, prove more?

    If anything, it frees up the QC teams to focus on delivering error-corrected machines with practical applications.

    Especially with AI making strides into quantum territory.

  63. Vanessa Says:

    Scott #44:

    Suppose we could build a machine that can synthesize any given copolymer composed of some fixed set of units (e.g. a sequence of CH2 and CF2 with an additional H on both ends) and then measure its first excitation energy (or some other physical property). Would that count as quantum supremacy? Admittedly, I don’t know how to prove it can’t be efficiently simulated classically, but I don’t see any reason to expect such a simulation to exist.

  64. Vanessa Says:

    Job #47:

    IIRC, a random sequence of amino acids usually doesn’t “fold” i.e. instead of reaching a specific configuration with very high probability, it can end up in any of a large number of local energy minima. The sequences that actually occur in nature were selected to fold. Therefore, it seems not implausible that finding the global energy minimum of an arbitrary sequence is computationally infeasible, whereas finding the global (or maybe not global but just “naturally occurring”) energy minimum of a natural protein *is* feasible.

  65. Adam T Says:

    I’m not exactly clear on the accuracy/precision of the GBS test, but I can speak to the performance. I’ve spent probably half a year working on the hafnian optimization and can state that the hafnian of an [n=40] (80×80) matrix can be computed in about 4 hours on a standard desktop (i5-8600k for me).

    The library implementation is on [Github](https://github.com/XanaduAI/thewalrus).
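    For anyone who wants to try that library, a minimal usage sketch (timings obviously depend on the machine, and an 80×80 matrix like the one above takes hours rather than seconds):

```python
import numpy as np
from thewalrus import hafnian   # pip install thewalrus

# random symmetric complex matrix; a 20 x 20 case runs essentially instantly,
# whereas the 80 x 80 case described above is what takes hours
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 20)) + 1j * rng.standard_normal((20, 20))
a = a + a.T                      # the hafnian is defined for symmetric matrices
print(hafnian(a))
```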

  66. La supremacía cuántica se vuelve a demostrar, este vez con fotones y el algoritmo de BosonSampling - La Ciencia de la Mula Francis Says:

    […] The new demonstration of quantum supremacy refutes all the critics of the first one (the quantum computing skeptic who has made the most noise is Gil Kalai; LCMF, 29 Nov 2016). Of course, until quantum supremacy is demonstrated on a general-purpose quantum computer with error correction, solving a problem of practical interest, all these milestones are only of academic (and historical) interest. The new paper is Han-Sen Zhong, …, Chao-Yang Lu, Jian-Wei Pan, «Quantum computational advantage using photons,» Science (03 Dec 2020), doi: https://doi.org/10.1126/science.abe8770, arXiv:2012.01625 [quant-ph]; it has been published in advance on the Science website, so it does not yet have volume or page numbers; when it is published in final form it will be accompanied by a popular-science piece (a Science Perspective) written by Scott Aaronson (who was one of the reviewers). At the popular level I recommend reading Emily Conover, «The new light-based quantum computer Jiuzhang has achieved quantum supremacy,» ScienceNews, 03 Dec 2020; Daniel Garisto, «Light-based Quantum Computer Exceeds Fastest Classical Supercomputers,» Scientific American, 03 Dec 2020; Scott Aaronson, «Quantum supremacy, now with BosonSampling,» Shtetl-Optimized, 03 Dec 2020. […]

  67. J Says:

    Job #47

    I doubt brute-force is the best description for natural selection, but let’s say it’s good enough. My point is, if you try brute-force on some hard problem, then most solutions you can realistically find will actually belong to an easier subset of instances.

    Compare this to TSP: we know the generalized version is NP-complete, and we know many heuristics work well IRL. The latter doesn't contradict the former; it's just that in practice we can frequently afford an approximate solution, and that is actually in P.

    As you said folding can be seen as PSPACE-complete, and many speculated this was the reason the problem seemed so hard. But now we know it's kind of easy, at least for many known proteins. The simplest explanation is that complexity is about asymptotic behaviour, whereas realistic proteins are not of arbitrary length. Hence the O(1) joke, which might be slightly more than a joke.
    (not pretending to be an expert, just an interested third party)

  68. Gil Kalai Says:

    Scott, #56, that's right. I regard it as plausible that the low-degree Hermite-Fourier truncation of the Boson Sampling distribution, as described in my paper with Guy Kindler, will "spoof" the statistical tests of the USTC experiment. The easiest way to implement it is as follows: given an n by m matrix, you draw at random an n by n minor M (with repeated columns, weighted appropriately for the repetitions), then compute the degree-k approximation X to |permanent(M)|^2 (based on formula (8) from our paper), and then keep the sample with probability according to the value of X, tossing it away otherwise. This may work even for the degree-2 truncation, which might be related to Sergio's comment #32. Rather than the straight truncation, we can also try the "Beckner-noise" version.
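    In Python-flavored pseudocode, the rejection sampler described above might be organized as follows. Note that `truncated_perm_sq` (formula (8) of the Kalai-Kindler paper) and the proposal weights over repeated columns are left as placeholders, so this is a structural sketch rather than a working spoofer:

    ```python
    # Structural sketch of the rejection sampler described in comment #68.
    # `truncated_perm_sq` is a placeholder for the degree-k Hermite-Fourier
    # approximation to |perm(M)|^2 (formula (8) of Kalai-Kindler), and the
    # proposal step is simplified to uniform column choices rather than the
    # properly weighted distribution over repeated columns.
    import numpy as np

    def truncated_perm_sq(M, k):
        raise NotImplementedError("degree-k truncation of |perm(M)|^2 goes here")

    def spoof_sample(A, n, k, bound, rng=np.random.default_rng()):
        """Sample an n-column multiset of the n x m matrix A, accepting each
        proposal with probability proportional to the truncated |perm|^2."""
        m = A.shape[1]
        while True:
            cols = rng.integers(0, m, size=n)   # propose columns with repetition
            M = A[:, cols]                      # n x n minor with repeated columns
            X = truncated_perm_sq(M, k)
            if rng.random() < max(X, 0.0) / bound:   # `bound` normalizes X to [0,1]
                return tuple(sorted(cols))
    ```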

  69. Jelmer Renema Says:

    @Kalai 68: you can compute the expected value of the L1-distance between your truncated distribution and that of a Gaussian boson sampler with realistic forms of noise (shameless plug: paper here), averaged over Haar-random unitaries (which is equivalent to the expected value for any single unitary, if the unitary is large enough with respect to the number of photons).

    This is all very back of the envelope, but if I plug in the numbers reported in Chaoyang’s paper and compute the L1 distance for a second-order truncation, the L1 distance is larger than one. Now I don’t know of any formal connection between the L1 distance and spoofing HOG, but I’d be very surprised if a distribution that is almost completely different than the one we’re interested in manages that…

  70. Scott P. Says:

    PARADOX: 'it will happen sooner than I think' OH, then I do think it will happen sooner so it won't… (NOTE TO SELF – do a blog on such statements)

    Is this the reverse of Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law”?

  71. Photonic Huge Quantum Advantage ??? | Combinatorics and more Says:

    […] supremacy”) using photonic implementation of BosonSampling.  (I heard about it from a SO post that contains further links.) The claimed advantage is huge and clearly I will have to look […]

  72. Gerard Says:

    J #67

    > As you said folding can be seen as PSPACE-complete

    I thought folding just amounted to calculating the minimum energy configuration of a protein. That would make it a nonlinear optimization problem that is NP-easy (assuming it suffices to solve the problem to some finite energy resolution), so I don’t see how it can be PSPACE-complete given that it’s not known that NP = PSPACE.

  73. Gerard Says:

    Scott #55

    > Valiant proved in 1979 that the permanent is #P-hard,

    Don't we actually know that computing the permanent is #P-equivalent?

    From your paper with Arkhipov (https://www.scottaaronson.com/papers/optics.pdf) it looks like computing the output of a bosonic computer is equivalent to computing the permanent. You also say that simulating a bosonic computer is in BQP, and we know that \(BQP \subseteq P^{\#P}\). So doesn't that mean that we can P-time reduce the problem of calculating the permanent to that of solving a #P-complete problem?

  74. Scott Says:

    Gerard #73: Calculating the permanent of a 0/1 matrix is #P-complete. Calculating the permanent of a real or complex matrix is technically not even in #P (because it’s not a nonnegative integer function), which is why I wrote it was #P-hard. Admittedly this is a hairsplitting point.

    Also, sampling an output of a BosonSampling experiment is not a #P-hard problem, even though a lot of people mistakenly say it is. It’s calculating the probability of a given output that’s #P-hard.
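    To make concrete how expensive "calculating the probability of a given output" is in practice: the fastest general-purpose classical algorithms for the permanent, such as Ryser's formula, still take time exponential in n. A quick illustrative implementation (a sketch, not code from the paper under discussion):

    ```python
    # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
    # of (-1)^|S| * prod_i (sum_{j in S} A[i, j]).  Runs in time roughly
    # O(2^n * n^2) here -- exponential in n, which is why even one
    # BosonSampling output probability is costly to compute exactly.
    from itertools import combinations
    import numpy as np

    def permanent_ryser(A):
        n = A.shape[0]
        total = 0.0
        for r in range(1, n + 1):
            for S in combinations(range(n), r):
                row_sums = A[:, list(S)].sum(axis=1)
                total += (-1) ** r * np.prod(row_sums)
        return (-1) ** n * total

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(permanent_ryser(A))   # 1*4 + 2*3 = 10
    ```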

  75. Gerard Says:

    Scott #74

    > Also, sampling an output of a BosonSampling experiment is not a #P-hard problem, even though a lot of people mistakenly say it is. It’s calculating the probability of a given output that’s #P-hard.

    OK, but can't that still be done in BQP?

    Your paper says "It is not hard to show that a boson computer can be simulated by a "standard" quantum computer (that is, in BQP)."

    Doesn't simulating a boson computer allow you to determine the probabilities of the outputs?

    Of course, if that were true it would mean \(BQP = P^{\#P}\), so I'm obviously missing something.

  76. J Says:

    Gerard #72

    >I thought folding just amounted to calculating the minimum energy configuration of a protein.

    This configuration must also be reachable, which means we need existential quantifiers, which means the generalized version should belong to QBF rather than 3SAT. But I’ve never seen that explicitly explained, so that’s just my take.

  77. Scott Says:

    Gerard #75: In some sense, prying apart the things that you’re conflating here was the entire point of what Arkhipov and I did! No, a BosonSampling experiment does not let you calculate the probability of a given output (which as I said, is a #P-hard problem). What it lets you do, instead, is sample a random output with the appropriate probability. That’s different; at best it gives you a tiny clue that the probability of whichever output you happened to observe might be on the large side!

  78. Gerard Says:

    J #76

    > This configuration must also be reachable

    What would it mean for it to not be reachable?

    There could be energy barriers, but wouldn't they just affect the reaction rate (i.e., the folding time) at finite temperature? Or do you mean the chain might have to do something effectively impossible, like crossing through itself, to reach the minimum?

  79. J Says:

    Gerard #78

    The latter, like a series of knots that would minimize some metric, but it's not clear whether there is a succession of moves to make them all.

  80. BosonicObserver Says:

    It seems the USTC team got a comment from Google about classical simulation, in the same way that Google got one from IBM. Martinis's response was twofold:

    1) Don't just theorize; show that you can do it.
    2) There was a safety sentence in the paper.

    Now the same applies here:
    1) Why doesn't Google just do that classical simulation, instead of making some vague public claims?
    2) The last sentence of the USTC paper: "We hope this work will inspire new theoretical efforts to quantitatively characterize large-scale GBS, improve the classical simulation strategies optimized for the realistic parameters (33, 34), and challenge the observed quantum computational advantage of ~10^14."

  81. Craig Says:

    This is impressive, but not surprising, given last year's 53-qubit machine result. I will be more impressed if a quantum computer can do the "quantum addition" described in a 2000 paper by Thomas Draper, since it is possible to check the answer easily. Even though addition is not impressive in and of itself, doing quantum addition on, say, two 100-bit numbers and getting the right answer would convince me that QC is possible at large scale.
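    For context on the reference: Draper's adder maps the register into the Fourier basis, where adding a constant becomes a pattern of single-qubit phase rotations. Here is a toy classical simulation of that principle (a sketch of the idea only, not Draper's circuit decomposition, and not code from any experiment discussed here):

    ```python
    # Toy classical simulation of Fourier-basis ("Draper-style") addition:
    # QFT the register, apply the diagonal phase that corresponds to adding
    # the constant b, then inverse-QFT and read out (a + b) mod 2^n.
    import numpy as np

    n = 8                      # toy register size; Craig's test would use ~100 bits
    N = 2 ** n
    a, b = 173, 95

    state = np.zeros(N, dtype=complex)
    state[a] = 1.0             # computational-basis state |a>

    k = np.arange(N)
    qft = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
    state = qft @ state        # QFT|a>

    # In the Fourier basis, adding b is the diagonal phase exp(2*pi*i*b*k/N),
    # implemented on hardware as one phase rotation per qubit.
    state *= np.exp(2j * np.pi * b * k / N)

    state = qft.conj().T @ state          # inverse QFT
    print(int(np.argmax(np.abs(state))), (a + b) % N)   # both print 12
    ```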

  82. Gerard Says:

    J #79

    Given a system trajectory from some initial state to some final state it should be possible to efficiently determine whether that trajectory is feasible. So I think even in that case the problem should be NP-easy. You just need to add a trajectory to the final state to get the certificate for the corresponding NP decision problem.

  83. Karen Morenz Says:

    I appreciate how Gil Kalai is set up as some kind of arch nemesis of quantum computing in this post and I hope he does too.

  84. J Says:

    Gerard #82

    >You just need to add a trajectory to the final state to get the certificate for the corresponding NP decision problem.

    Again, not an expert, but it seems to me that you would also need to prove that there are no one-way traps too close to this trajectory, which means checking the certificate would take more than polynomial time.

  85. Rahul Says:

    Scott

    Isn't it a bit in bad taste how you make this about proving Gil Kalai wrong, and so sarcastic?

    Is this about the science or a personal battle?

  86. Scott Aaronson Comments on the Boson Sampling Quantum Supremacy Paper from China | NextBigFuture.com Says:

    […] Scott Aaronson originated the proposal of Boson Sampling as an area where quantum computers should b… […]

  87. Scott Says:

    Rahul #85: While Gil has repeatedly said he’s flattered for his critique to be the center of attention in this way, regarding the sarcasm, you’re right and I was wrong. I’ve removed it.

  88. b_jonas Says:

    Rahul #85: it’s about science. Even though my guess is that quantum computation advantage is possible in our world, I also guess that this attempt and the next two attempts that Scott will feature on his blog will not demonstrate it convincingly. So we need someone who reads these papers and does all the hard work to find the flaws. Gil Kalai is that researcher, and I’m grateful for his service.

  89. Gerard Says:

    Scott #74

    > Calculating the permanent of a 0/1 matrix is #P-complete. Calculating the permanent of a real or complex matrix is technically not even in #P (because it’s not a nonnegative integer function), which is why I wrote it was #P-hard.

    So is the complex/real permanent calculation reducible to #P?

    On the one hand it doesn't seem obvious that being able to count solution sets automatically allows you to compute more general sums over those sets. On the other hand it seems that something like that must occur to bring BQP into \(P^{\#P}\).

  90. Scott Says:

    b_jonas #88:

      Even though my guess is that quantum computation advantage is possible in our world, I also guess that this attempt and the next two attempts that Scott will feature on his blog will not demonstrate it convincingly.

    The word “convincingly” is doing a lot of work there! After a year of attempts to knock it down, most of the community regards Google’s speedup claim as battered but still standing (just like democracy in the US 🙂 ). So the question is whether the BosonSampling experiment will add to the case.

  91. Jelmer Renema Says:

    I went ahead and coded up the ‘sample according to two-photon marginals’ sampler as suggested by Sergio Boixo and Gil Kalai earlier in this thread. I generated 100 sets of 100 samples, each on a different random unitary, for photon numbers n = 2, n = 3, n = 4, n = 6, n = 8. Then, I computed the ‘HOG ratio’ given on page 26 of the supplemental material of the paper under discussion.

    For n = 2, the two distributions are identical, so the ratio randomly prefers either (depending on which set of samples happens to have a higher probability). For n = 3, it prefers the correct distribution in 98% of the cases, and for all the other n it always picks the right distribution.

    Since the intuition about the two-photon marginals sampler is that you throw away higher order interference, I fully expect that as you increase the photon number (and hence the number of higher order interference terms) the test only gets more and more selective. This means that I expect these results to hold in the regime reported on by the paper as well.

    I did this for Fock-state boson sampling because that's what I have a library for, but I don't expect anything to change if I were to go to GBS.

    Thanks to Pim Venderbosch who worked with me on this topic earlier this year.
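    For readers curious what such a numerical check involves, here is a bare skeleton of the comparison (not Jelmer's actual code): the two samplers and the ideal-probability routine are hypothetical placeholders for a boson-sampling library, and the log-probability score is a simplified stand-in for the HOG ratio defined in the paper's supplemental material, not its exact formula.

    ```python
    # Skeleton of the ideal-vs-mockup comparison described above.
    # `sample_ideal`, `sample_marginal_mockup`, and `ideal_prob` are
    # hypothetical placeholders; the score is a simplified stand-in for the
    # HOG ratio in the paper's supplement.
    import numpy as np

    def prefers_ideal(U, n_photons, n_samples,
                      sample_ideal, sample_marginal_mockup, ideal_prob):
        """True if the score picks the ideal sampler's output over the mockup's."""
        ideal_set = sample_ideal(U, n_photons, n_samples)
        mockup_set = sample_marginal_mockup(U, n_photons, n_samples)
        score = lambda samples: sum(np.log(ideal_prob(U, s)) for s in samples)
        return score(ideal_set) > score(mockup_set)

    def success_rate(haar_unitary, n_photons, n_unitaries, n_samples, **samplers):
        """Fraction of random unitaries on which the test prefers the ideal sampler."""
        wins = sum(prefers_ideal(haar_unitary(), n_photons, n_samples, **samplers)
                   for _ in range(n_unitaries))
        return wins / n_unitaries
    ```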

  92. Greg Kuperberg Says:

    Gerard #89 – Although the permanent of a real or complex matrix M is not technically in #P because of a data type mismatch, there are ways to encode the answer as a non-negative integer so that Enc(Per M) is in #P, and the encoding can easily be translated in FP into any other useful form. Something needs to be said about the data type of the matrix itself. For instance, the entries can be floating point with some scalable number of bits of precision, and the answer is the same sort of thing, with an understanding that the answer may have less precision. Or better yet, you can use interval arithmetic to define a rigorously certified answer. Anyway, with encoding and decoding of other data types, #P can handle various constructions of this sort.

    Bringing BQP into “#P” is a case in point. Taking BQP at face value as a decision class, it is in PP. You can think of PP as a simplification of #P using a threshold. Given a threshold function t(x) computable in FP, PP is then yes if and only if the #P answer exceeds t(x). Without loss of generality, t(x) can be very simple, just 2^poly(len(x)). Here as well, BQP is not technically in #P, but it’s so close that the data type mismatch is mainly important as a point of rigor.
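    Spelled out in symbols, the threshold characterization described above (a restatement using the standard definitions, not a quotation from any paper) is

    \[ L \in \mathrm{PP} \iff \exists\, f \in \#\mathrm{P},\; t \in \mathrm{FP} \text{ such that } \big( x \in L \Leftrightarrow f(x) > t(x) \big), \]

    and without loss of generality \(t(x)\) can be taken to be of the form \(2^{\mathrm{poly}(|x|)}\).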

    Let’s say mainly, but not strictly always. If you use #P in a flyspeck way, then these distinctions can be important after all. For instance, you can define a subclass of #P, which I will call 2#P, in which the number of accepted witnesses has to be even. Given a problem in 2#P, you can ask whether it is in #P if you divide the answer by 2. My guess is that sometimes it isn’t.

  93. Scott Says:

    Gerard #89: Yes, as Greg Kuperberg said, all the problems we're talking about here are in \(P^{\#P}\), indeed typically with just a single query to the #P oracle.

  94. Gerard Says:

    Greg Kuperberg #92 and Scott #93

    So if Perm(M) and #P are mutually reducible via P-time reductions, then what's the point of introducing separate complexity classes VP and VNP for the determinant (in FP, of course) and permanent computations?

    (I first heard of VP and VNP in a comment on one of Scott’s last posts.)

  95. Greg Kuperberg Says:

    Yes, saying that the results are in \(P^{\#P[1]}\) (I hope that the superscript is typeset properly here), meaning a single oracle call to #P with post-processing allowed, is an excellent, succinct way to state my main point. The flyspeck notation for the "P" here is FP, i.e., functional P, but whatever, that letter is becoming optional.

    I particularly have in mind that \(P^{\#P}\) with polynomially many oracle calls could be a significantly larger complexity class, as far as I know. At least, to me it doesn't feel like just plain #P anymore.

  96. Scott Says:

    Gerard #94: I hardly ever use Valiant’s VP and VNP definitions—but in any case, the main difference is that they’re classes of polynomials over a field (which could be infinite, like R or C), whereas P, #P, etc. are classes of Boolean problems.

  97. Greg Kuperberg Says:

    Gerard #94 – Sticking to VP, which I understand a bit better, it is an analogue of P/poly, i.e. a potentially non-uniform list of polynomial-sized circuits, defined over a field F or (to generalize it further), over any ring R. In this model, the circuit gates are addition and multiplication rather than Boolean operations, and I would include a 0-input, 1-output gate that produces particular constant values. If F is a fixed finite field, then VP_F is fully equivalent to P/poly. It’s not the same thing for an infinite field because the gates are restricted to just ring operations. For instance, working over the integers ℤ, there is no way to atomize an integer input into a bit string. For one thing, the integer input is allowed to be far larger than the entire circuit. For another, even if you bounded the integer inputs, you still wouldn’t be able to atomize them to separate bits.

    My snap impression is that Valiant wanted to find an algebraic counterpart or algebraic essence to P vs NP and P vs #P. His classes are deliberately similar and deliberately not identical to the standard boolean versions.

  98. Scott Aaronson Comments on the Boson Sampling Quantum Supremacy Paper from China – First Amendment Social Media Says:

    […] Scott Aaronson originated the proposal of Boson Sampling as an area where quantum computers should b… He discusses the new claim from China of Quantum Supremacy where quantum computing systems are 10 trillion times faster than regular supercomputers on a particular problem. […]

  99. In world first, a Chinese quantum supercomputer took 200 seconds to complete a calculation that a regular supercomputer would take 2.5 billion years to complete. : worldnews Says:

    […] https://scottaaronson-production.mystagingwebsite.com/?p=5122 […]

  100. Comment benchmarker les calculateurs quantiques ? Says:

    […] It is described in the scientific publication Quantum computational advantage using photons by Han-Sen Zhong et al., 3 December 2020 (9 pages) and in the extensive associated supplemental materials (64 pages). The content of the experiment is explained quite accessibly in Light-based Quantum Computer Exceeds Fastest Classical Supercomputers by Daniel Garisto in Scientific American, December 2020, and commented on in the blog of the ineffable Scott Aaronson. […]

  101. China logra la supremacía cuántica superando a Google | Tecno Geek Says:

    […] in general terms, it is the use of a programmable or configurable quantum computer to solve some problem […]

  102. Ian M Finn Says:

    I’ve tried to parse Jelmer Renema #91, but I can’t quite tell which way the comment leans. Is it evidence for being able to spoof the HOG or not? It’s killing me waiting for someone to respond! 🙂

  103. Jelmer Renema Says:

    @ Ian M Finn: it’s evidence that being able to spoof this test against a particular mockup distribution is not a proof of hardness (not that this was ever claimed).

  104. Scott Says:

    Jelmer #103: Sorry, still too many embedded negations! 🙂 Wouldn’t being able to spoof the test be, if anything, a proof of non-hardness?

    I had thought you were showing that the HOG test used by the USTC team does discriminate the real GBS from these particular mockups (if not from any mockups), which would be good news for the quantum-supremacist side!

  105. Jelmer Renema Says:

    @ Scott 104:

    Ok, I’ll be more clear: yes, the HOG-test as proposed in the paper DOES distinguish the mockup distribution based on two-particle marginals from the real deal, which shows that you cannot use that distribution to generate samples which look like the ideal distribution.

    What I was trying to get at with all the negations was that even if it were not the case that this test distinguished the two, there is no complexity argument to say that there might not be another test that can distinguish them.

    BTW, can we agree on a different name than HOG? It really is a different beast than the original HOG test.

  106. Scott Says:

    Jelmer #105:

      Ok, I’ll be more clear: yes, the HOG-test as proposed in the paper DOES distinguish the mockup distribution based on two-particle marginals from the real deal, which shows that you cannot use that distribution to generate samples which look like the ideal distribution.

    Thanks! If I’m not mistaken, that refutes a guess that Gil made on his blog, right? Do you expect it to be the same for k-mode marginals for all k=O(1)?

      What I was trying to get at with all the negations was that even if it were not the case that this test distinguished the two, there is no complexity argument to say that there might not be another test that can distinguish them.

    Agreed!

      BTW, can we agree on a different name than HOG? It really is a different beast than the original HOG test.

    In that case, shouldn’t it be the privilege of the USTC group to name their new test?

  107. Jelmer Renema Says:

    @ Scott: Yes, I think this does refute Gil's suggestion, at least when comparing his proposed sampler to samples from the ideal case. The only caveat is that I did the simulations for Fock-BS, not GBS. However, I'm quite sure that doesn't matter, since what we're using here is the linearity of linear quantum optics, which holds no matter what input state you use.

  108. Google and China duke it out over ‘quantum supremacy’ – USA update24 Says:

    […] or else the claim will be stronger for it,” Scott Aaronson, a theoretical computer scientist who runs a popular, if niche, blog about the industry, says of Google’s […]

  109. fred Says:

    It's a shame that all "quantum supremacy" can do at the moment is recreate a sort of big quantum N-faced magic die, whose probabilistic behavior gets harder and harder to compute classically, but also harder and harder to sample and verify for absolute correctness – i.e., are we really seeing the result of a fully entangled N-qubit system, or is it all getting lost in implementation noise?

    And even if Shor's algorithm were eventually implemented, there would still be the lingering doubt that prime factorization might be efficient classically (but by then we'll hopefully have a definitive theoretical answer).

  110. fred Says:

    * I guess it would have been better to say a 2^N-faced die.

  111. fred Says:

    Maybe it would be more productive to flip things around:
    Forget this obsession with showing quantum supremacy using methods that aren't computations and are impossible to validate fully.
    Instead, try to reproduce correctly on a quantum system the results of very simple classical computations, in a way that proves, with a single correct answer, that entanglement is well maintained and noise is eliminated, and slowly build up from there.
    You gotta walk before you can run.

  112. Scott Says:

    fred #111: There are many experiments along the lines of what you say (e.g., do two classical computations in superposition and then measure the interference between them, just to demonstrate the calibration and coherence of your system even though you already know the answer). Rest assured that people are doing those too.

    But most people would say that the experimental quest to demonstrate quantum supremacy is not obviously failing—quite the contrary, actually!

  113. Chinese Research Team Claims to have Demonstrated “Quantum Supremacy” - The Debrief Says:

    […] Aaronson, who co-created BosonSampling in 2013, offers a simple description of it as “a proposal for achieving quantum supremacy by simply passing identical, non-interacting […]

  114. Random Layman Says:

    Sorry if I don't add much to the amazing conversation that is happening here, and I will understand if this stays in moderation; I just hope you get to read it.

    Just wanted to thank you, Scott, and all of you for being part of this and sharing your knowledge. I had been looking for actual information on the matter (tired of vague tabloid news), and finding it here, from the people directly involved (whom I've been watching on YouTube for years), has been amazing.

    Just seeing the mature interchange of posts among the people directly involved in theorizing, experimenting, upgrading, and even trying to refute this (Scott, 陆朝阳, Gil Kalai…) made my long and confined lonely night in a remote town in Mexico great!!!! Best!!!

  115. Eviatar Yadai Says:

    Hi Scott,

    May I ask a basic question regarding the latest supremacy claims (Google and USTC)?
    I just did not find answers to it in your posts or in the Google paper. Surely I'm missing something.

    Why can't we verify the algorithms on classical computers for mid-sized problems (say 30 bits for Google)? Do classical and quantum performances stay comparable until some very sharp moment at which the classical cost becomes unfeasible?

    I would expect there to be a regime where the quantum computer is, say, 100 times faster, but where we can still verify the result on classical computers.

    Sorry if you have already addressed this – any link would be very helpful.

  116. Scott Says:

    Random Layman #114: Thanks so much!!

  117. Scott Says:

    Eviatar Yadai #115: What you suggest is precisely what people are doing—namely, comparing the actual quantum results to classical predictions in the regime where classical simulation is still more-or-less feasible.

    No, there's not a sharp threshold after which classical simulation becomes infeasible. Instead, if all goes well, then the classical hardness should simply increase roughly like \(2^n\), where n is the number of photons. If so, with enough effort you can classically verify up to about n=40, but you also see that the optical experiment happened much faster, so you're happy. The fear is that, after you incorporate all the noise and other imperfections in the experiment, a faster classical simulation algorithm (say, one with polynomial scaling in n) might exist.

  118. La supremazia quantistica dimostrata con i fotoni – Chiara Sabelli Says:

    […] improbable. Scott Aaronson, professor of computer science at the University of Texas at Austin, points out on his blog that while the experiment on the Sycamore chip was comparing itself against a system that […]

  119. Shtetl-Optimized » Blog Archive » Chinese BosonSampling: The gloves are off Says:

    […] weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have […]

  120. Eviatar Yadai Says:

    Scott #117:
    Thank you for your response.
    Embarrassing, but I basically asked why X doesn't happen when it seems that it does, and I will certainly need to understand the "skeptic" (mainly Gil Kalai) arguments better by addressing them.

    Anyway, I would be happy if you could clarify the opposite standpoint.
    First, I understand the reason that, in addition to classical verification, we need to be "going deep into the regime where you can no longer classically verify the outputs," as you phrased it; but since verification of this process is not proved (I refer to formula 77), I think the evidence from the classical regime is very valuable.

    One of Gil Kalai’s arguments was that:
    “A main weakness (which is crucial in my mind) of the experiment is that the experimental support from the regime where the experiments can be tested by a classical computer is too sparse. Much more could have been done and should have been done.”
    and you also argue that “By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds”

    You claim this was done (although I must say it seems from the full Google paper that verification went only up to 1 hour), so I need to learn the skeptical arguments better in order to understand.

    I did want to ask whether the following test would stand and whether it would be valuable (see the sketch at the end of this comment):
    – Victor publishes a quantum circuit C (with ~40 bits)
    – Peggy (Google, with a quantum device) computes the linear cross-entropy test under a 1-minute time constraint
    – Victor computes the same test on a classical computer over the next month. If it is compatible with Peggy's result, he declares quantum supremacy

    I do not see any other way to verify quantum supremacy in the classical regime (I know that Gil Kalai also has suggestions for an indirect proof).
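    For concreteness, here is a minimal sketch of the linear cross-entropy score that Peggy and Victor would be comparing, using its published definition \(F_{\mathrm{XEB}} = 2^n \langle p_{\mathrm{ideal}}(x_i)\rangle_i - 1\). The `ideal_prob` routine, a full classical simulation of the circuit C, is the month-long part of Victor's job and is left as a placeholder here:

    ```python
    # Linear cross-entropy benchmark (XEB):
    #   F_XEB = 2^n * mean_i[ p_ideal(x_i) ] - 1,
    # where x_1..x_k are the bitstrings returned by Peggy's device and
    # p_ideal is the ideal output distribution of circuit C.  Evaluating
    # p_ideal classically is the expensive part of Victor's role.
    import numpy as np

    def linear_xeb(samples, n_qubits, ideal_prob):
        """samples: measured bitstrings as integers in [0, 2^n);
        ideal_prob: callable giving the ideal probability of a bitstring."""
        probs = np.array([ideal_prob(x) for x in samples])
        return (2 ** n_qubits) * probs.mean() - 1.0

    # A device sampling from the ideal (Porter-Thomas-like) distribution gives
    # F_XEB close to 1; uniformly random guessing gives F_XEB close to 0.
    ```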

  121. Eviatar Yadai Says:

    Just to clarify #120:
    For simplicity, I ignore the IBM result and assume Peggy and Victor do not use a supercomputer.
    Another simplification is to ignore future improvements in classical computing. When such an improvement happens, I think the test needs to be repeated with more bits.

    I guess my approach is somewhat "engineering" (well, it's my line of work), but right now I really don't see any option for a scientific consensus on the strong "asymptotic" argument, only at best on the weak "practical" one – that a quantum device can compute certain tasks orders of magnitude faster than a classical device.

    Personally, it seems to me a huge milestone, and I hope there will be consensus on this argument, although I truly believe Google actually achieved it.

  122. La Cina raggiunge il vantaggio quantistico con i fotoni - Guida Studenti Padova Says:

    […] a quantum computer that is itself a measurable distribution of photons. According to Scott Aaronson, if Sycamore could reach a number of states equal to 10^15 (the equivalent of 2^53), […]

  123. Nicolas Quesada Says:

    Wondering if Adam T (#65 above) could comment on which numerical data types he is using for his hafnian benchmark?

  124. Jackson Wilde Says:

    Have you verified that the classical algorithm can also achieve scores on the benchmark that are comparable to what the USTC team reported? If so, the quantum propaganda should be stopped, at least for a while.

  125. Radar trends to watch: January 2021 | Teslas Only Says:

    […] Supremacy with BosonSampling: Boson sampling is just another computation that exists just to show quantum supremacy. It’s […]

  126. Radar trends to watch: January 2021 – O’Reilly - DZTECHNO! Says:

    […] Supremacy with BosonSampling: Boson sampling is another computation that exists only to demonstrate quantum supremacy. It’s […]

  127. Radar developments to look at: January 2021 – O’Reilly - WIDE INFO Says:

    […] Supremacy with BosonSampling: Boson sampling is another computation that exists only to demonstrate quantum supremacy. It's not

  128. Amazon Braket wants to put you in a quantum computer – TendTec Says:

    […] claimed Borealis achieved a quantum computational advantage over classic computer simulations for Gaussian boson sampling. They said the sampling required only 36 microseconds versus 9,000 years for classical […]

  129. Amazon Braket wants to put you in a quantum computer - VentureBeat - Says:

    […] claimed Borealis achieved a quantum computational advantage over classic computer simulations for Gaussian boson sampling. They said the sampling required only 36 microseconds versus 9,000 years for classical […]

  130. China unveils sooner light-based quantum laptop Says:

    […] professor of computer science at the University of Texas at Austin and co-inventor of boson sampling, said in his blog that he does not think boson sampling will become a form of universal quantum […]

  131. Chinese Scientists Claim Record-Smashing Quantum Computing ... - Slashdot | NV Technologies Says:

    […] also the blog entry of scott aaronson (already old, but it is about the same experiment): https://scottaaronson.blog/?p=… [scottaaronson.blog] A group led by Jianwei Pan and Chao-Yang Lu, based mainly at USTC in Hefei, […]

  132. China unveils faster light-based quantum computer - Asia Times < James Webb Store Says:

    […] of Computer Science at the University of Texas at Austin and a co-inventor of boson sampling, said in his blog that he does not think boson sampling will become a form of universal quantum […]