STOC’2021 and BosonSampling

Happy birthday to Alan Turing!

This week I’m participating virtually in STOC’2021, which today had a celebration of the 50th anniversary of NP-completeness (featuring Steve Cook, Richard Karp, Leonid Levin, Christos Papadimitriou, and Avi Wigderson), and which tomorrow will have a day’s worth of quantum computing content, including a tutorial on MIP*=RE, two quantum sessions, and an invited talk on quantum supremacy by John Martinis. I confess that I’m not a fan of GatherTown, the platform being used for STOC. Basically, you get a little avatar who wanders around a virtual hotel lobby and enters sessions—but it seems to reproduce all of the frustrating and annoying parts of the experience without any of the good parts.

Ah! But I got the surprising news that Alex Arkhipov and I are among the winners of STOC’s first-ever “Test of Time Award,” for our paper on BosonSampling. It feels strange to win a “Test of Time” award for work that we did in 2011, which still seems like yesterday to me. All the more since the experimental status and prospects of quantum supremacy via BosonSampling are still very much live, unresolved questions.

Speaking of which: on Monday, Alexey Rubtsov, of the Skolkovo Institute in Moscow, gave a talk for our quantum information group meeting at UT, about his recent work with Popova on classically simulating Gaussian BosonSampling. From the talk, I learned something extremely important. I had imagined that their simulation must take advantage of the high rate of photon loss in actual experiments (like the USTC experiment from late 2020), because how else are you going to simulate BosonSampling efficiently? But Rubtsov explained that that’s not how it works at all. While their algorithm is heuristic and remains to be rigorously analyzed, numerical studies suggest that it works even with no photon losses or other errors. Having said that, their algorithm works:

  • only for Gaussian BosonSampling, not Fock-state BosonSampling (as Arkhipov and I had originally proposed),
  • only for threshold detectors, not photon-counting detectors, and
  • only for a small number of modes (say, linear in the number of photons), not for a large number of modes (say, quadratic in the number of photons) as in the original proposal.
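
To make the third distinction concrete, here’s a toy Monte Carlo sketch (my illustration, not anything from the Rubtsov-Popova paper). It crudely pretends that the photons land in modes uniformly and independently, which real boson statistics don’t obey, but it shows how often two or more photons end up sharing a mode, which is exactly the information a threshold detector throws away:

    import random

    def collision_fraction(n_photons, n_modes, trials=20000):
        # Fraction of samples in which at least two photons occupy the same mode,
        # under the crude assumption of uniform, independent placement.
        hits = 0
        for _ in range(trials):
            occupied = [random.randrange(n_modes) for _ in range(n_photons)]
            if len(set(occupied)) < n_photons:
                hits += 1
        return hits / trials

    n = 50
    print(collision_fraction(n, n))      # m ~ n: essentially every sample has collisions
    print(collision_fraction(n, n * n))  # m ~ n^2: a large constant fraction of samples
                                         # is collision-free

When the number of modes is much larger than the square of the number of photons, collisions essentially never happen, so threshold and photon-counting detectors record the same data; with only linearly many modes, they record very different data.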

So, bottom line, it now looks like the USTC experiment, amazing engineering achievement though it was, is not hard to spoof with a classical computer. If so, this is because of multiple ways in which the experiment differed from my and Arkhipov’s original theoretical proposal. We know exactly what those ways are—indeed, you can find them in my earlier blog posts on the subject—and hopefully they can be addressed in future experiments. All in all, then, we’re left with a powerful demonstration of the continuing relevance of formal hardness reductions, and the danger of replacing them with intuitions and “well, it still seems hard to me.” So I hope the committee won’t rescind my and Arkhipov’s Test of Time Award based on these developments in the past couple weeks!

43 Responses to “STOC’2021 and BosonSampling”

  1. Scott Says:

    Since there’s an uncharacteristic dearth of angry comments, 🙂 maybe I should add: I like Andrew Yang and I’m bummed that he won’t be the next NYC mayor. I hope he stays in politics and wins something else.

  2. fred Says:

    First, congratulations for the award, Scott! 🙂

    Then, as a New Yorker and ardent supporter of Yang, I’m extremely disappointed (and pissed off) that he not only lost but ended up 4th in the first round of ranked-choice voting.
    It’s going to be very hard for an Asian candidate to win any major election; that’s the way it is.

    Instead we have 3 candidates who are part of the system (either tied to the De Blasio administration or tied to previous dem administrations). NYC has huge unions (cops, teachers, …) that are very entangled with politics, so it’s very hard for an outsider who isn’t willing to play the game or doesn’t have deep enough connections and pockets (like Bloomberg).

    Yang was actually the most progressive candidate, with his universal basic income. But he had big institutions against him, like the New York Times (with their constant ridiculous personal attacks on him).

    Thanks to the BLM riots and the massive rise in crime in NYC (which has hit the black community particularly hard… of course BLM refuses to mention the black-on-black crime problem, or where the bulk of Asian hate crimes comes from… but we all know what’s going on), we now end up with Eric Adams because he’s an ex-cop (and Brooklyn borough president) and he promises to be tough on crime… (the guy also loves to constantly play the race card in the most ridiculous contexts).

    And on the GOP side, Curtis Sliwa won in a landslide.
    He’s the guy who founded the Guardian Angels in the 80s, a group of volunteer (unarmed) citizens who patrol the NYC subway and streets to try and prevent crime (i.e. do the job the NYPD was failing to do in the 70s, 80s, and 2020/2021).

    So, the two final candidates both are focusing on crime and are somewhat clownish in a Trumpish way.

    A few things about Sliwa:

    I thought he was a Trump supporter, but apparently I was wrong:

    “In December 2020, Sliwa declared in an interview that he hated then-Republican President of the United States Donald Trump, calling him a “screwball and a crackpot”. In February 2021, weeks after Trump left office, Sliwa switched from the Reform Party to the Republican Party.”

    They say that a republican can’t win NYC, but, in the early 90s, the Dinkins administration (he was the first black mayor) was such a total disaster, and New Yorkers were so fed up, that NYC elected a Republican, the now infamous Rudy Giuliani, who got rid of crime, including the mafia. He was the savior of NYC (hard to believe now, considering how much of an idiot he’s become). So a Republican can win.

    Talking about the Mafia, here’s what happened to Sliwa:

    “On June 19, 1992, Sliwa was kidnapped and shot by two gunmen after entering a stolen taxi in Manhattan. The taxi picked up Sliwa near his home in the East Village, and a gunman hiding in the front passenger seat jumped up and fired several shots, hitting him in the groin and legs. The kidnapping was foiled when Sliwa leapt from a front window of the moving cab and escaped. Sliwa underwent surgery for internal injuries and leg wounds.
    Federal prosecutors eventually charged John A. Gotti, the son of Gambino crime family leader John Gotti, with the attempted murder and a raft of other charges. Prosecutors claimed that Gotti was angered by remarks Sliwa had made on his radio program about Gotti’s father.”

    So, yea, that’s where we are, maybe I’ll vote Sliwa, haven’t made up my mind yet…

  3. b_jonas Says:

    > It feels strange to win a “Test of Time” award for work that we did in 2011

    I assumed that “Test of time” must be a weird pun where “time” refers to the time complexity of algorithms and “test” also refers to something mathematical such as decision problems. But from the description you linked to, I was wrong.

  4. 4gravitons Says:

    Can you elaborate on what you found frustrating and annoying in GatherTown? We’re planning on using it in a conference (for a slightly different role, a poster session) and it would be good to know possible problems that could crop up.

  5. Miquel Ramirez Says:

    Not sure what Scott’s are, but here are mine.

    For whatever reason, the GatherTown setups I have experienced try to simulate those Cyclopean convention centers where it takes no less than 5 minutes to walk from wherever you are to wherever you want to be. You do that the “old school” way: one keystroke at a time. Like up, up, up, left, left, left, … 50 or 60 times. The halls/rooms are also helpfully named “Room 5A” or “The Great Ballroom”, just like in real life, for immersion. Woe is you if your talk/event is assigned a room far away from the “hub” where people appear. As people randomly walk around, the chances of them finding you become tiny the farther they have to navigate through the simulation.

  6. Scott Says:

    4gravitons #4: Maybe for a poster session it’s not bad, I’m not sure. But I despise anything with a complicated login procedure that involves waiting and waiting and checking your spam folder for a confirmation email, remembering what email address or password you used previously, etc. etc. All of which is far worse if it happens while you’re missing a talk you’d planned to attend! I also hate the way GatherTown faithfully, and completely pointlessly, reproduces one of the worst aspects of in-person conferences: utter confusion about where to go to get to the talk. It’s enraging and humiliating to wander around a virtual hotel lobby, as an avatar, trying to find the floor that Seminar Room C is on. What I want is Zoom links in my inbox: click here for this, click there for that, done.

  7. Chaoyang Lu Says:

    A few quick points back on 4 June:

    1. Rubtsov & Popova (RP) said that the “4th order scheme can emulate the N = 100 device with an accuracy of about 50%”. However, the linear extrapolation in Fig. 4 will emulate the N = 100 device with a relative deviation of ~100%.

    2. RP’s choice of parameters is quite different from the experiment’s. RP’s average photon number (r=1.4, no photon loss, all detectors are close to saturation) is 2~3 times higher than the experimental parameter (r=1.5 and ~30% overall efficiency, with less than half of the detectors clicking on average). See point 3.

    3. Their method favors high average photon numbers, as can be seen in Fig. 3a. The probability of all-detector-clicked events is approximated by calculating low-order moments. It is obvious that, with a lower photon number k (when the curves move to the left), the accuracy of the probability p(n=N) will decrease exponentially.

    4. In our experiment, most of the computational complexity is contributed by 70-76 clicked events, which is much higher than the average photon number of 43. In Fig. 2 and 3a, the authors approximated from n=20 (the average clicked number) to n=30 with a 4th-order approximation. Thus, the approximation from n=43 to n=76 might require more than a 10th-order approximation (this needs to be verified), whose computational complexity of >N^11 is beyond the computing power of classical computers.

    We feel that the accuracy of RP’s method might be severely degraded for the actual experimental parameters, and their claim seems bold before they can actually use the real parameters and generate “quantumly indistinguishable” samples as the experiment did.
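
    A rough sense of scale for the cost estimate in point 4, taking the quoted ~N^(k+1) operation count at face value (the actual per-term cost and constant factors in RP’s method may differ):

      # Naive operation counts for a k-th order approximation on an N = 100 mode device,
      # under the ~N^(k+1) scaling assumed in point 4 above.
      N = 100
      for k in (4, 10):
          print(k, float(N ** (k + 1)))
      # k = 4  -> 1e10 operations: easy for a laptop
      # k = 10 -> 1e22 operations: enormous, though whether it is truly out of reach
      #           depends on the constants and on how each term is actually evaluated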

  8. classical Says:

    > What I want is Zoom links in my inbox:
    > utter confusion about where to go to get to the talk.

    The zoom links for all four rooms do not change from day to day; just bookmark them.

    Press the helpfully unlabeled brochure-like icon in the dock at the bottom of the screen to see the map; or take a screenshot at a map stand.

  9. fred Says:

    “I also hate the way GatherTown faithfully, and completely pointlessly”

    It was probably designed by someone who loves RPGs.

  10. LK2 Says:

    Well Scott, congratulations! It seems to me you are entering a career phase where you get more prizes than papers 😉

    MeetAway is a similar piece of software I recently used: it is just a “wrapper” for Zoom with poster sessions, coffee-break tables, etc. Not bad, but I’m just tired of every virtual conference tool and I hope to go back in person soon, delta permitting.

  11. BRK477 Says:

    Congratulations on the award. If I was a tenured professor I’d just put my new research on the arxiv and host my own zoom webinars for anyone who wanted to hear about it, and wouldn’t bother in the least with journals, conferences or papers in two-column format.

  12. Scott Says:

    BRK477 #11: With each passing year, my practices have gotten closer to what you describe. 😉

  13. Greg Rosenthal Says:

    > but it seems to reproduce all of the frustrating and annoying parts of the experience without any of the good parts
    > Maybe for a poster session it’s not bad, I’m not sure.

    I find gather.town does a good job of simulating impromptu conversations. It’s also great for poster sessions: you can see at a glance where people are (in particular which posters have an author present) and quickly move between nearby posters.

  14. anon Says:

    Scott, to address the dearth of angry comments (just kidding), suppose we replace your conclusion with:

    “So, bottom line, it now looks like the D-Wave experiment, amazing engineering achievement though it was, is not hard to simulate with a classical computer.”

    I’d love to hear why this case is different.

  15. Scott Says:

    anon #14: For starters, it’s different because the USTC team isn’t claiming to have a device that’s already useful for traffic optimization, machine learning, etc. etc., nor are they selling that device to customers, nor are they withholding crucial details like D-Wave did in the ~2007-2010 era. They’re making a scientific claim that’s subject to the usual scientific give-and-take, rather than attempting an end run around the scientific process itself.

  16. Gil Kalai Says:

    Interesting developments. (Of course, everything should be carefully examined including the relevance to the USTC experiment.)

    In hindsight, I would guess that having threshold detectors rather than photon-counting detectors can be seen as a form of noise that would destroy the high-degree Fourier coefficients, just like other forms of noise (e.g. photon losses). When the number of modes is proportional to the number of photons, this may leave only the bounded-degree Fourier expansion relevant. If this intuition is correct, then I would expect it to apply both for Gaussian Boson Sampling and for Fock-state Boson Sampling.
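
    As a toy illustration of that intuition (the per-photon noise rate below is a made-up stand-in, not something derived from the threshold-detector model): under independent noise at rate eps, a degree-d Fourier term gets damped by roughly (1-eps)^d, so only the bounded-degree part of the expansion survives.

      # Damping of a degree-d Fourier coefficient under independent per-photon noise
      # at a hypothetical rate eps (a cartoon of the argument, not a model of
      # threshold detectors).
      eps = 0.3
      for d in (1, 2, 5, 10, 20, 40):
          print(d, round((1 - eps) ** d, 6))
      # By d ~ 20 the damping factor is below 1e-3, leaving essentially only the
      # low-degree terms, which a classical algorithm can hope to sum directly.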

  17. anon Says:

    Scott #15: Why aren’t you demanding a retraction now that the USTC quantum supremacy claim has almost certainly been debunked? Or at least holding their feet to the fire on your blog until they acknowledge that their widely publicized claim of supremacy (which you as a self-acknowledged reviewer of their paper are partly responsible for) was wrong? There is a strong whiff of a double standard in your approach to this case and the D-Wave saga. There is no point in cloaking this in the commercial aspects of D-Wave since what matters is scientific truth, and that’s where the whiff is disturbing. Needless to say, claims by Google, IBM, IonQ, etc., about the imminent commercial applicability of their products are just as disturbing as D-Wave’s but these aren’t getting the same treatment on your blog (notwithstanding one recent post you had about this; it’s just not at the level of the attacks against D-Wave).

    Also, which scientific details do you think D-Wave was withholding? Their publications spelled out in detail what they were doing then, and they’ve continued to be open about their technology and especially about their results. To prove this, here is a sample of the papers they published in the ~2007-2010 era you refer to:
    – Probing Noise in Flux Qubits via Macroscopic Resonant Tunneling, PRL 2008
    – Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor, PRB 2010
    – Experimental demonstration of a robust and scalable flux qubit, PRB 2010
    – A scalable readout system for a superconducting adiabatic quantum optimization system, Superconductor Science and Technology 2010
    – A scalable control system for a superconducting adiabatic quantum optimization processor, Superconductor Science and Technology 2010

    Finally, your statement “They’re making a scientific claim that’s subject to the usual scientific give-and-take, rather than attempting an end run around the scientific process itself” is also incomprehensible given the literature. Where is the evidence of the end run, given all the numerous publications (in the hundreds!) D-Wave and others studying their machines have put out in the public domain? There is no question that D-Wave has been subjected to the usual scientific give-and-take.

    Please take a good look in the mirror: are you applying to USTC and to the commercial quantum computing landscape (Google, IBM, IonQ, etc.) the same rigorous standards you demanded and applied to D-Wave?
    I applaud the high anti-hype/BS standards you applied to D-Wave (even if it’s clear you’re unfamiliar with their technical publications), which helped set and define expectations, but I hope you won’t drop these standards for the rest of the community. Unfortunately, at least so far, that seems to be the case.

  18. Scott Says:

    anon #17: I’m somewhat at a loss for how to respond, because as far as I can tell, you’re not accusing me of saying anything about either D-Wave or USTC that wasn’t true to the best of my knowledge at the time I said it, nor are you accusing me of withholding any key facts about either. (As for IonQ—my post about that situation is a-comin’, so I can only beg your patience.)

    Your accusation, rather, is one of “emotional inconsistency”: I got angry about D-Wave’s claims in a way that I don’t get angry about more recent QC claims. To this I have two responses.

    First, asking me for “emotional consistency” feels like asking me to set aside all my prior beliefs about what’s likely or unlikely, plausible or implausible. To illustrate, it’s perfectly plausible that you could get quantum supremacy with 50-70 photon BosonSampling. That doesn’t imply that some particular attempt succeeded: maybe photon losses, or insufficiently many modes, or the use of threshold detectors, or whatever else enabled an efficient classical simulation of this experiment. If so, hopefully one can learn from the experience and try again: the principle (as far as I know) was sound.

    By contrast, I find it highly implausible that finite-temperature quantum annealing with no error-correction would give you huge speedups for practical instances of NP-hard optimization problems, especially instances having nothing to do with QM. If that’s true, it overturns a nontrivial chunk of my understanding of computational reality. Which … if so, so be it! It wouldn’t be the first time a theoretical picture of the world got slaughtered by an experiment. But at least own up to the enormity of what’s at stake! Being blasé about something that would be so surprising if true is the part that makes my head explode.

    My second response is simply that I’m a different person than I was in 2007-2010, facing a different situation in life. It turns out that blogging under your real name becomes a hell of a lot less fun, once you’re no longer a random junior person joking around with friends and tossing truth-bombs, but a supposedly-important senior person who others will eagerly seek to catch in an error, denounce on social media, etc! I’ve become more cautious in what I write here, and I often blog about recent QC developments more out of leaden obligation than joy. Now that I have kids, I also have much, much less time, which restricts what I choose to write about. For all of these reasons, while I don’t in any essential way regret my D-Wave blogging, I have little doubt that if the saga were unfolding today, my posts about it would read differently.

  19. anon Says:

    Scott #18: I hope you recognize that your “phase 1” (early life) posts about D-Wave did substantial damage to them and the whole field of quantum annealing, while your reputation benefitted from being the D-Wave hater-in-chief. The company has always been filled with dedicated and honest scientists and engineers, but with an executive leadership that was responsible for the hype you deplore. The former really suffered under your pen and the reputational damage you inflicted has lasted and may never wash away. Ok, maybe they deserved it for not reining in their execs, but then let’s apply identical standards elsewhere.

    And if you’re serious about “while I don’t in any essential way regret my D-Wave blogging, I have little doubt that if the saga were unfolding today, my posts about it would read differently”, then perhaps identical standards should instead be applied retroactively, i.e., a post dedicated to some kind of mea culpa about how you were in phase 1 when you were so critical.

    Now, in your “phase 2” (kids, seniority) other companies and some academic labs are making preposterous and damaging claims (“we’ll solve climate change, design new drugs, revolutionize finance” etc. etc.), but the standards of scientific integrity you applied to D-Wave are no longer convenient to apply.

    Your “emotional inconsistency” thus becomes an intellectual one as well. By not applying standards of truth to these even more preposterous claims, you’re effectively letting them off the hook and allowing investors to be scammed (I don’t really care) and perceptions in the scientific community to be badly skewed (I do care).

    Do you find it plausible that “finite-temperature gate model QC with no error-correction would give you huge speedups for practical instances of NP-hard optimization problems, especially instances having nothing to do with QM”? Setting aside more plausible statements about quantum simulation — but those are also made by D-Wave nowadays — this is precisely the type of statement made by all the new companies that have entered the QC arena. Being blasé about something that would be so surprising if true is the part that makes my head explode too. And if the response is that error correction and fault tolerance are the difference, then the answer is that we should be comparing preposterous claims about NISQ devices to each other (today’s NISQ gate model QCs to today’s and yesterday’s D-Wave), and that error correction will be applied to all models of QC, eventually.

    I could go on and on about the double standard. To me, you’ve regretfully become a selective enabler of the new hype, mostly passively, sometimes actively. I miss the phase 1 Scott who would fight for the truth, especially now that the stakes are so much higher. Signing off now.

  20. Jelmer Renema Says:

    I was reading the Rubtsov & Popova paper, and I was wondering: in the version of their algorithm where you consider clicks on a restricted number of modes, how do you choose the distribution according to which you exclude modes?

  21. Scott Says:

    anon #19: Let me put it this way. If I’d known that my D-Wave blogging was going to create a lifelong obligation to do exactly the same for any future QC hypester, with any silence on my part (even just due to busyness or burnout) being counted as tacit endorsement, I almost surely wouldn’t have done it. Not because anything I said was wrong, only because I can’t bear the weight of that unreasonable expectation.

    In your comment, you weirdly vacillate between wanting me to be as hard on IonQ, USTC, etc. as I was on D-Wave, and wanting me not to be so hard on the poor engineers at D-Wave. If you ever decided which of those two you wanted, I could try to live up to your expectations for me.

  22. Anon14 Says:

    One way to reduce the number of STOC sessions (and therefore make finding virtual rooms easier) would be to require any presented papers to have results verified in a formal proof assistant. There’s a little under 9 months between paper submissions and the conference. It only took about 6 months to formalize a major result of Peter Scholze in the Liquid Tensor Experiment, so computer scientists would have time to hold themselves to similar standards. I’m also guessing that result was actually much harder to formalize than typical computer science theorems due to the facts that it was too complicated for me to understand, was considered important enough to prioritize for formalization by a Fields medalist, and was unrelated to computers or the complexity of theorem-proving procedures. The Flypitch project formalizing the proof of the independence of the continuum hypothesis from ZFC took a little over a year, but in my opinion it was a major enough result to be worth putting in the effort of writing at least two STOC papers. In past blog posts Scott has been dismissive of formal proof assistants but in my view he overestimates the time needed to use them and underestimates the potential for mathematical author and reviewer human error.

  23. Scott Says:

    Anon14: Your proposal seems to re-emerge every year or two in the comment section of this blog, and is fun to debate. In actual reality outside of comment sections, though, it would be the death of algorithms and complexity research, since 99% of us lack the knowledge, the intrinsic motivation, or the time to formally verify what we’re doing. Your requirement, if actually enforced, would hamstring progress to an incredible degree … like an asteroid that preferentially wiped out all the biggest, strongest, most majestic and most recently evolved animals while perhaps sparing some scurrying rodents of 1960s-style formal language theory. 😀 Even Scholze used formal verification only for a very special and unusual situation (from what I know of it), and even if those tools eventually did become popular in some areas of math, I’d expect TCS to be one of the later fields to adopt them.

  24. Yevhenii Says:

    Scott #23: The stated goal of Anon14’s proposal was “to reduce the number of STOC sessions”. If that is your goal, the “death of the algorithms and complexity research” is a feature, not a bug! (Okay, it’s probably not Anon14’s real goal, and they are just using it as an excuse to talk about proof assistants. I’m going to hypocritically do the same.)

    Putting the misaligned utility functions aside, I don’t think it’s fair to blame us for not formalizing our algorithms/complexity results. (takes off the complexity hat; puts on the ITP hat). Instead you should blame us for not making it easy enough to formally prove complexity/algorithms results. (switches hats back). Hopefully in 10 or 100 years we’ll be formalizing many of our results, not because some Anon banned all informal proofs, but because it’ll be easier than doing things on paper.

    Anon14 #22: You shouldn’t just guess how hard something is to formalize. There is almost no correlation between the difficulty of: understanding the problem, coming up with a pen-and-paper proof, understanding the pen-and-paper proof, and formalizing the proof. One extreme example of this is olympiad geometry. A smart middleschooler can solve USAJMO1 from a generic year 20xy in under 10 minutes, but good luck formalizing it. This is not necessarily rhetorical; go ahead and try it! That’d be a neat Xena summer project.

    Another example is complexity theory. As far as I know, no one has ever formalized all of Karp’s 21 NP-complete problems. Of course there’s a lot more to modern complexity theory than statements like “[problem] is NP-complete”, but we can’t even do this most basic thing.

  25. Jelmer Renema Says:

    @Scott OP:

    Okay, so I solved my own question in comment #20 (the answer is on pages 12/13 of the R&P paper, where they give the recipe for choosing which modes will be a priori empty). Now the question is: if you can efficiently compute the probability of there not being photons in some arbitrary subset of modes, then why do you need the restriction that the number of modes is proportional to the number of photons? Surely the scheme would then work for any polynomial scaling of the number of modes vs photons?

    And, thinking along the same lines: since I can make a network of beam splitters and threshold detectors that arbitrarily well approximates a photon-number resolving detector (requiring quadratically many threshold detectors in the photon number to achieve a constant probability of observing the correct number of detection events), it seems to me that the statement that this simulation scheme doesn’t work for PNR detectors is also at odds with the idea that you can compute the probability of an empty subset of modes for any # of candidate modes and any # of overall modes.
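
    A back-of-the-envelope check of that quadratic overhead, assuming the splitter network spreads the incoming photons uniformly over the threshold detectors:

      def prob_all_distinct(n_photons, n_detectors):
          # Probability that all n photons land on distinct threshold detectors,
          # i.e. that the number of clicks equals the true photon number.
          p = 1.0
          for i in range(n_photons):
              p *= 1 - i / n_detectors
          return p

      n = 10
      for d in (n, n * n, 10 * n * n):
          print(d, round(prob_all_distinct(n, d), 4))
      # d = n^2 already gives a constant success probability, roughly
      # exp(-n^2 / (2d)) ~ 0.6, which is the quadratic overhead mentioned above.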

  26. Gerard Says:

    I find it surprising that theoretical computer scientists aren’t more interested in formal proofs, given that algorithms and many complexity problems are intimately related to formal logic. It seems like the two interests would tend to go hand in hand.

    Yevhenii #24

    I would think that Karp’s reductions should be fairly easy to formalize, since they all involve finite structures similar or identical to those used in many computer programs, and program verification currently seems to be one of the leading uses of formal proofs.

  27. LK2 Says:

    I’m a physicist and a “threshold detector” has a certain meaning for me. What is the meaning for you CS guys in this context? It seems relevant for understanding these BosonSampling experiments…thanks!

  28. fred Says:

    An interesting discussion between Sean Carroll and John Preskill on QC (I skipped to the bit about error correction)
    https://youtu.be/UYt4QG8iZ6U?t=2462

  29. Scott Says:

    LK2 #27: In this context, “threshold detector” means you can measure a given mode to see whether it contains
    (1) no photons, or
    (2) one or more photons.
    In case (2), you don’t find out how many photons were there.

    For complexity theorists: it’s the NP of photon detectors, whereas a number-resolving detector would be the #P. 😀
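
    For concreteness, here’s a toy readout function for each detector type (the mode occupations below are made up):

      def number_resolving_readout(counts):
          # Photon-counting detectors: you learn the photon number in every mode.
          return list(counts)

      def threshold_readout(counts):
          # Threshold detectors: each mode only reports click (>= 1 photon) or no click.
          return [1 if c >= 1 else 0 for c in counts]

      counts = [0, 2, 1, 0, 3]                 # hypothetical occupations of 5 modes
      print(number_resolving_readout(counts))  # [0, 2, 1, 0, 3]
      print(threshold_readout(counts))         # [0, 1, 1, 0, 1] -- collisions are invisible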

  30. LK2 Says:

    Scott #29: thanks for the explanation: it was really needed from a physicist’s point of view.

    LOL for the complexity joke!! Since I understood it, I conclude that all my studying the Arora-Barak is finally paying off 😉

  31. fred Says:

    Can’t we use current QC to test Bell and Wigner’s Room types of scenarios?
    Like, put two QCs in the same state, then feed each of them one half of an entangled pair (a qubit), which affects the state of each significantly (a bit like Schrödinger’s cat, if the cat were a macro quantum system), and then do the typical Alice and Bob measurement (treating each QC as one of the particles in the entangled pair of the classical experiment). And then see if Bell still works out.
    Is anyone currently doing this?

  32. Scott Says:

    fred #31: Of course you can test the Bell inequality; that’s called a Bell experiment and doesn’t require a QC.

    Wigner’s Friend requires placing a conscious being in superposition, so no, not for the foreseeable future. You could of course use a qubit or whatever to stand in for the “conscious being,” but if so, it will just be a standard little QM experiment with the outcome that everyone perfectly well knew from the beginning.
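
    For concreteness, here’s the standard textbook CHSH arithmetic for a shared singlet pair, with the usual optimal measurement angles; none of this needs a QC:

      import math

      # Singlet-state correlation for measurement angles a and b: E(a, b) = -cos(a - b).
      def E(a, b):
          return -math.cos(a - b)

      a1, a2 = 0.0, math.pi / 2              # Alice's two settings
      b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

      S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
      print(S)  # ~2.828 = 2*sqrt(2), exceeding the classical CHSH bound of 2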

  33. fred Says:

    Scott #32

    Yes, but Bell experiments have been notoriously tricky to properly conduct, no? I was hoping that using an actual QC could help, as an interesting side effect.

    Last time Wigner’s Friend (or some derivation of it) was discussed on this blog (can’t find the post), it was about some “controversial” paper that claimed to have uncovered some QM contradiction; that team mentioned that their proposed experiment could be conducted by substituting conscious beings with QCs (you probably gave the same comment at the time).

  34. fred Says:

    PS:
    that was the blog entry
    https://scottaaronson.blog/?p=3975
    referencing the paper
    https://www.nature.com/articles/s41467-018-05739-8

    “The idea is to substitute agents F¯ and F by computers. […] In this sense, quantum computers, motivated usually by applications in computing, may help us answering questions in fundamental research.”

    That stuck in my head for some reason, and I was wondering if current QCs, as limited as they are as computing devices, would be already good enough to be used this way.

  35. gentzen Says:

    fred #31: Terry Rudolph has used an email exchange with John Horgan to create a list of Horgan’s questions and his answers. The last question in that FAQ is Do you think work on quantum computation might lead to a clearer understanding of QM?

    Here is the annoying answer: It already has, because it brought in smart people and their mathematical tools from other fields.
    I do think computational complexity (both classical and quantum) is already giving us a much deeper understanding about why certain processes do or do not occur easily in nature. But as for the kind of thing I suspect you’re angling towards (“we all live in a simulation”) I’m personally skeptical this kind of thinking will help us make progress.

    One of those smart people is Craig Gidney, who used “current QC to test Bell and Wigner’s Room types of scenarios” in a similar way as you suggest. He analysed a Wigner’s Room type scenario proposed some time ago (but still floating around) using QC. His conclusion is that The Frauchiger-Renner Paradox is a Sleeping Beauty Problem.

    For me, this is the most convincing resolution of the paradox I have read so far. However, I have to admit that I was a fan of Gidney’s way of thinking since the moment he proposed the “before-hand experience” vs. “in-the-moment experience” dichotomy to me as a way to justify the value of density matrix descriptions. So I am certainly biased, especially since I lazily suggested that it would be interesting to see a Craig Gidney style analysis of that thought experiment:

    …, and it is unclear how one could implement this on a quantum computer. With respect to thinking about quantum computations, I find Craig Gidney’s approach to distinguish between “before-hand experience” description vs. “in-the-moment experience” descriptions enlightening:

    https://chat.stackexchange.com/transcript/message/41324513#41324513

    The thought experiment above was described as “in-the-moment experience”, but any real experiment should also allow a “before-hand experience” description. It would be interesting to see that description for the above thought experiment.

  36. gentzen Says:

    (sorry, forgot a “, i.e. wrote href=https… instead of href=”https…) Here is the broken sentence again, this time with a correctly working link:
    His conclusion is that The Frauchiger-Renner Paradox is a Sleeping Beauty Problem.

  37. Scott Says:

    fred #33:

      Yes, but Bell experiments have been notoriously tricky to properly conduct, no?

    Yes and no. They were tricky, but the problems were then solved, to the point where experiments a few years ago closed both the locality and detection loopholes simultaneously, and probably got as decisive a result as it’s possible for any experiment to get. In any case, I don’t see how a QC would help with this.

      Last time Wigner’s Friend (or some derivation of it) was discussed on this blog (can’t find the post), it was about some “controversial” paper that claimed to have uncovered some QM contradiction; that team mentioned that their proposed experiment could be conducted by substituting conscious beings with QCs (you probably gave the same comment at the time).

    Yes, I’m sure I did 🙂

  38. fred Says:

    gentzen #35 Scott #37

    Thanks! Super interesting!

    I’m also trying to read another article from 2020 related to this (which supposedly clarifies some of the points)
    https://arxiv.org/pdf/2008.08538.pdf

  39. James Gallagher Says:

    Yes, of course the elephant in the room has always been:

    Quantum Computers simulating Quantum Computers simulating Quantum Computers…

    This would just be silly.

    So I say scalable QCs are impossible by Reductio ad silliness.

  40. fred Says:

    James Gallagher #39

    But, on some level, when we say that a logical qubit can be realized by many physical qubits (in order to implement error correction), can’t this be seen as a case of an “imperfect” QC being able to “simulate” a perfect QC? 😛

  41. fred Says:

    The idea of replacing conscious agents with QCs is quite puzzling because, as far as we know, conscious agents are macroscopic classical systems, i.e. they can’t stay in quantum superposition for long and quickly end up in non-interacting branches of the MW. So a wet brain is never in two different interfering states at once (the reaction time of chemical and electrical processes is on the order of 10^-2 sec, while quantum decoherence for a macroscopic classical system like a brain happens in something like 10^-20 sec).
    But obviously QCs are macroscopic systems that are constantly in quantum superposition, by definition (so different states at once in the same branch of the MW, interfering with one another).

  42. Eric Cordian Says:

    Hi Scott,

    John Preskill was the guest on this week’s episode of Sean Carroll’s Mindscape Podcast.

    It was a pretty good explanation of the daunting challenges involved in creating a quantum computer with lots of error-free logical qubits.

    He proffered the opinion that we’d be lucky if within five years, quantum computers manage to solve something people are willing to pay for.

    Do you share John’s pessimism?

  43. Scott Says:

    Eric Cordian #42:

      [John Preskill] proffered the opinion that we’d be lucky if within five years, quantum computers manage to solve something people are willing to pay for.

      Do you share John’s pessimism?

    I certainly share his doubts about whether, within five years, QC’s will be able to do anything worth paying for. On the other hand, all experience leads me to believe that even if not, people will be paying anyway. 😀