Scattershot BosonSampling: A new approach to scalable BosonSampling experiments

Update (12/2): Jeremy Hsu has written a fantastic piece for IEEE Spectrum, entitled “D-Wave’s Year of Computing Dangerously.”


Update (11/13): See here for video of a fantastic talk that Matthias Troyer gave at Stanford, entitled “Quantum annealing and the D-Wave devices.” The talk includes the results of experiments on the 512-qubit machine. (Thanks to commenter jim for the pointer. I attended the talk when Matthias gave it last week at Harvard, but I don’t think that one was videotaped.)


Update (11/11): A commenter named RaulGPS has offered yet another great observation that, while forehead-slappingly obvious in retrospect, somehow hadn’t occurred to us.  Namely, Raul points out that the argument given in this post, for the hardness of Scattershot BosonSampling, can also be applied to answer open question #4 from my and Alex’s paper: namely, how hard is BosonSampling with Gaussian inputs and number-resolving detectors?  Raul points out that the latter, in general, is certainly at least as hard as Scattershot BS.  For we can embed Scattershot BS into “ordinary” BS with Gaussian inputs, by first generating a bunch of entangled 2-mode Gaussian states (which are highly attenuated, so that with high probability none of them have 2 or more photons per mode), and then applying a Haar-random unitary U to the “right halves” of these Gaussian states while doing nothing to the left halves.  Then we can measure the left halves to find out which of the input states contained a photon before we applied U.  This is precisely equivalent to Scattershot BS, except for the unimportant detail that our measurement of the “herald” photons has been deferred till the end of the experiment instead of happening at the beginning.  And therefore, since (as I explain in the post) a fast classical algorithm for approximate Scattershot BosonSampling would let us estimate the permanents of i.i.d. Gaussian matrices in BPP^NP, we deduce that a fast classical algorithm for approximate Gaussian BosonSampling would have the same consequence.  In short, approximate Gaussian BS can be argued to be hard under precisely the same complexity assumption as can approximate ordinary BS (and approximate Scattershot BS).  Thus, in the table in Section 1.4 of our paper, the entries “Gaussian states / Adaptive, demolition” and “Gaussian states / Adaptive, nondemolition” should be “upgraded” from “Exact sampling hard” to “Apx. sampling hard?”
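
For concreteness, the attenuated 2-mode Gaussian states in Raul’s argument can be taken to be two-mode squeezed vacua (the usual SPDC output), which in the Fock basis read

$$|\psi\rangle \;=\; \sqrt{1-\lambda^2}\,\sum_{k=0}^{\infty}\lambda^{k}\,|k\rangle_{\mathrm{left}}\,|k\rangle_{\mathrm{right}} \;\approx\; |0,0\rangle + \lambda\,|1,1\rangle \qquad (\lambda \ll 1),$$

where λ is the squeezing parameter. “Highly attenuated” just means λ is small, so each pair contributes at most one photon per mode with high probability; and detecting a single photon in the left half projects the right half onto a single-photon Fock state, whether that detection happens before or after U is applied to the right halves.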

One other announcement: following a suggestion by commenter Rahul, I hereby invite guest posts on Shtetl-Optimized by experimentalists working on BosonSampling, offering your personal views about the prospects and difficulties of scaling up.  Send me email if you’re interested.  (Or if you don’t feel like writing a full post, of course you can also just leave a comment on this one.)


[Those impatient for a cool, obvious-in-retrospect new idea about BosonSampling, which I learned from the quantum optics group at Oxford, should scroll to the end of this post.  Those who don’t even know what BosonSampling is, let alone Scattershot BosonSampling, should start at the beginning.]

BosonSampling is a proposal by me and Alex Arkhipov for a rudimentary kind of quantum computer: one that would be based entirely on generating single photons, sending them through a network of beamsplitters and phaseshifters, and then measuring where they ended up.  BosonSampling devices are not thought to be capable of universal quantum computing, or even universal classical computing for that matter.  And while they might be a stepping-stone toward universal optical quantum computers, they themselves have a grand total of zero known practical applications.  However, even if the task performed by BosonSamplers is useless, the task is of some scientific interest, by virtue of apparently being hard!  In particular, Alex and I showed that, if a BosonSampler can be simulated exactly in polynomial time by a classical computer, then P^#P=BPP^NP, and hence the polynomial hierarchy collapses to the third level.  Even if a BosonSampler can only be approximately simulated in classical polynomial time, the polynomial hierarchy would still collapse, if a reasonable-looking conjecture in classical complexity theory is true.  For these reasons, BosonSampling might provide an experimental path to testing the Extended Church-Turing Thesis—i.e., the thesis that all natural processes can be simulated with polynomial overhead by a classical computer—that’s more “direct” than building a universal quantum computer.  (As an asymptotic claim, obviously the ECT can never be decisively proved or refuted by a finite number of experiments.  However, if one could build a BosonSampler with, let’s say, 30 photons, then while it would still be feasible to verify the results with a classical computer, it would be fair to say that the BosonSampler was working “faster” than any known algorithm running on existing digital computers.)

In arguing for the hardness of BosonSampling, the crucial fact Alex and I exploited is that the amplitudes for n-photon processes are given by the permanents of nxn matrices of complex numbers, and Leslie Valiant proved in 1979 that the permanent is #P-complete (i.e., as hard as any combinatorial counting problem, and probably even “harder” than NP-complete).  To clarify, this doesn’t mean that a BosonSampler lets you calculate the permanent of a given matrix—that would be too good to be true!  (See the tagline of this blog.)  What you could do with a BosonSampler is weirder: you could sample from a probability distribution over matrices, in which matrices with large permanents are more likely to show up than matrices with small permanents.  So, what Alex and I had to do was to argue that even that sampling task is still probably intractable classically—in the sense that, if it weren’t, then there would also be unlikely classical algorithms for more “conventional” problems.
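
For readers who want to see the object in question, here is a minimal Python sketch of the permanent (purely illustrative, not from our paper), computed both from its definition and via Ryser’s inclusion-exclusion formula:

```python
import itertools
import numpy as np

def permanent_definition(A):
    """perm(A) = sum over permutations sigma of prod_i A[i, sigma(i)]  (O(n! * n))."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def permanent_ryser(A):
    """Ryser's inclusion-exclusion formula, roughly O(2^n * n^2) as written."""
    n = A.shape[0]
    total = 0j
    for subset in itertools.product([0, 1], repeat=n):
        cols = [j for j in range(n) if subset[j]]
        if not cols:
            continue
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
print(permanent_definition(A))
print(permanent_ryser(A))   # agrees up to floating-point rounding
```

Even Ryser’s formula takes time exponential in n, which is exactly the point: no polynomial-time method is known, or (by Valiant’s theorem) expected.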

Anyway, that’s my attempt at a 2-paragraph summary of something we’ve been thinking about on and off for four years.  See here for my and Alex’s original paper on BosonSampling, here for a recent followup paper, here for PowerPoint slides, here and here for MIT News articles by Larry Hardesty, and here for my blog post about the first (very small, 3- or 4-photon) demonstrations of BosonSampling by quantum optics groups last year, with links to the four experimental papers that came out then.

In general, we’ve been thrilled by the enthusiastic reaction to BosonSampling by quantum optics people—especially given that the idea started out as pure complexity theory, with the connection to optics coming as an “unexpected bonus.”  But not surprisingly, BosonSampling has also come in for its share of criticism: e.g., that it’s impractical, unscalable, trivial, useless, oversold, impossible to verify, and probably some other things.  A few people have even claimed that, in expressing support and cautious optimism about the recent BosonSampling experiments, I’m guilty of the same sort of quantum computing hype that I complain about in others.  (I’ll let you be the judge of that.  Reread the paragraphs above, or anything else I’ve ever written about this topic, and then compare to, let’s say, this video.)

By far the most important criticism of BosonSampling—one that Alex and I have openly acknowledged and worried a lot about almost from the beginning—concerns the proposal’s scalability.  The basic problem is this: in BosonSampling, your goal is to measure a pattern of quantum interference among n identical, non-interacting photons, where n is as large as possible.  (The special case n=2 is called the Hong-Ou-Mandel dip; conversely, BosonSampling can be seen as just “Hong-Ou-Mandel on steroids.”)  The bigger n gets, the harder the experiment ought to be to simulate using a classical computer (with the difficulty increasing at least like ~2^n).  The trouble is that, to detect interference among n photons, the various quantum-mechanical paths that your photons could take, from the sources, through the beamsplitter network, and finally to the detectors, have to get them there at exactly the same time—or at any rate, close enough to “the same time” that the wavepackets overlap.  Yet, while that ought to be possible in theory, the photon sources that actually exist today, and that will exist for the foreseeable future, just don’t seem good enough to make it happen, for anything more than a few photons.

The reason—well-known for decades as a bane to quantum information experiments—is that there’s no known process in nature that can serve as a deterministic single-photon source.  What you get from an attenuated laser is what’s called a coherent state: a particular kind of superposition of 0 photons, 1 photon, 2 photons, 3 photons, etc., rather than just 1 photon with certainty (the latter is called a Fock state).  Alas, coherent states behave essentially like classical light, which makes them pretty much useless for BosonSampling, and for many other quantum information tasks besides.  For that reason, a large fraction of modern quantum optics research relies on a process called Spontaneous Parametric Down-Conversion (SPDC).  In SPDC, a laser (called the “pump”) is used to stimulate a crystal to produce further photons.  The process is inefficient: most of the time, no photon comes out.  But crucially, any time a photon does come out, its arrival is “heralded” by a partner photon flying out in the opposite direction.  Once in a while, 2 photons come out simultaneously, in which case they’re heralded by 2 partner photons—and even more rarely, 3 photons come out, heralded by 3 partner photons, and so on.  Furthermore, there exists something called a number-resolving detector, which can tell you (today, sometimes, with as good as ~95% reliability) when one or more partner photons have arrived, and how many of them there are.  The result is that SPDC lets us build what’s called a nondeterministic single-photon source.  I.e., you can’t control exactly when a photon comes out—that’s random—but eventually one (and only one) photon will come out, and when that happens, you’ll know it happened, without even having to measure and destroy the precious photon.  The reason you’ll know is that the partner photon heralds its presence.

Alas, while SPDC sources have enabled demonstrations of a large number of cool quantum effects, there’s a fundamental problem with using them for BosonSampling.  The problem comes from the requirement that n—the number of single photons fired off simultaneously into your beamsplitter network—should be big (say, 20 or 30).  Suppose that, in a given instant, the probability that your SPDC source succeeds in generating a photon is p.  Then what’s the probability that two SPDC sources will both succeed in generating a photon at that instant?  p^2.  And the probability that three sources will succeed is p^3, etc.  In general, with n sources, the probability that they’ll succeed simultaneously falls off exponentially with n, and the amount of time you’ll need to sit in the lab waiting for the lucky event increases exponentially with n.  Sure, when it finally does happen, it will be “heralded.”  But if you need to wait exponential time for it to happen, then there would seem to be no advantage over classical computation.  This is the reason why so far, BosonSampling has only been demonstrated with 3-4 photons.
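
To get a feel for how punishing that exponential is, here’s a toy calculation; the per-pulse success probability and the pulse rate below are assumptions chosen purely for illustration.

```python
# Toy illustration of why waiting for n simultaneous SPDC successes is hopeless.
# Both numbers below are assumptions, chosen only for illustration.
p = 0.05           # probability that one SPDC source fires in a given pulse
pulse_rate = 1e6   # synchronized pump pulses per second

for n in [2, 4, 10, 20, 30]:
    p_all = p ** n                              # all n sources fire in the same pulse
    wait_seconds = 1.0 / (p_all * pulse_rate)   # expected wait for one such event
    print(f"n = {n:2d}:  P(all fire) = {p_all:.2e},  expected wait ~ {wait_seconds:.2e} s")
```

With these (made-up) numbers, n=30 already means waiting on the order of 10^33 seconds, vastly longer than the age of the universe.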

At least three solutions to the scaling problem suggest themselves, but each one has problems of its own.  The first solution is simply to use general methods for quantum fault-tolerance: it’s not hard to see that, if you had a fault-tolerant universal quantum computer, then you could simulate BosonSampling with as many photons as you wanted.  The trouble is that this requires a fault-tolerant universal quantum computer!  And if you had that, then you’d probably just skip BosonSampling and use Shor’s algorithm to factor some 10,000-digit numbers.  The second solution is to invent some specialized fault-tolerance method that would apply directly to quantum optics.  Unfortunately, we don’t know how to do that.  The third solution—until recently, the one that interested me and Alex the most—would be to argue that, even if your sources are so cruddy that you have no idea which ones generated a photon and which didn’t in any particular run, the BosonSampling distribution is still intractable to simulate classically.  After all, the great advantage of BosonSampling is that, unlike with (say) factoring or quantum simulation, we don’t actually care which problem we’re solving!  All we care about is that we’re doing something that we can argue is hard for classical computers.  And we have enormous leeway to change what that “something” is, to match the capabilities of current technology.  Alas, yet again, we don’t know how to argue that BosonSampling is hard to simulate approximately in the presence of realistic amounts of noise—at best, we can argue that it’s hard to simulate approximately in the presence of tiny amounts of noise, and hard to simulate super-accurately in the presence of realistic noise.

When faced with these problems, until recently, all we could do was

  1. shrug our shoulders,
  2. point out that none of the difficulties added up to a principled argument that scalable BosonSampling was not possible,
  3. stress, again, that all we were asking for was to scale to 20 or 30 photons, not 100 or 1000 photons, and
  4. express hope that technologies for single-photon generation currently on the drawing board—most notably, something called “optical multiplexing”—could be used to get up to the 20 or 30 photons we wanted.

Well, I’m pleased to announce, with this post, that there’s now a better idea for how to scale BosonSampling to interesting numbers of photons.  The idea, which I’ve taken to calling Scattershot BosonSampling, is not mine or Alex’s.  I learned of it from Ian Walmsley’s group at Oxford, where it’s been championed in particular by Steve Kolthammer.  (Update: A commenter has pointed me to a preprint by Lund, Rahimi-Keshari, and Ralph from May of this year, which I hadn’t seen before, and which contains substantially the same idea, albeit with an unsatisfactory argument for computational hardness.  In any case, as you’ll see, it’s not surprising that this idea would’ve occurred to multiple groups of experimentalists independently; what’s surprising is that we didn’t think of it!)  The minute I heard about Scattershot BS, I kicked myself for failing to think of it, and for getting sidetracked by much more complicated ideas.  Steve and others are working on a paper about Scattershot BS, but in the meantime, Steve has generously given me permission to share the idea on this blog.  I suggested a blog post for two reasons: first, as you’ll see, this idea really is “blog-sized.”  Once you make the observation, there’s barely any theoretical analysis that needs to be done!  And second, I was impatient to get the word out to the “experimental BosonSampling community”—not to mention to the critics!—that there’s now a better way to BosonSample, and one that’s incredibly simple to boot.

OK, so what is the idea?  Well, recall from above what an SPDC source does: it produces a photon with only a small probability, but whenever it does, it “heralds” the event with a second photon.  So, let’s imagine that you have an array of 200 SPDC sources.  And imagine that, these sources being unpredictable, only (say) 10 of them, on average, produce a photon at any given time.  Then what can you do?  Simple: just define those 10 sources to be the inputs to your experiment!  Or to say it more carefully: instead of sampling only from a probability distribution over output configurations of your n photons, now you’ll sample from a joint distribution over inputs and outputs: one where the input is uniformly random, and the output depends on the input (and also, of course, on the beamsplitter network).  So, this idea could also be called “Double BosonSampling”: now, not only do you not control which output will be observed (but only the probability distribution over outputs), you don’t control which input either—yet this lack of control is not a problem!  There are two key reasons why it isn’t:

  1. As I said before, SPDC sources have the crucial property that they herald a photon when they produce one.  So, even though you can’t control which 10 or so of your 200 SPDC sources will produce a photon in any given run, you know which 10 they were.
  2. In my and Alex’s original paper, the “hardest” case of BosonSampling that we were able to find—the case we used for our hardness reductions—is simply the one where the mxn “scattering matrix,” which describes the map between the n input modes and the m>>n output modes, is a Haar-random matrix whose columns are orthonormal vectors.  But now suppose we have m input modes and m output modes, and the mxm unitary matrix U mapping inputs to outputs is Haar-random.  Then any mxn submatrix of U will simply be an instance of the “original” hard case that Alex and I studied!

More formally, what can we say about the computational complexity of Scattershot BS?  Admittedly, I don’t know of a reduction from ordinary BS to Scattershot BS (though it’s easy to give a reduction in the other direction).  However, under exactly the same assumption that Alex and I used to argue that ordinary BosonSampling was hard—our so-called Permanent of Gaussians Conjecture (PGC)—one can show that Scattershot BS is hard also, and by essentially the same proof.  The only difference is that, instead of talking about the permanents of nxn submatrices of an mxn Haar-random, column-orthonormal matrix, now we talk about the permanents of nxn submatrices of an mxm Haar-random unitary matrix.  Or to put it differently: where before we fixed the columns that defined our nxn submatrix and only varied the rows, now we vary both the rows and the columns.  But the resulting nxn submatrix is still close in variation distance to a matrix of i.i.d. Gaussians, for exactly the same reasons it was before.  And we can still check whether submatrices with large permanents are more likely to be sampled than submatrices with small permanents, in the way predicted by quantum mechanics.
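
Here’s a small numerical sketch of the objects involved (the sizes m and n are toy values chosen only for illustration): draw an mxm Haar-random unitary, pick n random “heralded” input columns and n random output rows, and check that the entries of the resulting nxn submatrix look like i.i.d. complex Gaussians of variance 1/m.

```python
import numpy as np

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Ginibre matrix, with phase fix."""
    Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(1)
m, n = 400, 4                                  # toy sizes; a real experiment wants m >> n^2
U = haar_unitary(m, rng)
cols = rng.choice(m, size=n, replace=False)    # input modes whose heralds fired
rows = rng.choice(m, size=n, replace=False)    # output modes where detectors clicked
sub = U[np.ix_(rows, cols)]

# For large m, sqrt(m) * sub should look like an n x n matrix of i.i.d. standard
# complex Gaussians, which is exactly the ensemble in the Permanent of Gaussians
# Conjecture.
print(np.var(np.sqrt(m) * sub))                # roughly 1 (noisy: only n*n = 16 entries)
```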

Now, everything above assumed that each SPDC source produces either 0 or 1 photon.  But what happens when the SPDC sources produce 2 or more photons, as they sometimes do?  It turns out that there are two good ways to deal with these “higher-order terms” in the context of Scattershot BS.  The first way is by using number-resolving detectors to count how many herald photons each SPDC source produces.  That way, at least you’ll know exactly which sources produced extra photons, and how many extra photons each one produced.  And, as is often the case in BosonSampling, a devil you know is a devil you can deal with.  In particular, a few known sources producing extra photons, just means that the amplitudes of the output configurations will now be permanents of matrices with a few repeated rows in them.  But the permanent of an otherwise-random matrix with a few repeated rows should still be hard to compute!  Granted, we don’t know how to derive that as a consequence of our original hardness assumption, but this seems like a case where one is perfectly justified to stick one’s neck out and make a new assumption.

But there’s also a more elegant way to deal with higher-order terms.  Namely, suppose m>>n^2 (i.e., the number of input modes is at least quadratically greater than the average number of photons).  That’s an assumption that Alex and I typically made anyway in our original BosonSampling paper, because of our desire to avoid what we called the “Bosonic Birthday Paradox” (i.e., the situation where two or more photons congregate in the same output mode).  What’s wonderful is that exactly the same assumption also implies that, in Scattershot BS, two or more photons will almost never be found in the same input mode!  That is, when you do the calculation, you find that, once you’ve attenuated your SPDC sources enough to avoid the Bosonic Birthday Paradox at the output modes, you’ve also attenuated them enough to avoid higher-order terms at the input modes.  Cool, huh?
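
The arithmetic behind that claim fits in a few lines; the specific m and n below are assumptions, picked only to illustrate the scaling.

```python
# Rough scaling check, not a rigorous calculation: assume each of the m sources is
# attenuated so the expected total photon number is n, i.e. each source fires with
# probability about p = n/m, and its chance of emitting 2+ photons is of order p^2.
m, n = 9000, 30                      # assumed toy values, m = 10 * n^2

p = n / m
p_any_double_input = m * p**2        # union bound: some source emits 2+ photons
p_output_collision = n**2 / (2 * m)  # classical "birthday" estimate at the outputs

print(f"P(some input mode gets 2+ photons)  ~ {p_any_double_input:.3f}")
print(f"P(some output mode gets 2+ photons) ~ {p_output_collision:.3f}")
# Both shrink like n^2/m, i.e. both become negligible exactly when m >> n^2.
```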

Are there any drawbacks to Scattershot BS?  Well, Scattershot BS certainly requires more SPDC sources than ordinary BosonSampling does, for the same average number of photons.  A little less obviously, Scattershot BS also requires a larger-depth beamsplitter network.  In our original paper, Alex and I showed that for ordinary BosonSampling, it suffices to use a beamsplitter network of depth O(n log m), where n is the number of photons and m is the number of output modes (or equivalently detectors).  However, our construction took advantage of the fact that we knew exactly which n<<m sources the photons were going to come from, and could therefore optimize for those.  For Scattershot BS, the depth bound increases to O(m log m): since the n photons could come from any possible subset of the m input modes, we no longer get the savings based on knowing where they originate.  But this seems like a relatively minor issue.

I don’t want to give the impression that Scattershot BS is a silver bullet that will immediately let us BosonSample with 30 photons.  The most obvious limiting factor that remains is the efficiency of the photon detectors—both those used to detect the photons that have passed through the beamsplitter network, and those used to detect the herald photons.  Because of detector inefficiencies, I’m told that, without further technological improvements (or theoretical ideas), it will still be quite hard to push Scattershot BS beyond about 10 photons.  Still, as you might have noticed, 10 is greater than 4 (the current record)!  And certainly, Scattershot BS itself—a simple, obvious-in-retrospect idea that was under our noses for years, and that immediately pushes forward the number of photons a BosonSampler can handle—should make us exceedingly reluctant to declare there can’t be any more such ideas, and that our current ignorance amounts to a proof of impossibility.

91 Responses to “Scattershot BosonSampling: A new approach to scalable BosonSampling experiments”

  1. Daniel Freeman Says:

    Has anyone considered doing some sort of feedback Boson Sampling? I have in mind putting a beamsplitter right before the detector, and probabilistically shuttling photons back to their input channel (possibly even the input channel they originated from in the case of scattershot boson sampling).

    I don’t know enough quantum optics to know if it’s experimentally feasible, nor do I have enough intuition to say anything about how the probability distribution on your output channel is modified, but these sorts of feedback ideas allow you to do all kinds of tricks from the perspective of quantum control.

  2. Scott Says:

    Daniel #1:

      Has anyone considered doing some sort of feedback Boson Sampling?

    I guess you have, just now! 🙂

    Seriously, that’s a very interesting idea, which we should think more about. Certainly it could serve as a cheap way to increase the “effective depth” of an optical network. But the most immediate problem I can see is that, if you probabilistically route some of your photons to the detectors and others back to the beginning of the beamsplitter network, then you’ve ipso facto made those two sets of photons distinguishable from each other, and destroyed the possibility of interference involving all your photons.

  3. Lou Scheffer Says:

    For dealing with the case where a source generates 2 or more photons – can’t you simply notice when that happens (since the herald photons will also be doubled) and then ignore that trial?

  4. Alan Aspuru-Guzik Says:

    Dear Scott,

    Fantastic post. I am looking forward to seeing which of the quantum optics labs does the first scattershot BosonSampling experiment! This post also let me understand BosonSampling better. Thanks for the awesome blog.

  5. J Says:

    Can we ever get to sampling say 10^6 photons? Does the BosonSampling thing have anything to do with factoring?

  6. Anthony Says:

    Isn’t it basically the same idea as in this paper
    http://arxiv.org/abs/1305.4346v1
    which was since retracted?

  7. Scott Says:

    Anthony #6: Thanks so much for pointing me to that paper! I somehow never saw it before (even though I’ve had some contact with its authors), or if I saw it then I’ve forgotten. I’ll certainly add a comment about it to the original post.

    Two remarks:

    (1) If I had seen the paper, I would’ve been thrown off by an unsatisfactory argument they make for why Scattershot BS should be hard: namely, because a Scattershot BS distribution contains a bunch of “ordinary” BosonSampling distributions as sub-distributions (each one with exponentially small probability). That connection between the two problems does give you a reduction, but it’s in the wrong direction. Alternatively, from the fact that a scaled version of some particular BosonSampling distribution occurs in the Scattershot BS distribution, it’s clear that Scattershot BS should be hard to simulate exactly unless PH collapses, but that’s not really what we want either. It’s far better just to say that, under the same assumption (Permanent of Gaussians Conjecture) we used to get that ordinary BS was hard to approximate, one can also get that Scattershot BS is hard to approximate, by trivial modifications to our original proof.

    Now, you could argue that, as a complexity theorist, I should’ve been able myself to correct the problem with their argument—that if I couldn’t, it would be more my fault than the authors’. That’s true, but I simply don’t have that much confidence in my own ability to realize obvious-in-retrospect things! 🙂

    (2) The authors write: “This paper has been withdrawn by the authors whilst a non-technical issue is resolved.” I have no idea what the “non-technical issue” is that caused them to temporarily withdraw. In any case, the only thing I could see that was unsatisfactory on a technical level was the argument for hardness, and that I just corrected for them! 🙂

  8. Scott Says:

    Lou Scheffer:

      For dealing with the case where a source generates 2 or more photons – can’t you simply notice when that happens (since the herald photons will also be doubled) and then ignore that trial?

    Yes, absolutely, that’s exactly what you can and should do, provided the multi-photon case occurs with probability bounded away from 1. In my “Solution #1,” however, I was assuming that the multi-photon case occurs almost all the time, so that you don’t have the luxury of ignoring it: you have to try to take advantage of it, and argue that the problem it solves is still hard (otherwise, you’ll simply be waiting too long for collision-free trials).

  9. Scott Says:

    J #5:

      Can we ever get to sampling say 10^6 photons?

    Probably not, but I don’t know a good reason to want to, anyway!

    As Alex and I have often pointed out, with 10^6 photons it’s not even obvious how you’d efficiently verify that a BosonSampling device was working correctly, let alone simulate it classically! (For more about verification of BosonSampling, see this paper.) Furthermore, the only known “application” of BosonSampling is as a proof-of-principle, showing that a quantum device can do something “faster” than a classical simulation, and for that 30-40 photons should be perfectly sufficient.

      Does the BosonSampling thing have anything to do with factoring?

    Well, a BosonSampling device could certainly be a useful stepping-stone toward a KLM quantum computer (to get from one to the other, you just need to add adaptive measurements). And a KLM QC would be universal, so could certainly factor.

    On the other hand, BosonSampling itself has no known cryptographic or number-theoretic applications. (Though, if all you care about is evidence for classical hardness, not usefulness, then the evidence for hardness is arguably stronger for BosonSampling than it is for factoring.)

  10. rrtucci Says:

    Scott, has the possibility of a Twitter War with Lockheed Martin crossed your mind?

  11. Scott Says:

    rrtucci #10: Before your comment, no, it hadn’t! And I think it just exited my mind out the other end.

  12. Vadim Says:

    Possibly a very ignorant question on my part since I know even less physics than I do TCS, but does the experiment have to be done with photons, or could it be done with composite bosons like helium-4?

  13. Scott Says:

    Vadim #12: In principle you could use any excitations that behaved as identical, non-interacting bosons—even Higgs bosons! (Indeed, it’s for precisely this reason that we decided to call the problem BosonSampling rather than, say, PhotonSampling.)

    Photons are merely an “obvious” choice, one for which much of the required technology is already standard. But sure, people have also discussed using (e.g.) bosonic excitations in solid state. And just a couple weeks ago, a preprint appeared on the arXiv (which is still on my to-read pile) proposing to implement BosonSampling using trapped ions. I haven’t seen helium-4 discussed yet, and have no idea what its experimental advantages or disadvantages might be.

  14. I can't resist Says:

    OMG what a load of BS!!!

  15. aram Says:

    Along the lines of comment #14, Scattershot BS is a great way to get into Science and Nature!

  16. Scott Says:

    I can’t resist #14 and aram #15: Glad people are taking me up on what amounted to an open invitation to BS jokes…

  17. Rahul Says:

    Naive question probably: But how does the efficiency of the current photon detectors make Scattershit (ok, shot) BS harder? Can someone explain?

  18. John Sidles Says:

    The BosonSampling Principle (Scott #13):
    “In principle we can use [for BosonSampling experiments] any excitations that behave as identical, non-interacting bosons

    A simulationist’s dual to Scott’s assertion is

    The BosonSimulation Principle
    “In principle we can simulate BosonSampling trajectories on any state-space that supports identical, non-interacting bosons.”

    History assists us as follows. Quantum chemist John Pople’s Gaussian algorithm teaches that algebraically natural state-spaces are preferred; physicist Roy Glauber’s optics teaches that coherent states are physically natural; Dorje Brody and Eva-Maria Graefe’s “Coherent states and rational surfaces” (arXiv:1001.1754) reconciles Pople’s algebraic insight with Glauber’s physical insight, and Joseph Landsberg’s “Geometry and the complexity of matrix multiplication” (arXiv:cs/0703059) concretely teaches that a suitable state-space manifold is $$\sigma_r\,\text{Segre}\big(\mathbb{G}^k\times\mathbb{G}^k\times\dots (n\,\text{times})\big) \subset \mathbb{P}^{(k+1)^n-1}$$where \(n\) is the number of dynamically coupled photon modes, \(k\) is an \(\text{O}(n)\) Fock-number cut-off, the Segre map is natural, \(\sigma_r\) is a rank-\(r\) secant join (per Landsberg’s notation), and \(\mathbb{G}^k\) is the Glauber/Pople coherent state that the Veronese map associates to points in \(\mathbb{P}^{k}\):$$\begin{aligned}\mathbb{G}^k(a,b)=&\textstyle{\big[{{k}\choose{0}}\,{a^0}{b^{k}},{{k}\choose{1}}\,{a^1}{b^{k-1}},\,\dots\,,}\\ &\textstyle{\quad{{k}\choose{k-1}}\,{a^{k-1}}{b^{1}},{{k}\choose{k}}\,{a^{k}}{b^0}}\big] \mapsto \mathbb{P}^{k}\end{aligned}$$where \(a,b\) are complex coherent-state coordinates (for a basic background see Lecture 2 of Joe Harris’ Algebraic Geometry; for physical applications see eq. 15 of Brody and Graefe; for algebraic examples see Section 5.5 of Landsberg’s Tensors: Geometry and Applications).

    The resulting varietal state-spaces algebraically have dimension \(r(n+1)-1\) (that is, polynomial-in-\(n\) dimension). Advantageously for simulation purposes, these state-spaces are “foamy”, in that they “fill” the usual quantum Hilbert space of dimension \((k+1)^n\), and moreover they preserve *almost* all of the usual Hamiltonian, thermodynamical, and quantum “goodness” of that Hilbert space (that is, conserved quantities are conserved, thermodynamical laws Zero through Three are respected, and the standard quantum uncertainty principles are respected too).

    The missing “quantum goodness” resides (of course) in the high-order quantum coherences that are essential to both quantum computing and BosonSampling.

    Suppose therefore that hostile aliens have come to Earth, and inspired by Erdős’s Ramsey(5,5) problem, these aliens command us to “show data from a 100-photon BosonSampling experiment, or be destroyed.”

    Absent a scalable BosonSampling experiment, we pretend that Earth’s sole option is to simulate data by unravelling trajectories on the above Segre/Veronese foamy-yet-thermodynamical varietal state-spaces. Fortunately the optimistic teaching of Pople, Glauber, Landsberg (and this year’s Chemistry Nobelists too) is that quantum simulation methods have sometimes worked very much better than simple arguments suggest.

    The Open BosonSimulation Question  What computational tests (if any) can the hostile aliens apply to Earth’s varietally simulated dataset, to demonstrate that the data are simulated rather than real? And do these tests require exponential computing resources and/or exponentially large data samples?

    Thank you Scott and Alex (Arkhipov), for so ably inspiring *both* experimentalists and simulationists!

  19. William Hird Says:

    If this was my field, I would look at ways to engineer “buckyballs”, or buckminsterfullerene cages to emit single photons. Seems to me a possible way to control a single atom and get it to emit photons with some precision. Just an idea!

  20. Jonathan Dowling Says:

    Scott:

    This is a very nice development.

    My only quibble is with the statement, “…there’s no known process in nature that can serve as a deterministic single-photon source.” This statement is overly strong and that along with the tone of the blog gives the impression that heralded SPDC is the only single-photon-source game in town.

    A great deal of effort has been invested in developing single photon sources for the past 20 years for use in quantum key distribution and linear optical quantum information processing. This work has led to the development of a number of single photon sources primarily using quantum dots in optical cavities, microwave qubits in strip-line cavities, and atoms in microwave or optical cavities. Often these are more of a headache than SPDC, but they do exist.

    These sources are not perfect yet; they do not always produce a single photon on demand and sometimes they produce more than one, but the improvement of these sources is a technological matter and not some fundamental physical roadblock. They are currently much better than SPDC as single-photon sources.

    What would be interesting is to examine these other single photon sources in scatter-shot BosonSampling, since the input statistics will be much closer to what you and Alex originally proposed.

    Cheers,

    Jon

  21. Scott Says:

    Rahul #17:

      how does the efficiency of the current photon detectors make Scattershit (ok, shot) BS harder? Can someone explain?

    Well, suppose (for example) that you fail to detect some of your n photons as they emerge from the mxm beamsplitter network. Then you don’t know exactly what the output state was. Therefore, you don’t know the complete set of rows of the nxn submatrix of your mxm unitary matrix, whose squared permanent you should calculate and add to your probability histogram, to compare to the predictions of quantum mechanics. At best, you can narrow the set of rows down to choose(m-(n-k),k) possibilities, where k is the number of lost photons. Likewise, if you fail to detect k of the herald photons, then you don’t know exactly what the input state was. Therefore you don’t know the complete set of columns of your nxn submatrix, and can at best narrow it down to choose(m-(n-k),k) possibilities.
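
    To see how quickly that ambiguity grows with the number of lost photons, here’s the binomial count for some made-up toy sizes:

```python
from math import comb

m, n = 100, 10          # assumed toy sizes: m modes, n photons
for k in range(4):      # k = number of lost photons
    # the k undetected photons could occupy any k of the m-(n-k) modes
    # where no photon was seen
    print(f"k = {k}:  {comb(m - (n - k), k)} candidate sets of rows/columns")
```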

  22. Scott Says:

    Jon #20: Thanks for the highly informative comment! I knew, of course, that there were other ways to produce single photons besides SPDC, but I didn’t know how much further along they were toward being deterministic sources. (That might be because, every time I’ve met with optics people, almost every sentence out of their mouths has been SPDC this and SPDC that!)

    Of course, reliable enough single-photon sources would obviate the need for Scattershot BS. But it seems more likely that one will need to combine every trick available…

  23. John Sidles Says:

    Jonathan Dowling foresees:
    “Improvement of [large-\(n\) mutually coherent on-demand] photon sources is a technological matter and not some fundamental physical roadblock.”

    Scott Aaronson foresees:

    “It seems more likely that one will need to combine every trick available.”

    The former proposition perennially inspires discourse, while the latter proposition is (among experimentalists at least) “a truth universally acknowledged.”

    Oft-cited articles like Pittman et al. “Can Two-Photon Interference be Considered the Interference of Two Photons?” (1996) and recent advances like McMillan et al. “Two-photon interference between disparate sources for quantum networking” (2013) show us that these considerations are mathematically challenging and physically subtle.

    These experiments help too to appreciate the robust common sense of William Lamb’s oft-cited sardonic diatribe “Anti-photon” (Applied Physics B, 1996):

    “It should be apparent from the title of this article that the author does not like the use of the word “photon”, which dates from 1926. In his view, there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed.”

    A Google search finds all three of these outstanding articles.

    Summary  The existing literature provides ample reasons that we should be (as Scott says) “exceedingly reluctant to declare that our current ignorance amounts to proofs” in regard to foreseeing the feasibility-versus-infeasibility of scalable QIT technologies, and similarly in regard to foreseeing the fundamental limits to efficient computational simulation of those technologies.

    Carlton Caves’ advice is well-considered too: “Physicists are really obliged to give every explanation they can think of.” It is Caves’ path that has led us most reliably toward fundamental math-and-physics advances, particularly in regard to quantum technologies.

    Conclusion  The present era’s abundance of yet-unexplored mathematical paths for explaining and computationally simulating large-scale quantum systems is good news for young STEM professionals, and augurs well for continued quantum advances, adventures, excitement, and enterprises.

  24. Gil Kalai Says:

    John referred to the following story told by Joel Spencer:

    “Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5,5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6,6). In that case, he believes, we should attempt to destroy the aliens.”

    (Interestingly, these Ramsey numbers are of some relevance to our recent computational complexity discussions over here. Even more interestingly, a recent paper on a D-wave experiment attempts to compute Ramsey numbers.)

    Going out on a limb, I would rephrase Erdős’s statement as follows: Imagine an alien force, vastly more powerful than us, landing on Earth and demanding that we exhibit (genuine) BosonSampling capability for 5 bosons, or they will destroy our planet. In that case, we should marshal all our quantum engineering forces and attempt to carry it out. But suppose, instead, that they ask for BosonSampling for 10 bosons. In that case, we should attempt to destroy the aliens.

    I would not expect much difference between genuine BosonSampling and genuine FermionSampling.

  25. rrtucci Says:

    Scott, sorry for the suggestion of a Twitter War, I don’t much like Twitter so I don’t blame you for not wanting to air your views about this that way. One last idea: How about if you write a periodic blog post (every six months or so) assessing the progress of Lockheed Martin/NASA/Google adiabatic quantum computing efforts as you see them (or maybe as you and some collaborators like Matthias Troyer see them). It would be a civilized way of fostering competition and airing differences, and a public service of sorts, similar to how the president’s state of the union speech is followed by a state of the union rebuttal. Oh, and by the way, sorry for being off topic. I think the BosonSampling stuff is cool, but I have nothing useful to add to it.

  26. Jay Says:

    @Scott, thank you for this crystal clear post

    @Gil, thanks I was wondering what your thoughts were. Just to clarify: will you change your mind entirely if experimentalists actually do the job for 10 bosons? Also, don’t you think if aliens were asking for that, 10’d be low enough we could compute it classically?

  27. J Says:

    All the alien forces talk is making me want to ask: can quantum computing or BSP say anything about verifying the impossible quickly – that is, can it answer the NP=coNP question?

  28. Gil Kalai Says:

    Dear Jay, good questions! I did not refer to the computational task of BosonSampling but to actually creating 20-level bosons and a bosonic state for 10 bosons based on a 10-by-20 complex matrix. (This is why I said that I expect the same for FermionSampling, which can be classically simulated even for 1000 fermions without difficulty.)

    Regarding changing my mind, there are experimental efforts which are much closer and which could cause me to change my mind. Here I refer to obtaining robust qubits based on implementation of the surface codes (or other quantum codes). There are two strong experimental groups attempting to achieve it with surface codes and other groups with related experimental endeavors. One group led by John Martinis expects to create robust qubits based on a 35-qubit surface code in a few months, even more robust qubits based on a little over 100 physical qubits (in 2-3 years), and extremely robust qubits based on 1000 or so physical qubits (longer-term with sufficient resources). I would expect my conjectures to “show up” even in the 35-qubit case.

    Success in creating logical qubits substantially more robust than “raw” physical qubits, which John’s and various other experimental groups are now attempting, will smash my conjectures to pieces.

  29. Rahul Says:

    Even with Scattershot BS, you still need to wait for a synchronized photon emission from at least, say, 10 of your 200 SPDC sources, right?

    If so, what’s the probability of that event, do we have an estimate? So far as I remember doesn’t this probability sharply fall as n increases?

    What’s the largest number of simultaneous photons we have yet seen from an SPDC expt. and what was the freq. in MHz of that event combination?

    Again, perhaps naive questions.

  30. Scott Says:

    Rahul #29: You’ll have to ask the experimentalists for estimates involving the SPDC sources that exist today. What I can tell you is that the big difference with Scattershot BS + SPDC sources, compared to ordinary BS + SPDC sources, is that the asymptotics are actually working in your favor rather than against you. Thus, suppose you have (let’s say) n^2 SPDC sources. Then even if each one has only a 1/n probability of producing a photon in any given (synchronized) run, you’ll still get ~n photons on average—not in an exponentially small fraction of runs, but in almost every run. The main drawbacks are:

    (1) Ironically, in order for this to work, your beamsplitter network needs to be large enough (!).
    (2) You’re still limited by the efficiency of your photodetectors.

  31. Scott Says:

    J #27: No, I don’t know any good reason to think that a universal quantum computer, much less a BosonSampler, would help us answer the NP vs. coNP question. Alas, that one will probably require the computers between our ears!

  32. Scott Says:

    rrtucci #25: Err, I do write a blog post commenting on the latest out of the “D-Wave archipelago” roughly every 6 months or so. (Usually, not because I want to, but because other people spur me into it!)

  33. Scott Says:

    Gil #24: Wow, I guess my risk assessment is different than yours. If it came down to BosonSampling with n=10 photons, versus a war to the death against a superior alien race, I’d certainly at least want to try the former before opting for the latter. Especially if the aliens were gracious enough to allow Scattershot BosonSampling. 🙂

    As for n=5 photons, that could be done today, if anyone had the motivation and funding. (Recall that the Oxford group has already done 4 photons.)

  34. Rahul Says:

    Scott #30:

    I agree with you about the asymptotics, but my suspicion is that for the BS experimental size of interest the probability of a 10 detector coincidence is going to be minuscule.

    My intuitive fear is that desynchronization is going to throw a spanner in the works analogous to decoherence for conventional QCs. Of course, this is speculative so I’d love to see empirical synch data for 3,4,5 simultaneous photons.

    I could be totally wrong. I’d love to see an experimentalist comment.

    Your theory seems impeccable, it’s the messy experimental reality that I’m worried about.

  35. Jay Says:

    > I would expect my conjectures to “show up” even in the 35-qubit case. (…) [otherwise] smash my conjectures to pieces.

    Fair and exciting! Not sure if I wish you’re wrong or if I wish you’re right

    > I did not refer to the computational task of BosonSampling but to actually creating 20-levels bosons and a bosonic state for 10-bosons based on 10 by 20 complex matrix.

    Ok so some Aliens put a gun on our heads and will pull the trigger if we don’t manufacture a “quantum black box”, but the black box is allowed to compute an easy task.

    No problem, we’re safe! Just put Dwave in charge. 🙂

  36. RaulGPS Says:

    Scattershot BS seems to be a subset of a family of Boson Sampling problems that I like to call Gaussian Boson Sampling (sampling the modes of a non-trivial Gaussian state in the photon-number basis), for which it is easy to prove exact sampling to be hard [as you mention in your Boson Sampling paper in Theory of Computing, page 236, open question (4)], but hardness of approximation is not known. If it is true that you (Scott and Alex) “can also get that Scattershot BS is hard to approximate” (by trivial modifications to your original proof), it is plausible that you can extend it to Gaussian Boson Sampling, which could be potentially interesting for implementation purposes.

    Scattershot BS is a subset of Gaussian Boson Sampling because you can always see Scattershot BS as a Gaussian Boson Sampling problem where you prepare N two-mode squeezed vacuum states (2N modes in total), apply a random linear-optics circuit [unitary U(N)] over the odd-numbered modes and the identity on the even-numbered modes, and finally sample in the photon-number basis on all 2N outputs. The heralding detectors are simulated by the even-mode detectors.

  37. Joshua Zelinsky Says:

    “Ironically, in order for this to work, your beamsplitter network needs to be large enough”

    Scott, can you explain why this is the case? This doesn’t seem obvious to me.

  38. Scott Says:

    Joshua #37: Maybe it was a confusing way to put things. All I meant was that in Scattershot BS, if you have m SPDC sources, and each one has a probability p of producing a photon in a given run, then you’ll get m*p heralded single photons on average (ignoring the higher-order terms—which we can if p<<1/√m). By contrast, if you do ordinary BosonSampling, the probability that each of your SPDC sources spits out a single photon at once decreases like p^m. So there’s a sense in which, whereas with ordinary BS the situation gets worse the more sources you add, with Scattershot BS it gets better.
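
    With made-up values of m and p, the contrast looks like this:

```python
import math

# Illustrative values only: m SPDC sources, each firing with probability p per run.
m, p = 400, 0.05
print("Scattershot BS: expected heralded photons per run ~", m * p)                # 20
print("Ordinary BS: log10 of P(all m sources fire at once) ~", m * math.log10(p))  # about -520
```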

  39. Joshua Zelinsky Says:

    Ah, gotcha. That makes sense.

  40. Raoul Ohio Says:

    Scott, re #32:

    Given that you are likely to be goaded into doing this about every six months anyway, consider announcing a series of special SO editions on the topic. For example, each Jan 1 and July 1, everyone with a report on the topic can post.

  41. J Says:

    Scott#9:

    I thought BosonSampling approximates Permanent. If you can approximate an $n \times n$ permanent, you can approximate $n!$ in $\log_2^k{n}$ steps (I think according to http://math-www.uni-paderborn.de/agpb/work/6dich.pdf)

    Then this essentially can be used to verify your BosonSampling device no?

  42. J Says:

    continuation: One must be able to approximate $n!$ modulo any $\log{n}$-bit number efficiently.

  43. Aaron F. Says:

    Or to say it more carefully: instead of sampling only from a probability distribution over output configurations of your n photons, now you’ll sample from a joint distribution over inputs and outputs…. [N]ow, not only do you not control which output will be observed (but only the probability distribution over outputs), you don’t control which input either….

    Recently, I’ve been finding myself drawn more and more toward this way of thinking about quantum computing, which makes quantum gate sets look almost exactly like topological quantum field theories. In this analogy, a circuit diagram is like a spacetime region, and its inputs and outputs are like the inward- and outward-oriented boundaries of the region. Instead of thinking of the circuit as describing a transformation from inputs to outputs, you can think of it as describing a state on the boundary—and instead of thinking about experimentally “setting the inputs and then measuring the outputs,” you can think about probing the boundary state by testing how likely the circuit is to attach to various other circuits along the boundary.

  44. Scott Says:

    J #41: No, the problem is that, as I said in the post, BosonSampling doesn’t let you calculate the permanent of a given matrix, only sample from a certain probability distribution in which the probabilities are defined in terms of permanents.

  45. Mateus Araújo Says:

    An almost off-topic comment: every time I ask my experimentalists about doing something involving number-resolving photon detectors, they look back at me in despair and ask: “Do you \emph{really} need it?”

  46. Scott Says:

    Mateus #45: Yes, I’ve experienced that as well!

    With BosonSampling, the basic story is that you don’t need number-resolving detectors, if you instead have a large number of very reliable non-number-resolving detectors.

    For simplicity, first consider “ordinary” (non-scattershot) BS. Then provided m>>n^2 (i.e., the number of output modes is at least quadratically greater than the number of photons), and provided your scattering matrix is Haar-random, with high probability you’ll avoid the “bosonic birthday paradox,” meaning that all n of your photons will land in separate output modes. So then w.h.p., all your occupation numbers are either 0 or 1, and you just need detectors that can distinguish those two cases. (Provided, of course, that your detectors are actually reliable enough to let you account for all n of the photons—so that, if you see fewer than n, you can figure it was probably a collision rather than a photon loss.)

    Likewise, in the case of Scattershot BS, if m>>n^2 then you can attenuate your SPDC sources to the point where, in a 1-ε fraction of runs, every source produces either 0 or 1 photons. So then distinguishing those cases again suffices (though not to detect the ε fraction of cases where you did get higher-order terms).
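
    A crude classical stand-in for the bosonic birthday paradox (it ignores boson bunching, so treat it only as a scaling illustration, with assumed toy sizes): throw n photons into m output modes uniformly at random and see how often two land in the same mode.

```python
import numpy as np

rng = np.random.default_rng(2)

def collision_prob(n, m, trials=20000):
    """Monte Carlo estimate of P(some output mode receives 2+ of the n photons)."""
    hits = 0
    for _ in range(trials):
        modes = rng.integers(0, m, size=n)
        if len(np.unique(modes)) < n:
            hits += 1
    return hits / trials

n = 10
for m in [100, 1000, 10000]:
    print(f"m = {m:5d}:  P(collision) ~ {collision_prob(n, m):.3f}  "
          f"(rough estimate n^2/(2m) = {n * n / (2 * m):.3f})")
```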

  47. Scott Says:

    RaulGPS #36: That’s an awesome observation—thanks so much!! Yes, just as you said, we can use Scattershot BS to show that “ordinary” BosonSampling with Gaussian inputs and number-resolving detectors is hard to simulate even approximately, under exactly the same assumption (the Permanent-of-Gaussians Conjecture) that works for Fock-state inputs. And that resolves one of the open problems from our paper.

    Incidentally, sorry for the delay! I wanted a few quiet minutes while Lily was napping to think about your comment and verify its correctness. Now that I have, though, I think I’ll add something about it to the original post.

  48. Rahul Says:

    Scott:

    You should have one of your experimental collaborators guest blog on this topic.

    I’d really love to hear more experimental perspectives. e.g. how feasible is this? How far from n=10 are we? What are potential roadblocks? etc.

  49. Nathan Says:

    Just wanted to point out that the Lund et al. arXiv paper has actually been updated, i.e. unwithdrawn, with a new version 2 days ago at http://arxiv.org/abs/1305.4346v3

  50. Mateus Araújo Says:

    Scott #46: Thanks for taking the time to give more details! But I had understood that number-resolving photon detectors were unnecessary, that’s why I said my comment was almost off-topic. I just thought it would be interesting to point out that using them is a pain in the ass.

    But your sentence made me curious: when you see fewer than n photons, do you just discard the run? Does this imply a hard lower bound on the detection efficiency of, say, \(\eta \ge (2/3)^{1/n}\)?

  51. Scott Says:

    Mateus #50: Well, if you have non-number-resolving detectors and you see fewer than n photons, then at least one photon is unaccounted for. So the probability of the outcome is not a squared permanent, but a sum of many squared permanents (one for every possible combination of lost photons). And unfortunately, it becomes iffy to argue for the computational hardness of estimating such sums—we have some ideas about how to do it, but we’re still working out the details, and in any case our arguments will probably only be convincing for a small number of lost photons. So sure, you can keep those runs, but they won’t be as interesting computationally as the ones where you can account for all of the photons.
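
    For concreteness, here is a toy version of that sum (all sizes and the matrix are made up, and normalization and collision terms are ignored): with k lost photons, you add one squared permanent for each way of assigning the missing photons to the modes where nothing was seen.

```python
import itertools
import numpy as np

def permanent(A):
    """Naive O(n! * n) permanent; fine for toy sizes."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(3)
m, n, k = 8, 3, 1                      # toy sizes: modes, photons, lost photons
Q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
U = Q                                  # unitary (not exactly Haar, fine for a toy)

cols = [0, 1, 2]                       # input modes that fired (heralded)
seen_rows = [4, 5]                     # detectors that clicked (n - k of them)
unseen = [r for r in range(m) if r not in seen_rows]

prob = sum(abs(permanent(U[np.ix_(seen_rows + list(extra), cols)])) ** 2
           for extra in itertools.combinations(unseen, k))
print(prob)   # a sum of choose(m-(n-k), k) squared permanents, not just one
```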

  52. Joshua Zelinsky Says:

    This seems to raise another question: Is there any unexpected collapse we can show occurs if BosonSampling is enough to simulate BQP? I.e., something like: if one has an oracle O that simulates boson computers in the sense of Theorem 1 from your original paper with Alex Arkhipov, can we then get some unexpected inclusion of the form BQP <= C^O for some complexity class C where it is not known that BQP <= C? Such a result would suggest that BosonSampling does encompass some non-trivial amount of BQP.
    Alternatively, are there any collapses that can occur from examples of C that are unlikely? It would be neat, for example, if one could have some result of the form: if BQP <= P^O then the polynomial hierarchy collapses (or some other unexpected collapse occurs), or alternatively, something like BQP <= PH^O. Both of these sound like they might be difficult.

  53. jim Says:

    For those of you interested in more D-Wave news, Matthias Troyer gave the general Physics and Applied Physics Colloquium at Stanford today.

    The full taped video of the colloquium is available for download: http://nitsche.mobi/2013/troyer/

  54. John Sidles Says:

    This month’s (December 2013) issue of Notices of the AMS includes a short account by Lockheed-Martin’s Richard H. Warren titled “Numeric Experiments on the Commercial Quantum Computer.”

    Historically inclined Shtetl Optimized readers will recall that the great algebraic geometer Solomon Lefschetz played a founding role in Lockheed-Martin’s celebrated mathematical institute of the 1950s-1960s, namely the Research Institute for Advanced Studies (RIAS).

    Two of the Clay Institute’s Millennium Prize problems, namely the Hodge Conjecture and the Navier Stokes Conjecture, owe much to this celebrated era of Lockheed-Martin/Solomon Lefschetz RIAS research.

    It is natural to ask, is history repeating itself? And it is natural too to reflect that DWAVE’s devices implement a computational dynamic flow (per Navier-Stokes) upon an algebraic state-space (per Lefschetz/Hodge) such that a profound class of mathematical problems is in play.

    Conclusion  It is mathematically natural to *hope* that Navier–Stokes/Lefschetz/Hodge/Tate (etc.) history is repeating itself at D-Wave/Lockheed-Martin!

  55. Scott Says:

    Joshua #52: You ask some excellent questions.

    I don’t know how to prove that any unlikely complexity-theoretic consequence would follow, if BosonSampling were universal for BQP. I agree that such a result would be great.

    My strongest evidence that BosonSampling is not universal for BQP, just comes from considering the sizes of the respective Lie groups: the group of unitary transformations acting on n qubits is \(\sim 2^{2n}\)-dimensional, whereas the group of unitary transformations acting on n optical modes is merely \(\sim n^2\)-dimensional (it then gets lifted, via homomorphism, to a tiny subgroup of the group of unitary transformations on an exponentially-larger Hilbert space). By the usual standards of complexity theory, this sort of argument doesn’t count for much. But it does show that there can’t be any “faithful mapping” from the gates in a qubit-based quantum circuit to the linear-optical gates, and it seems pretty convincing intuitively.
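
    (Editorial aside: a small numerical sketch, not part of the comment, tabulating the two dimension counts above together with the dimension \(\binom{m+n-1}{n}\) of the n-photon, m-mode Hilbert space into which the mode unitaries are lifted; here m is taken equal to n for comparison.)

        # Sketch: dim U(2^n) = 4^n for n qubits, versus dim U(m) = m^2 for m modes;
        # the m-mode unitaries act, via homomorphism, on an n-photon Hilbert space
        # of dimension C(m+n-1, n).
        from math import comb

        for n in (2, 5, 10, 20):
            m = n                                # as many modes as qubits, for comparison
            dim_qubit_group = 4 ** n             # dim of U(2^n)
            dim_mode_group = m ** 2              # dim of U(m)
            lifted_dim = comb(m + n - 1, n)      # n photons in m modes
            print(f"n = {n:2d}: {dim_qubit_group} vs {dim_mode_group} (lifted space dim {lifted_dim})")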

    Ironically, I do know an unlikely-seeming complexity consequence that would follow if the converse held, and BosonSampling were reducible to BQP! How is that possible, you ask, since clearly a BosonSampling machine can be efficiently simulated by a universal quantum computer? Well, it simply has to do with the fact that BosonSampling is a sampling problem, whereas BQP is defined as a class of decision problems. In my and Alex’s paper, we note the following strange corollary of our results: if exact BosonSampling could be performed by a probabilistic polynomial-time algorithm with an oracle for BQP decision problems, then \(\mathsf{P^{\#P}}\) would be contained in \(\mathsf{BPP^{NP^{PromiseBQP}}}\). We added a remark like, “this seems unlikely, though we admit to lacking a strong intuition here.”

  56. lars Says:

    jim #53: Thanks for the link. The same talk held at ETH is available online here (smaller file size, one file).
    Direct link for the ~300 MB file: Link

  57. John Sidles Says:

    Gil Kalai posts (#28) “Success of having logical qubits substantially more robust than “raw” physical qubits that John’s and various other experimental groups are attempting now will smash my conjectures to pieces.”

    It won’t be easy to devise active/error-correcting qubit protection methods that better the extraordinarily long many-minute phase-decoherence times that are reported in this week’s Science by Saeedi et al. in “Room-Temperature Quantum Bit Storage Exceeding 39 Minutes Using Ionized Donors in Silicon-28.”

    These authors apply protection methods (chiefly high Debye temperatures, isotopic purification, and decoupling pulses) that are entirely passive, and thus would protect the phase coherence of classical Bloch spins just as well as that of quantum spins.

    It would be wonderful (obviously) if we could induce high-order quantum coherences among individual spins, protect these coherences by means both passive and active, interact them by tunable Hamiltonians, and read them out with high fidelity. After all, individually these capabilities have been demonstrated convincingly; now the great quest is to demonstrate them simultaneously and scalably.

    One indication that Gil Kalai’s conjectures may be correct is that optimizing any one of these capabilities degrades the rest. It would be terrific if we had a deeper understanding of why this coupling is ubiquitous. Hundreds of instances are available for study; why is a unified appreciation so elusive?

  58. Henning Dekant Says:

    rrtucci #25: Maybe I’m misreading your comment somewhat, but it seems to me you almost position Scott as a kind of spokesperson for Matthias Troyer. I don’t think that’s fair.

    Before moving on to Harvard he came to Waterloo, and we arranged to meet over lunch. He is great to talk to, and of course very knowledgeable. To me it seems he approaches D-Wave with unbiased academic curiosity. For instance, he was originally skeptical of their chip truly performing quantum annealing, but endorsed that conclusion in a later paper when the data strongly suggested it.

    Matthias won’t take part in any blog discussion until his latest paper is published, so to me it makes sense to wait until then before revisiting the benchmark discussion.

  59. Scott Says:

    Henning #58: For the record, I was also skeptical of the presence of quantum annealing behavior in the device, back when they hadn’t published anything to show it—and then I changed my mind immediately when they did publish something! Which is, y’know, how science is supposed to work. Here’s the relevant blog post from 2.5 years ago.

    On the other hand, I agree with you that it was inaccurate for rrtucci to call me a “collaborator” of Matthias. At most, I’m Paul to his Jesus. 🙂

    I share your admiration for Matthias’s objectivity. That’s why I’ve happily reported on this blog everything that he and his collaborators have found: both the evidence for quantum annealing behavior in the device, and the evidence for no scaling advantage whatsoever in any of the datasets that they tried.

  60. rrtucci Says:

    “it was inaccurate for rrtucci to call me a “collaborator” of Matthias”

    Sorry, with hindsight I should have been clearer. I meant that Scott and Matthias could collaborate in doing a blog post, or Matthias could do a guest post on Scott’s blog. I didn’t mean to imply that they had collaborated in writing a paper in the past.

    I like Rahul’s idea of Jan 1 and July 4, two fireworks days. It could be the state of the union speech by the King Kong party versus the rebuttal by the Godzilla party. By making this a bit more scheduled, more people will begin to see it as just healthy, necessary debate.

  61. Ted Says:

    Hi – I just came across a five-year-old segment of your site regarding arguments about quantum computing. I think an interesting way to look at this issue is via what’s often called the de Broglie/Bohm perspective: i.e., if QM is interpreted to be deterministic, as it can be, how does a quantum computer work? It leads to some interesting considerations…

  62. Henning Dekant Says:

    Scott #59: I didn’t mean to imply that you differ from Matthias on the ability to change your mind when presented with new facts 🙂

    As I wrote, I don’t want to rekindle the benchmarking discussion before Matthias can freely chime in, but just for clarification: when you mention scaling, do you mean scaling with the number of qubits on the chip?

  63. Scott Says:

    Henning #62: Yes.

  64. Scott Says:

    Ted #61: dBB makes the same predictions as standard QM for what a quantum computer would do, but unfortunately, it doesn’t really provide new or different insight. For more see my Physics StackExchange answer on the subject.

  65. nathan Says:

    Off topic, sorry. But is Eliezer Yudkowsky’s talk available online?

  66. Scott Says:

    nathan #65: Sorry, no. Read the update on my last post.

  67. J Says:

    $14$ to $10000$ and for $39$ minutes

    http://www.zdnet.com/scientists-produce-most-stable-qubit-largest-quantum-circuit-yet-7000023293/

    http://www.sciencemag.org/content/342/6160/830
    Is this good news, and if so, for what?

  68. asdf Says:

    This doesn’t seem to have been mentioned yet:

    November 4, 2013
    Polymath9: P=NP? (The Discretized Borel Determinacy Approach)

    http://polymathprojects.org/2013/11/04/polymath9-pnp/

    Blog post: http://gowers.wordpress.com/2013/11/03/dbd1-initial-post

    I remember some of Scott’s comments about underestimating the difficulty of P vs. NP, but in any case I wish this project all the best.

  69. Rahul Says:

    “The nincompoops who we paid to record the talk wrote down November instead of October for the date, didn’t show up, then stalled for a month before finally admitting what had happened.”

    Didn’t anyone just look to see if the videographer was around? Sounds funny. 🙂

  70. John Sidles Says:

    This afternoon Gil Kalai will speak at the University of Washington (UW) Pacific Institute for the Mathematical Sciences (PIMS) colloquium on the topic “Why quantum computers cannot work.” This is a continuation of the wonderful Gil Kalai/Aram Harrow debate, and it would be great to see quantum folks there!

    As a tribute to Gil and Aram (and Scott Aaronson too), I have composed for MathOverflow a question Do quantum “Sure-Shor separators” have a natural Veronese/Segre classification? (question inspired by Gil Kalai and Aram Harrow).

    This question is constructed as a tribute to Aaronson’s fine “Sure/Shor separator” question and the outstanding Kalai/Harrow debate that this question helped stimulate. For us engineers, the proposition “Why quantum computers CAN’T work” is dual to a broad class of propositions that includes “How large-scale quantum simulation DOES work” and “How atomic-resolution microscopes CAN work,” and also includes unanswered fundamental physics questions like “Is Nature’s quantum dynamical state-space EXPERIMENTALLY static and flat?”

    Responses are invited, needless to say (and later this week I will offer a bounty on this question).

  71. Michael Gogins Says:

    Why is the extended Church-Turing thesis defined by you as “the thesis that all natural processes can be simulated with polynomial overhead by a classical computer”, as opposed to “the thesis that all natural processes can be simulated with polynomial overhead by a BUILDABLE computer” (which could be a quantum computer)?

    Or would that be the “physical” Church-Turing thesis?

  72. John Sidles Says:

    In quantum simulation the following problem is natural: Given a set of Haar-random \(m\times m\) unitary transforms \(U\) on an input state of \(m\) modes, an \(n\times n\) submatrix \(U'\) is selected from each \(U\), and moreover \(U'\) is multiplicatively corrupted by Gaussian amplitude and phase noise, each of relative amplitude \(\epsilon\ll 1\), yielding noisy submatrices \(U''\).

    Question  If the noise-free submatrices \(U'\) are ranked by permanent \(|P(U')|^2\), can that permanent-rank be approximated by PTIME measures applied to the noisy submatrices \(U''\)?

    A physics solution  Rank-order the (noise-free) \(U'\) by the rate at which its associated (noisy) \(U''\) generates entropy in its \(n\) output modes, for classical light of uniform phase and amplitude incident on its \(n\) input modes.

    Example  For a typical case of \(m = 200\) total modes and \(n=15\) submatrix modes (i.e., 15 heralded-and-detected photons in a Scattershot BosonSampling experiment), with relative amplitude-and-phase error \(\epsilon=0.05\), the Spearman \(\rho\)-value associated to the (quantum) permanent-ordering versus the (classical) entropy-ordering is \(\rho_\text{Spearman}\simeq 0.14\) (sample data here, color-coded by entropy-rate rank).

    Remarks  Computing the entropy rate requires \(\mathcal{O}(n^2)\) resources, whereas computing the determinant (for example) is \(\mathcal{O}(n^{2.37})\) (by Coppersmith–Winograd). This motivates the following (somewhat jocular) conjecture:

    The Permanent-of-Gaussians Sorting Conjecture  Approximately sorting a noise-corrupted list of gaussian matrices by permanent is computationally easier than exactly sorting noise-free matrices by determinant.

    More seriously (and broadly), these optical intuitions lead us to appreciate that permanent-associated quantities may be in some respects intrinsically *easier* to compute (approximately) than determinant-associated questions.

    The physical intuition is “noise is our friend,” in the following sense. If the \(U'\) matrices had elements in a finite field, then the notion of a “nearby” noisy matrix \(U''\) would cease to be defined, in that finite field elements have no natural ordering (AFAICT). Conversely, the observed robustness to noise of \(U'\)-elements in \(\mathbb{C}\) acts (somehow) to facilitate rank-ordering, and thus estimation, of \(\mathbb{C}\)-valued permanents.

    Open questions  (intended for StackExchange later in the week)  What PTIME measures perform better than the entropy-rate in rank-ordering by \(|\,P\,|^2\) the submatrices that are associated to Scattershot BosonSampling? What is the upper bound to the Spearman \(\rho\)-value that is achievable in PTIME for approximate rank-ordering of (possibly noisy) matrices?
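
    (Editorial aside: a minimal Python skeleton, not John Sidles’ code, for rank-ordering experiments of the general shape described above. It compares the noise-free and noise-corrupted \(|\mathrm{Per}(U')|^2\) rankings via Spearman’s ρ; it does not implement the entropy-rate measure, and the matrix sizes, noise level, and trial count are placeholders chosen only to keep the Ryser-formula permanents fast.)

        import itertools
        import numpy as np
        from scipy.stats import spearmanr

        def permanent(A):
            """Permanent via Ryser's formula, O(2^n * n^2); fine for small n."""
            n = A.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in itertools.combinations(range(n), r):
                    total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
            return (-1) ** n * total

        def haar_unitary(m, rng):
            """Haar-random m x m unitary via QR of a complex Ginibre matrix."""
            z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            return q * (np.diag(r) / np.abs(np.diag(r)))

        rng = np.random.default_rng(0)
        m, n, eps, trials = 40, 6, 0.05, 200    # placeholder sizes; small n for speed

        clean, noisy = [], []
        for _ in range(trials):
            U = haar_unitary(m, rng)
            Usub = U[:n, :n]                    # an n x n submatrix U'
            noise = 1 + eps * (rng.standard_normal((n, n))
                               + 1j * rng.standard_normal((n, n)))
            clean.append(abs(permanent(Usub)) ** 2)
            noisy.append(abs(permanent(Usub * noise)) ** 2)

        rho, _ = spearmanr(clean, noisy)
        print(f"Spearman rho, noise-free vs noisy |Per|^2 ranking: {rho:.3f}")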

  73. Alexander Vlasov Says:

    I could not find a proof of the impossibility of classical sampling (using some nontrivial approach), for example for permanents with negative numbers. In such a case, noise may prevent calculating the permanent effectively, but sampling may be possible anyway.

  74. John Sidles Says:

    Trapdoor BosonSampling:
    Twenty Years After ScatterShot BosonSampling

    (a theorems-from-physics fantasy)

    Alice (an experimentalist)  We quantum experimentalists have good news! After decades of work, scalable methods for ScatterShot BosonSampling have been conceived, and experimental datasets for \(n=20\) heralded photons scattering into \(m=200\) output modes are now available.

    Carl (a mathematician)  That’s terrific news, Alice, because after decades of work, we mathematicians have recently proved the Aaronson/Arkhipov Permanent of Gaussians Conjecture in full generality and rigor, and so it is now called the Permanent of Gaussians Theorem.

    Bob (a simulationist)  We quantum simulationists have good news too! After decades of work, PTIME methods for simulating ScatterShot BosonSampling have been conceived, and simulated datasets for \(n=20\) heralded photons scattering into \(m=200\) output modes now are available.

    Carl  That’s terrific news, Bob, because after decades of work, we mathematicians have recently proven, in full generality and rigor, that no statistical test can distinguish Bob-style PTIME-simulated data from Alice-style experimental data.

    Alice and Bob  (in unison) What!? How can that be?

    Carl  The answer is “forehead-slappingly obvious in retrospect” (in Scott Aaronson’s amusing phrase). Bob’s PTIME simulation algorithm samples quantum trajectories, and each trajectory is associated to a data record. Crucially, the map \(\text{(trajectory)}\to\text{(data record)}\) is a trapdoor function (that is, not invertible with PTIME resources); thus Bob’s trapdoor BosonSampling simulation algorithm cannot be adapted to computationally verify that Alice’s experimental data records are associated to large permanents.

    An Unanswered Math-and-Physics Question  Does Nature generate Alice’s data records using Bob’s PTIME algorithms?

    Alice and Bob and Carl (in operatic chorus)   Scalable BosonSampling experiments *and* a rigorously proven Permanent of Gaussians Theorem *and* efficient large-scale quantum simulation algorithms *and* deep open questions:  surely the 21st century is gonna be terrific!

  75. John Sidles Says:

    To assist computational experiments relating to BosonSampling experiments, permanent estimation and rank-ordering, the Permanent of Gaussians Conjecture (etc.), the good citizens of the Mathematica StackExchange have provided various code optimizations relating to “Can (compiled) matrix permanent evaluation be further sped up?” that on a modest laptop allow 10×10 matrix permanents to be evaluated in less than a millisecond, and 20×20 permanents to be evaluated in less than one second.

  76. Scott Says:

    Michael Gogins #71:

      Why is the extended Church-Turing thesis defined by you as “the thesis that all natural processes can be simulated with polynomial overhead by a classical computer”, as opposed to “the thesis that all natural processes can be simulated with polynomial overhead by a BUILDABLE computer” (which could be a quantum computer)?

    Sorry for the delay! It’s a very good question, whose boring answer is that this is the usual terminology I’ve seen. We need some name for the (probably false) thesis that all natural processes can be efficiently simulated by a classical computer, and ECT is the name that many papers have used. Certainly it has some historical basis, in that it is what many computer scientists and others assumed or believed for decades (usually without even explicitly formulating the assumption), and it’s what some still believe.

    I should point out that David Deutsch has used the name “Church-Turing principle” (or according to Wikipedia, “Church-Turing-Deutsch principle” 🙂 ) to refer to the weaker thesis you mention, that all natural processes can be efficiently simulated by some buildable universal device.

  77. Scott Says:

    Rahul #69:

      Didn’t anyone just look to see if the videographer was around?

    If a videographer had been there, he or she would’ve been in a glass alcove overlooking the lecture room where we wouldn’t have easily seen them. But yes, we should’ve checked. As it was, we were completely preoccupied with other technical issues, like finding a VGA/HDMI converter for Eliezer’s laptop.

  78. John Sidles Says:

    Scott posts [sensibly, in comment #76] “We need some name for the (probably false) thesis that all natural processes can be efficiently simulated by a classical computer, and ECT [the extended Church-Turing Thesis] is the name that many papers have used.”

    Scott posts [sensibly, in the OP] “Scattershot BosonSampling itself — a simple, obvious-in-retrospect idea that was under our noses for years, and that immediately pushes forward the number of photons a BosonSampler can handle — should make us exceedingly reluctant to declare there can’t be any more such ideas, and that our current ignorance amounts to a proof of impossibility.”

    Combining Scott’s two sensible statements, and juxtaposing them with sustained Moore’s-Law progress in multi-scale dynamical simulation capability, helps us to appreciate that the notoriously tricky concepts of “probably,” “natural,” “efficiently,” and “impossibility” are associated to richly interacting mathematical nuances that have already provided ample room for classical simulation of large-scale quantum dynamics to become (as the Nobel Committee calls it) our Virgil in the world of atoms.

    Conclusion  Our current ignorance (in Scott’s phrase) obscures the practical limits to continued improvements in classical Virgil’s quantum guidance, and so it is healthy for the 21st century STEM enterprise that opinions in regard to these unknown limits vary widely.

  79. Jay Says:

    Scott #76

    May we summarize as follow?

    *Church-Turing thesis: any computation can be carried out by a Turing machine/lambda calculus. [equivalent formulations proposed by Church and Turing around 1935, final name by Kleene 1952]

    *Church-Turing principle: Every physical process can be simulated by a universal computing device. [proposition and name due to Deutsch, 1985; sometimes referred to as the Church-Turing-Deutsch Principle; equivalent to Michael Gogins #71’s formulation]

    *Deutsch’s actual 1985 thesis: Every physical process can be simulated by a quantum, universal computing device. [sometimes also referred to as the Church-Turing-Deutsch Principle]

    *Extended Church-Turing thesis: Every physical process can be simulated by a classical, universal computing device. [unclear origin for the denomination; unclear if anyone actually proposed it; sometimes confused with the CT thesis]

  80. Scott Says:

    Jay #79: Looks OK, except that in your statement of the ECT, you forgot to say, “…can be simulated with polynomial overhead.” (Otherwise you just get a restatement of the ordinary CT.)

    Also, I think Ian Parberry and some others explicitly formulated the ECT, and it was certainly at least “implicit” in most complexity theory textbooks. And if nothing else, Leonid Levin and Oded Goldreich staunchly defend the ECT today. 🙂

  81. John Sidles Says:

    Three references in regard to evolving definitions of “ECT”

    • Dershowitz and Falkovich, “A formalization and proof of the Extended Church-Turing Thesis” (arXiv:1207.7148 [cs.LO]) provide in-depth references that include Ian Parberry’s 1986 work (as mentioned in Scott #80).

    Regrettably, Dershowitz and Falkovich overlook what is (AFAICT) a key early reference to the (seemingly equivalent?) notion of the polynomial-time Church-Turing Thesis, which is set forth plainly and explicitly in

    • Aaronson, “Multilinear formulas and skepticism of quantum computing” (arXiv:quant-ph/0311039v4)

    And finally, the practical math-and-physics nuances that are associated to PTIME/trapdoor quantum simulations (per comment #74) are regrettably not accounted for in recent references that include:

    • Shchesnovich, “A sufficient fidelity bound for scalability of the boson sampler computer” (arXiv:1311.6796 [quant-ph])

    In particular, conclusions like Shchesnovich’s

    NDBS [nondeterministic BosonSampling experiments] perform a sampling which cannot be simulated on a classical computer with only polynomial resources

    implicitly embrace a trapdoor-excluding definition of simulation that regrettably (as it seems to me) bears little relation to the eminently practical trapdoor-exploiting — and generically thermodynamical — PTIME realities of real-world large-scale quantum dynamical simulation.

    ——
    Conclusion  The plausibility of the ECT hinges substantially upon definitional nuances that are associated to the inclusion-versus-exclusion of trapdoor PTIME algorithms as “simulations” of quantum dynamical systems.

  82. Jay Says:

    Yep I forgot “efficiently” from definitions 2-4. Thanks to you and John Sidles for the ref.

    However, Parberry et al. (1986) hardly defined the ECT. They instead cite a textbook by Leslie Goldschlager (1983) and use different naming:

    “the sequential computation thesis [10] states that time on all “reasonable” sequential machine models is related by a polynomial.”

    So it’s the same idea, but at least the name does not come from this path.

    Digging further, I couldn’t find older references to this “sequential computation thesis,” but that may have followed as a consequence of the “parallel computation thesis,” which was likely defined by Chandra and Stockmeyer in 1976.

    A.K. Chandra, L.J. Stockmeyer, “Alternation”, Proc. 17th FOCS, Oct. 1976 (98-108). [cited in Goldschlager, STOC 1978]

  83. Jay Says:

    John Sidles, I had a look at your post #74, but it does not seem to be a consistent story, or maybe I just don’t get your idea. In short, you’re considering a fantasy in which:

    1) Alice constructs a BosonSampler using physical resources

    2) Carl fills some minor holes in the demonstration that it is hard to emulate BosonSampling unless P^#P=BPP^NP

    3) Bob finds a polynomial time algorithm for BosonSampling

    So far, no problem. Fantasy stories routinely feature faster-than-light travel; by this standard, it’s definitely OK to consider an incomplete collapse of the polynomial hierarchy.

    4) Carl demonstrates that we cannot distinguish Bob’s algorithm from Alice’s physical BosonSampler

    Here I don’t get your point. To me statement 4) is the same as statement 3) plus statement 1). Why should anyone be surprised?

  84. John Sidles Says:

    Jay asks (#83)  “Why should anyone be surprised?” [at Bob’s perfect PTIME/trapdoor simulation of Alice’s scalable Scattershot BosonSampling experiment, per comment #74]

    Indeed the point is that no-one should be surprised at Bob’s simulational success, because perfect PTIME/trapdoor simulation of Scattershot BosonSampling experiments would not violate any standard assumption of complexity theory (as appreciated by me at any rate).

    Close reading of the Aaronson/Arkhipov article “The computational complexity of linear optics” (2011) is strongly recommended, and all definitions in the following discussion — including the ECT in particular — are as given in that well-reasoned and wonderful (as it seems to me) article.

    In particular:

    • Alice’s scalable BosonSampling experimental capability would not disprove the Extended Church-Turing thesis (in its Aaronson/Arkhipov form) because …

    • Bob’s PTIME/perfect BosonSampling simulation capability of Alice’s experiment would not disprove the Aaronson/Arkhipov Permanent of Gaussians Conjecture and therefore …

    • Bob’s efficient/perfect BosonSampling simulation capability would not imply the collapse of any portion of the polynomial-time hierarchy.

    A concise overview of the mathematical reasoning associated to the above conclusions — reasoning that is “forehead-slappingly obvious in retrospect” (in Scott’s phrase) — is generated by substituting “Shor Factoring” for “BosonSampling”:

    Alice (an experimentalist) asserts  I can scalably exhibit “Scattershot Shor Factoring” of randomly selected integers.

    Bob (a simulationist) asserts   I can scalably simulate “PTIME/Scattershot Shor Factoring” of pseudo-randomly selected integers (via the AKS primality test), and no statistical test can reveal that my simulations use this trick to evade the hardness-of-factoring obstruction. (A toy sketch of this trick follows this comment.)

    Carl (a mathematician)  This is terrific news on four fronts:

    (1)  Alice’s Scattershot Factoring experiment is scalable, and

    (2)  Bob’s simulated-factoring datasets are indistinguishable from Alice’s experimental datasets, and

    (3)  all of the standard hardness assumptions relating to classical and quantum factoring are respected, and

    (4)  the Aaronson/Arkhipov version of the Extended Church-Turing Thesis is upheld.

    Summary  For quantum simulationists, the key difference between quantum factoring and Scattershot BosonSampling is that AKS-based PTIME/trapdoor sampling algorithms exist for factoring, whereas for Scattershot BosonSampling the analogous PTIME/trapdoor sampling algorithms are not known yet to exist.

    Fortunately (even excitingly!) neither complexity theory nor the supposed falsity of the Extended Church-Turing Thesis provides any reason to foresee that efficient Scattershot BosonSampling algorithms don’t exist.

    Discovering algorithms in this PTIME/trapdoor simulation class is (as it seems to me) a wonderful open problem, having innumerable practical applications, that in the memorable phrase of Scott’s original post “is a simple, obvious-in-retrospect idea that has been under our noses for years.”

    Conclusion  Carl-the-optimistic-mathematician’s favorite joke has the joyous and justly celebrated punch-line You know, you’re absolutely right!
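
    (Editorial aside: a toy Python sketch, assuming the sympy library, of Bob’s “generate the factors first, then multiply” trick from the dialogue above. It only illustrates how factored samples can be emitted in polynomial time without ever running a factoring algorithm; it is not anyone’s actual simulation code, and the bit size is a placeholder.)

        # Instead of factoring a random integer (believed hard), sample the prime
        # factors first and multiply them, so each "factored" output is produced
        # cheaply. sympy.randprime(a, b) returns a random prime in [a, b).
        from sympy import randprime

        def bob_sample(bits=32):
            """Emit (N, factors of N) without ever running a factoring algorithm."""
            p = randprime(2 ** (bits - 1), 2 ** bits)
            q = randprime(2 ** (bits - 1), 2 ** bits)
            return p * q, sorted((p, q))

        print(bob_sample())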

  85. Jay Says:

    “simulation of Scattershot BosonSampling experiments would not violate any standard assumption of complexity theory (as appreciated by me at any rate).”

    Do you disagree that this would collapse the polynomial hierarchy to the third level, or do you disagree that this is a standard assumption?

  86. Jay Says:

    Sorry, I tend to stop at the first bull, so I hadn’t read “BosonSampling simulation capability would not imply the collapse of any portion of the polynomial-time hierarchy,” which answers my last question.

    So, you have a story in which Carl demonstrates one thing and the opposite is true. Yes, that’s surprising; you’re absolutely right.

  87. John Sidles Says:

    Jay, your second response is correct (as it seems to me), and here are some of the details that your first response requested.

    —————-
    Aaronson/Arkhipov “Computational complexity of linear optics” begins with (what amounts to) the clearest succinct definition of the Extended Church-Turing Thesis (ECT) (that is known to me):

    Argument against the ECT (per Aaronson/Arkhipov 2011)  “Predicting the results of a given quantum-mechanical experiment, to finite accuracy, cannot be done by a classical computer in probabilistic polynomial time, unless factoring integers can as well.”

    There follow 110 pages of analysis that work through the multitudes of nuances, subtleties, complexities, and implications that are associated to this statement.

    With comparable brevity, the following is asserted:

    Argument for the ECT  “Simulating the results of a quantum-mechanical Scattershot BosonSampling experiment, by a classical computer in probabilistic polynomial time, such that the simulated data are indistinguishable by any statistical test from experimental data, would not disprove the Permanent of Gaussians Conjecture.”

    Here the Permanent of Gaussians Conjecture is defined per Aaronson/Arkhipov section 1.2.3, Problem 1.4 and Conjecture 1.5.

    This argument *for* the ECT presumes the same technical assumptions as the Aaronson/Arkhipov article; in particular, the polynomial hierarchy is respected. More subtly yet realistically, the classical simulation must have access to a Kolmogorov-incompressible random-number generator (so that distinguishing simulated data from experimental data cannot be accomplished by guessing the initial state of the random-number generator). Reconciling PTIME/perfect simulation with the Permanent of Gaussians Conjecture is achieved via what might be called the trapdoor loophole, namely, the simulated data do not provide a PTIME estimator for any Per(X), where “estimator” has the concrete definition of Problem 1.4, and in particular the matrix X is given as an input.

    Conclusion  According to our present understanding, the standard complexity-theoretic postulates do not obstruct the fondest hopes of *both* BosonSamplers *and* quantum simulationists.

  88. Jay Says:

    “PH may indeed collapse, in which case BosonSampling’d say nothing about the ECT”.

  89. John Sidles Says:

    Jay remarks “PH may indeed collapse, in which case BosonSampling’d say nothing about the ECT.”

    A less controversial source of plausibility for the ECT (as it seems to me) is the richly-intertwining postulates and definitions that are associated to the key theorems of “Computational complexity of linear optics” (section 5 in particular); postulates and definitions that (as it seems to me) leave ample room to satisfy — simultaneously — the most ardent BosonSamplers and BosonSimulators.

    Example  Even today the most-used, most-efficient primality tests and linear programming solvers are formally unreliable and/or require EXP resources, yet in practice these algorithms compute results reliably in PTIME.

    Very plausibly, we’ll need much more practical experience than we now have, in simulating BosonSampling experiments and dealing concretely with tricky issues of verification and validation, before we can be confident that we’ve specified the best postulates and definitions.

    This generically enormous gap between worst-case and average-case algorithmic complexity (for fields of characteristic zero especially) leaves ample room for optimism that well-considered tuning of postulates and definitions can yield complexity-theoretic narratives in which “Everyone is absolutely right.”

  90. joe Says:

    Scott,

    Despite this post now being really old, I’m going to comment belatedly anyway, since it has started being cited in papers in the literature as the definitive reference on Scattershot Boson Sampling.

    This is an aside, but I know you like for things to be correct and not misleading, even if they're not the main show. In your post, you say, "there's no known process in nature that can serve as a deterministic single-photon source". If by "deterministic single-photon source", you mean one that produces the Fock state |1> on-demand, with 100% fidelity, then no experimentalist should argue; achieving 100% is always going to be impossible. However, so long as we use a more relaxed condition that we just want very high fidelity, I think it's fair to argue that many systems offer a natural deterministic single-photon generation process, and it's largely a matter of engineering for us to get close to 100% fidelity. For example, in optically-active quantum dots, deterministic generation of single photons (with very low probability of generating two photons in a single shot; <1%) is achievable with both pulsed electrical and optical excitation.

    The big catch with using atoms or artificial atoms (like quantum dots) as single-photon sources is not that they don't deterministically generate single photons; it's that collecting the photon that is produced is tricky. For example, when an atom emits a photon, to first approximation the photon can and will be emitted in any direction. By putting an atom in a cavity, one can enhance the probability of emission into the cavity mode, and in this way have most of the photons that are emitted come out in only one direction, and be collected and collimated by a lens. For example, this work http://www.nature.com/ncomms/journal/v4/n2/pdf/ncomms2434.pdf demonstrates a single-photon source with which one will have a single photon be collected by a lens in 80% of attempts at triggering it. In my understanding, the source is close to 100% deterministic, and there is just 80% collection efficiency because sometimes the emitted photon goes in the wrong direction (out the side, or into the back substrate of the device).

    Given that such amazing sources exist, why doesn't everyone use them? Mostly it's a question of expense and practicality, not physics: these high-efficiency quantum-dot-based single-photon sources are currently still difficult to fabricate, and need to be operated at ~4 K. In contrast, SPDC just requires you to shine a laser at a crystal (that you can buy for ~$300) at room temperature, and this simplicity in operation has made the technique very popular; it's also very convenient that the source is heralded.

  91. Want a quantum boost? Play the odds – Perimeter Says:

    […] idea of taking a scattershot approach was explored on the blog of MIT professor Scott Aaronson (who proposed the original boson sampling mode in 2010), with the idea credited to Steven […]