My visit to D-Wave: Beyond the roast-beef sandwich

February 21st, 2012

Last week I was in Vancouver, to give talks at the University of British Columbia and at the American Association for the Advancement of Science annual meeting.  As part of that visit, on Friday afternoon, John Preskill, John Martinis, Michael Freedman, and I accepted a gracious invitation to tour the headquarters of D-Wave Systems in Burnaby (a suburb of Vancouver).  We started out in a conference room, where they served us cookies and sodas.  Being the mature person that I am, I never once entertained the possibility that the cookies were poisoned.

Then we started the tour of D-Wave’s labs.  We looked under a microscope at the superconducting chips; we saw the cooling systems used to get the chips down to 20 millikelvin.  In an experience that harked back to the mainframe era, we actually walked inside the giant black cubes that D-Wave was preparing for shipment.  (The machines are so large partly because of the need for cooling, and partly to let engineers go in and fix things.)  Afterwards, D-Wave CTO Geordie Rose gave a 2-hour presentation about their latest experimental results.  Then we all went out to dinner.  The D-Wave folks were extremely cordial to us and fielded all of our questions.

In spite of my announcement almost a year ago that I was retiring as Chief D-Wave Skeptic, I thought it would be fitting to give Shtetl-Optimized readers an update on what I learned from this visit.  I’ll start with three factual points before moving on to larger issues.

Point #1: D-Wave now has a 128-(qu)bit machine that can output approximate solutions to a particular NP-hard minimization problem—namely, the problem of minimizing the energy of 90-100 Ising spins with pairwise interactions along a certain fixed graph (the “input” to the machine being the tunable interaction strengths).  So I hereby retire my notorious comment from 2007, about the 16-bit machine that D-Wave used for its Sudoku demonstration being no more computationally-useful than a roast-beef sandwich.  D-Wave does have something today that’s more computationally-useful than a roast-beef sandwich; the question is “merely” whether it’s ever more useful than your laptop.  Geordie presented graphs that showed D-Wave’s quantum annealer solving its Ising spin problem “faster” than classical simulated annealing and tabu search (where “faster” means ignoring the time for cooling the annealer down, which seemed fair to me).  Unfortunately, the data didn’t go up to large input sizes, while the data that did go up to large input sizes only compared against complete classical algorithms rather than heuristic ones.  (Of course, all this is leaving aside the large blowups that would likely be incurred in practice, from reducing practical optimization problems to D-Wave’s fixed Ising spin problem.)  In summary, while the observed speedup is certainly interesting, it remains unclear exactly what to make of it, and especially, whether or not quantum coherence is playing a role.
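
For concreteness, here’s a minimal sketch of the classical baseline in question: simulated annealing on an Ising instance, in Python.  The random couplings and cooling schedule below are purely illustrative, my own stand-ins rather than D-Wave’s fixed hardware graph or the tuned implementations used in their benchmarks.

    import math
    import random

    random.seed(0)

    # A made-up random Ising instance on n spins (illustrative only; the
    # hardware graph and couplings of an actual D-Wave chip are fixed).
    n = 90
    J = {(i, j): random.choice([-1.0, 1.0])
         for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.1}

    # Adjacency lists, so a single spin flip can be scored quickly.
    nbrs = {i: [] for i in range(n)}
    for (i, j), c in J.items():
        nbrs[i].append((j, c))
        nbrs[j].append((i, c))

    def energy(s):
        """Ising energy E(s) = sum over pairs of J_ij * s_i * s_j, s_i in {-1,+1}."""
        return sum(c * s[i] * s[j] for (i, j), c in J.items())

    def simulated_annealing(steps=200_000, T_hot=2.0, T_cold=0.01):
        s = [random.choice([-1, 1]) for _ in range(n)]
        E = best = energy(s)
        for t in range(steps):
            T = T_hot * (T_cold / T_hot) ** (t / steps)   # geometric cooling
            i = random.randrange(n)
            # Flipping s[i] changes the energy by dE = -2 * s_i * sum_j J_ij s_j.
            dE = -2 * s[i] * sum(c * s[j] for j, c in nbrs[i])
            if dE <= 0 or random.random() < math.exp(-dE / T):   # Metropolis rule
                s[i] = -s[i]
                E += dE
                best = min(best, E)
        return best

    print("best energy found:", simulated_annealing())

The point of the sketch is just to show the bar: any special-purpose Ising minimizer, quantum or otherwise, has to beat this sort of dirt-simple Metropolis heuristic at large input sizes before the speedup talk gets interesting.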

Which brings me to Point #2.  It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system.  (Note that, if there’s no entanglement, then it becomes extremely implausible that quantum coherence could be playing a role in a speedup.  For while separable-mixed-state quantum computers are not yet known to be efficiently simulable classically, we certainly don’t have any examples where they give a speedup.)  Last year, as reported on this blog, D-Wave had a nice Nature paper that reported quantum tunneling behavior in an 8-qubit system.  However, when I asked D-Wave scientist Mohammad Amin, he said he didn’t think that experiment provided any evidence for entanglement between qubits.

The “obvious” way to demonstrate entanglement between qubits would be to show a Bell inequality violation.  (We know that this can be done in superconducting qubits, as the Schoelkopf group at Yale, among others, reported a couple of years ago.)  Meanwhile, the “obvious” way to demonstrate a role for quantum coherence in the apparent speedup would be gradually to “turn down” the system’s coherence (for example, by adding an interaction that constantly measured the qubits in the computational basis), and check that the annealer’s performance degraded to that of classical simulated annealing.  Unfortunately, the D-Wave folks told us that neither experiment seems feasible with their current setup, basically because they don’t have arbitrary local unitary transformations and measurements available.  They said they want to try to demonstrate 2-qubit entanglement, but in the meantime, are open to other ideas for how to demonstrate a quantum role in the apparent speedup with their existing setup.
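
To spell out what the first experiment would certify, here’s a toy CHSH calculation (my illustration, not anything D-Wave measures; a real Bell test estimates the correlations from experimental statistics rather than computing them from the state vector).  For any local classical model, the CHSH combination below is at most 2, while a maximally entangled qubit pair reaches 2√2:

    import numpy as np

    X = np.array([[0., 1.], [1., 0.]])    # Pauli X
    Z = np.array([[1., 0.], [0., -1.]])   # Pauli Z

    def obs(theta):
        """Spin measurement along angle theta in the X-Z plane (outcomes +/-1)."""
        return np.cos(theta) * Z + np.sin(theta) * X

    # Maximally entangled Bell state |Phi+> = (|00> + |11>) / sqrt(2).
    phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)

    def corr(a, b):
        """Correlation E(a, b) = <Phi+| A(a) tensor B(b) |Phi+>."""
        return phi @ np.kron(obs(a), obs(b)) @ phi

    # Standard CHSH measurement angles.
    a0, a1 = 0.0, np.pi / 2
    b0, b1 = np.pi / 4, -np.pi / 4

    S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
    print(S)   # ~2.828 = 2*sqrt(2), comfortably above the classical bound of 2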

Point #3: D-Wave was finally able to clarify a conceptual point that had been bugging me for years.  I—and apparently many others!—thought D-Wave was claiming that their qubits decohere almost immediately (so that, in particular, entanglement would almost certainly never be present during the computation), but that the lack of entanglement didn’t matter, for some complicated reason having to do with energy gaps.  I was far from alone in regarding such a claim as incredible: as mentioned earlier, there’s no evidence that a quantum computer without entanglement can solve any problem asymptotically faster than a classical computer.  However, that isn’t D-Wave’s claim.  What they think is that their system decoheres almost immediately in the energy eigenbasis, but that it doesn’t decohere in the computational basis—so that, in particular, there would be entanglement at intermediate stages.  If so, that would be perfectly fine from the standpoint of the adiabatic algorithm, which doesn’t need coherence in the energy eigenbasis anyway (after all, the whole point is that, throughout the computation, you want to stay as close to the system’s ground state as possible!).  I understand that, given their knowledge of decoherence mechanisms, some physicists are extremely skeptical that you could have rapid decoherence in the energy basis without getting decoherence in the computational basis also.  So certainly the burden is on D-Wave to demonstrate that they maintain coherence “where it counts.”  But at least I now understand what they’re claiming, and how it would be compatible (if true) with a quantum speedup.
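
To make the adiabatic picture concrete, here’s a toy noiseless simulation that I made up (3 qubits, so nothing like a real device): we interpolate from a transverse-field Hamiltonian to a diagonal Ising one, and if the interpolation is slow enough, the final state piles up on the Ising ground state.  Relative phases between energy eigenstates never enter into the success criterion, which is why decoherence purely in the energy eigenbasis would be harmless.

    import numpy as np
    from scipy.linalg import expm

    # Toy 3-qubit quantum annealer: noiseless, made up for illustration.
    n = 3
    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])

    def op(P, i):
        """The single-qubit operator P on qubit i (identity on the rest)."""
        out = np.array([[1.]])
        for k in range(n):
            out = np.kron(out, P if k == i else I2)
        return out

    H_driver = -sum(op(X, i) for i in range(n))           # ground state |+++>
    H_problem = (op(Z, 0) @ op(Z, 1) - op(Z, 1) @ op(Z, 2)
                 + 0.5 * op(Z, 0))                        # a toy Ising energy

    T, steps = 50.0, 500          # larger T = slower, more adiabatic sweep
    dt = T / steps
    psi = np.ones(2 ** n) / np.sqrt(2 ** n)   # start in the driver ground state

    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1 - s) * H_driver + s * H_problem
        psi = expm(-1j * H * dt) @ psi   # piecewise-constant Schrodinger evolution

    vals, vecs = np.linalg.eigh(H_problem)
    overlap = abs(vecs[:, 0].conj() @ psi) ** 2
    print(f"overlap with the Ising ground state: {overlap:.3f}")   # near 1 for large T

Shrink T in the sketch and the overlap collapses: the whole game is total evolution time versus the minimum spectral gap along the path.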

Let me now move on to three broader questions raised by the above points.

The first is: rather than constantly adding more qubits and issuing more hard-to-evaluate announcements, while leaving the scientific characterization of its devices in a state of limbo, why doesn’t D-Wave just focus all its efforts on demonstrating entanglement, or otherwise getting stronger evidence for a quantum role in the apparent speedup?  When I put this question to Mohammad Amin, he said that, if D-Wave had followed my suggestion, it would have published some interesting research papers and then gone out of business—since the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems.  So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t.  There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits!  Once you’ve shown that the foundation is solid, then you try to scale up.  So, please support D-Wave if it wants to spend money to show Bell inequality violations, or other “smoking-gun” evidence that its qubits are working together coherently.  You’re welcome, D-Wave!

The second question is one that I’ve encountered many times on the blogosphere: who cares how D-Wave’s system works, and whether it does or doesn’t exploit quantum coherence, as long as it solves practical problems faster?  Sure, maybe what D-Wave is building is really a series of interesting, useful, but still basically “classical” annealing devices.  Maybe the word “quantum” is functioning here as the stone in a stone soup: attracting money, interest, and talented people to build something that, while neat, ultimately doesn’t much depend on quantum mechanics at all.  As long as D-Wave’s (literal!) black box solves the problem instances in such-and-such amount of time, why does it matter what’s inside?

To see the obtuseness of this question, consider a simple thought experiment: suppose D-Wave were marketing a classical, special-purpose, $10-million computer designed to perform simulated annealing, for 90-bit Ising spin glass problems with a certain fixed topology, somewhat better than an off-the-shelf computing cluster.  Would there be even 5% of the public interest that there is now?  I think D-Wave itself would be the first to admit the answer is no.  Indeed, Geordie Rose spoke explicitly in his presentation about the compelling nature of (as he put it) “the quantum computing story,” and how it was key to attracting investment.  People don’t care about this stuff because they want to find the ground states of Ising spin systems a bit faster; they care because they want to know whether or not the human race has finally achieved a new form of computing.  So characterizing the device matters, goddammit!  I pride myself on being willing to adjust my opinions on just about anything in response to new data (as I’ve certainly done in D-Wave’s case), but the insistence that black boxes must be opened and explanations provided is something I’ll carry to the grave.

Finally, given the skeptical-yet-positive tone of this post, some people will wonder whether I now regret my earlier, more unmitigated D-Wave skepticism.  The answer is no!  Asking questions is my job.  I’ll give D-Wave credit whenever it answers some of the questions—as it did on this visit!—and will shift my views accordingly.  But I’ll also neither stop asking nor apologize for asking, until the evidence for a quantum speedup becomes clear and indisputable (as it certainly hasn’t yet).  On the other hand, I do regret the snowballing nastiness that developed as a combined result of my and other skeptics’ statements, D-Wave’s and its supporters’ statements, and the adversarial nature of the blogosphere.  For the first time, I find myself really, genuinely hoping—with all my heart—that D-Wave will succeed in proving that it can do some (not necessarily universal) form of scalable quantum computation.  For, if nothing else, such a success would prove to the world that my $100,000 is safe, and decisively refute the QC skeptics who, right now, are getting even further under my skin than the uncritical D-Wave boosters ever did.

Safari photos from Kenya

February 12th, 2012

(Credit for most of the photos: Dana)

I was going to write a whole long essay about

  • the differences between going to the zoo and visiting an ancestral environment of humanity, where elephants have grazed for millions of years;
  • the weird sense of familiarity, as if you’re seeing how the surface of the earth is “supposed” to look, how it did look before humans started converting it into KFCs and parking lots;
  • how to tell whether an elephant charging your jeep is serious about wanting to trample you or, much more likely, just warning you to go away (apparently, it has to do with whether its ears are straight back or flapping);
  • the “airport” at Lake Naivasha (a strip of dirt in a grassy field filled with zebras, and a guy on a bicycle who shoos the zebras off the strip before a plane lands);
  • Britain’s failure, to this day, to issue any sort of apology for its detention, torture, and murder of tens of thousands of Kenyans during the waning years of its colonial rule in the 1950s;
  • the near-destruction by poaching, over the last century, of many of the majestic animal populations you see above;
  • the heroism of Richard Leakey (past director of the Kenya Wildlife Service) in overcoming decades of bureaucratic inertia to initiate a crackdown, where rangers were authorized to “poach the poachers,” shooting them on sight (!);
  • how, after Leakey almost-singlehandedly saved Kenya’s wild elephants, he lost both of his legs when his plane crashed (widely suspected to be due to sabotage), and was forced from his job months later;
  • the benefits of safari tourism in creating a serious economic incentive for conservation, but also the drawbacks (e.g., all the jeeps making it harder for the cheetahs to hunt);
  • the large, obvious, anything-but-“theoretical” changes being wrought by global warming on the rainfall in Kenya’s game parks (which changes are killing the trees, thereby eliminating the lions’ hiding places and making it harder for them to hunt—hey, at least the zebras are happy);
  • the Maasais’ innovative uses for cow dung; the resulting immature jokes on my part (homeowner to roofer: “this roof you sold me is shit!”);
  • my growing fascination, over the course of the trip, with the lesser-known corners of Mammalia (elands, dik-diks, kudus, waterbucks, topis, rock hyraxes); how this might mirror my fascination with lesser-known complexity classes like AWPP, QMA(2)/qpoly, SBP, C=P, and BPPpath;
  • how parts of the African savannah have better cellphone reception than my office in Stata;
  • how it’s indeed possible to catch up on Jon Stewart and The Big Bang Theory over wifi, from a tent in the Maasai Mara, while hippos bellow loudly in the river below, and elephants graze and crocodiles sun themselves on the other side.

But then I never got around to writing that essay.  So enjoy the photos, and ask in the comments if you want me to say something else.

The battle against Elsevier gains momentum

February 8th, 2012

Check out this statement on “The Cost of Knowledge” released today, which (besides your humble blogger) has been signed by Ingrid Daubechies (President of the International Mathematical Union), Timothy Gowers, Terence Tao, László Lovász, and 29 others.  The statement carefully explains the rationale for the current Elsevier boycott, and answers common questions like “why single out Elsevier?” and “what comes next?”

Also check out Timothy Gowers’ blog post announcing the statement.  The post includes a hilarious report by investment firm Exane BNP Paribas, explaining that the current boycott has caused Reed Elsevier’s stock price to fall, but presenting that as a great investment opportunity, since they fully expect the price to rebound once this boycott fails like all the previous ones.  I ask you: does that not make you want to boycott Elsevier, for no other reason than to see the people who follow Exane BNP Paribas’ cynical advice lose their money?

In related news, the boycott petition now has 4600+ signatures and counting.  If you’ve already signed, great!  If you haven’t, why not?

Update (Feb. 9): There’s now a great editorial by Gareth Cook in the Boston Globe supporting the Elsevier boycott (and analogizing it to both the Tahrir Square uprising and the Boston Tea Party!).

Whether or not God plays dice, I do

February 3rd, 2012

Another Update (Feb. 7): I have a new piece up at IEEE Spectrum, explaining why I made this bet.  Thanks to Rachel Courtland for soliciting the piece and for her suggestions for improving it.

Update: My $100,000 offer for disproving scalable quantum computing has been Slashdotted.  Reading through the comments was amusing as always.  The top comment suggested that winning my prize was trivial: “Just point a gun at his head and ask him ‘Convinced?'”  (For the record: no, I wouldn’t be, even as I handed over my money.  And if you want to be a street thug, why limit yourself to victims who happen to have made public bets about quantum computing?)  Many people assumed I was a QC skeptic, and was offering the prize because I hoped to spur research aimed at disproving QC.  (Which is actually an interesting misreading: I wonder how much “pro-paranormal” research has been spurred by James Randi’s million-dollar prize?)  Other people said the bet was irrelevant since D-Wave has already built scalable QCs.  (Oh, how I wish I could put the D-Wave boosters and the QC deniers in the same room, and let them duke it out with each other while leaving me alone for a while!)  One person argued that it would be easy to prove the impossibility of scalable QCs, just like it would’ve been easy to prove the impossibility of scalable classical computers in 1946: the only problem is that both proofs would then be invalidated by advances in technology.  (I think he understands the word “proof” differently than I do.)  Then, buried deep in the comments, with a score of 2 out of 5, was one person who understood precisely:

I think he’s saying that while a general quantum computer might be a very long way off, the underlying theory that allows such a thing to exist is on very solid ground (which is why he’s putting up the money). Of course this prize might still cost him since if the news of the prize goes viral he’s going to spend the next decade getting spammed by kooks.

OK, two people:

    There’s some needed context.  Aaronson himself works on quantum complexity theory.  Much of his work deals with quantum computers (at a conceptual level–what is and isn’t possible).  Yet there are some people who reject the idea that quantum computers can scale to “useful” sizes–including some very smart people like Leonid Levin (of Cook-Levin Theorem fame)–and some of them send him email, questions, comments on his blog, etc. saying so.  These people are essentially asserting that Aaronson’s career is rooted in things that can’t exist.  Thus, Aaronson essentially said “prove it.”  It’s true that proving such a statement would be very difficult … But the context is that Aaronson gets mail and questions all the time from people who simply assert that scalable QC is impossible, and he’s challenging them to be more formal about it.  He also mentions, in fairness, that if he does have to pay out, he’d consider it an honor, because it would be a great scientific advance.

For better or worse, I’m now offering a US$100,000 award for a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world.  This award has no time limit other than my death, and is entirely at my discretion (though if you want to convince me, a good approach would be to convince most of the physics community first).  I might, also at my discretion, decide to split the award among several people or groups, or give a smaller award for a discovery that dramatically weakens the possibility of scalable QC while still leaving it open.  I don’t promise to read every claimed refutation of QC that’s emailed to me.  Indeed, you needn’t even bother to send me your refutation directly: just convince most of the physics community, and believe me, I’ll hear about it!  The prize amount will not be adjusted for inflation.

The impetus for this prize was a post on Dick Lipton’s blog, entitled “Perpetual Motion of the 21st Century?”  (See also this followup post.)  The post consists of a debate between well-known quantum-computing skeptic Gil Kalai and well-known quantum-computing researcher Aram Harrow (Shtetl-Optimized commenters both), about the assumptions behind the Quantum Fault-Tolerance Theorem.  So far, the debate covers well-trodden ground, but I understand that it will continue for a while longer.  Anyway, in the comments section of the post, I pointed out that a refutation of scalable QC would require, not merely poking this or that hole in the Fault-Tolerance Theorem, but the construction of a dramatically-new, classically-efficiently-simulable picture of physical reality: something I don’t expect but would welcome as the scientific thrill of my life.  Gil more-or-less dared me to put a large cash prize behind my words—as I’m now, apparently, known for doing!—and I accepted his dare.

To clarify: no, I don’t expect ever to have to pay the prize, but that’s not, by itself, a sufficient reason for offering it.  After all, I also don’t expect Newt to win the Republican primary, but I’m not ready to put $100,000 on the line for that belief.  The real reason to offer this prize is that, if I did have to pay, at least doing so would be an honor: for I’d then (presumably) simply be adding a little to the well-deserved Nobel Prize coffers of one of the greatest revolutionaries in the history of physics.

Over on Lipton’s blog, my offer was criticized for being “like offering $100,000 to anyone who can prove that Bigfoot doesn’t exist.”  To me, though, that completely misses the point.  As I wrote there, whether Bigfoot exists is a question about the contingent history of evolution on Earth.  By contrast, whether scalable quantum computing is possible is a question about the laws of physics.  It’s perfectly conceivable that future developments in physics would conflict with scalable quantum computing, in the same way that relativity conflicts with faster-than-light communication, and the Second Law of Thermodynamics conflicts with perpetuum mobiles.  It’s for such a development in physics that I’m offering this prize.

Update: If anyone wants to offer a counterpart prize for a demonstration that scalable quantum computing is possible, I’ll be happy for that—as I’m sure, will many experimental QC groups around the world.  I’m certainly not offering such a prize.

Boycott Elsevier!

January 26th, 2012

If you’re in academia and haven’t done so yet, please take a moment to sign this online petition organized by Tyler Neylon, and pledge that you won’t publish, referee, or do editorial work for any Elsevier journals.  I’ve been boycotting Elsevier (and most other commercial journal publishers—Elsevier is merely the worst) since 2004, when I first learned about their rapacious pricing policies.  I couldn’t possibly be happier with my choice: unlike most idealistic principles, this one gets you out of onerous work rather than committing you to it!  Sure, Elsevier is huge and we’re tiny, but the fight against them is finally gathering steam (possibly because of Elsevier’s support for the “Research Works Act”), years after the case against them became inarguable.  Since their entire business model depends on our donating free labor to them, all it will take to bring them down is for enough of us to decide we’re through being had.  We can actually win this one … Yes We Can.

For more information, see this wonderful recent post by Fields medalist and Shtetl-Optimized commenter Timothy Gowers, entitled “Elsevier — my part in its downfall.”  (Added: also check out this great post by Aram Harrow.)  You might also enjoy a parody piece I wrote years ago, trying to imagine how Elsevier’s “squeeze those dupes for all they’ve got” business model would work in any other industry.

Updates from Kenya

January 18th, 2012

Yes, I’m blogging from outside Nairobi, where I’ve come to investigate the true circumstances of President Obama’s birth.  Seriously, Dana and I are here to go on a safari for our belated honeymoon—for both of us, it’s our first non-work-related trip in many years.  Needless to say, we both brought our laptops.

Like everyone else with an ounce of sense, I’m absolutely horrified by SOPA, and inspired by the way so many Internet companies and organizations have banded together to try to prevent the United States from moving in the direction of China and Iran.

Meanwhile, on the theme of open access to information on the web, check out a New York Times article by Thomas Lin about the open science movement.  I’m quoted briefly toward the end.

Sorry for the light (nonexistent) blogging lately.  I’ll be back after I’m done with the lions and hippos and so forth.

Cerebrum-stuffer from Shtetl Claus

December 24th, 2011

Ho³!  Home with family for the holidays and looking for something to do?  Then check out the archives of our 6.893 Philosophy and Theoretical Computer Science course blog.  The course just ended last week, so you can find discussions of everything from the interpretation of quantum mechanics to Occam’s Razor to the Church-Turing Thesis to strong AI, as well as links to student projects, including Criticisms of the Turing Test and Why You Should Ignore (Most of) Them, Barwise Inverse Relation Principle, Bayesian Surprise, Boosting, and Other Things that Begin with the Letter B, and an interactive demonstration of interactive proofs.  Thanks to my TA Andy Drucker, and especially to the students, for making this such an interesting course.

My New York Times essay on quantum computing

December 5th, 2011

I have a special treat for those commenters who consider me an incorrigible publicity-hound: an essay I was invited to write for the New York Times Science section, entitled Quantum Computing Promises New Insights, Not Just Supermachines.  (My original title was “The Real Reasons to Study Quantum Computing.”)  This piece is part of a collection of essays on “the future of computing,” which include one on self-driving cars by Sebastian Thrun, one on online learning by Daphne Koller, and other interesting stuff (the full list is here).

In writing my essay, the basic constraints were:

(a) I’d been given a rare opportunity to challenge at least ten popular misconceptions about quantum computing, and would kick myself for years if I didn’t hit all of them,

(b) I couldn’t presuppose the reader had heard of quantum computing, and

(c) I had 1200 words.

Satisfying these constraints was harder than it looked, and I benefited greatly from the feedback of friends and colleagues, as well as the enormously helpful Times staff.  I did get one request that floored me: namely, to remove all the material about “interference” and “amplitudes” (too technical), and replace it by something ordinary people could better relate to—like, say, a description of how a quantum computer would work by trying every possible answer in parallel.  Eventually, though, the Gray Lady and I found a compromise that everyone liked (and that actually improved the piece): namely, I’d first summarize the usual “try all answers in parallel” view, and then explain why it was wrong, bringing in the minus signs and Speaking Truth to Parallelism.
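
For anyone wondering what the minus signs actually buy, here’s about the smallest worked example there is (my toy illustration, not something from the essay): apply a Hadamard gate to |0⟩ and you get the “try both answers in parallel” superposition; apply it again, and the two paths leading to |1⟩ carry amplitudes +1/2 and −1/2, which cancel.  You then see |0⟩ with certainty because of interference, not in spite of it.

    import numpy as np

    H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # Hadamard gate
    psi = np.array([1., 0.])                           # the state |0>

    psi = H @ psi
    print(psi)   # [0.707 0.707]: equal amplitudes, the "parallel" picture

    psi = H @ psi
    print(psi)   # ~[1. 0.]: the two paths into |1> carried amplitudes
                 # +1/2 and -1/2, and destructively interfered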

To accompany the essay, I also did a short podcast interview about quantum computing with the Times’ David Corcoran.  (My part starts around 8:20.)  Overall, I’m happy with the interview, but be warned: when Corcoran asks me what quantum computers’ potential is, I start talking about the “try all answers in parallel” misconception—and then they cut to the next question before I get to the part about its being a misconception!  I need to get better at delivering soundbites…

One final comment: in case you’re wondering, those black spots on the Times’ cartoon of me seem to be artifacts of whatever photo-editing software they used.  They’re not shrapnel wounds or disfiguring acne.

Quantum Algorithms for Quantum Field Theories

December 3rd, 2011

For weeks, I’ve been meaning to blog about an important recent paper by Stephen Jordan, Keith Lee, and John Preskill, entitled Quantum Algorithms for Quantum Field Theories.  So I’m now doing so.

As long as I’ve been in quantum computing, people have been wondering aloud about the computational power of realistic quantum field theories (for example, the Standard Model of elementary particles).  But no one seemed to have any detailed analysis of this question (if there’s something I missed, surely commenters will let me know).  The “obvious” guess would be that realistic quantum field theories should provide exactly the same computational power as “ordinary,” nonrelativistic quantum mechanics—in other words, the power of BQP (the class of problems solvable in polynomial time by a quantum computer).  That would be analogous to the situation in classical physics, where bringing in special relativity dramatically changes our understanding of space, time, matter, and energy, but seems (unlike quantum mechanics) to have little or no effect on which computational problems can be solved efficiently.  Analogously, it would seem strange if quantum field theories (QFTs)—which tie together quantum mechanics, special relativity, and detailed knowledge about the elementary particles and their interactions, but seen from far enough away are “just” quantum mechanics—forced any major revision to quantum computing theory.

Until now, though, there seems to have been only one detailed analysis supporting that conclusion, and it applied to (2+1)-dimensional topological QFTs (TQFTs) only, rather than “realistic” (3+1)-dimensional QFTs.  This was the seminal work of Freedman, Kitaev, and Wang, and of Freedman, Larsen, and Wang, in 2000.  (Six years later, Aharonov, Jones, and Landau gave a more computer-science-friendly version, by directly proving the BQP-completeness of approximating the Jones polynomial at roots of unity.  The latter problem was known to be closely related to simulating TQFTs, from the celebrated work of Witten and others in the 1980s.)  To a theoretical computer scientist, dropping from three to two spatial dimensions might not sound like a big deal, but what’s important is that the relevant degrees of freedom become “topological”, making possible a clean, simple model of computation.  For “realistic” QFTs, by contrast, it wasn’t even obvious how to define a model of computation; putting realistic QFTs on a rigorous mathematical footing remains a notorious open problem.

In their new paper, Jordan, Lee, and Preskill say that they give an algorithm, running on a “conventional” quantum computer, to estimate scattering probabilities in a class of QFTs called “continuum φ⁴ theories.”  Their algorithm uses time polynomial in the number of incoming particles in the scattering experiment and in their total energy, and inversely polynomial in the desired precision ε and in the distance |λ−λc| between the QFT’s coupling constant λ and a phase transition at λc.  (In d=2 spatial dimensions, they say the dependence on the precision scales like (1/ε)^2.376, the 2.376 coming from matrix multiplication.  Naturally, that should now be amended to (1/ε)^2.373.)  To develop their algorithm, Jordan et al. apparently had to introduce some new techniques for coping with the error incurred by discretizing QFTs.  No classical algorithm is known with similar scaling—so when suitably formalized, the “QFT simulation problem” might indeed be in BQP-BPP, matching the uninformed doofus intuition of complexity theorists like me.  Jordan et al. don’t say whether the problem they’re solving is also BQP-complete; I imagine that could be a topic for future research.  They also don’t say whether their precision parameter ε bounds the variation distance between the real and simulated output distributions (rather than just the differences between probabilities of individual scattering outcomes); I hope they or someone else will be able to clarify that point.
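
If I’ve parsed their claims correctly, the resource scaling can be summarized in one line (my paraphrase, not a quotation of their theorem):

    T = poly(n, E, 1/ε, 1/|λ−λc|),

where n is the number of incoming particles, E their total energy, and, in d=2, the dependence on 1/ε goes like (1/ε)^ω for the matrix multiplication exponent ω.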

In case it isn’t obvious yet, let me make it crystal-clear that I lack the physics background to evaluate Jordan et al.’s work in a serious technical way.  All I can say with confidence is that the small number of people who (1) have the requisite background and (2) care about computational complexity, will probably spend non-negligible time discussing and understanding this paper in the weeks and months to come.


Conflict-of-Interest Warning: At a deep, subconscious level, I probably chose to blog about Jordan et al.’s paper not for any legitimate scientific reason, but simply because I know John Preskill and Stephen Jordan personally, and, despite being physicists, they’re both tremendously-respected colleagues who’ve made many outstanding contributions to quantum computing theory besides this one.  Then again, everything I’ve ever done—and everything you’ve ever done—has probably had such unsavory hidden motives as well, so who’s counting?  In all of history, there have only been ten or twenty people whose commitment to scientific objectivity has been absolute and pure, and since they comment on complexity blogs anonymously, we’ll probably never even know their names…

The Alternative to Resentment

December 2nd, 2011

A year ago, in a post entitled Anti-Complexitism, I tried to grapple with the strange phenomenon—one we’ve seen in force this past week—of anonymous commenters getting angry about the mere fact of announcements, on theoretical computer science blogs, of progress on longstanding open problems in theoretical computer science.  When I post something about global warming, Osama Bin Laden, or (of course) the interpretation of quantum mechanics, I expect a groundswell of anger … but a lowering of the matrix-multiplication exponent ω?  Huh?  What was that about?

Well, in this case, some commenters were upset about attribution issues (which hopefully we can put behind us now, everyone agreeing about the importance of both Stothers’ and Vassilevska Williams’ contributions), while others honestly but mistakenly believed that a small improvement to ω isn’t a big deal (I tried to explain why they’re wrong here).  What interests me in this post is the commenters who went further, positing the existence of a powerful “clique” of complexity bloggers that’s doing something reprehensible by “hyping” progress in complexity theory, or by exceeding some quota (what, exactly?) on the use of the word “breakthrough.”

One of the sharpest responses to that paranoid worldview came (ironically) from a wonderful anonymous comment on my Anti-Complexitism post, which I recommend everyone read.  Here was my favorite paragraph:

The final criticism [by the anti-complexites] seems to be: complexity theory makes too much noise which people in other areas do not like.  I really don’t understand this one, I mean what is wrong with people in an area being excited about their area?  Is that wrong?  And where do we make those noise?  On complexity blogs!  If you don’t like complexity theorists being excited about their area why are you reading these blogs?  The metaphor would be an outsider going to a wedding and asking the people in the wedding with a very serious tone: “why is everyone happy here?”

Yesterday, in response to my reposting the above comment on Lance and Bill’s blog, another anonymous commenter had something extremely illuminating to say:

Scott, you are missing the larger socio-economical context: it’s not about excitement.  It’s about researchers competing for scarce resources, primarily funding.  The work involved in funding acquisition is generally loathed, and directly reduces the time scientists have for research and teaching.  If some researchers ramp up their hype-level vis-a-vis the rest of the community, as the complexity community is believed to be doing (what with all them Goedel awards?), they are forcing (or are seen as forcing) the rest either to accept a lower level of funding with all the concomitant disadvantages, or invest more time in hype themselves.  In other words, hypers are defecting in the prisoners dilemma type game scientists are playing, the objective of which is to minimise the labour involved in funding acquisition.

This is similar to teeth-whitening: in the past, it was perfectly possible to be considered attractive with natural, slightly yellowish teeth. Then some defected by bleaching, then more and more, and today natural teeth are socially hardly acceptable, certainly not if you want to be good-looking.  Is that progress?

I posted a response on Lance and Bill’s blog, but then decided it was important enough to repost here.  So:

Dear Anonymous 2:47,

Let me see whether I understand you correctly.  On the view you propose, other scientists shouldn’t have praised (say) Carl Sagan for getting millions of people around the world excited about science.  Rather, they should have despised him, for using hype to divert scarce funding dollars from their own fields to the fields Sagan favored (like astronomy, or Sagan’s preferred parts of astronomy).  Sagan forced all those other scientists to accept a terrible choice: either accept reduced funding, or else sink to Sagan’s level, and perform the loathed task of communicating their own excitement about their own fields to the public.

Actually, there were other scientists who drew essentially that conclusion.  As an example, Sagan was famously denied membership in the National Academy of Sciences, apparently because of a few vocal NAS members who were jealous and resentful of Sagan’s outreach activities.  The view we’re now being asked to accept is that those NAS members are the ones who emerge from the story the moral victors.

So let me thank you, Anonymous 2:47: it’s rare for anyone to explain the motivation behind angry TCS blog comments with that much candor.

Now that the real motivation has (apparently) crawled out from underneath its rock, I can examine it and refute it.  The central point is simply that science isn’t a Prisoner’s-Dilemma-type game.   What you describe as the “socially optimal equilibrium,” where no scientists need to be bothered to communicate their excitement about their fields, is not socially optimal at all—neither from the public’s standpoint nor from science’s.

At the crudest level, science funding is not a fixed-size pie.  For example, when Congress was debating the cancellation of the Superconducting Supercollider, a few physicists from other fields eagerly jumped on the anti-SSC bandwagon, hoping that the SSC money might then get diverted to their own fields.  Ultimately, of course, the SSC was cancelled, and none of the money ever found its way to other areas of physics.

So, if you see people using blogs to talk about research results that excite them, then instead of resenting it, consider starting your own blog to talk about the research results that excite YOU.  If your blog is well-written and interesting, I’ll even add you to my blogroll, game-theoretic funding considerations be damned.  Just go to WordPress.com—it’s free, and it takes only a few minutes to set one up.