Archive for February, 2012

Tell President Obama to support the Federal Research Public Access Act

Tuesday, February 28th, 2012

If you’re tired of blog posts about open science, sorry dude—but it feels great to be part of a group of blogging nerds who, for once, are actually having a nonzero (and positive, I think!) impact on the political process.  Yesterday, Elsevier, which had been the biggest supporter of the noxious Research Works Act, announced, under pressure from the “Cost of Knowledge” movement, that it was dropping its support for the RWA.  Only hours later, Elsevier’s paid cheerleaders in Congress, Darrell Issa (R-CA) and Carolyn Maloney (D-NY), announced that they were shelving the RWA for now.  See this hilarious post by physicist John Baez, which translates Issa and Maloney’s statement about why they’re letting the RWA die into ordinary English, sentence by sentence.

But it gets better: Representative Mike Doyle (D-PA) has introduced a sort of anti-RWA, the Federal Research Public Access Act (or the easily-pronounced FRPAA), which would require federal agencies with research budgets of over $100 million to make the research they sponsor freely available no later than 6 months after its publication in a peer-reviewed journal (thereby expanding the NIH’s successful open-access policy).  If you’re a US citizen, and you care about the results of taxpayer-funded medical and other research being accessible to the public, then please sign this petition telling President Obama you support the FRPAA.  Tell your coworker, husband, wife, grandmother, etc. to sign it too.  Apparently the President will personally review it if it gets to 25,000 signatures by March 9.

And if you’re not a US citizen: that’s cool too!  Support open-access initiatives in your country.  (Or, if you live someplace like Syria, support the prerequisite “not-getting-shot” initiatives.)  Just don’t have a cow about my blogging about American issues from time to time, like this easily-offended Aussie did over on Cosmic Variance.

My visit to D-Wave: Beyond the roast-beef sandwich

Tuesday, February 21st, 2012

Last week I was in Vancouver, to give talks at the University of British Columbia and at the American Association for the Advancement of Science annual meeting.  As part of that visit, on Friday afternoon, John Preskill, John Martinis, Michael Freedman and I accepted a gracious invitation to tour the headquarters of D-Wave Systems in Burnaby (a suburb of Vancouver).  We started out in a conference room, where they served us cookies and sodas.  Being the mature person that I am, the possibility of the cookies being poisoned at no point crossed my mind.

Then we started the tour of D-Wave’s labs.  We looked under a microscope at the superconducting chips; we saw the cooling systems used to get the chips down to 20 millikelvin.  In an experience that harked back to the mainframe era, we actually walked inside the giant black cubes that D-Wave was preparing for shipment.  (The machines are so large partly because of the need for cooling, and partly to let engineers go in and fix things.)  Afterwards, D-Wave CTO Geordie Rose gave a 2-hour presentation about their latest experimental results.  Then we all went out to dinner.  The D-Wave folks were extremely cordial to us and fielded all of our questions.

In spite of my announcement almost a year ago that I was retiring as Chief D-Wave Skeptic, I thought it would be fitting to give Shtetl-Optimized readers an update on what I learned from this visit.  I’ll start with three factual points before moving on to larger issues.

Point #1: D-Wave now has a 128-(qu)bit machine that can output approximate solutions to a particular NP-hard minimization problem—namely, the problem of minimizing the energy of 90-100 Ising spins with pairwise interactions along a certain fixed graph (the “input” to the machine being the tunable interaction strengths).  So I hereby retire my notorious comment from 2007, about the 16-bit machine that D-Wave used for its Sudoku demonstration being no more computationally-useful than a roast-beef sandwich.  D-Wave does have something today that’s more computationally-useful than a roast-beef sandwich; the question is “merely” whether it’s ever more useful than your laptop.  Geordie presented graphs that showed D-Wave’s quantum annealer solving its Ising spin problem “faster” than classical simulated annealing and tabu search (where “faster” means ignoring the time for cooling the annealer down, which seemed fair to me).  Unfortunately, the data didn’t go up to large input sizes, while the data that did go up to large input sizes only compared against complete classical algorithms rather than heuristic ones.  (Of course, all this is leaving aside the large blowups that would likely be incurred in practice, from reducing practical optimization problems to D-Wave’s fixed Ising spin problem.)  In summary, while the observed speedup is certainly interesting, it remains unclear exactly what to make of it, and especially, whether or not quantum coherence is playing a role.
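
To make the classical side of that comparison concrete, here is a minimal sketch, in Python, of the simulated-annealing baseline: minimize E(s) = Σ_{(i,j)} J_ij s_i s_j over spins s_i ∈ {−1,+1}, given couplings J_ij on the edges of a graph.  Everything here (the random graph, the cooling schedule, the parameter values) is an illustrative stand-in, not D-Wave’s actual hardware graph or the benchmarked solvers:

```python
import math
import random

def simulated_annealing(n, J, sweeps=2000, T0=2.0, T1=0.01):
    """Heuristically minimize E(s) = sum over edges (i,j) of J[(i,j)]*s[i]*s[j]
    over s in {-1,+1}^n, using single-spin-flip Metropolis moves with a
    geometric cooling schedule from temperature T0 down to T1."""
    s = [random.choice([-1, 1]) for _ in range(n)]
    # Adjacency structure: for each spin, its (neighbor, coupling) pairs
    nbrs = {i: [] for i in range(n)}
    for (i, j), w in J.items():
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    for sweep in range(sweeps):
        T = T0 * (T1 / T0) ** (sweep / (sweeps - 1))
        for i in range(n):
            # Flipping spin i changes the energy by dE = -2 * s[i] * (local field)
            local = sum(w * s[j] for j, w in nbrs[i])
            dE = -2 * s[i] * local
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i] = -s[i]
    return s

# Toy instance: +/-1 couplings on a sparse random graph, standing in for
# D-Wave's fixed hardware graph (whose exact topology isn't given above)
n = 90
J = {}
while len(J) < 3 * n:
    i, j = random.sample(range(n), 2)
    J[(min(i, j), max(i, j))] = random.choice([-1, 1])

s = simulated_annealing(n, J)
print("final energy:", sum(w * s[i] * s[j] for (i, j), w in J.items()))
```

Tabu search, the other baseline Geordie mentioned, differs mainly in swapping the temperature schedule for a short-term memory that forbids re-flipping recently flipped spins.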

Which brings me to Point #2.  It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system.  (Note that, if there’s no entanglement, then it becomes extremely implausible that quantum coherence could be playing a role in a speedup.  For while separable-mixed-state quantum computers are not yet known to be efficiently simulable classically, we certainly don’t have any examples where they give a speedup.)  Last year, as reported on this blog, D-Wave had a nice Nature paper that reported quantum tunneling behavior in an 8-qubit system.  However, when I asked D-Wave scientist Mohammad Amin, he said he didn’t think that experiment provided any evidence for entanglement between qubits.

The “obvious” way to demonstrate entanglement between qubits would be to show a Bell inequality violation.  (We know that this can be done with superconducting qubits, as the Schoelkopf group at Yale, among others, reported a couple of years ago.)  Meanwhile, the “obvious” way to demonstrate a role for quantum coherence in the apparent speedup would be gradually to “turn down” the system’s coherence (for example, by adding an interaction that constantly measured the qubits in the computational basis), and check that the annealer’s performance degraded to that of classical simulated annealing.  Unfortunately, the D-Wave folks told us that neither experiment seems feasible with their current setup, basically because they don’t have arbitrary local unitary transformations and measurements available.  They said they want to try to demonstrate 2-qubit entanglement, but in the meantime, are open to other ideas for how to demonstrate a quantum role in the apparent speedup with their existing setup.
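
For reference, here is the textbook calculation behind a CHSH (Bell) test, as a small numerical sketch in Python with NumPy; it assumes nothing about D-Wave’s hardware.  Two qubits in a maximally entangled state, measured at the standard CHSH angles, reach a correlator sum of 2√2 ≈ 2.83, whereas any unentangled, local-hidden-variable description is capped at 2.  Realizing the four measurement settings below on physical qubits is precisely the kind of capability D-Wave says its current setup lacks:

```python
import numpy as np

# Pauli matrices
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin(theta):
    """+/-1-valued spin observable along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Maximally entangled Bell state |phi+> = (|00> + |11>) / sqrt(2)
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def E(t1, t2):
    """Correlator <A(t1) tensor B(t2)> in the state |phi+>."""
    return phi @ np.kron(spin(t1), spin(t2)) @ phi

# Standard CHSH measurement angles for Alice (a, ap) and Bob (b, bp)
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, -np.pi / 4

S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(f"S = {S:.4f}")  # ~2.8284 = 2*sqrt(2); any local classical model has |S| <= 2
```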

Point #3: D-Wave was finally able to clarify a conceptual point that had been bugging me for years.  I—and apparently many others!—thought D-Wave was claiming that their qubits decohere almost immediately (so that, in particular, entanglement would almost certainly never be present during the computation), but that the lack of entanglement didn’t matter, for some complicated reason having to do with energy gaps.  I was far from alone in regarding such a claim as incredible: as mentioned earlier, there’s no evidence that a quantum computer without entanglement can solve any problem asymptotically faster than a classical computer.  However, that isn’t D-Wave’s claim.  What they think is that their system decoheres almost immediately in the energy eigenbasis, but that it doesn’t decohere in the computational basis—so that, in particular, there would be entanglement at intermediate stages.  If so, that would be perfectly fine from the standpoint of the adiabatic algorithm, which doesn’t need coherence in the energy eigenbasis anyway (after all, the whole point is that, throughout the computation, you want to stay as close to the system’s ground state as possible!).  I understand that, given their knowledge of decoherence mechanisms, some physicists are extremely skeptical that you could have rapid decoherence in the energy basis without getting decoherence in the computational basis also.  So certainly the burden is on D-Wave to demonstrate that they maintain coherence “where it counts.”  But at least I now understand what they’re claiming, and how it would be compatible (if true) with a quantum speedup.
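
To spell out why that claim, if true, would sit fine with the adiabatic algorithm, here is the standard picture in LaTeX form; this is a textbook sketch, not anything specific to D-Wave’s device:

```latex
% Adiabatic interpolation between a driver Hamiltonian H_B and the problem
% Hamiltonian H_P whose ground state encodes the answer:
H(s) = (1-s)\,H_B + s\,H_P, \qquad s: 0 \to 1 .

% Dephasing in the instantaneous energy eigenbasis \{ |E_k(s)\rangle \}
% acts on the state \rho as
\rho \;\longmapsto\; \sum_k |E_k\rangle\langle E_k| \, \rho \, |E_k\rangle\langle E_k| ,

% which erases the coherences \langle E_j|\rho|E_k\rangle (j \neq k) but
% leaves every population \langle E_k|\rho|E_k\rangle fixed; in particular,
% the ground-state probability \langle E_0|\rho|E_0\rangle that the adiabatic
% algorithm relies on is untouched.  Dephasing in the computational basis
% \{ |z\rangle \} is a different map, and it does degrade the (generally
% entangled) instantaneous ground state.
```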

Let me now move on to three broader questions raised by the above points.

The first is: rather than constantly adding more qubits and issuing more hard-to-evaluate announcements, while leaving the scientific characterization of its devices in a state of limbo, why doesn’t D-Wave just focus all its efforts on demonstrating entanglement, or otherwise getting stronger evidence for a quantum role in the apparent speedup?  When I put this question to Mohammad Amin, he said that, if D-Wave had followed my suggestion, it would have published some interesting research papers and then gone out of business—since the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems.  So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t.  There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits!  Once you’ve shown that the foundation is solid, then you try to scale up.  So, please support D-Wave if it wants to spend money to show Bell inequality violations, or other “smoking-gun” evidence that its qubits are working together coherently.  You’re welcome, D-Wave!

The second question is one that I’ve encountered many times on the blogosphere: who cares how D-Wave’s system works, and whether it does or doesn’t exploit quantum coherence, as long as it solves practical problems faster?  Sure, maybe what D-Wave is building is really a series of interesting, useful, but still basically “classical” annealing devices.  Maybe the word “quantum” is functioning here as the stone in a stone soup: attracting money, interest, and talented people to build something that, while neat, ultimately doesn’t much depend on quantum mechanics at all.  As long as D-Wave’s (literal!) black box solves the problem instances in such-and-such amount of time, why does it matter what’s inside?

To see the obtuseness of this question, consider a simple thought experiment: suppose D-Wave were marketing a classical, special-purpose, $10-million computer designed to perform simulated annealing, for 90-bit Ising spin glass problems with a certain fixed topology, somewhat better than an off-the-shelf computing cluster.  Would there be even 5% of the public interest that there is now?  I think D-Wave itself would be the first to admit the answer is no.  Indeed, Geordie Rose spoke explicitly in his presentation about the compelling nature of (as he put it) “the quantum computing story,” and how it was key to attracting investment.  People don’t care about this stuff because they want to find the ground states of Ising spin systems a bit faster; they care because they want to know whether or not the human race has finally achieved a new form of computing.  So characterizing the device matters, goddammit!  I pride myself on being willing to adjust my opinions on just about anything in response to new data (as I’ve certainly done in D-Wave’s case), but the insistence that black boxes must be opened and explanations provided is something I’ll carry to the grave.

Finally, given the skeptical-yet-positive tone of this post, some people will wonder whether I now regret my earlier, more unmitigated D-Wave skepticism.  The answer is no!  Asking questions is my job.  I’ll give D-Wave credit whenever it answers some of the questions—as it did on this visit!—and will shift my views accordingly.  But I’ll also neither stop asking nor apologize for asking, until the evidence for a quantum speedup becomes clear and indisputable (as it certainly hasn’t yet).  On the other hand, I do regret the snowballing nastiness that developed as a combined result of my and other skeptics’ statements, D-Wave’s and its supporters’ statements, and the adversarial nature of the blogosphere.  For the first time, I find myself really, genuinely hoping—with all my heart—that D-Wave will succeed in proving that it can do some (not necessarily universal) form of scalable quantum computation.  For, if nothing else, such a success would prove to the world that my $100,000 is safe, and decisively refute the QC skeptics who, right now, are getting even further under my skin than the uncritical D-Wave boosters ever did.

Safari photos from Kenya

Sunday, February 12th, 2012

(Credit for most of the photos: Dana)

I was going to write a whole long essay about

  • the differences between going to the zoo and visiting an ancestral environment of humanity, where elephants have grazed for millions of years;
  • the weird sense of familiarity, as if you’re seeing how the surface of the earth is “supposed” to look, how it did look before humans started converting it into KFCs and parking lots;
  • how to tell whether an elephant charging your jeep is serious about wanting to trample you or, much more likely, just warning you to go away (apparently, it has to do with whether its ears are straight back or flapping);
  • the “airport” at Lake Naivasha (a strip of dirt in a grassy field filled with zebras, and a guy on a bicycle who shoos the zebras off the strip before a plane lands);
  • Britain’s failure, to this day, to issue any sort of apology for its detention, torture, and murder of tens of thousands of Kenyans during the waning years of its colonial rule in the 1950s;
  • the near-destruction by poaching, over the last century, of many of the majestic animal populations you see above;
  • the heroism of Richard Leakey (past director of the Kenya Wildlife Service) in overcoming decades of bureaucratic inertia to initiate a crackdown, where rangers were authorized to “poach the poachers,” shooting them on sight (!);
  • how, after Leakey almost-singlehandedly saved Kenya’s wild elephants, he lost both of his legs when his plane crashed (widely suspected to be due to sabotage), and was forced from his job months later;
  • the benefits of safari tourism in creating a serious economic incentive for conservation, but also the drawbacks (e.g., all the jeeps making it harder for the cheetahs to hunt);
  • the large, obvious, anything-but-“theoretical” changes being wrought by global warming on the rainfall in Kenya’s game parks (which changes are killing the trees, thereby eliminating the lions’ hiding places and making it harder for them to hunt—hey, at least the zebras are happy);
  • the Maasais’ innovative uses for cow dung; the resulting immature jokes on my part (homeowner to roofer: “this roof you sold me is shit!”);
  • my growing fascination, over the course of the trip, with the lesser-known corners of Mammalia (elands, dik-diks, kudus, waterbucks, topis, rock hyraxes); how this might mirror my fascination with lesser-known complexity classes like AWPP, QMA(2)/qpoly, SBP, C=P, and BPPpath;
  • how parts of the African savannah have better cellphone reception than my office in Stata;
  • how it’s indeed possible to catch up on Jon Stewart and The Big Bang Theory over wifi, from a tent in the Maasai Mara, while hippos bellow loudly in the river below, and elephants graze and crocodiles sun themselves on the other side.

But then I never got around to writing that essay.  So enjoy the photos, and ask in the comments if you want me to say something else.

The battle against Elsevier gains momentum

Wednesday, February 8th, 2012

Check out this statement on “The Cost of Knowledge” released today, which (besides your humble blogger) has been signed by Ingrid Daubechies (President of the International Mathematical Union), Timothy Gowers, Terence Tao, László Lovász, and 29 others.  The statement carefully explains the rationale for the current Elsevier boycott, and answers common questions like “why single out Elsevier?” and “what comes next?”

Also check out Timothy Gowers’ blog post announcing the statement.  The post includes a hilarious report by investment firm Exane Paribas, explaining that the current boycott has caused Reed Elsevier’s stock price to fall, but presenting that as a great investment opportunity, since they fully expect the price to rebound once this boycott fails like all the previous ones.  I ask you: does that not make you want to boycott Elsevier, for no other reason than to see the people who follow Exane Paribas’ cynical advice lose their money?

In related news, the boycott petition now has 4600+ signatures and counting.  If you’ve already signed, great!  If you haven’t, why not?

Update (Feb. 9): There’s now a great editorial by Gareth Cook in the Boston Globe supporting the Elsevier boycott (and analogizing it to both the Tahrir Square uprising and the Boston Tea Party!).

Whether or not God plays dice, I do

Friday, February 3rd, 2012

Another Update (Feb. 7): I have a new piece up at IEEE Spectrum, explaining why I made this bet.  Thanks to Rachel Courtland for soliciting the piece and for her suggestions for improving it.

Update: My $100,000 offer for disproving scalable quantum computing has been Slashdotted.  Reading through the comments was amusing as always.  The top comment suggested that winning my prize was trivial: “Just point a gun at his head and ask him ‘Convinced?’”  (For the record: no, I wouldn’t be, even as I handed over my money.  And if you want to be a street thug, why limit yourself to victims who happen to have made public bets about quantum computing?)  Many people assumed I was a QC skeptic, and was offering the prize because I hoped to spur research aimed at disproving QC.  (Which is actually an interesting misreading: I wonder how much “pro-paranormal” research has been spurred by James Randi’s million-dollar prize?)  Other people said the bet was irrelevant since D-Wave has already built scalable QCs.  (Oh, how I wish I could put the D-Wave boosters and the QC deniers in the same room, and let them duke it out with each other while leaving me alone for a while!)  One person argued that it would be easy to prove the impossibility of scalable QCs, just like it would’ve been easy to prove the impossibility of scalable classical computers in 1946: the only problem is that both proofs would then be invalidated by advances in technology.  (I think he understands the word “proof” differently than I do.)  Then, buried deep in the comments, with a score of 2 out of 5, was one person who understood precisely:

I think he’s saying that while a general quantum computer might be a very long way off, the underlying theory that allows such a thing to exist is on very solid ground (which is why he’s putting up the money). Of course this prize might still cost him since if the news of the prize goes viral he’s going to spend the next decade getting spammed by kooks.

OK, two people:

    There’s some needed context.  Aaronson himself works on quantum complexity theory.  Much of his work deals with quantum computers (at a conceptual level–what is and isn’t possible).  Yet there are some people who reject the idea that quantum computers can scale to “useful” sizes–including some very smart people like Leonid Levin (of Cook-Levin Theorem fame)–and some of them send him email, questions, comments on his blog, etc. saying so.  These people are essentially asserting that Aaronson’s career is rooted in things that can’t exist.  Thus, Aaronson essentially said “prove it.”  It’s true that proving such a statement would be very difficult … But the context is that Aaronson gets mail and questions all the time from people who simply assert that scalable QC is impossible, and he’s challenging them to be more formal about it.  He also mentions, in fairness, that if he does have to pay out, he’d consider it an honor, because it would be a great scientific advance.

For better or worse, I’m now offering a US$100,000 award for a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world.  This award has no time limit other than my death, and is entirely at my discretion (though if you want to convince me, a good approach would be to convince most of the physics community first).  I might, also at my discretion, decide to split the award among several people or groups, or give a smaller award for a discovery that dramatically weakens the possibility of scalable QC while still leaving it open.  I don’t promise to read every claimed refutation of QC that’s emailed to me.  Indeed, you needn’t even bother to send me your refutation directly: just convince most of the physics community, and believe me, I’ll hear about it!  The prize amount will not be adjusted for inflation.

The impetus for this prize was a post on Dick Lipton’s blog, entitled “Perpetual Motion of the 21st Century?”  (See also this followup post.)  The post consists of a debate between well-known quantum-computing skeptic Gil Kalai and well-known quantum-computing researcher Aram Harrow (Shtetl-Optimized commenters both), about the assumptions behind the Quantum Fault-Tolerance Theorem.  So far, the debate covers well-trodden ground, but I understand that it will continue for a while longer.  Anyway, in the comments section of the post, I pointed out that a refutation of scalable QC would require, not merely poking this or that hole in the Fault-Tolerance Theorem, but the construction of a dramatically-new, classically-efficiently-simulable picture of physical reality: something I don’t expect but would welcome as the scientific thrill of my life.  Gil more-or-less dared me to put a large cash prize behind my words—as I’m now, apparently, known for doing!—and I accepted his dare.
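
For readers who haven’t seen it, here is a rough informal statement of the theorem whose assumptions are at issue, with the constants and the noise-model fine print omitted:

```latex
% Quantum Fault-Tolerance (Threshold) Theorem, informally: there exists a
% constant \epsilon_0 > 0 such that, if every physical gate fails with
% probability \epsilon < \epsilon_0, and the failures are sufficiently
% independent and local, then any ideal quantum circuit with T gates can
% be simulated to accuracy \delta by a noisy circuit of size
T' = O\big( T \cdot \mathrm{polylog}(T/\delta) \big) .
% The debate concerns those noise assumptions (independence and locality),
% not the theorem's internal mathematics.
```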

To clarify: no, I don’t expect ever to have to pay the prize, but that’s not, by itself, a sufficient reason for offering it.  After all, I also don’t expect Newt to win the Republican primary, but I’m not ready to put $100,000 on the line for that belief.  The real reason to offer this prize is that, if I did have to pay, at least doing so would be an honor: for I’d then (presumably) simply be adding a little to the well-deserved Nobel Prize coffers of one of the greatest revolutionaries in the history of physics.

Over on Lipton’s blog, my offer was criticized for being “like offering $100,000 to anyone who can prove that Bigfoot doesn’t exist.”  To me, though, that completely misses the point.  As I wrote there, whether Bigfoot exists is a question about the contingent history of evolution on Earth.  By contrast, whether scalable quantum computing is possible is a question about the laws of physics.  It’s perfectly conceivable that future developments in physics would conflict with scalable quantum computing, in the same way that relativity conflicts with faster-than-light communication, and the Second Law of Thermodynamics conflicts with perpetuum mobiles.  It’s for such a development in physics that I’m offering this prize.

Update: If anyone wants to offer a counterpart prize for a demonstration that scalable quantum computing is possible, I’ll be happy for that—as I’m sure, will many experimental QC groups around the world.  I’m certainly not offering such a prize.