Archive for the ‘Adventures in Meatspace’ Category

The Future of Computer Science, and Why Every Other Major Sucks By Comparison

Monday, April 12th, 2010

Does this post finally herald my return to regular blogging after a months-long absence?

I don’t know.  For me, writing a Shtetl-Optimized entry always followed the same process: I’d get an idea and start typing, furiously rocking back and forth in my chair.  Then the voices in my head would pipe up: no, I can’t say that—what will everyone think?—judging from past experience, they’ll probably take offense—I can already see the commenters boiling me alive—maybe if I rephrased it, or, y’know, provided some context—but to explain the real context, I’d need a whole book—and who has the time for that?—better wait till after tenure—meantime, maybe I could blog about something light and uncontroversial instead—but then what’s the point?—we already have one GASARCH—well, I could always put off a decision till later—

Back in the blog’s heyday, I’d win these fights about 40% of the time and the voices would win about 60%.  (In other words: if you’ve ever taken offense at an entry of mine, rest assured that you haven’t even seen the half of my drafts folder.)  But now that I have an actual stake in this shabby world—students to advise and look after, a tenure case to build, conceivably even a family to start—the voices win more like 98% of the time.  And that’s why my blogging fell off.

Occasionally, though, something comes along so uncomplicatedly joyous that I feel no reservations about sharing it with the world.  Such was the case this weekend, when I was somehow called upon to represent MIT’s EECS Department in the annual “Professor Talent Show” at Campus Preview Weekend.  This is an event where six faculty members square off, taking eight minutes each to

(1) explain why their department is the coolest,
(2) crack jokes, and
(3) possibly demonstrate a musical or athletic talent.

Then, using electronic clickers, the several hundred prefrosh in attendance vote for which major carried the day.  Though I had absolutely no talent of any kind to demonstrate, and was up against a banjo-player, a violinist, and a basketball-spinner among other tough competitors, for some reason EECS won!  You can see my PowerPoint slides here:

The Future of Computer Science, and Why Every Other Major Sucks By Comparison
http://www.scottaaronson.com/talks/futurecs.ppt

(You can read the jokes that go along with each slide in the slide notes at the bottom.)

Update (4/15): I hadn’t realized at all that there’s actually a video of me giving the talk!  (Click on “Part 2.”)

QIP’2010: The Power and the Glory of Quantum Complexity Theory

Tuesday, January 19th, 2010

Firstly, if you haven’t contributed to relief efforts in Haiti, you can do so (the charity linked to was recommended by a Haitian-American working at MIT CSAIL).  I wish I had something more useful to say about this tragedy, but I don’t.

For the past couple days, I’ve been at QIP’2010 in Zurich, Switzerland.  I’d had premonitions, even before arriving, that this was going to be an unusually awesome QIP.  Having been on the PC, I knew the strength of the technical program, and I’d learned that the turnout—320 participants—would be a record high.  My positive feelings only intensified when I saw the following in the hallway of my hotel:

My buzz reached a fever pitch when I entered the lecture hall and found giant, gourmet Swiss chocolate bars on every seat.

But I only knew for sure that this QIP would rock when my erstwhile adviser, Umesh Vazirani, gave his opening plenary talk on “New bridges between computer science and quantum computation.”  Umesh highlighted several developments, including:

  1. the relationship I noticed last year between the BQP versus the polynomial hierarchy problem and the Generalized Linial-Nisan Conjecture.
  2. the QIP=PSPACE breakthrough of Jain et al. (based on the recent multiplicative-weights update method from classical computer science).
  3. recent breakthroughs in lattice-based cryptography (most famously, Gentry’s fully homomorphic encryption system), which took inspiration from the quantum computing work of Oded Regev five years ago.
  4. the work of Broadbent, Fitzsimons, and Kashefi and independently Aharonov, Ben-Or, and Eban (for which I paid out pieces of the Aaronson $25.00 Prize), which lets a “classical” verifier (equipped with the ability to send single, unentangled qubits through a channel) verify an arbitrary quantum computation; and which Umesh illustrated by a story about a verifier investigating the claims of a shady, fly-by-night company called “Q-Wave.”

Umesh argued that the deepening connections between quantum computing and classical complexity theory—open problems in classical complexity being solved using quantum-inspired techniques, tools that weren’t available in the classical world until a year or two ago already being used for quantum purposes, etc.—represent one of the most exciting new developments in the field.

The combination of the chocolate bar (which I was already eating), and Umesh preaching so much truth from the pulpit, was heavenly.

Amazingly, subsequent talks managed to keep up the momentum.  Daniel Gottesman spoke about his work with Sandy Irani on the quantum complexity of translationally-invariant tiling and Hamiltonian problems.  By giving tools to say something useful about computational complexity even in cases where the only free parameter is the system size, Gottesman and Irani open up some exciting avenues for further work.  Jordan Kerenidis spoke about the effort to prove an analogue of the Valiant-Vazirani witness isolation theorem for QMA (Quantum Merlin-Arthur).

Stefano Pironio talked about how you can exploit Bell inequality violations to generate a long random string starting from a short random seed, assuming only the locality of the laws of physics, and not assuming anything about the reliability of your randomness-generating devices.  This observation, which I find to be of great conceptual (and conceivably even practical) interest, is related to the so-called “Free Will Theorem” of Conway and Kochen, as well as to a result I proved eight years ago in my review of Stephen Wolfram’s book.  For Conway and Kochen, though, the motivation was to prove that “subatomic particles have free will” (a strange interpretation that I don’t by any means endorse!), while for me, the motivation was to prove that Wolfram was wrong.  Neither I nor (as far as I know) Conway and Kochen thought about the obvious-in-retrospect application to generating random numbers.  (Incidentally, if anyone’s interested, my talk slides from yesterday morning are here.)
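For the curious, the engine behind Pironio et al.’s trick is easy to play with numerically.  Here’s a minimal numpy sketch (my own toy code, not anything from the talk) of the CHSH game: a referee sends uniform random bits x, y to two separated players, who answer with bits a, b and win iff a⊕b = x∧y.  Classical strategies win at most 75% of the time, but measuring a shared entangled pair at the textbook-optimal angles wins with probability cos²(π/8) ≈ 0.854.  So any pair of devices winning more than 75% of the time can’t be answering by a deterministic rule, which is why their outputs must contain fresh randomness:

```python
import numpy as np

def projectors(theta):
    """Projectors onto the two outcomes of a measurement at angle theta."""
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return np.outer(v0, v0), np.outer(v1, v1)

phi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)
alice = {0: 0.0, 1: np.pi / 4}                  # Alice's measurement angle for input x
bob = {0: np.pi / 8, 1: -np.pi / 8}             # Bob's measurement angle for input y

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):          # winning condition of the CHSH game
                    Pab = np.kron(projectors(alice[x])[a], projectors(bob[y])[b])
                    win += 0.25 * (phi @ Pab @ phi)   # inputs x, y uniform

print(win)   # 0.8535... = cos^2(pi/8), beating the classical bound of 0.75
```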

There’s also been a great deal of excitement at this year’s QIP about the efficient simulation of quantum systems occurring in nature, using recent techniques for model-order reduction (including MERA, matrix product states, quantum Metropolis sampling, area laws…).  I hope I haven’t just made John Sidles faint from excitement.

The full schedule is here; feel free to ask in the comments about any talks I didn’t mention.  If there’s enough interest, I might also write a followup post about the rest of the conference.

Barriers to snarky blogging

Thursday, August 27th, 2009

I’m writing from the Barriers in Computational Complexity workshop in Princeton, where too many real things are happening for me to blog about nothing.  I understand that streaming video of all the talks will be up eventually; for now, a few highlights:

  • On Tuesday I hosted a panel discussion on “Barrier Problems in Boolean Complexity.”  The panelists were Steve Cook, Avi Wigderson, Russell Impagliazzo, and Sasha Razborov.  We got lots of questions from the floor, about everything from whether P≠NP, to whether P vs. NP is independent of set theory, to whether the laws of physics can be understood as computer programs.  Alas, there were few to no serious disagreements among the panelists (indeed, you can probably guess their answers to all three questions).
  • Ketan Mulmuley spoke about Geometric Complexity Theory (GCT), his approach to P vs. NP and related problems based on algebraic geometry and group representation theory.  For months I’ve been planning a blog post about GCT. Spurred on by people at the workshop, I might actually finish it soon.  In the meantime, those of you who can’t wait for your daily helping of plethysms, Weyl modules, and G-varieties might want to check out Mulmuley’s new complexity-theoretic overview and complementary mathematical overview of GCT.
  • Ben Rossman spoke about recent lower bounds in circuit complexity: an ~n^(k/4) lower bound on the size of AC^0 circuits computing the k-clique function, and (a brand-new result) an ~n^(k/4) lower bound on the size of monotone circuits computing the k-clique function, even on average.
  • Ran Raz gave an awesome talk on “How to Fool People to Work on Circuit Lower Bounds.”  (Answer: by giving them completely innocuous-looking mathematical problems, without telling them that the answers would imply breakthroughs in complexity theory.  Alas, presumably no one who attended Ran’s talk—or for that matter who’s reading this entry—can be fooled, since we’re in on the secret.)  In particular, Ran spoke about his STOC’08 paper on elusive functions, as well as some brand-new work on how lower-bounding the rank of explicit tensors would lead to circuit and formula size lower bounds.

Meanwhile, Lance has a superb survey article in Communications of the ACM about the status of the P vs. NP problem.

(An earlier version of this post discussed a preprint by Gus Gutoski on quantum multi-prover interactive proof systems.  That preprint has since been retracted.)

And now I bid adieu, as the next talk is starting and my laptop is running out of batteries.


My long, complexity-theoretic journey

Thursday, June 11th, 2009

So, what was I doing these past few weeks that could possibly take precedence over writing ill-considered blog entries that I’d probably regret for the rest of my life?

1. On the gracious invitation of Renato Renner, I visited one of Al Einstein’s old stomping-grounds: ETH Zürich.  There I gave a physics colloquium called How Much Information Is In A Quantum State?, as well as a talk on my paper Quantum Copy-Protection and Quantum Money, which has been more than three years in the procrastinating.  Though I was only in Switzerland for three days, I found enough time to go hiking in the Swiss Alps, if by “Swiss Alps” you mean a 200-foot hill outside the theoretical physics building.  I’m quite proud of having made it through this entire trip—my first to Switzerland—without once yodeling or erupting into cries of “Riiiiiiicola!”  On the other hand, what with the beautiful architecture, excellent public transportation, and wonderful hosts, it was a struggle to maintain my neutrality.

2. On the plane to and from Switzerland, I had the pleasure of perusing Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak, which has just been published after floating around the interweb for many years.  If you’re a hardcore complexity lover, I can recommend buying a copy in the strongest terms.  The book lives up to its subtitle, concentrating almost entirely on developments within the last twenty years.  Classical complexity theorists should pay particular attention to the excellent quantum computing chapter, neither of whose authors has the slightest background in the subject.  You see, people, getting quantum right isn’t that hard, is it?  The book’s only flaw, an abundance of typos, is one that can and should be easily fixed in the next edition.

3. I then visited the National Institute of Standards and Technology—proud keepers of the meter and the kilogram—at their headquarters in Gaithersburg, MD.  There I gave my talk on Quantum Complexity and Fundamental Physics, a version of the shtick I did at the QIS workshop in Virginia.  Afterwards, I got to tour some of the most badass experimental facilities I’ve seen in a while.  (Setting standards and making precision measurements: is there anything else that sounds so boring but turns out to so not be?)  A highlight was the Center for Neutron Research, which houses what’s apparently the largest research reactor still operating in the US.  This thing has been operating since 1967, and it shoots large numbers of slow-moving neutrons in all directions so that archaeologists, chemists, physicists, etc. can feed off the trough and do their experiments.  The basic physics that’s been done there recently has included setting bounds on possible nonlinearities in the Schrödinger equation (even though any nonlinearity, no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time), as well as observing the photons that the Standard Model apparently predicts are emitted 2% of the time when a neutron decays.  I also got to see one of the world’s least jittery floors: using dynamical feedback, they apparently managed to make this floor ~10^7 times less jittery than a normal floor, good enough that they can run a double-slit experiment with slow neutrons on top of it and see the interference pattern.  (Before you ask: yes, I wanted to jump on the floor, but I didn’t.  Apparently I would’ve messed it up for a day.)

I have to add: the few times I’ve toured a nuclear facility, I felt profoundly depressed by the “retro” feel of everything around me: analog dials, safety signs from the 60s…   Why are no new reactors being built in the US, even while their value as stabilization wedges becomes increasingly hard to ignore?  Why are we unwilling to reprocess spent fuel rods like France does?  Why do people pin their hopes on the remote prospect of controlled fusion, ignoring the controlled fission we’ve had for half a century?  Why, like some horror-movie character unwilling to confront an evil from the past, have we decided that a major technology possibly crucial to the planet’s survival must remain a museum piece, part of civilization’s past and not its future?  Of course, these are rhetorical questions.  While you can be exposed to more radiation flying cross-country than working at a nuclear reactor for months, while preventing a Chernobyl is as easy as using shielding and leaving on the emergency cooling system, human nature is often a more powerful force than physics.

4. Next I went to STOC’2009 in Bethesda, MD.  Let me say something about a few talks that are impossible not to say something about.  First, in what might or might not turn out to be the biggest cryptographic breakthrough in decades, Craig Gentry has proposed a fully homomorphic encryption scheme based on ideal lattices: that is, a scheme that lets you perform arbitrary computations on encrypted data without decrypting it.  Currently, Gentry’s scheme is not known to be breakable even by quantum computers—despite a 2002 result of van Dam, Hallgren, and Ip, which said that if a fully homomorphic encryption scheme existed, then it could be broken by a quantum computer.  (The catch?  Van Dam et al.’s result applied to deterministic encryption schemes; Gentry’s is probabilistic.)
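If you’ve never seen a homomorphic operation in action, here’s the standard toy illustration (emphatically not Gentry’s lattice-based scheme, and with laughably insecure textbook parameters): plain RSA is homomorphic for multiplication alone, since two ciphertexts multiply to an encryption of the product of the plaintexts.  Gentry’s breakthrough is a scheme homomorphic for both addition and multiplication, and hence for arbitrary circuits evaluated on encrypted data:

```python
# Textbook RSA with tiny parameters (insecure, for illustration only).
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse, Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 7, 12
c1, c2 = enc(m1), enc(m2)

# Multiply the ciphertexts, never touching the plaintexts...
c_prod = (c1 * c2) % n

# ...and the decryption is the product of the plaintexts.
print(dec(c_prod))  # 84 = 7 * 12
```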

Second, Chris Peikert (co-winner of the Best Paper Award) announced a public-key cryptosystem based on the classical worst-case hardness of the Shortest Vector Problem.  Previously, Regev had given such a cryptosystem based on the assumption that there’s no efficient quantum algorithm for SVP (see also here for a survey).  The latter was a striking result: even though Regev’s cryptosystem is purely classical, his reduction from SVP to breaking the cryptosystem was a quantum reduction.  What Peikert has now done is to “dequantize” Regev’s security argument by thinking very hard about it.  Of course, one interpretation of Peikert’s result is that classical crypto people no longer have to learn quantum mechanics—but a better interpretation is that they do have to learn QM, if only to get rid of it!  I eagerly await Oded Goldreich’s first paper on quantum computing (using it purely as an intellectual tool, of course).

Third, Robin Moser (co-winner of the Best Paper Award and winner of the Best Student Paper Award) gave a mindblowing algorithmic version of the Lovász Local Lemma.  Or to put it differently, Moser gave a polynomial-time algorithm that finds a satisfying assignment for a k-SAT formula, assuming that each clause intersects at most 2^(k-2) other clauses.  (It follows from the Local Lemma that such an assignment exists.)  Moser’s algorithm is absurdly simple: basically, you repeatedly pick an unsatisfied clause, and randomly set its variables so that it’s satisfied.  Then, if doing that has made any of the neighboring clauses unsatisfied, you randomly set their variables so that they’re satisfied, and so on, recursing until all the damage you’ve caused has also been fixed.  The proof that this algorithm actually halts in polynomial time uses a compression argument that, while simple, seemed so completely out of left field that when it was finished, the audience of theorists sort of let out a collective gasp, as if a giant black “QED” box were hovering in the air.
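In case the verbal description isn’t vivid enough, the whole algorithm fits in a few lines of Python (the clause encoding and function names here are mine): clauses are tuples of nonzero integers, DIMACS-style, so 3 means x3 and -3 means NOT x3.

```python
import random

def satisfied(clause, assignment):
    """A clause is satisfied if any of its literals is."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def moser_solve(clauses, n_vars):
    # Start from a uniformly random assignment.
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}

    def fix(clause):
        # Randomly resample the offending clause's variables...
        for lit in clause:
            assignment[abs(lit)] = random.random() < 0.5
        # ...then recursively repair any clause sharing a variable that is
        # now unsatisfied (possibly including this clause itself).
        vars_here = {abs(lit) for lit in clause}
        for other in clauses:
            if vars_here & {abs(lit) for lit in other} and not satisfied(other, assignment):
                fix(other)

    while True:
        bad = [c for c in clauses if not satisfied(c, assignment)]
        if not bad:
            return assignment
        fix(bad[0])
```

Under the Local Lemma condition, the recursion halts in expected polynomial time; that’s exactly the part whose proof drew the gasp.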

Fourth, Babai, Beals, and Seress showed that if G is a matrix group over a finite field of odd order, then the membership problem for G can be solved in polynomial time, assuming an oracle for the discrete logarithm problem.  This represents the culmination of about 25 years of work in computational group theory.  I was all pumped to announce an important consequence of this result not noted in the abstract—that the problem is therefore solvable in quantum polynomial time, because of Shor’s discrete log algorithm—but Laci, alas, scooped me on this highly nontrivial corollary in his talk.

5. Finally, I took the train up to Princeton, for a workshop on “Cryptography and Complexity: Status of Impagliazzo’s Worlds”.  (For the insufficiently nerdy: the worlds are Algorithmica, where P=NP; Heuristica, where P≠NP but the hard instances of NP-complete problems are hard to find; Pessiland, where the hard instances are easy to find but none of them can be used for cryptographic one-way functions; Minicrypt, where one-way functions do exist, enabling private-key cryptography, but not the trapdoor one-way functions needed for public-key cryptography; and Cryptomania, where trapdoor one-way functions exist, and cryptography can do pretty much anything you could ask.)  I gave a talk on Impagliazzo’s worlds in arithmetic complexity, based on ongoing joint work with Andy Drucker (where “ongoing” means we’re pretty sure more of our results are correct than would be expected by random guessing).

Tell you what: since it’s been a long time, feel free to ask whatever you feel like in the comments section, whether related to my journeys or not.  I’ll try to answer at least a constant fraction of questions.


The QIS workshop

Tuesday, May 5th, 2009

As promised, here’s my report on the Quantum Information Science Workshop in Virginia, only a week or so behind schedule.

I tried to be cynical—really I did.  But despite my best efforts, somehow I went home more excited about quantum than I’ve been in a long time.

The highlight of the workshop was of course the closed, invitation-only, late-night meeting in the basement of NSF headquarters, at which a group of us hidebound quantum computing reactionaries plotted to keep the field focused on irrelevant mathematical abstractions, and to ostracize the paradigm-smashing entrepreneurial innovators who threaten our status.  I don’t think I’ve ever heard so much cackling in the space of a single evening, or so much clinking of bone goblets.  Stuff like that is why I entered the field in the first place.

But there were some other highlights as well:

[Full list of talks iz heer]

  1. In his talk on quantum algorithms with polynomial speedups, Andris Ambainis called attention to a spectacular recent paper by Ben Reichardt, which characterizes the quantum query complexity of any partial or total Boolean function f (up to a logarithmic factor) as the optimal witness size of a span program for f, and also as the negative-weight quantum adversary lower bound for f.  Assuming this result is correct, it seems possible that the remaining open problems in quantum query complexity will be pulverized, one after another, by solving the associated SDPs for the optimal span programs.  (Incidentally, using Reichardt’s result, it must be possible to prove, e.g., an Ω(n^(1/3)/log n) lower bound for the quantum query complexity of the collision problem using the adversary method.  This was a longstanding open problem.  Can one say, explicitly, what the adversary matrices are in this case?)  Alas, it also seems possible that span programs will turn out to be almost as hard to analyze as quantum algorithms were…

(1+√5)/2. Despite the obvious danger to the future funding of the entire field, by some clerical error I was released from my padded cell to speak about “Quantum Complexity and Fundamental Physics”.  My “talk,” if it can be called that, was in my opinion neither rational nor integral to the workshop.

2. In her talk on blind quantum computation, Anne Broadbent (who’s also visiting MIT this week) described some beautiful new results that partly answer my Aaronson $25.00 Challenge from a year and a half ago.  The Challenge, if you recall, was whether a quantum computer can always “prove its work” to a classical skeptic who doesn’t believe quantum mechanics—or more formally, whether every problem in BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP.  Anne, Joe Fitzsimons, and Elham Kashefi haven’t quite answered this question, but in a recent paper they’ve come close: they’ve shown that a quantum computer can prove its work to someone who’s almost completely classical, her only “quantum” power being to prepare individual polarized photons and send them over to the quantum computer.  Furthermore, their protocol has the amazing property that the quantum computer learns nothing whatsoever about which particular quantum computation it’s performing!  (Aharonov, Ben-Or, and Eban independently gave a protocol with the same amazing properties, except theirs requires the “classical” verifier to have a constant-sized quantum computer.)  Anne et al. also show that two quantum computers, who share entanglement but can’t communicate with each other, can prove their work to a completely classical verifier (while, again, remaining completely oblivious to what they computed).

On top of everything else, these results appear to be the first complexity-theoretic application of the measurement-based quantum computing paradigm, as well as the first “inherently quantum” non-relativizing results.  (Admittedly, we don’t yet have an oracle relative to which the blind quantum computing protocols don’t work—but the protocols rely essentially on the gate structure of the quantum circuits, and I conjecture that such an oracle exists.)

Rereading my Challenge, I noticed that “the [one-member] Committee may also choose to award smaller prizes for partial results.”  And thus, yesterday I had the pleasure of awarding Anne a crumpled $10 bill, with an additional $5 contributed by Seth Lloyd, for a grand total of $15.00 to be shared equally among Anne, Joe, and Elham.  (Update: Since I wrote that, Anne has elected to trade in for three signed and doodled-upon $5 bills.)  (Another Update: A $12, or $15-$O(1), prize shall be awarded to Dorit Aharonov, Michael Ben-Or, and Elad Eban the next time I see them.)  This is, I believe, the first time a monetary reward offered on Shtetl-Optimized has actually been paid out.

3. In a talk that was so good, you almost forgot it involved chemistry, Alán Aspuru-Guzik discussed applications of quantum complexity theory to understanding photosynthesis and the design of efficient solar cells (!).  To give you a sense of how mindblowing that is, it briefly made me wonder whether I should reread some of John Sidles’ cheerful ramblings about the coming merger of quantum systems engineering with biology in the 21st century (of which, I predict, this very sentence will inspire dozens more).

So what then is the connection between quantum complexity theory and photosynthesis?  Well, a few of you might remember my post “Low-Hanging Fruit from Two Conjoined Trees” from years ago, which discussed the lovely result of Childs et al. that a quantum walk on two conjoined binary trees can reach a designated end vertex exponentially faster than a classical walk on the same graph.  That result interested me, among other things, because it can be shown to lead to an oracle relative to which BQP ⊄ SZK, which at the time I didn’t know how to find otherwise.  But especially given the bizarre nature of the graph needed to produce the oracle separation, I thought of this result as pretty much the prototype of an irrelevant complexity-theoretic curiosity (which, naturally, made me like it all the more).

You can probably guess where this is going.

Shown above is a light-harvesting molecule (image snagged from Alán’s slides), which apparently is efficient at concentrating light at its center for essentially the same reason the Childs et al. quantum walk reaches the target vertex exponentially faster than a classical walk: namely, because of destructive interference between the paths that point backward, toward the leaves.  According to Alán, what plants do to harvest sunlight is not entirely unrelated either (it also involves quantum coherence), and fully understanding these mechanisms in quantum information terms might conceivably be useful in designing better solar cells.  To be fair, a part of me always did suspect that quantum oracle separations would turn out to be the key to solving the world energy crisis.  I’ll point you here or here if you want to know more.
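If you want to see the interference effect with your own eyes, here’s a small numpy experiment (my own sketch, with a simplified gluing in which the two trees’ leaves are identified pairwise, rather than joined by a random cycle as in the actual Childs et al. construction): run the continuous-time quantum walk e^(-iAt) and the classical continuous-time random walk e^(t(A-D)) on the same graph, and track the probability of sitting at the exit root.

```python
import numpy as np

def glued_trees(depth):
    """Adjacency matrix of two depth-`depth` binary trees glued at the leaves."""
    per_tree = 2 ** (depth + 1) - 1
    n = 2 * per_tree
    A = np.zeros((n, n))
    for v in range(per_tree):                 # heap-indexed left tree...
        for c in (2 * v + 1, 2 * v + 2):
            if c < per_tree:
                A[v, c] = A[c, v] = 1
                A[n-1-v, n-1-c] = A[n-1-c, n-1-v] = 1   # ...mirrored right tree
    for i in range(2 ** depth):               # glue matching pairs of leaves
        left_leaf = 2 ** depth - 1 + i
        A[left_leaf, n - 1 - left_leaf] = A[n - 1 - left_leaf, left_leaf] = 1
    return A

A = glued_trees(4)
n = A.shape[0]
entrance, exit_ = 0, n - 1                    # the two roots

w, V = np.linalg.eigh(A)                      # quantum walk Hamiltonian: A itself
L = A - np.diag(A.sum(axis=1))                # classical random-walk generator
wl, Vl = np.linalg.eigh(L)

q_peak = c_peak = 0.0
for t in np.linspace(0, 40, 400):
    psi = V @ (np.exp(-1j * w * t) * V[entrance])   # e^{-iAt} |entrance>
    q_peak = max(q_peak, abs(psi[exit_]) ** 2)
    p = Vl @ (np.exp(wl * t) * Vl[entrance])        # e^{tL} applied to delta at entrance
    c_peak = max(c_peak, p[exit_])

print(q_peak, c_peak)   # the quantum walk's peak exit probability is markedly higher
```

The classical walk gets lost among the exponentially many middle vertices, while the backward-pointing paths of the quantum walk interfere destructively, sweeping the amplitude toward the exit.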

Incidentally, Alán’s talk had another, also extremely interesting part, which was about coming up with precise numerical estimates of the number of qubits you’d need to simulate the wavefunctions of (say) benzene, caffeine, and cholesterol.  (Many of us have long thought that simulating physics and chemistry will be the real application for scalable quantum computers if we ever build them, practical long before breaking RSA and ultimately more useful too.  But it’s not something we often talk about—ostensibly for lack of meaty things to say, really because we don’t know chemistry.)

4. In her talk, Dorit Aharonov posed an open problem that I now have no choice but to inflict on others, if I don’t want to feel forced to think about it myself.  So here’s her problem: how hard is it to find the ground state of a local Hamiltonian H = H_1 + … + H_m (that is, a sum of k-qubit interactions, for some constant k), if we impose the constraint that the H_i’s all commute with each other?  Clearly it’s somewhere between NP and QMA.  It might seem obvious that this problem should be in NP—to which I can only respond, prove it!
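For intuition about why “commuting” feels tantalizingly close to NP: pairwise commuting Hermitian terms are simultaneously diagonalizable, so some common eigenbasis contains a ground state of the sum, whose energy is just a sum of individual eigenvalues (the catch being that the common eigenvectors can still be highly entangled).  A tiny numpy check, with three mutually commuting 2-local terms on 3 qubits (my own toy example):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

H1 = kron(X, X, I2)   # X1 X2
H2 = kron(Z, Z, I2)   # Z1 Z2: commutes with H1 (two anticommuting overlaps cancel)
H3 = kron(I2, I2, X)  # X3: disjoint support, commutes with both
H = H1 + H2 + H3

assert np.allclose(H1 @ H2, H2 @ H1)   # the terms really do commute

evals, evecs = np.linalg.eigh(H)
print(evals[0])   # ~ -3: the singlet on qubits 1,2 (entangled!) times |-> on qubit 3
```

Here the unique ground state is a common eigenvector of all three terms with eigenvalue -1 each, so the ground energy is -3; yet that eigenvector contains an entangled singlet, which is precisely why “obviously in NP” is harder to prove than it looks.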

There were also lots of great talks by the experimentalists.  Having attended them, I can report with confidence that (1) they’re still trying to build a quantum computer but (2) decoherence is still a big problem.  If you want to know even more detail than I’ve just provided—or you want to know about the theory talks I didn’t mention, or more about the ones I did mention—ask away in the comments.  I can’t promise that no one will know the answer.

The T vs. HT (Truth vs. Higher Truth) problem

Friday, January 9th, 2009

From a predictably-interesting article by Freeman Dyson in Notices of the AMS (hat tip to Peter Woit):

The mathematicians discovered the central mystery of computability, the conjecture represented by the statement P is not equal to NP. The conjecture asserts that there exist mathematical problems which can be quickly solved in individual cases but cannot be solved by a quick algorithm applicable to all cases. The most famous example of such a problem is the traveling salesman problem, which is to find the shortest route for a salesman visiting a set of cities, knowing the distance between each pair. All the experts believe that the conjecture is true, and that the traveling salesman problem is an example of a problem that is P but not NP. But nobody has even a glimmer of an idea how to prove it. This is a mystery that could not even have been formulated within the nineteenth-century mathematical universe of Hermann Weyl.

At a literal level, the above passage contains several howlers (I’ll leave it to commenters to point them out), but at a “deeper” “poetic” level, Dyson happens to be absolutely right: P versus NP is the example par excellence of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

Speaking of P versus NP, I’m currently visiting Sasha Razborov at his new home, the University of Chicago.  (Yesterday we had lunch at “Barack’s favorite pizza place”, and walked past “Barack’s favorite bookstore.”  Were they really his favorites?  At a deeper poetic level, sure.)

One of the highlights of my trip was meeting Ketan Mulmuley for the first time, and talking with him about his geometric approach to the P vs. NP problem.  Ketan comes across in person as an almost mythological figure, like a man who flew too close to the sun and was driven nearly to ecstatic obsession by what he saw.  This is someone who’ll explain to anyone in earshot, for as long as he or she cares to listen, that he’s glimpsed the outlines of a solution of the P vs. NP problem in the far frontiers of mathematics, and it is beautiful, and it is elegant—someone who leaps from Ramanujan graphs to quantum groups to the Riemann Hypothesis over finite fields to circuit lower bounds in the space of a single sentence, as his hapless listener struggles to hold on by a fingernail—someone whose ideas seem to remain obstinately in limbo between incoherence and profundity, making just enough sense that you keep listening to them.

Now, I get emails every few months from people claiming to have proved P≠NP (not even counting the P=NP claimants).  Without exception, they turn out to be hunting polar bears in the Sahara: they don’t even grapple with natural proofs, or relativization, or algebrization, or the lower bounds/derandomization connection, or any of the other stuff we know already about why the problem is hard.  Ketan, by contrast, might be searching for polar bears with a kaleidoscope and trying to hunt them with a feather, but he’s in the Arctic all right.  I have no idea whether his program will succeed within my lifetime at uncovering any of the truth about the P vs. NP problem, but it at least clears the lower hurdle of reflecting some of the higher truth.

Keeping cool

Sunday, October 26th, 2008

Update (10/27): Peter Norvig at Google points me to his Election FAQ, for those who feel they haven’t yet spent enough time reading about the election.  I’ve just been perusing it, and it’s an unbelievably good source of information—reaching the same conclusions as I did on just about every particular, yet also calm, reasoned, and professional.

1. That’s my mom at an Obama office in Sarasota, FL.  For once, I find myself kvelling to strangers about her.

2. I’m at FOCS’2008 in Philadelphia right now.  Yesterday morning I gave a tutorial on The Polynomial Method in Quantum and Classical Computing, and was delighted by how many people showed up — I wouldn’t have woken up for my talk.  (And before you ask: yes, the PowerPoint slides for this talk include photographs of both Bill Ayers and Joe the Plumber.)

3. Here’s the FOCS conference program — tons of good stuff, as you can see for yourself.  If there’s a talk you want to know more about, say so in the comments section and I’ll try to find someone who attended it.

Note: I was a program committee member, and therefore know much more than usual about the talks—but my objectivity and license as a “journalist” are also severely compromised.  If unvarnished opinion is what you seek, ask my friend and roommate Rahul Santhanam, who’s also reporting live from the conference over at Lance’s blog.  (As you can see, we CS theorists manage our conflicts of interest roughly as well as the Alaska governor’s office…)

4. I apologize that I haven’t had much to say recently.  Against my better judgment, I find myself transfixed by the same topic everyone else is transfixed by, and it’s hard to find anything to say about it that hasn’t been said better by others.  If you want to enter my world, don’t read Shtetl-Optimized; read Andrew Sullivan or FiveThirtyEight.com.  Following the election is, of course, not all that different from following a football game, except for the added dash of excitement that the future of civilization might hinge on the outcome.

(Years congruent to 0 mod 4 are pretty much the only times when I understand what it’s like to be a sports fan.  Speaking of which, I heard there was some sort of “World’s Series” in Philadelphia last night—probably in basketball—and something called the “Phillies” won?  I might be wrong, though.  Maybe it was the “Flyers” … or is that a volleyball team?  Keep in mind, I only lived in this area for the first 15 years of my life.)

5. For a congenital pessimist like me, I confess it’s been difficult to deal with the fact that my team (I mean the Democrats, not the Eagles or whatever they’re called) is winning.  I simply don’t know how to react; it’s so far outside my emotional range.  Since when has the universe worked this way?  When did reason and levelheadedness start reaping earthly rewards, or incompetence start carrying a cost?  I’m sure Nov. 4 will bring something to console me, though: maybe Al Franken will lose the Senate race in Minnesota, or the homophobic proposition will pass in California…

6. Writing blog posts in numbered lists is easier; I should do it more often.  I don’t have to pretend all the little things I want to say are part of an overarching narrative, rather than standing in the relation “and that reminds me of … which in turn reminds me of…”

7. There’s another psychological question inspired by the election that’s fascinated me lately: how does one become more Obama-like in temperament?

I’ve written before about Obama’s penchant for introspection and respect for expertise, which of course are qualities with which I strongly identify.  But Obama also has a crucial quality I lack: as the whole world has marveled, nothing rattles him.  Placed for two years under the brightest glare on earth, besieged by unexpected events, he simply sticks to a script, Buddha-like in his emotional control (although not in his quest for power in the temporal world).  His nerves are of carbon nanotube fiber.

When he briefly slipped behind after the Republican convention, I panicked: I felt sure he’d lose if he didn’t completely change his approach.  Sean Carroll recommended chilling out.  I now face the indignity of admitting that I was wrong while a physicist was right.

What struck me most, during the debates, was how again and again Obama would pass up the chance to score points—choosing instead to let his opponent impale himself with his own words, and use his time to hammer home his message for the benefit of any voters just emerging from their caves.  (As an example, consider his pointed refusal in the third debate to say anything bad about Palin—the subtext being, “isn’t it obvious?”)  It’s almost as if he thought his goal was winning the election, not proving the other guy wrong.

I have (to put it mildly) not always exhibited the same prudent restraint, least of all on this blog.  So for example, whenever there’s been bait dangling in front of me in the comments section, I’ve tended to bite, often ending up with a hook through my cheek.

But no more.  As the first exercise in my newfound quest for the Zen-like equanimity and balance of our soon-to-be-president, I now present to you two excerpts from the comments on my previous post, with no reaction whatsoever from me.

Have you considered the possibility that, in the same way a logical deduction is being equated with truth, understanding a thing is just an illusion? If a thing is logical, that only means that it appeals to the reasoning facility of the brain, not that it’s the truth.

Mathematics is just a place where it becomes clear how a human may think. Computers only go for the calculable. And the mathematical truths a computer can produce are at most countable infinite. But there are uncountable infinite truths.

How George W. Bush is changing my life for the better

Thursday, May 29th, 2008

I’ll give you a hint: it’s not from the rebate checks.

Let me put it this way: from now on, I am going to exercise at least twice a week. I’m going to post the remaining Quantum Computing Since Democritus lectures. I’m going to finish writing up several papers that I’ve been procrastinating on for years. I’m going to get involved in more non-work-or-blog-related social activities. And I’m going to do all of these things (and have friends and family members vouch for it) because if I don’t, then money I’ve already placed in escrow will be donated to the George W. Bush Presidential Library, as well as to the National Rifle Association.

Yeah, I signed up for the “commitment contract” service stickK.com, which was started in January by Yale economics professors Dean Karlan and Ian Ayres and student Jordan Goldberg. You get to choose the “anti-charities” to which your money gets donated if you don’t achieve your stated goals; surprisingly, stickK itself doesn’t take a cut (it seems to get all its money from ad revenue). The idea is obvious in retrospect; what’s amazing is that it took this long for anyone to build a company around it. So far it seems to be working. For example, I jogged on Tuesday and went swimming this morning, despite not having exercised for the previous six months. What remains to be seen is whether W. can inspire me to new heights of research productivity.

An enormous hat tip to Michael Nielsen.

Geordie Rose at MIT

Tuesday, January 29th, 2008

While there are many, many things in this world that I’m bent on destroying, D-Wave Systems has never been one of them. Ideally, I’d simply let the D-Wave folks do their thing (namely, try to build an adiabatic quantum computer) while I do my thing (namely, study the fundamental limits of quantum computers). It was only when, in connection with D-Wave, cringe-worthy claims about quantum computing started appearing all over the press that I felt a professional obligation to say something.

Now that I’m “involved,” though, I also need to keep you ablog of any notable further developments. And presumably, D-Wave founder Geordie Rose coming to MIT to meet with our quantum information group counts as notable.

Two months ago, you’ll recall, we were graced by a visit from D-Wave’s Mohammad Amin and Andrew Berkley, but I’d never before had the pleasure of meeting Geordie. At least formally, the reason for his visit was not to defend D-Wave, but to present “four hard problems” for us to solve. These problems were as follows:

  1. Find a practical adiabatic factoring algorithm. Because of the equivalence of adiabatic and standard quantum computing, we know that such an algorithm exists, but the running time you get from applying the reduction is something like O(n^11). Geordie asks for an O(n^3) factoring algorithm in the adiabatic model. It was generally agreed (with one dissent, from Geordie) that reducing factoring to a 3SAT instance, and then throwing a generic adiabatic optimization algorithm at the result, would be a really, really bad approach to this problem.
  2. Find a fault-tolerance threshold for adiabatic quantum computing, similar to the known threshold in the circuit model. Geordie asserted that such a threshold has to exist, because of the equivalence of adiabatic and standard quantum computing. However, others immediately pointed out that this is not so: the equivalence theorem is not known to be “fault-tolerance-preserving.” This is a major open problem that many people have worked on without success.
  3. Prove upper and lower bounds on the adiabatic algorithm’s performance in finding exact solutions to hard optimization problems.
  4. Prove upper and lower bounds on its performance in finding approximate solutions to such problems. (Ed Farhi described 3 and 4 as “so much harder than anything else we’ve failed to solve.”)

While none of these problems are new to the quantum computing community, they’re all extremely good ones, and all (indeed) extremely hard.

Of course, we did also discuss some controversial, red-meat, “did-D-Wave-build-a-quantum-computer” sorts of questions, so I owe it to you to provide a few highlights from that discussion.

Seth Lloyd, who’s been more sympathetic to D-Wave than most of us, correctly pointed out that D-Wave has a “credibility problem in the scientific community.” He discussed in great detail the experiments D-Wave ought to be doing to convince scientists that they’re really seeing quantum effects. I strongly agreed with Seth, adding that I’d rather see two coherent qubits than thousands of incoherent ones. Of course, even if D-Wave could demonstrate two-qubit entanglement (and Geordie says it’s the “next thing on the list”), there would still remain the enormous issues of scalability and of the limitations of the adiabatic algorithm in solving hard optimization problems. But at least we could be more comfortable in saying that what they currently have is a tiny quantum computer.

Geordie conceded that, so far, D-Wave has no direct evidence for entanglement among two or more qubits. He nevertheless argued that they have indirect evidence (basically, that their data are better fit by a simple quantum model than a simple classical one), and that the lack of direct evidence is solely due to the difficulty of performing the requisite measurements. Seth replied that, despite the difficulty, D-Wave would “do itself a big favor” by performing the measurements.

Seth also mentioned D-Wave’s claims to the popular press — for example, about the ability of quantum computers to solve NP-complete problems — as a major factor in its scientific credibility problem. Geordie admitted that some of D-Wave’s publicity was (here he paused for a few seconds) “not inaccurate, but verging on inaccurate.”

Note: Geordie now says that he was only talking about the reporting on D-Wave; in his words, “I stand by 100% anything I’ve ever said to anyone about these machines.” At the time, I understood him quite clearly to be talking about D-Wave’s own publicity; it’s strange that he would have hesitated to admit that reporters have misunderstood things. But I freely admit that I might have misheard or misinterpreted him.

I asked Geordie about the result of Bansal, Bravyi, and Terhal that the planar Ising spin graph problem admits an efficient classical approximation algorithm — thus calling into question D-Wave’s whole strategy of solving other NP approximation problems by mapping them onto Ising spin graph instances. Geordie replied, first, that their machine can handle many non-planar links, and second, that Bansal et al.’s algorithm merely trades an exponential dependence on n for an exponential dependence on 1/ε. I agreed that their algorithm isn’t practical, but argued that its mere existence would have to be dealt with in any attempt to convert approximate solutions of the Ising spin graph problem into approximate solutions of the original optimization problems.
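To make the strategy under discussion concrete, here’s a minimal sketch (mine, not D-Wave’s actual pipeline) of the standard reduction from MAX-CUT to finding an Ising ground state: put an antiferromagnetic coupling J_ij = +1 on every edge, so that the Ising energy Σ s_i·s_j over edges is minimized exactly when as many edges as possible cross the +1/−1 partition. I use the complete graph K5 (which is non-planar, so the Bansal–Bravyi–Terhal result doesn’t directly apply) and brute-force the ground state.

```python
import itertools

# Edges of K5, the complete graph on 5 vertices (non-planar)
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]

def ising_energy(spins):
    """Ising energy with J_ij = +1 on every edge: same-side pairs
    cost +1, cut pairs cost -1."""
    return sum(spins[i] * spins[j] for i, j in edges)

# Brute-force the ground state over all 2^5 spin configurations
best = min(itertools.product([-1, 1], repeat=5), key=ising_energy)
cut_size = sum(1 for i, j in edges if best[i] != best[j])
print("ground-state energy:", ising_energy(best), "edges cut:", cut_size)
```

For K5 the optimum is a 2-vs-3 partition cutting 6 of the 10 edges (energy 4 − 6 = −2). The catch, as the exchange above indicates, is that a hardware solver returns approximate ground states, and converting those back into approximation guarantees for the original problem is exactly where the classical competition becomes relevant.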

So, where do we stand? Here’s my attempt at a fair summary:

  • The people at D-Wave are not conscious frauds; they genuinely believe in what they’re doing.
  • On the other hand, much of the publicity surrounding D-Wave can be safely rejected. To some academics, even one or two public misrepresentations are enough to destroy a company’s credibility. Others, however, prefer to ignore press releases — seeing hype, exaggeration, and even outright falsehoods as just a necessary part of raising money — and to concentrate solely on a company’s communications with experts. Where you fall between these extremes probably depends on your personality more than anything else.
  • In the past, I criticized D-Wave (rightly, I think) for failing to share information with the scientific community in a good-faith manner. To their credit, they’re now making more of an effort to communicate.
  • Thus far, by Geordie’s own account, there’s no direct evidence that D-Wave’s machine actually produces entanglement at any stage of its operation (which all agree is a non-negotiable requirement for quantum computing). Geordie says that producing such evidence will be the “next thing on the list.” The Sudoku stunt was worthless from a scientific perspective; it did not answer any of the questions that physicists need answered.
  • Even if D-Wave managed to build (say) a coherent 1,024-qubit machine satisfying all of its design specs, it’s not obvious it would outperform a classical computer on any problem of practical interest. This is true both because of the inherent limitations of the adiabatic algorithm, and because of specific concerns about the Ising spin graph problem. On the other hand, it’s also not obvious that such a machine wouldn’t outperform a classical computer on some practical problems. The experiment would be an interesting one! Of course, this uncertainty — combined with the more immediate uncertainties about whether D-Wave can build such a machine at all, and indeed, about whether they can even produce two-qubit entanglement — also means that any talk of “lining up customers” is comically premature.

Special entry for you, my friend

Thursday, January 3rd, 2008

Happy New Year and all that. Recently I got back from a two-week journey to India (to attend the QIP conference in New Delhi and, of course, liveblog from the Taj) as well as England (to meet up with family in London and make a religious pilgrimage to Bletchley Park).

Even though my travel entries typically get fewer comments than anything else, I nonetheless feel a historic responsibility to record my first visit to a subcontinent with one-sixth of the world’s population — the birthplace not only of my adviser and so many other great theoretical computer scientists, but also of Gandhi, Ramanujan, the Buddha, and commenter Nagesh Adluru. But where do I even start? Writing anything open-ended has always been a chore for me, and it’s only getting harder with time.

So I’ll tell you what: I’ll just post some photos with commentary. Then ask me whatever you want in the comments section: “Were there any good talks at QIP?” “Were you brave enough to sample the strange, exotic North Indian dishes, like ‘naan’ and ‘samosas’ and ‘chicken curry’?” “Having spent a full week in India, to what extent, if any, do you think the Bhutto assassination will destabilize Indo-Pakistani relations?”

India: where every imaginable entity with wheels, feet, or hooves can be found on the road, making deafening noises while swerving to kill you; the water’s not even safe for toothbrushing; the beggars have their own beggars; and the cellphone network is more reliable than anything in the US.

These are students and religious pilgrims at the Dayalbagh colony near Agra, the headquarters of one branch of the Radha Soami sect of Hinduism. They’re laboring in the fields at dawn, before coming in to hear me and others give quantum computing talks. I’m not making this up.

When I agreed to give a talk at the Dayalbagh Educational Institute, all I knew about my hosts was that they were computer scientists near Agra who would take me on a guided tour of the Taj Mahal and arrange the logistics. I had no idea that my hosts — and their self-supporting agricultural commune of about 20,000 people, led by religious scholars fascinated by quantum computing theory — would turn out to be considerably more interesting than the Taj itself.

For my talk, I was going to present some recent results with Peter Shor, Salman Beigi, and Bill Fefferman on the complexity class QMA(k) (Quantum Merlin-Arthur with multiple unentangled Merlins). But then I learned that over 200 people would be attending. I panicked: “there aren’t 200 people on Earth who would care about this talk, let alone 200 people on a Hindu kibbutz near Agra!” So I quickly substituted my usual dog-and-polynomial show about the limits of quantum computers.

I was surprised that the guru of the sect, Prof. P. S. Satsangi, actually came to my talk. Everyone stood at attention when he entered the room, and then he sat in a special chair surrounded by flowers at the front of the lecture hall. He did not ask questions.

In the end, while I couldn’t assent to the Radha Soamis’ mystical beliefs (as they were explained to me), I found much in their way of life to recommend it. I had fun imagining, say, a Kansas farmtown where a quantum computing workshop would be a major public event, attended by the mayor and every local dignitary.

This is where I stayed in New Delhi: at the Islamic Cultural Centre Guest House. I chose to stay here because (1) as someone who’s occasionally blogged about the Israeli/Palestinian conflict, I felt a historic responsibility to make a bold peace gesture, and (2) it was the only place in walking distance to the conference center.

As you can see from the Christmas tree out in front, the Islamic Centre was happily not averse to ecumenicism. As explained to me by my “friend” at the guest house (the guy who knocked on my door every fifteen minutes to see if I needed anything, before asking me for a tip), “here in India there is no ‘you Hindu, you Muslim, you Buddhist, you Sikh.’ All are brothers, you understand? Tip?”

Dorit Aharonov and Barbara Terhal passionately debating some adiabatic something-or-other near the Qutb Minar, a twelfth-century minaret.

Need Grover’s algorithm tailored to solve the element distinctness problem in n^(2/3) queries? I know just the guy for such jobs…

If you can’t read it, the sign says “MADHUSUDAN MOTORS.”

Our guides: “c’mon, move along, nothing to see here … just a stray monkey …”

The obligatory photo. Not Photoshopped, I promise.

Here we shift the scene from India to its former colonialist ruler (now a quaint, scone-intensive island in the North Atlantic). I’m standing in front of the Bletchley Park mansion, an hour and a half by train from London. In the early nineties, this site was apparently going to be demolished to make way for housing developments. Then someone pointed out that, by current estimates, the cryptanalysis done at Bletchley Park probably shortened World War II by at least two years and saved about twenty million lives. So they made it into a museum. Next time you’re in London, I strongly recommend making the pilgrimage (just beware that the place closes at 4PM).

This is a Bombe.

Alan Turing’s office in Hut 8.