Archive for the ‘CS/Physics Deathmatch’ Category

Quantum Computing Since Democritus now out in the US! 20% discount for Shtetl-Optimized readers

Saturday, April 27th, 2013

OK, this will be my last blog post hawking Quantum Computing Since Democritus, at least for a while.  But I do have four pieces of exciting news about the book that I want to share.

  1. Amazon is finally listing the print version of QCSD as available for shipment in North America, slightly ahead of schedule!  Amazon’s price is $35.27.
  2. Cambridge University Press has very generously offered readers of Shtetl-Optimized a 20% discount off their list price—meaning $31.99 instead of $39.99—if you click this link to order directly from them.  Note that CUP has a shipping charge of $6.50.  So ordering from CUP might either be slightly cheaper or slightly more expensive than ordering from Amazon, depending (for example) on whether you get free shipping from Amazon Prime.
  3. So far, there have been maybe 1000 orders and preorders for QCSD (not counting hundreds of Kindle sales).  The book has also spent a month as one of Amazon’s top few “Quantum Physics” sellers, with a fabulous average rating of 4.6 / 5 stars from 9 reviews (or 4.9 if we discount the pseudonymous rant by Joy Christian).  Thanks so much to everyone who ordered a copy; I hope you like it!  Alas, these sales figures also mean that QCSD still has a long way to go before it enters the rarefied echelon of—to pick a few top Amazon science sellers—Cosmos, A Brief History of Time, Proof of Heaven (A Neurosurgeon’s Journey into the Afterlife), Turn On Your SUPER BRAIN, or The Lemon Book (Natural Recipes and Preparations).  So, if you believe that QCSD deserves to be with such timeless classics, then put your money where your mouth is and help make it happen!
  4. The most exciting news of all?  Luboš Motl is reading the free copy of QCSD that I sent him and blogging his reactions chapter-by-chapter!  So, if you’d like to learn about how mathematicians and computer scientists simply lack the brainpower to do physics—which is why we obsess over kindergarten trivialities like the Church-Turing Thesis or the Axiom of Choice, and why we insist idiotically that Nature use only the mathematical structures that our inferior minds can grasp—then check out Luboš’s posts about Chapters 1-3 or Chapters 4-6.  If, on the other hand, you want to see our diacritical critic pleasantly surprised by QCSD’s later chapters on cryptography, quantum mechanics, and quantum computing, then here’s the post for you.  Either way, be sure to scroll down to the comments, where I patiently defend the honor of theoretical computer science against Luboš’s hilarious ad hominem onslaughts.

Valiant’s valiance recognized

Wednesday, March 9th, 2011

Update (March 25): I have a new paper called A Linear-Optical Proof that the Permanent is #P-Hard, which is dedicated to Les Valiant on the occasion of his Turing Award.  Here’s the abstract:

One of the crown jewels of complexity theory is Valiant’s 1979 theorem that computing the permanent of an n×n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing—and in particular, a universality theorem due to Knill, Laflamme, and Milburn—one can give a different and arguably more intuitive proof of this theorem.


For decades, Harvard’s Leslie Valiant has obviously deserved a Turing Award—and today, the ACM most excellently announced its agreement with the obvious.  I have little to add to the prize citation (see also Lance’s post): from launching new fields whose reach extends beyond theory (PAC-learning), to proving epochal results (#P-completeness of the permanent), to asking hugely influential questions (permanent vs. determinant), Valiant has been a creative powerhouse of theoretical computer science for longer than I’ve been alive.

One thing the prize citation doesn’t mention is that Valiant is now the third Turing Award winner (after Andy Yao and Len Adleman) to have made a major contribution to quantum computing theory.  Valiant’s 2001 paper Quantum Computers that can be Simulated Classically in Polynomial Time introduced the beautiful computational model that computer scientists now know as “matchgates,” and that physicists know as “noninteracting fermions.” It still amazes me that Valiant proposed this model for purely mathematical reasons—hitting physical relevance straight between the eyes despite (as far as I can tell) not having that target anywhere in his sights.

To put the point in terms that my physicist friends will understand, that Valiant himself would probably dispute, but that I would defend:

Valiant’s work has shown that, even if our universe hadn’t been made of bosons and fermions, theoretical computer scientists would have had compelling reasons of their own to invent those particles or something equivalent to them—and furthermore, that at least one theoretical computer scientist would have had the imagination to do so.

Certainly Valiant has had a huge influence on me, both through his work and as someone who made time to talk to me a decade ago, when I was an obscure grad student.  Three of my papers—The Learnability of Quantum States, A Full Characterization of Quantum Advice, and The Computational Complexity of Linear Optics—would collapse entirely without Valiant-laid foundations.

Congratulations, Les!

My diavlog with Anthony Aguirre

Saturday, July 24th, 2010

Bloggingheads has just posted an hour-long diavlog between the cosmologist Anthony Aguirre and your humble blogger.  Topics discussed include: the anthropic principle; how to do quantum mechanics if the universe is so large that there could be multiple copies of you; Nick Bostrom’s “God’s Coin Toss” thought experiment; the cosmological constant; the total amount of computation in the observable universe; whether it’s reasonable to restrict cosmology to our observable region and ignore everything beyond that; whether the universe “is” a computer; whether, when we ask the preceding question, we’re no better than those Renaissance folks who asked whether the universe “is” a clockwork mechanism; and other questions that neither Anthony, myself, nor anyone else is really qualified to address.

There was one point that was sort of implicit in the discussion, but I noticed afterward that I never said it explicitly, so let me do it now.  The question of whether the universe “is” a computer, I see as almost too meaningless to deserve discussion.  The reason is that the notion of “computation” is so broad that pretty much any system, following any sort of rules whatsoever (yes, even non-Turing-computable rules) could be regarded as some sort of computation.  So the right question to ask is not whether the universe is a computer, but rather what kind of computer it is.  How many bits can it store?  How many operations can it perform?  What’s the class of problems that it can solve in polynomial time?

The Future of Computer Science, and Why Every Other Major Sucks By Comparison

Monday, April 12th, 2010

Does this post finally herald my return to regular blogging after a months-long absence?

I don’t know.  For me, writing a Shtetl-Optimized entry always followed the same process: I’d get an idea and start typing, furiously rocking back and forth in my chair.  Then the voices in my head would pipe up: no, I can’t say that—what will everyone think?—judging from past experience, they’ll probably take offense—I can already see the commenters boiling me alive—maybe if I rephrased it, or, y’know, provided some context—but to explain the real context, I’d need a whole book—and who has the time for that?—better wait till after tenure—meantime, maybe I could blog about something light and uncontroversial instead—but then what’s the point?—we already have one GASARCH—well, I could always put off a decision till later—

Back in the blog’s heyday, I’d win these fights about 40% of the time and the voices would win about 60%.  (In other words: if you’ve ever taken offense at an entry of mine, rest assured that you haven’t even seen the half of my drafts folder.)  But now that I have an actual stake in this shabby world—students to advise and look after, a tenure case to build, conceivably even a family to start—the voices win more like 98% of the time.  And that’s why my blogging fell off.

Occasionally, though, something comes along so uncomplicatedly joyous that I feel no reservations about sharing it with the world.  Such was the case this weekend, when I was somehow called upon to represent MIT’s EECS Department in the annual “Professor Talent Show” at Campus Preview Weekend.  This is an event where six faculty members square off, taking eight minutes each to

(1) explain why their department is the coolest,
(2) crack jokes, and
(3) possibly demonstrate a musical or athletic talent.

Then, using electronic clickers, the several hundred prefrosh in attendance vote for which major carried the day.  Though I had absolutely no talent of any kind to demonstrate, and was up against a banjo player, a violinist, and a basketball-spinner, among other tough competitors, for some reason EECS won!  You can see my PowerPoint slides here:

The Future of Computer Science, and Why Every Other Major Sucks By Comparison
http://www.scottaaronson.com/talks/futurecs.ppt

(You can read the jokes that go along with each slide in the slide notes at the bottom.)

Update (4/15): I hadn’t realized at all that there’s actually a video of me giving the talk!  (Click on “Part 2.”)

Science: the toroidal pyramid

Wednesday, January 23rd, 2008

Chad Orzel gripes about this month’s Scientific American special issue on “The Future of Physics” — which is actually extremely good, but which turns out to be exclusively about the future of high-energy particle physics. Not surprisingly, the commenters on Chad’s blog reignite the ancient debate about which science is more fundamental than which other one, and whether all sciences besides particle physics are stamp collecting.

I started writing a comment myself, but then I realized I hadn’t posted anything to my own blog in quite some time, so being nothing if not opportunistic, I decided to put it here instead.

To me, one of the most delicious things about computer science is the way it turns the traditional “pyramid of sciences” on its head. We all know, of course, that math and logic are more fundamental than particle physics (even particle physicists themselves will, if pressed, grudgingly admit as much), and that particle physics is in turn more fundamental than condensed-matter physics, which is more fundamental than chemistry, which is more fundamental than biology, which is more fundamental than psychology, anthropology, and so on, which still are more fundamental than grubby engineering fields like, say, computer science … but then you find out that computer science actually has as strong a claim as math to be the substrate beneath physics, that in a certain sense computer science is math, and that until you understand what kinds of machines the laws of physics do and don’t allow, you haven’t really understood the laws themselves … and the whole hierarchy of fundamental-ness gets twisted into a circle and revealed as the bad nerd joke that it always was.

That was a longer sentence than I intended.

Note (Jan. 25): From now on, all comments asking what I think of the movie “Teeth” will be instantly deleted. I’m sick of the general topic, and regret having ever brought it up. Thank you for your understanding.

My take on the Koblitz affair

Saturday, September 1st, 2007

Now that Luca, Michael Mitzenmacher, Jonathan Katz, and Oded Goldreich have all weighed in on Neal Koblitz’s critique of modern cryptography in the Notices of the AMS, I can no longer bear to be left out of the action.

My reaction is simple: we computer scientists should feel honored that the mathematicians have finally bestowed on us the level of contempt they once reserved for the physicists.

Update (9/6): If you want to understand what’s actually involved in this controversy, the best starting point I’ve found is this paper by Ivan Damgård.

Experimental complexity theory

Wednesday, June 27th, 2007

I just came back from the MIT CSAIL (Computer Science and Artificial Intelligence Lab) annual meeting, which was held at a beach resort in Cape Cod. No, it isn’t California, but for at least a few months a year “my” coast can put up a respectable showing too.

Out of all the ideas I heard at the CSAIL meeting, the one that made me proudest to have become a professor was this: computer scientists should make a serious effort to address world hunger, deforestation, climate change, and other global crises, because of the significant opportunities to tap funding resources that are becoming available in these areas. I’m telling you, if a giant asteroid were going to hit the earth in a week, the first question academics would ask would be how to beat out competing proposals for the $50-million “Deflection of Space-Based Objects” initiative at NSF.

The meeting ended with a “Wild & Crazy Ideas Session,” at which I (naturally) spoke. I briefly considered talking about quantum gravity computing, closed timelike curves, or quantum anthropic postselection, but ultimately decided on something a little less mainstream. My topic was “Experimental Computational Complexity Theory,” or “why do theoretical physicists get $8-billion machines for the sole purpose of confirming or refuting their speculative ideas, whereas theoretical computer scientists get diddlysquat?” More concretely, my proposal is to devote some of the world’s computing power to an all-out attempt to answer questions like the following: does computing the permanent of a 4-by-4 matrix require more arithmetic operations than computing its determinant? You can read my slides here.
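
For concreteness, here’s a minimal Python sketch of the contrast in question (my illustration, not something from the slides): Ryser’s formula, still essentially the fastest known way to compute the permanent exactly, needs on the order of 2^n · n^2 arithmetic operations, while Gaussian elimination delivers the determinant in O(n^3). Whether any such gap is intrinsic, even for 4-by-4 matrices with operations counted exactly, is precisely the kind of question the proposal would attack by brute force.

    from itertools import combinations

    def perm_ryser(A):
        """Permanent via Ryser's formula: ~2^n * n^2 arithmetic operations."""
        n = len(A)
        total = 0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1
                for row in A:
                    prod *= sum(row[j] for j in cols)
                total += (-1) ** (n - r) * prod
        return total

    def det_gauss(A):
        """Determinant via Gaussian elimination: O(n^3) arithmetic operations."""
        M = [list(map(float, row)) for row in A]
        n, det = len(M), 1.0
        for k in range(n):
            p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
            if M[p][k] == 0:
                return 0.0
            if p != k:
                M[k], M[p] = M[p], M[k]
                det = -det
            det *= M[k][k]
            for i in range(k + 1, n):
                f = M[i][k] / M[k][k]
                for j in range(k, n):
                    M[i][j] -= f * M[k][j]
        return det

    A = [[1, 2], [3, 4]]
    print(perm_ryser(A), det_gauss(A))  # 10 (= 1*4 + 2*3) and -2.0 (= 1*4 - 2*3)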

The wisdom of Gian-Carlo Rota (1932-1999)

Monday, April 9th, 2007

From www.rota.org:

Graph theory, like lattice theory, is the whipping boy of mathematicians in need of concealing their feelings of insecurity.

Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say ‘We’ve discovered a great new law of nature. Give us a billion dollars.’ And if it doesn’t change the world, then they say, ‘There’s an even deeper thing. Give us another billion dollars.’

When an undergraduate asks me whether he or she should major in mathematics rather than in another field that I will simply call X, my answer is the following: “If you major in mathematics, you can switch to X anytime you want to, but not the other way around.”

Flakiness is nowadays creeping into the sciences like a virus through a computer system, and it may be the greatest present threat to our civilization. Mathematics can save the world from the invasion of the flakes by unmasking them, and by contributing some hard thinking. You and I know that mathematics, by definition, is not and never will be flaky.

Note: Quotation here does not necessarily imply endorsement by Shtetl-Optimized LLC or any of its subsidiary enterprises.

A complexity theorist’s (non)apology

Wednesday, February 21st, 2007

Several respected physicists wrote to me privately to say how disappointed they were that Umesh and I would fight shoddy journalism by making a shoddy claim of our own: namely, that the inability of quantum computers to solve NP-complete problems efficiently is an established fact. I took a lot of flak in the comments section over the same issue.

Ladies and gentlemen of the jury, I will answer the unjust charges being leveled against me and my advisor.

But first, let’s review the facts. As I’ve said in pretty much every introductory talk I’ve ever given, obviously we can’t yet hope to prove that NP-complete problems are hard for quantum computers, since we haven’t even proved they’re hard for classical computers! (Nor, for that matter, do we have any idea how to prove that if they’re hard for classical computers then they’re also hard for quantum computers.) These are some of the most profound open problems in mathematics. Solving them could easily take decades or centuries.

I dare say that Umesh and I know this as well as anyone on Earth. And that’s why, even while trying in the space of a few sentences to correct a breathtaking misconception about the nature of the physical world that was being endlessly repeated to millions of people, we still took care in what we said.

Here’s Umesh:

Most egregious is your assertion that quantum computers can solve NP-complete problems in “one shot” by exploring exponentially many solutions at once. This mistaken view was put to rest in the infancy of quantum computation over a decade ago … For unstructured search problems like the NP-complete problems this means that there is no exponential speed up but rather at most a quadratic speed up.

In the above passage, Umesh is talking about an epochal theorem that he and others did manage to prove: namely, that quantum computers could not solve NP-complete problems by any “one-shot” method based on exploring exponentially many solutions in parallel. Throw away the structure of an NP-complete problem — consider it just as an abstract space of 2^n solutions — and we know that quantum computers will give you at most a quadratic speedup over classical ones.
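
To make “at most a quadratic speedup” concrete, here’s a toy classical simulation of Grover’s algorithm (my own sketch with standard textbook parameters, not anything from the letters). With N = 2^n items and one marked item, roughly (π/4)√N rounds of “flip the marked amplitude, then invert everything about the mean” drive the success probability near 1 — and the BBBV theorem says that, in the black-box setting, you can’t do better.

    import math

    def grover(n_qubits, marked):
        """Classically simulate Grover search over N = 2^n items."""
        N = 2 ** n_qubits
        amp = [1 / math.sqrt(N)] * N                # uniform superposition
        for _ in range(round(math.pi / 4 * math.sqrt(N))):
            amp[marked] = -amp[marked]              # oracle: flip marked amplitude
            mean = sum(amp) / N                     # diffusion: invert about the mean
            amp = [2 * mean - a for a in amp]
        return amp[marked] ** 2                     # success probability

    print(grover(10, marked=42))  # ~0.9995 after 25 queries; classical search needs ~512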

In the thirteen years since this “BBBV theorem” was proved, two interesting things happened:

  1. Various experts dismissed the theorem as irrelevant, knocking down a straw man, stacking the deck in favor of its conclusion by imposing an utterly-unjustified “black-box” assumption, etc.
  2. Hundreds of articles appeared, in both the popular press and the arXiv, that directly contradicted the theorem.

It reminds me of how theologians chide Richard Dawkins for refuting only a crude, anthropomorphic, straw-man god instead of a sophisticated Einsteinian one, and then (with an air of self-satisfaction) go off and pray to the crude god.

To be fair, we do have one quantum algorithm for NP-complete problems that falls outside the scope of the BBBV theorem: namely, the adiabatic algorithm of Farhi et al. This algorithm can be seen as a quantum version of simulated annealing. Intriguingly, Farhi, Goldstone, and Gutmann gave examples where simulated annealing gets stuck at local optima, whereas the adiabatic algorithm tunnels through to the global optimum. On the other hand, van Dam, Mosca, and Vazirani gave other examples where the adiabatic algorithm also gets stuck at local optima, taking exponential time to reach a global optimum.
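
For the classical half of that story, here’s a ten-line simulated-annealing toy (a made-up one-dimensional landscape of mine, not one of the published examples): start the walker near a local optimum with a high barrier in the way, cool it down, and watch it freeze short of the global optimum.

    import math, random

    def energy(x):
        # global minimum at x=0, local minimum at x=8, barrier in between
        return min(x ** 2, (x - 8) ** 2 + 1)

    def anneal(x=8.0, temp=2.0, cooling=0.95, steps=500):
        for _ in range(steps):
            x_new = x + random.uniform(-1, 1)
            dE = energy(x_new) - energy(x)
            if dE < 0 or random.random() < math.exp(-dE / temp):
                x = x_new                 # accept downhill always, uphill sometimes
            temp *= cooling               # geometric cooling schedule
        return x

    stuck = sum(abs(anneal() - 8) < 1 for _ in range(100))
    print(f"{stuck}/100 runs froze at the local minimum")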

The upshot is that, if a fast quantum algorithm for NP-complete problems existed, then just like a fast classical algorithm, it would have to be radically different from anything that’s yet been imagined. Because of this — not to mention the civilization-changing consequences that such an algorithm would have — Umesh and I feel strongly that claims to solve NP-complete problems should never be bandied about lightly. As with perpetual-motion machines or antigravity shields, the burden of proof lies entirely with the would-be inventor. “In case of fire, break glass.” “In case of algorithm, break skepticism.”

It might be objected that, while the experts know that this is what Umesh meant, laypeople could easily misinterpret his words — or in other words, that Umesh has pulled a D-Wave of his own. But here’s the crucial difference. Any motivated reader who wanted the real story behind Umesh’s three-sentence caricature could find that story in peer-reviewed articles only a Google search away. But with D-Wave, all they’d have to go on is the PR. Simplifying mathematical subtleties is a right you have to earn, by having the cards in case anyone calls your bluff.

So much for Umesh’s letter. Now let’s look at mine:

Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time. Indeed, the view of quantum computers as able to “try all possible solutions in parallel,” and then instantly choose the correct one, is fundamentally mistaken.

Notice I didn’t say it was proved that quantum computers can’t solve NP-complete problems in reasonable time: I said it was accepted. This, I felt, was a difference few people would have trouble understanding. As an example, if biologists said it was accepted that the Loch Ness monster doesn’t exist, presumably no one would interpret that as meaning they’d actually proved its nonexistence. Indeed, the interesting difference between the two cases is that someday, it might actually be possible to prove the nonexistence of the fast quantum algorithm.

Or are we complexity theorists being too dogmatic? Should we concede to a certain subset of our physicist friends that, until an actual proof has been discovered, we have no basis even to guess whether P versus NP or NP versus BQP will go one way or the other way? Should we, in other words, hold ourselves to the same lofty standards of uncompromising mathematical rigor that the physicists themselves have always adhered to?

Oh — pardon me. I had momentarily forgotten that we were talking about the headmasters of handwaving, the sultans of sloppiness, the princes of proof-by-example. Indeed, I think it’s fair to say that if physicists had discovered the P versus NP question, they would have immediately declared that P≠NP — and they would have hailed this ‘discovery’ of theirs as another remarkable success for physics as a discipline. And everyone else — from other scientists to programmers to journalists to the general public — would have gone right along with it. The task of proving P≠NP would have been left as a technical detail, to be filled in by the mathematical hairsplitters — just like the task of proving quark confinement, or the ergodicity of particles in a box, or the existence of Yang-Mills theory, or the perturbative finiteness of string theory.

Clearly, the issue here can’t be the intelligence of physicists, some of whom actually seem reasonably smart. The issue, rather, is their different standard — much closer to the standard of everyday life — for saying that they know something is true. My favorite example in this vein comes from Leonid Levin, who tells me he couldn’t convince Richard Feynman that P versus NP was an open problem at all.

I believe Feynman was onto something, in that the only reason P versus NP is called an “open problem” is that we — the theoretical computer scientists and mathematicians — hold ourselves to a different standard of rigor than any other scientists. Were we less cautious, we could easily talk about the hardness of NP-complete problems as one of our great discoveries, a discovery for which working out the mathematical underpinnings admittedly remains as a challenge for future generations.

Ironically, our higher standard of rigor often gets turned against us, when outsiders use it to argue that we’re just guessing, or building castles in the sky, or making conjectures that could all turn out to be wrong. The same charges could obviously be leveled against the central hypotheses of physics or economics or pretty much any other field, but they rarely are — at least not by the same people.

I’m tired of double standards, is all I’m saying.

Logicians on safari

Thursday, November 2nd, 2006

Sean Carroll, whom many of you know from Cosmic Variance, asked the following question in response to my last entry:

I’m happy to admit that I don’t know anything about “one-way functions and interactive proofs.” So, in what sense has theoretical computer science contributed more in the last 30 years to our basic understanding of the universe than particle physics or cosmology? (Despite the fact that I’m a cosmologist, I don’t doubt your statement — I’d just like to be able to explain it in public.)

I posted my response as a comment, but it’s probably better to make it an entry of its own. So:

Hi Sean,

Thanks for your question!

Of course I was joking when I mentioned “objective standards” for ranking scientific fields. Depending on which questions keep you up at night, different parts of “humankind’s basic picture of the universe” will seem larger or smaller. (To say that, of course, is not to suggest any relativism about the picture itself.)

What I can do, though, is to tell you why — by my own subjective standards — the contributions of theoretical computer science over the last 30 years rival those of theoretical physics or any other field I know about. Of course, people will say I only think that because I’m a theoretical computer scientist, but that gets the causal arrow wrong: I became a theoretical computer scientist because, as a teenager, I thought it!

It’s probably best to start with some examples.

  1. We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields. (For a taste of how such a conversation goes, see the sum-check sketch after this list.)
  2. There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
  3. Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
  4. If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly not true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation.
  5. It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for breaking RSA! For example, if you proved that RSA with an n-bit key took n^5 steps to break, you would’ve discovered an algorithm for breaking it in 2^(n^(1/5)) steps. If you proved that RSA took 2^(n^(1/3)) steps to break, you would’ve discovered an algorithm for breaking it in n^((log n)^2) steps. As you show the problem to be harder, you simultaneously show it to be easier.
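
To give a flavor of example 1, here’s a compressed Python sketch of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan (my toy polynomial and parameters; the actual chess result layers this kind of protocol inside Shamir’s IP = PSPACE theorem). The prover convinces a skeptical verifier of the value of the sum of a low-degree polynomial g over all Boolean inputs, with the verifier doing exponentially less work than summing it herself:

    import random

    P = 2**31 - 1                      # prime field modulus
    N_VARS, DEG = 3, 2                 # g has 3 variables, degree <= 2 in each

    def g(x):                          # a toy low-degree polynomial over F_P
        return (x[0] * x[1] + 3 * x[1] * x[2] ** 2 + 5 * x[0] + 7) % P

    def partial_sum(fixed):
        """Sum g over all Boolean settings of the variables after `fixed`."""
        k = N_VARS - len(fixed)
        return sum(g(list(fixed) + [(m >> i) & 1 for i in range(k)])
                   for m in range(2 ** k)) % P

    def eval_interp(points, r):
        """Evaluate at r the unique degree-<=DEG polynomial through `points`."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (r - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    claim, fixed = partial_sum([]), []     # the prover's claimed total
    for _ in range(N_VARS):
        # Prover sends the univariate s(X) = sum over Boolean suffixes,
        # represented by its values at X = 0, 1, ..., DEG.
        s = [(t, partial_sum(fixed + [t])) for t in range(DEG + 1)]
        assert (s[0][1] + s[1][1]) % P == claim   # verifier: s(0)+s(1) = claim
        r = random.randrange(P)                   # verifier's random challenge
        claim, fixed = eval_interp(s, r), fixed + [r]
    assert claim == g(fixed)                      # final spot-check of g itself
    print("verifier accepts")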

Alright, let me stop before I get carried away. The examples I’ve listed (and hundreds more like them) are not exactly discoveries about physics, but they don’t have the flavor of pure math either. And even if they have some practical implications for computing (which they do), they certainly don’t have the flavor of nitty-gritty software engineering.

So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19th-century physics; now it’s time to venture out into the early 20th century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe that allows quantum computers is like.

Incidentally, it’s also why I try hard to keep up with your field. If I’m not mistaken, less than a decade ago cosmologists made an enormous discovery about the capacity of finite beings to learn mathematical truths: namely, that no computation carried out in the physical world can ever involve more than 1/Λ ~ 10^122 bits.
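
Here’s the back-of-the-envelope arithmetic behind that number (my own sketch, using the standard measured value of Λ and the de Sitter horizon entropy S = 3π/Λ in Planck units):

    import math

    LAMBDA_SI = 1.1e-52                      # observed cosmological constant, m^-2
    L_PLANCK = 1.616e-35                     # Planck length, m
    lam = LAMBDA_SI * L_PLANCK ** 2          # Λ in Planck units ≈ 2.9e-122
    bits = 3 * math.pi / lam / math.log(2)   # de Sitter entropy 3π/Λ, in bits
    print(f"{bits:.1e}")                     # ≈ 4.7e+122, i.e. ~10^122 ~ 1/Λ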

Best,
Scott