Australian educators are using my $5,000 plagiarism settlement to sell schoolkids (on science)

December 13th, 2007

Two months ago, you might remember, the gods of humor and blogging saw fit to bestow on me an unexpected gift:

Model 1: But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves — then what is it about?

Model 2: Well, from my perspective, it’s about information, probabilities, and observables, and how they relate to each other.

Model 1: That’s interesting!

“For almost the first time in my life,” I wrote then, “I’m at a loss for words … Help me, readers. Should I be flattered? Should I be calling a lawyer?”

Almost three hundred comments later, your answer was clear. Half of you thought I’d be a stereotypical American jerk, epitomizing everything wrong with modern society, if I sought any redress for the blatant plagiarism of my quantum mechanics lecture. The other half thought I’d be a naïve moron if I didn’t seek redress. However, there did seem to be a rough consensus on two topics: first, that as part of any settlement I should “date the models” (at least a hundred people made some joke to that effect, each one undoubtedly thinking it highly creative); and second, that it was probably okay to try to get something from either Ricoh (the printer company) or Love Communications (the ad agency), as long as the proceeds went to charity and I didn’t directly benefit. Since, like any public figure, I now make all decisions by polling my base, I finally had a warrant for action.

After talking things over informally with Warren Smith — the guy who discovered the Ricoh ad in the first place, who just happens (in one of the many ironies of this case) to be studying Australian intellectual property law, and to whom I’m deeply indebted — I next contacted a lawyer from a well-known Australian law firm. Talking to her confirmed my suspicion that many of the armchair legal theories offered in my comments section were simply mistaken. In Australian copyright law, as in American law, you can’t just take someone’s words and use them for commercial purposes without permission or attribution, regardless of any subjective judgments about the “uniqueness” or “specialness” of those words. I was on strong legal ground. But there was also a key difference between the Australian and American legal systems: Australian lawyers are prohibited from taking cases on a contingency basis. If I wanted the law firm to pursue the case, then I would have to pay them up-front.

Disregarding the pleas of my relatives — who at this point were begging me to sue — I instead wrote to Love Communications directly, proposing a settlement to be donated (for example) to a mutually-agreed-upon Australian science outreach organization. We eventually agreed to a settlement of AUS$5,000. Considering the value of Love Communications’ Ricoh account — which the Sydney Morning Herald reported as more than AUS$1,000,000 — I thought the ad agency was getting off incredibly easily, but at least I wouldn’t have to write any more letters or deal with lawyers.

The one remaining problem was to find a suitable Australian science outreach organization to which to donate the $5,000, and that would also be amusing to blog about. As I chewed on this problem, my mind wandered back to my visit to Brisbane in December 2005, and to a conversation I’d had there with Jennifer Dodd — then of the University of Queensland and now of the Perimeter Institute in Waterloo. In that conversation, Jen had told me about a popular-science lecture series in Brisbane she’d founded, called BrisScience. I’d immediately asked her to repeat the name.

“BrisScience,” she said.

“Spell it?” I asked.

“B-r-i-s-Science. Why, is there something funny about the name?”

“No, no, it shouldn’t be a big deal in Australia.”

Aha! I now emailed Jen to ask whether BrisScience had continued its “cutting-edge” outreach programs in her absence. Jen replied that yes, it had, and she put me in touch with Joel Gilmore, BrisScience’s current director. Joel told me that not only was BrisScience still going strong (with the next lecture, by Bill Phillips, being about quantum mechanics), but that he (Joel) also directed another science outreach program called the Physics Demo Troupe, which does hands-on science shows for schoolkids in Brisbane and rural areas. Joel proposed that we donate $2,000 of the settlement to BrisScience and $3,000 to the Physics Demo Troupe, the latter supporting a visit to the Torres Strait Islands in North Queensland. I agreed, and Love Communications agreed as well.

I am, of course, gratified that this sordid southern-hemisphere tale of sex, plagiarism, quantum mechanics, and printers could be resolved to everyone’s satisfaction, without the need for a courtroom battle, and that schoolkids in the Torres Strait Islands might even learn some physics as a result. But is there one sentence with which to conclude this saga, one sublimely fatuous thought that sums up my feelings toward the entire affair? Wait for it… wait for it…

That’s interesting.

Review of Mermin’s book

December 10th, 2007

By now, maybe a half-dozen people have asked me what I thought of David Mermin’s new book Quantum Computer Science: An Introduction; many seemed surprised when I told them I hadn’t read it. Since I aim to please, I finally cracked open my review copy on a train this weekend, and am pleased to report that … yes, it’s quite good. Indeed, the biggest problem is just Mermin’s infamous insistence on spelling qubit “Qbit.” (At this point, one might as well decide to spell quark “Qork.” A language consists of shared, often highly-irrational conventions that cannot be changed unilaterally.)

Mermin’s book is, one might say, radically limited in scope. There’s nothing about physical implementation, not a mixed state or POVM in sight, not a word on the provable limitations of quantum computers, no mention of P, NP, or BQP, and pretty much nothing about any development after 1995. The sole aim is to cover the “quantum canon” — Hadamards and Toffolis, Shor and Grover, quantum error-correction, the BB84 key distribution protocol, etc. — while dispelling various misconceptions along the way. But at that limited task, Mermin — who’s an extremely gifted expositor — does a better job than almost anyone.

He certainly does a better job than I would have. I’ll admit that, when the Mike&Ike book came out seven years ago, I naïvely imagined that the “quantum textbook problem” (or more precisely, the good quantum textbook problem) had been solved. From now on, there would be a single place where everyone could go to learn the quantum canon. And because anyone could know all that twentieth-century material by reading a single book, I could readily assume that anyone who was interested did know it, and could take that as shared background knowledge (like, say, the existence of the Roman Empire) when discussing newer topics like quantum lower bounds, the adiabatic algorithm, or BQP/qpoly.

Of course I couldn’t have been more wrong. In the years since Mike&Ike came out, the total amount of confusion in the world about the |A〉|B〉|C〉’s of quantum computing (as well as the total number of books that try to address that confusion) has increased exponentially. And so it’s good to have a distinguished physicist like Mermin patiently telling readers the following:

“To understand how to build a quantum computer … you must indeed have many years of experience in quantum mechanics and its applications under your belt. But if you only want to know what such a device is capable in principle of doing once you have it, then there is no reason to get involved in the really difficult physics of the subject.” (page xiii)

“This means that all traces of the amplitudes α_x characterizing the input state have vanished from the output state. The only role they have played in the measurement is to determine the probability of a particular output.” (page 25)

“Small alterations in the phases produce small alterations in the probabilities of getting that extremely precise digital information, but not the precision of the information itself, once it is acquired.” (page 85)

Personally, I find it hard to remember that anyone needs to be told these things — and even when I do tell them, they don’t believe me (probably because I’m waving my arms too wildly). They think I’m making it up. But Mermin dispels the common misconceptions with a calm air of gravity.
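For concreteness, here’s the point of the second quoted passage in a few lines of Python (my illustration, not Mermin’s): once you measure, only the squared magnitudes |α_x|² survive, and the phases leave no further trace in the outcome statistics.

```python
import math, cmath

def measurement_probs(amplitudes):
    """Born rule: outcome x occurs with probability |alpha_x|^2.
    Beyond fixing these probabilities, the amplitudes (in particular
    their phases) leave no trace in the measurement record."""
    norm = sum(abs(a) ** 2 for a in amplitudes)
    return [abs(a) ** 2 / norm for a in amplitudes]

# Two states differing only in a relative phase yield identical
# statistics when measured directly in this basis (phases matter
# only through subsequent interference).
state_a = [1 / math.sqrt(2), 1 / math.sqrt(2)]
state_b = [1 / math.sqrt(2), cmath.exp(0.3j) / math.sqrt(2)]
print(measurement_probs(state_a))
print(measurement_probs(state_b))
```

Both states print probabilities of 1/2 and 1/2, up to floating-point rounding.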

I’ll end with two quibbles.

First, while Mermin talks a great deal about quantum black-box algorithms, he never once mentions the crucial distinction between the “black-box” world — the world where one can prove unconditionally both that quantum computers can solve certain problems exponentially faster than classical computers, and that they can’t solve certain other problems any faster than classical ones — and the “non-black-box” world, where all such statements are necessarily conjectural. The one time he does touch on this distinction, he gets it wrong:

“The best known classical algorithms for finding the period r of such a function take a time that grows faster than any power of the number n of bits of r (exponentially with n^(1/3)).” (page 63)

The best classical algorithm for period-finding provably takes time that grows exponentially with n/2. The best known classical algorithms for factoring take time that grows exponentially with n^(1/3). But the latter algorithms (necessarily) use deeper properties of the factoring problem than just its reducibility to period-finding.
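To make the gap concrete, here’s the obvious brute-force classical period finder (my own sketch, not from the book). It needs about r multiplications, and r can be exponential in the bit-length n of the modulus.

```python
def period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force.
    Assumes gcd(a, N) = 1, so such an r exists.
    Takes ~r multiplications, and r can be ~2^n for an n-bit modulus."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

print(period(2, 15))  # 4, since the powers of 2 mod 15 cycle 2, 4, 8, 1
print(period(7, 15))  # 4, since the powers of 7 mod 15 cycle 7, 4, 13, 1
```

Shor’s algorithm finds the same r in time polynomial in n, which is the whole point.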

I found this an uncharacteristic omission for Mermin — whose tendency is to examine whatever he brings up from all possible angles — though perhaps it can be understood in terms of a decision to avoid any mention of complexity theory.

The second quibble is that there are no exercises.

Entanglement for peace and freedom

November 30th, 2007

A reader named Prempeh writes in the comments section of my last post:

I’m really no happier because of knowing that a phenomenon called quantum entanglement exist [sic]. Now, you say, this phenomenon has the potential to enable super-powerful computing, teleportation, … I say, until science helps me with a comprehensive, provable, repeatable methodology for using it’s [sic] results to make me (and everyone who wants to be) happy, I really do not see it as significantly more helpful than faith.

NB: Any chance that a unification theory could help the poor stave off devastating climate change caused in part by the profligacy of the west? End the brutality of war? Stop child sexual exploitation? Remove corruption, greed, racism, …

This is not a rhetorical question

A few quick non-rhetorical answers:

  1. At the least, thinking about quantum entanglement doesn’t exacerbate problems like war and climate change (if we neglect o(1) terms like the jet fuel needed to fly to conferences). The same can’t be said for many other human endeavors.
  2. The scientific revolution 400 years ago led directly to a doubling of the human lifespan, the birth of democracy and its subsequent spread across the world (Galileo, Newton → Spinoza, Hume, Locke → Paine, Jefferson → …), and the cessation of practices such as witch-burning. It’s true that those few lucky enough to have been tribal chieftains with large harems probably wouldn’t want to trade places with a modern; and also true that Hitler and Stalin managed to surpass the already-impressive brutality of the ancients. But on the whole, it seems to me that the human condition improved once we started understanding how the universe works. And given the number of utopian ideas that managed to do nothing but drench this vale of tears in new tears of their own, I don’t see the relative success of curiosity-driven science as anything to sneeze at.
  3. I do try to do my tiny part to raise awareness of climate change and other threats to civilization. Of course, every time I do so, I’m attacked in the comments section by hordes of denialists who tell me I should stick to what I know about (like quantum entanglement). There’s just no pleasing everyone.
  4. I see the central problem facing humanity — much more central than climate change, greed, racism, or anything else you mentioned — as collective stupidity. If we, as a species, weren’t so collectively stupid, we’d have error-correcting mechanisms that checked the other problems before they spiraled out of control. I also maintain the possibly-naïve hope that, if people could just understand basic conceptual points about how the world works — like why quantum entanglement doesn’t allow faster-than-light communication, but is still not the same as classical correlation — some tiny contribution might be made to fighting the collective stupidity of our species and thereby helping to ensure its continued survival. That, and not the prospect of teleportation or super-powerful computing, is what really motivates me.
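In that spirit, here’s the standard CHSH arithmetic in a few lines of Python (textbook numbers, nothing original). It shows both halves of the point: the singlet correlations E(a, b) = -cos(a - b) break the classical bound of 2, yet each side’s individual outcomes stay 50/50 whatever the other side measures, so no faster-than-light message gets through.

```python
import math

def E(a, b):
    """Correlation of the two +/-1 outcomes when a singlet pair is
    measured at analyzer angles a and b: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# The standard optimal CHSH angles.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.83, beating the classical bound of 2

# No signaling: each side's individual outcome is +1 or -1 with
# probability 1/2 no matter what the other side's angle is, so the
# correlations can't be used to transmit a message.
```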

Mistake of the Week: Explain Everything (Or Don’t Bother Explaining Anything)

November 26th, 2007

In today’s post I was going to announce the winners of my Unparadox Contest. But then I noticed the Lake Wobegon unparadox: if the total winnings are zero, then no one’s winnings are below average, and in that sense everyone’s a winner!

So instead of that, I thought I’d contribute to the general shnoodification of humankind, by discussing the same thing every other science blogger’s discussing: Paul Davies’s New York Times op-ed.

Over the years I have often asked my physicist colleagues why the laws of physics are what they are. The answers vary from “that’s not a scientific question” to “nobody knows.” The favorite reply is, “There is no reason they are what they are — they just are.” The idea that the laws exist reasonlessly is deeply anti-rational. After all, the very essence of a scientific explanation of some phenomenon is that the world is ordered logically and that there are reasons things are as they are. If one traces these reasons all the way down to the bedrock of reality — the laws of physics — only to find that reason then deserts us, it makes a mockery of science.

Now, I know Paul Davies: he took me out to a nice dinner in Iceland, and even quoted me next to Ludwig Wittgenstein in the epigraph of one of his papers. And I know for a fact that his views are much more nuanced than you’d think, if the above passage was all you were going on. I can assure you that, if his claim that physics without metaphysics is “a mockery of science” reminds you of those hooded monks from Monty Python and the Holy Grail, pounding their heads with wooden boards in between mystic incantations, then you’ve read his piece too superficially and have failed to grasp its subtler message.

But even so, reading his op-ed made me wonder: when did we, as a civilization, have a similar conversation before? Then I remembered: the early 1600’s!

Galileo: Hey, I’ve discovered that Jupiter has moons! And that objects in free fall follow parabolic trajectories! And that…

Jesuit schoolmen: Ah, foolish one, but you have told us nothing about the underlying causes of motion, or what it is that imbues the lunar bodies with their lunarity. Of what use are your so-called “explanations” if they rest on a foundation that is itself unexplained? One can hardly build a pyramid on sand!

One imagines the schoolmen feeling sorry for the naïve Galileo, with his rampant scientism and countless unexamined presuppositions. In their minds, if Galileo hadn’t explained everything then he hadn’t really explained anything — and hence they themselves (who had explained nothing) were the wiser by far.

Four hundred years after the scientific revolution, most people still think like the Jesuit schoolmen did:

How does a toaster work?

By converting electrical energy into heat.

But what is electricity?

The movement of electrons through a wire.

But what are electrons?

Fundamental particles with spin 1/2, negative charge, mass of ~10^-27 grams…

But why do particles exist? Why does anything exist?

Well, those are excellent and profound questions, and you see…

Aha! Aha! So science doesn’t have all the answers! Ultimately, then, science is just another form of faith!

The schoolman glances at the intermediate steps — how a toaster works, what electricity is, what electrons are — and is not only profoundly unimpressed, but baffled and annoyed that anyone thinks he should be impressed. What are these so-called “answers” but irrelevant distractions from the Answer? What are they but the build-up to the punchline, stepping-stones on the road to the metaphysical abyss?

Science, in the schoolman’s mind, is just a massive con game: an attempt to distract people from the ultimate questions of essence by petty conjuring tricks like curing diseases or discovering the constituents of matter. Even pure math is part of the con: all Wiles did was reduce Fermat’s Last Theorem to some supposedly “self-evident” axioms. But why bother with such a reduction, if you can’t justify the axioms or the laws of logic themselves?

I frequently encounter the schoolmen even in my little corner of the world. People will ask: isn’t computational complexity theory a colossal failure, since all you ever do is prove “this problem is as hard as that other one,” or “this problem is hard relative to an oracle,” and never really prove anything is hard?

Let’s leave aside the factual misunderstandings — we can prove certain problems are hard, etc. etc. — and concentrate on the subtext, which is:

Don’t waste my time with the accumulated insights of the last half-century. If you haven’t solved the P versus NP problem — and you haven’t, right? — then aren’t you, ultimately, just as ignorant about computation as I am?

Of course, “does P=NP?” differs from “where do the laws of physics come from?” in that we know, at least philosophically, what an answer to the former question would look like. And yet, if complexity theorists ever do prove P≠NP, I’m guessing the schoolmen will switch immediately to saying that that was merely a technical result, and that it doesn’t even touch the real question, which is something else entirely.

The schoolmen’s philosophy leads directly to a fatalist methodology. What causes polio? If you say a virus, then you also have to explain what viruses are, and why they exist, and why the universe is such that viruses exist, and even why the universe itself exists. And if you can’t answer all of these questions, then your so-called “knowledge” rests on a foundation of arbitrariness and caprice, and you’re no better off than when you started. So you might as well say that polio is caused by demons.

Yet so long as the schoolmen are careful — and define the “ultimate explanation for X” in such a way that no actual discovery about X will ever count — their position is at least logically consistent. I’ll even confess to a certain sympathy with it. I’ll even speculate that most scientists have a smidgen of schoolman inside.

All I really object to, then, is the notion that tracing every question down to what Davies calls “the bedrock of reality” represents a new, exciting approach to gathering knowledge — one at the cutting edge of physics and cosmology. Say whatever else you want about the schoolman’s way, it’s neither new nor untried. For most of human history, it’s the only approach that was tried.

Thanksgiving Special: D-Wave at MIT

November 22nd, 2007

Some people think I have a vendetta against D-Wave Systems and its questionable quantum computer claims (see here, here, here, here, here, here, here, here, here for context). But actually, nothing could be further from the truth. I keep trying and trying to change the subject! Wouldn’t you all rather hear about Wolfram, I say? Or unparadoxes? Or my #1 topic du jour, nothing whatsoever?

Apparently you wouldn’t. From my inbox to my comments section to the hallway, the masses have spoken, and what they want to know is: did I attend D-Wave’s presentation at MIT on Monday, and if so what did I think?

Yes, I attended, in body though not in mind. You see, Monday was also the day of the STOC deadline, so if our guests from D-Wave (Mohammad Amin and Andrew Berkley) were expecting a ferocious skeptic, they instead got a bleary-eyed zombie with visions of MA_EXP, P/poly, and 7:59PM EST cavorting in his head.

This meant that Ed Farhi, Isaac Chuang, Peter Shor, and Lorenza Viola had to do most of the questioning. As it turned out, they did a vastly better job than I could have.

As others have pointed out in stronger terms, I’m not a physicist. (On the other hand, the gentleman linked to in the previous sentence is not correct about my being paid by the NSA to discredit Canadian quantum computing efforts: it’s actually the GCHQ and the Mossad.) As such, I can’t directly evaluate D-Wave’s central claim to have built an adiabatic quantum computer, nor have I ever tried to do so. All I can do is point out the many things D-Wave has said to the press (about NP-complete problems, for example) that I know are false, its history of making dramatic announcements without evidence, and its contemptuous attitude toward scientists who have asked for such evidence. For me, that’s more than enough to destroy D-Wave’s credibility on the claims I can’t directly evaluate. After all, the burden of proof is not on me; it’s on them.

However, other people have not been satisfied with this line of argument. “We don’t care who the burden of proof is on,” they say. “We just care whether D-Wave built an adiabatic quantum computer.”

But my physicist colleagues don’t suffer from the same argumentative limitations that I do. At the group meeting preceding the talk, Farhi announced that he didn’t care what the press releases said, nor did he want to discuss what problems quantum computers can solve (since we academics can figure that out ourselves). Instead he wanted to focus on a single question: is D-Wave’s device a quantum computer or not?

What followed was probably the most intense grilling of an invited speaker I’ve ever seen.

It quickly emerged that D-Wave wants to run a coherent quantum computation for microseconds, even though each of their superconducting qubits will have completely decohered within nanoseconds. Farhi had to ask Amin to repeat this several times, to make sure he’d gotten it right.

Amin’s claim was that what looks like total decoherence in the computational basis is irrelevant — since for adiabatic quantum computation, all that matters is what happens in the basis of energy eigenstates. In particular, Amin claimed to have numerical simulations showing that, if the temperature is smaller than the spectral gap, then one can do adiabatic quantum computation even if the conventional coherence times (the T1 and T2) would manifestly seem to prohibit it.
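For what it’s worth, the comparison Amin was invoking is just k_B*T versus the gap energy. Here’s the unit conversion, with a 20 mK temperature and a 1 GHz gap as purely illustrative numbers (not D-Wave’s actual parameters):

```python
# Thermal energy vs. spectral gap, both expressed in frequency units.
# The 20 mK and 1 GHz figures are illustrative assumptions only.
K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s

def thermal_freq_ghz(t_kelvin):
    """Temperature expressed as an equivalent frequency, k_B*T / h, in GHz."""
    return K_B * t_kelvin / H / 1e9

gap_ghz = 1.0                      # assumed spectral gap
thermal = thermal_freq_ghz(20e-3)  # assumed fridge temperature, 20 mK
print(f"kT ~ {thermal:.2f} GHz vs gap {gap_ghz:.2f} GHz")  # ~0.42 vs 1.00
```

So at these (assumed) numbers the condition kT < gap would indeed hold, even while the usual decoherence times look hopeless.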

The physicists questioned Amin relentlessly on this one claim. I think it’s fair to say that they emerged curious but severely skeptical, not at all convinced by the calculations Amin provided, and determined to study the issue for themselves.

In other words, this was science as it should be. In contrast to their bosses, Amin and Berkley made a genuine effort to answer questions. They basically admitted that D-Wave’s press releases were litanies of hype and exaggeration, but nevertheless thought they had a promising path to a quantum computer. On several occasions, they seemed to be struggling to give an honest answer that would still uphold the company line.

Two other highlights:

  • I asked Amin and Berkley whether they could give any evidence for any sort of speedup over classical simulated annealing. They laughed at this. “It’s sixteen qubits!” they said. “Of course you’re not going to see a scaling effect with sixteen qubits.” I said I understood perfectly well (though I wondered silently whether the dozens of journalists covering D-Wave’s demo understood the same). But, I continued, surely you should be able to see a scaling effect by the end of 2008, when your business plan calls for 1024 qubits? “Well, that’s what it says in the press release,” they said. Forget about the press release, Farhi interjected. How many qubits are you actually going to make? Amin and Berkley shrugged; they said they’d just try to make as many qubits as they could.
  • Even though it hadn’t exhibited any sort of speedup, Amin and Berkley steadfastly maintained that their 16-qubit device was indeed a quantum computer. Their evidence was that simulations of its behavior that took quantum mechanics into account gave, they said, a better fit to the data than simulations that didn’t. On the other hand, they said they were not able to test directly for the presence of any quantum effect such as entanglement. (They agreed that entanglement was a non-negotiable requirement for quantum computing.) There was a Feynmanesque moment, when Ike Chuang asked Amin and Berkley an experimental question so simple even I understood it. Ike said: if you’re indeed seeing quantum effects, then by running your computer at higher and higher temperatures, at some point you should see a transition to classical behavior. Have you tried this simple control experiment? Amin and Berkley said that they hadn’t, but that it sounded like a good idea.

For a theorist like me — accustomed to talks ending with “if there are no questions, then let’s thank the speaker again” — this was exciting, heady stuff. And when it was over, I still had almost three hours until the STOC deadline.

‘Tis the week before deadline

November 15th, 2007

and at least over here

Not a blogger is stirring

The reason is clear

MIT sues Frank Gehry over Stata Center

November 6th, 2007

When I first saw the headline, I assumed it was from The Onion or (more likely) some local MIT humor publication. But no, it’s from the Associated Press.

Deoxyribononapproximability

November 4th, 2007

[Updates]

Alright, here’s a problem for all you bioinformatistas and inapproximabistas out there, which was inspired by this post of Eliezer Yudkowsky at Overcoming Bias (see also the comments there).

Let a DNA sequence be an element of {A,C,G,T}*, and suppose we’re allowed the following primitive operations: (1) insert a base pair anywhere we want, (2) delete any substring, (3) reverse any substring, and (4) copy any substring into any other part of the string. Then given a DNA sequence S, how hard is it to estimate the minimum number of operations needed to produce S starting from the empty string?
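To fix conventions, here’s one way to spell out the four primitive operations in Python, using 0-indexed half-open substrings (a choice the problem statement leaves open):

```python
def insert(s, i, base):
    """(1) Insert a single base at position i."""
    return s[:i] + base + s[i:]

def delete(s, i, j):
    """(2) Delete the substring s[i:j]."""
    return s[:i] + s[j:]

def reverse(s, i, j):
    """(3) Reverse the substring s[i:j]."""
    return s[:i] + s[i:j][::-1] + s[j:]

def copy_into(s, i, j, k):
    """(4) Copy the substring s[i:j] into position k."""
    return s[:k] + s[i:j] + s[k:]

# Build ACGTACGT from the empty string in 5 operations:
s = ""
for b in "ACGT":
    s = insert(s, len(s), b)   # 4 inserts -> "ACGT"
s = copy_into(s, 0, 4, 4)      # 1 copy    -> "ACGTACGT"
print(s)  # ACGTACGT
```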

Closely related is the following problem: by starting from the empty string and applying o(n) operations, can we produce a “pseudorandom DNA sequence” of length n — that is, a sequence that can’t be distinguished in polynomial time from a uniform random one?

(Note 1: For both problems, we might also want to stipulate that every intermediate sequence should have size at most polynomial in n. Or better yet, maybe one can prove that such an assumption is without loss of generality.)

(Note 2: I’m also very interested in what happens if we disallow the powerful operation of reversal.)

For all I know, these problems might have trivial (or at any rate, known) answers; I just came up with them and haven’t thought them through.

What the problems are really getting at is this: is the “effective number of bits” in your genome (that is, the number of bits from a polynomial-time algorithm’s perspective) limited by how many ancestors you’ve had since life on Earth began? Or can it be vastly greater?

Update (11/4): Rereading the last few paragraphs of Eliezer’s post, I see that he actually argues for his central claim — that the human genome can’t contain more than 25MB of “meaningful DNA” — on different (and much stronger) grounds than I thought! My apologies for not reading more carefully.

In particular, the argument has nothing to do with the number of generations since the dawn of time, and instead deals with the maximum number of DNA bases that can be simultaneously protected, in steady state, against copying errors. According to Eliezer, copying a DNA sequence involves a ~10^-8 probability of error per base pair, which — because only O(1) errors per generation can be corrected by natural selection — yields an upper bound of ~10^8 on the number of “meaningful” base pairs in any given genome.
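The arithmetic here is one line: expected new errors per generation equals the per-base error rate times the number of meaningful bases (the figures below are Eliezer’s, as summarized above).

```python
# Eliezer's figures: ~1e-8 copying-error probability per base, with
# natural selection able to purge only O(1) new errors per generation.
error_rate = 1e-8

for meaningful_bases in (1e7, 1e8, 1e9):
    expected_errors = error_rate * meaningful_bases
    print(f"{meaningful_bases:.0e} bases -> {expected_errors:g} new errors/generation")
```

At 10^8 bases the expected load hits one new error per generation, which is exactly where the bound bites.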

However, while this argument is much better than my straw-man based on the number of generations, there’s still an interesting loophole. Even with a 10^-8 chance of copying errors, one could imagine a genome reliably encoding far more than 10^8 bits (in fact, arbitrarily many bits) by using an error-correcting code. I’m not talking about the “local” error-correction mechanisms that we know DNA has, but about something more global — by which, say, copying errors in any small set of genes could be completely compensated by other genes. The interesting question is whether natural selection could read the syndrome of such a code, and then correct it, using O(1) randomly-chosen insertions, deletions, transpositions, and reversals. I admit that this seems unlikely, and that even if it’s possible in principle, it’s probably irrelevant to real biology. For apparently there are examples where changing even a single base pair leads to horrible mutations. And on top of that, we can’t have the error-correcting code be too good, since otherwise we’ll suppress beneficial mutations!

Incidentally, Eliezer’s argument makes the falsifiable prediction that we shouldn’t find any organism, anywhere in nature, with more than 25MB of functional DNA. Does anyone know of a candidate counterexample? (I know there are organisms with far more than humans’ 3 billion base pairs, but I have no idea how many of the base pairs are functional.)

Lastly, in spite of everything above, I’d still like a solution to my “pseudorandom DNA sequence” problem. For if the answer were negative — if given any DNA sequence, one could efficiently reconstruct a nearly-optimal sequence of insertions, transpositions, etc. producing it — then even my original straw-man misconstrual of Eliezer’s argument could put up a decent fight!

Update (11/5): Piotr Indyk pointed me to a paper by Ergün, Muthukrishnan, and Sahinalp from FSTTCS’2003, which basically solves my problem in the special case of no reversals. It turns out that you can estimate the number of insert, delete, and copy operations needed to produce a given DNA sequence to within a factor of 4, by just applying Lempel-Ziv compression to the sequence. Thanks, Piotr!

Another Update (11/5): Andy Drucker has pointed out that, in the case where reversals are allowed, we can approximate the number of insert, delete, copy, and reverse operations needed to produce a given DNA sequence to within a factor of 16, by combining the Lempel-Ziv approach of Ergün et al. with a clever trick: maintain both the sequence and its reversal at all times! Interestingly, though, this trick doesn’t seem to work for transforming one sequence into another (a more general problem than I asked about, and the one considered by Ergün et al).

Unparadox Contest

November 1st, 2007

In a recent talk at MIT, Umesh Vazirani appealed to the famous Birthday Paradox to say that two random subsets of {1,…,N}, each of size o(√N), probably wouldn’t intersect each other. Of course we all understood what he meant, but it occurred to me that Umesh was actually appealing to the Birthday Unparadox: “If you put three people in a room, chances are no two of them will have the same birthday.”
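In formulas: two independent uniform m-element subsets of {1,…,N} are disjoint with probability C(N-m, m)/C(N, m), which is roughly e^(-m^2/N) and hence tends to 1 whenever m = o(√N). A quick check:

```python
def disjoint_prob(N, m):
    """Probability that two independent uniform m-element subsets of
    {1,...,N} are disjoint: C(N-m, m) / C(N, m), as a stable product."""
    p = 1.0
    for i in range(m):
        p *= (N - m - i) / (N - i)
    return p

# m ~ sqrt(N): intersection is likely (the Birthday Paradox)
print(disjoint_prob(10**6, 1000))  # ~0.37, i.e. roughly e^(-m^2/N)
# m = o(sqrt(N)): intersection is unlikely (the Birthday Unparadox)
print(disjoint_prob(10**6, 30))    # ~0.999
```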

Once I realized that, I started seeing unparadoxes everywhere I looked:

The Banach-Tarski Unparadox: If you cut an orange into five pieces using a standard knife, then put them back together, the result will have exactly the same volume as the original orange.

Braess’ Unparadox: If you add an extra lane to a highway, one possible result will be to decrease congestion.

Hempel’s Unparadox: If you observe a bunch of ravens and find that all of them are black, this might increase your likelihood for the statement “All ravens are black.”

Russell’s Unparadox: The set of all sets that contain themselves as a member might or might not contain itself as a member (either way is fine).

In the spirit of my highly-successful Best Umeshism and Best Anthropicism contests (remember those?), I now open the floor to you: come up with the best unparadox! The winner will receive absolutely nothing. (If you have to ask what the point is, this contest isn’t for you.)

The Aaronson $25.00 Prize

October 28th, 2007

[Update below]

For those of you who’ve been living in a non-wifi-enabled cave, four days ago Stephen Wolfram awarded a $25,000 prize to a 20-year-old undergraduate named Alex Smith, for proving that a particular two-state, three-symbol Turing machine is universal. The prize was to celebrate the fifth anniversary of Wolfram’s paradigm-smashing, foundation-shaking masterpiece, A New Kind of Science. (More from Bill Gasarch’s blog, Nature, and Scientific American.)

Smith sounds like a swell guy who entered this contest for exactly the right reasons: he was at his parents’ place over summer break and had nothing better to do. He deserves the money, and I sincerely hope the CS theory community hasn’t heard the last from him.

Predictably, though, as soon as this story broke I started getting emails from journalists asking me about the far-reaching scientific implications of the new universality proof. In trying to give them an honest answer — one that wouldn’t be misunderstood, or spun to support a pre-existing storyline with which I disagreed — I inevitably came off like an ornery old sourpuss. From Scientific American:

Of course, there may be a reason the problem languished. “Finding the smallest universal [Turing machines] is a neat recreational pursuit,” quantum computation researcher Scott Aaronson of the Massachusetts Institute of Technology says, but “it’s no longer seen as connected to the central questions of the field.” …

“The impact of NKS on all the areas of computer science and physics I’m familiar with has been basically zero,” he says. “As far as I can tell, the main impact is that people now sometimes use the adjective ‘Wolframian’ to describe breathtaking claims for the trivial or well-known.” [Martin] Davis offers a sunnier take: “The book has a lot of beautiful pictures.”

And from Nature:

The solution isn’t hugely relevant to modern computer science, says Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. “Most theoretical computer scientists don’t particularly care about finding the smallest universal Turing machines,” he wrote in an e-mail. “They see it as a recreational pursuit that interested people in the 60s and 70s but is now sort of ‘retro’.”

Having partially degrumpified, in the remainder of this post I wish to offer something positive.

But first some background: a month after NKS came out, I wrote a review of it for the journal Quantum Information and Computation, in which I examined Wolfram’s claims about quantum mechanics and computational complexity, and explained what I saw as the problems with them. (Rather than rehash the review, I’ll just point you there if you’re interested.)

Today I’d like to celebrate the fifth anniversary of my critical review of NKS, by offering a $25 prize for stodgy, conventional work in the field of quantum complexity theory.

The Aaronson $25.00 Challenge

In NKS, Wolfram places himself among those computer scientists and physicists who doubt the possibility of quantum computers, not for any practical reason but as a consequence of their disbelieving quantum mechanics itself. As he writes on page 771:

Indeed within the usual formalism [of quantum mechanics] one can construct quantum computers that may be able to solve at least a few specific problems exponentially faster than ordinary Turing machines. But particularly after my discoveries in Chapter 9 [‘Fundamental Physics’], I strongly suspect that even if this is formally the case, it will still not turn out to be a true representation of ultimate physical reality, but will instead just be found to reflect various idealizations made in the models used so far.

Here, then, is the challenge:

If a quantum computer can efficiently solve a problem, can it also efficiently convince Wolfram that the solution is correct? More formally, does every language in the class BQP admit an interactive protocol where the prover is in BQP and the verifier is in BPP?

In other words: can quantum computers always “show their work”? It’s obvious, for example, that if a quantum computer spit out the factors of a 5,000-digit number, you wouldn’t have to believe quantum mechanics (or even know what it was) to check whether the answer was right. I’m asking whether every problem solvable by a quantum computer has the same property. And to make things fair to the quantum computer, I’ll let it give not just a static proof but also an interactive protocol, by which a distrustful polynomial-time classical verifier could become convinced, to arbitrarily high confidence, that the quantum computer knows the right answer.
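For instance, here is the factoring check in full (a trivial sketch; the function name is mine): the verifier just multiplies the claimed factors back together, with no quantum mechanics in sight.

```python
def verify_factors(n, claimed_factors):
    """Classical check of a claimed factorization: multiply the claimed
    factors back together and compare with n. (Checking that each factor
    is actually prime is also classically easy, via a primality test.)"""
    if not claimed_factors or any(f <= 1 for f in claimed_factors):
        return False
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n

assert verify_factors(15, [3, 5])
assert not verify_factors(15, [2, 7])
```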

(An example for the uninitiated: suppose you had two graphs G and H, and suppose you picked one of the graphs at random, randomly permuted its vertices, and gave the result to a quantum computer. And suppose the quantum computer could unfailingly tell you which graph you started with. Clearly this should convince you that G and H are not isomorphic — since if they were isomorphic, then the quantum computer couldn’t have done better than guessing! And this is true even though you never received a proof of non-isomorphism that you could hand to someone else.)
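To make the example above concrete, here is a toy classical simulation of the protocol (the function names are mine, and a brute-force isomorphism search stands in for the hypothetical quantum distinguisher):

```python
import itertools
import random

def permute(graph, perm):
    """Relabel a graph (a frozenset of 2-element frozenset edges) by perm."""
    return frozenset(frozenset({perm[u], perm[v]}) for u, v in map(tuple, graph))

def isomorphic(g, h, n):
    """Brute-force isomorphism test, standing in for the powerful prover."""
    return any(permute(g, p) == h for p in itertools.permutations(range(n)))

def run_protocol(g, h, n, rounds=16, seed=0):
    """The verifier secretly picks g or h, scrambles its vertices, and
    challenges the prover to say which graph it started with; returns how
    many rounds the prover answered correctly. If g and h are NOT
    isomorphic, an honest prover is right every round; if they are
    isomorphic, no prover can beat roughly 50-50 guessing."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        secret = rng.randrange(2)
        perm = list(range(n))
        rng.shuffle(perm)
        challenge = permute([g, h][secret], perm)
        # The prover identifies which input graph the challenge came from.
        guess = 0 if isomorphic(g, challenge, n) else 1
        correct += (guess == secret)
    return correct

# A path and a star on 4 vertices are not isomorphic (their degree
# sequences differ), so the prover wins every round.
path = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
star = frozenset({frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})})
assert not isomorphic(path, star, 4)
assert run_protocol(path, star, 4) == 16
```

On an isomorphic pair, by contrast, every challenge looks the same to the prover, so this strategy degrades to a coin flip relative to the verifier's secret choice.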

I’ll award $25 either for a proof that every quantum computation can be “counter-Wolframized,” or for an oracle relative to which some quantum computation provably can’t be. If both problems are solved then I’ll award $25 for each. Every serious submission will be reviewed by a Prize Committee consisting of me. The Committee may also choose to award smaller prizes for partial results.

Note: Much as I’d like to “pull a Wolfram,” the beautiful question above was (to my knowledge) first asked by Daniel Gottesman, at a conference in February 2004. Also, the idea of a $25 prize was suggested to me by Mike Mosca.

Update (10/30): A commenter pointed me to this thread in the Foundations of Mathematics (FOM) mailing list, which contains an actual technical discussion of Smith’s universality proof. Of particular interest:

  1. an argument by Vaughan Pratt that Smith’s universality proof is wrong,
  2. a response by Todd Rowland of Wolfram Research,
  3. a post from Wolfram himself, which, though written in his trademark way, comes closer than anything else I’ve seen from him to addressing actual hard questions about the definition of universality, and
  4. this comment from John McCarthy: “In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn’t. Instead as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and [so] did the encodings of information.”

In judging the correctness of Smith’s proof, the key question is what counts as “universality.” As I explained to the journalists who emailed me, the rules of Wolfram’s prize left a huge gray area by explicitly refusing to specify this. In particular: what kinds of input and output encodings are allowed? How do we make sure the real computational work is done by the Turing machine itself and not the encoding procedures? Does the tape have to be filled with 0’s initially, or can it be filled with other patterns, and if so which ones? Since the two-state Turing machine in question has no “halt” state, what external conditions can we impose to determine when the machine has halted?

Still, I decided not to make a fuss about such things in my original post, since it seemed clear from Smith’s writeup that (1) he was aware of these issues, and (2) there was some nontrivial sense in which he proved universality. I wasn’t going to lose sleep over which sense, for the simple reason that I’d never lost sleep over the (2,3) universality question itself!