The Hiring Scott Aaronson FAQ

May 5th, 2007

Last weekend, I got back from interviewing at the University of Washington, Stanford, Caltech, Berkeley, and Cornell. Then I fell asleep, and am only just now waking up. On this trip — surely the most exhausting I’ve ever been on — I seem to remember giving a talk on The Limits of Quantum Computers. (You’ll have to go to presentation mode to get the full effect of my PowerPoint animations, and especially the D-Wave montage on slide 2.)

The bulk of the time, however, was taken up with interviews. My interviewers — maybe 20 or 30 at each school, in CS, physics, applied math, even chemistry and electrical engineering — asked me good questions, questionable questions, hard questions, soft questions, loaded questions, lots of questions. And that’s what enables me, without further ado, to present for your reading enjoyment The Official Hiring Scott Aaronson FAQ.

[Note: The questions below are all things that I was actually asked by at least one interviewer — in some cases, by dozens of interviewers.]

Q: What will you do if quantum computing doesn’t pan out in the next 20 years?

A: This question presupposes that quantum computing should be judged as a high-risk engineering project. But that’s never been my view. My view is that it should be judged as basic science. What we’re trying to do is unify the theory of computing with our best theory of the physical world, and to perform the most stringent tests to which quantum mechanics itself has ever been subjected. For me, the payoff for better scientific understanding is not in some remote future — it’s as soon as the understanding is achieved.

Q: But why should we care about basic science?

A: Uhh, we are called the computer science department…

Q: Does quantum computing really belong in CS departments, as opposed to physics departments?

A: It belongs if we want it to belong! In my experience, the physicists have a bigger hurdle than the computer scientists in getting started with quantum computing research. All we need to do is ask ourselves: “what happens if we generalize probability theory to allow minus signs, and base it on the L2 norm instead of the L1 norm?” From then on it’s just the concepts we know and love: states, transformations, recursion, reductions, universality, asymptotic efficiency, and so on. Physicists, by contrast, have to learn most of this stuff for the first time. It’s been a great personal pleasure to watch physicists who once suspected that CS was devoid of intellectual content, struggle with that content while trying to learn quantum computing!

Now, if we want to take a dramatic scientific development that wouldn’t have been possible without computer science, and hand it over to the physicists on a silver platter, that’s certainly our prerogative. But is it in our interest as a field?

Q: What if quantum computing is fundamentally impossible?

A: That would be much more interesting than if it’s possible! Merely building a quantum computer would be the more boring outcome — the one consistent with all the physics we already know.

Q: But no one really questions quantum mechanics, do they?

A: Well, you just did!

Q: No, I only questioned whether quantum computing is possible. Couldn’t quantum mechanics be valid, but quantum computing still be impossible because of noise and decoherence?

A: If so, then there’s something enormous that we don’t yet understand about the relevant physics. Look, in light of the Threshold Theorem (that if the rate of decoherence per qubit per time step is smaller than some constant threshold, then one can perform an arbitrarily long quantum computation), it’s hard to maintain that we’re talking about some niggling technical issue. What we’re really talking about is this: to keep track of the state of N entangled particles, does Nature have to do an amount of computational work that increases exponentially with N? And if it doesn’t, then (turning the question around) is there an efficient classical algorithm to simulate the behavior of N entangled particles? These are not questions that will just go away for some trivial reason that everyone’s overlooked.
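To make the exponential bookkeeping concrete, here is a toy sketch (my illustration, not part of the original argument): a general N-qubit pure state is specified by 2^N complex amplitudes, so brute-force classical simulation needs memory exponential in N.

```python
# Toy illustration: a general N-qubit pure state has 2^N complex amplitudes,
# so storing it classically takes memory exponential in N.

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes in a general n-qubit pure state."""
    return 2 ** n_qubits

def memory_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to store the full state vector, at two 64-bit floats per amplitude."""
    return amplitudes(n_qubits) * bytes_per_amplitude

# 30 qubits already need about 17 GB; 50 qubits need about 18 petabytes.
for n in (10, 30, 50):
    print(n, amplitudes(n), memory_bytes(n))
```

Whether Nature actually performs this exponential bookkeeping, or whether some efficient classical shortcut exists, is exactly the question at stake.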

Q: Suppose Ed Witten spent a week thinking about it, and came up with some profound reason why quantum computing is impossible. What would you do next?

A: I’d drop whatever else I was doing, and devote all of my time to understanding the implications of his discovery for computer science and physics!

[Pause]

Of course, since this is Witten, maybe he would’ve spent a second week and worked out all the implications himself. So I guess all I can say is that to my knowledge, he hasn’t in fact been thinking about these issues.

Q: How long until we have practical quantum computers?

A: In my opinion, quantum computing experiments are not yet at a stage where one can make “Moore’s Law” type predictions. We might be in the same situation with quantum computing that Babbage was with classical computing in the 1840’s. In other words, we think we know the fundamental principles, and we’re right — but the technology isn’t there yet, and might not be for a long time.

Of course, as with any technology, progress could happen faster than almost any of us expect. But I prefer to be pessimistic: that way either you’re right, or else you don’t mind being wrong!

Q: How many qubits are the experimentalists at so far?

A: It depends how you measure. People got up to twelve qubits in liquid-state NMR, the platform that was used some years ago to factor 15 into 3×5 (at least with high probability!). The trouble with liquid NMR is that no one knows how to scale it: currently the signal decreases exponentially with the number of qubits. So people turned their attention to other platforms, such as ion traps, photonics, and solid-state NMR. With these platforms the quantum computer’s state is much closer to being pure, so the prospects for scalability are much better. But manipulating the qubits is correspondingly harder. With ion traps, Rainer Blatt’s group in Innsbruck did tomography of an 8-qubit state, and other groups have done computations involving 2 or 3 qubits. With photonics, it’s easy to get a huge number of qubits that are highly coherent; the problem is that photons don’t like to talk to each other (in fact they fly right past each other), and therefore you can only apply two-qubit gates by using matter particles as intermediaries.

There are other more exotic proposals for scalable quantum computing, such as “nonabelian anyons.” With these I think it’s fair to say we’re not even at one qubit yet. But if these proposals did work, then the hope would be that they could leapfrog over the other proposals by building in error-correction for free.

Q: Which universities in North America are the major centers for quantum computing theory?

A: Right now there are four: Waterloo, Caltech, MIT, and Berkeley.

Q: Supposing we had scalable quantum computers, are your lower-bound results telling us that they would have no applications?

A: Absolutely not. Aside from their intrinsic scientific interest, quantum computers would have real applications. In my opinion, the most important would be the one so obvious that we computer scientists hardly ever talk about it: namely, simulating quantum physics and chemistry! This, of course, is what a quantum computer does in its sleep. At the same time, it’s also a fundamental problem in nanotechnology, high-temperature superconductivity, QCD, and other areas, important enough that Nobel prizes have been awarded even for ways to solve special cases efficiently on a classical computer.

Admittedly, you could say that every physical system in the universe is a quantum computer computing its own evolution! But the goal here would be to build a universal quantum simulator: a single machine that can be programmed to efficiently simulate any quantum system of interest. It’s the difference between building a wind tunnel versus writing code in order to simulate an airplane.

Now, by a sort of lucky accident, we can sometimes coax a quantum computer into solving classical problems asymptotically faster than we know how to solve them with a classical computer. The famous examples are of course (1) breaking RSA and other cryptographic codes, and (2) solving ‘generic’ search problems quadratically faster than a classical computer. These discoveries have enormous theoretical interest, but (as far as I can tell) only limited practical interest. Maybe I’m wrong though.

Q: Granted that quantum computing is already interesting as basic science, do you agree that it would be more interesting if we had practical quantum computers?

A: Well, I certainly wouldn’t mind it.

Q: You work on quantum computing, yet most of your research is about how quantum computers wouldn’t be very powerful. Isn’t that a bit strange?

A: In the long run, I don’t think quantum computing research is helped by falsehood. If we’re going to be scientists and not PR flaks, then obviously we ought to welcome the truth, whichever way it goes.

But personally, I’d go even further than that. For me, a model of computation without any limitations would be like Superman without kryptonite. There just wouldn’t be a whole lot to say about it! To my way of thinking, a model that lets you factor integers efficiently but not solve NP-complete problems is actually more interesting than a model that gives you everything!

Oh, and one further point: if you’re interested (as I am) in the ultimate limits of computation, then you’re almost professionally obligated to study quantum computing. Why? Because any time you prove a limit of classical computers, you now have to ask yourself: is this something fundamental, or is it just an artifact of my working in a high-decoherence regime?

Q: Why are you so interested in the limits of computation?

A: To show that something is possible, you just have to find a way to do it. But to show that something’s not possible, you have to consider every way of doing it, and prove that none of them work. This is why negative results are so much rarer than positive results, but also why they often give us deeper understanding.

Q: That seems like an extremely male perspective! [said, jokingly, by a female interviewer]

A: I respectfully disagree. Look, as with pretty much every area of CS, we could certainly use more talented women in quantum computing theory: maybe a few dozen more Dorit Aharonovs, Julia Kempes, and Barbara Terhals. I find the gender imbalance in CS depressing, and I’ve long been interested in what it would take to correct it. But the relevant question is this: is the proportion of women working on quantum lower bounds smaller than the proportion working on quantum algorithms? I don’t think that it is.

Q: What’s your vision for where your research is headed in the next 5-10 years?

A: I know I’m not supposed to say this in an interview, but I don’t have a vision. I have this annoying open problem, that conjecture, this claim that seems wrong to me. I know some people have a coherent vision for where their research is headed. And in experimental areas, obviously you have to justify what you’re going to do with your $200 million of equipment. But at least in theoretical computer science, having a “vision” always seemed incredibly difficult to me.

For example, let’s say you have a vision that you’re going to solve problem X using techniques A, B, C. Then what do you do when you find out that techniques A and C are total nonstarters — but that technique B, while it’s useless for X, does solve a completely unrelated problem Y? What you do is make up a story about how Y was the problem you wanted to solve all along! We all do that: drawing targets around where the arrows hit is simply the business we’re in.

What I can tell you is this: I’m interested in fundamental limits on what can be efficiently computed in the physical world. I look for problems that can be addressed with tools from theoretical computer science, but that also have some physical or philosophical point: something that makes me feel like the universe would be a different place if the conjecture were true than if it were false.

In the past, quantum computing has been an incredibly rich source of that sort of problem for me. But it’s never been my exclusive interest — I’ve also worked on circuit complexity, Bayesian agreement protocols, and even information retrieval and clustering. And if quantum computing ever stops being a source of conceptually rich open problems, then I’ll look for those problems somewhere else.

Q: I noticed that, on at least three occasions where you proved a new quantum lower bound, other people quickly improved it to an optimal bound. Is there a reason why you didn’t prove the optimal bounds yourself?

A: Yeah, I don’t seem to be very good at tightening my lower bounds! I’ve had more success in proving the first nontrivial lower bound for a given problem — that is, in understanding why the complexity scales exponentially rather than polynomially. After that, I’m more than happy to let others pin down the order of the exponential. Every time that’s happened, far from feeling disappointed over being “scooped,” I felt great that my work gave other people a foundation to build on.

Q: You look tired. Would you like some coffee?

A: Yes.

Q: How did you get interested in quantum computing?

A: When I first learned about programming as an 11-year-old, it wasn’t only a revelation to me because I now understood how video games worked (though that was definitely important). The real revelation was: this is how the entire universe must work! It’s all just bits getting updated by simple rules. I don’t have to understand physics if I want to understand physics.

Of course I’d heard of quantum effects, and I knew they were supposed to be important — but since I didn’t understand them, they made no difference to me. Then later, as an undergrad at Cornell, I read the early quantum computing papers, and found out that this “quantum weirdness” the physicists kept babbling about was nothing more than linear algebra over the complex numbers. “Hey, linear algebra … even I can do that!”

But I didn’t really become engrossed in quantum computing until a summer internship at Bell Labs. As a diversion from my “real” work that summer (which had to do with multivariate isotone regression), I went through the Bernstein-Vazirani paper, and managed to improve their containment BQP ⊆ P^#P to BQP ⊆ PP. Then I found out that Lov Grover worked in the same building as me, so I went and told him about my result. Well, it turned out that BQP ⊆ PP was already known — it had been proved by Adleman, DeMarrais, and Huang the year before. But one consequence of my talking to Lov was that I ended up doing an internship with him the next summer, working (mostly unsuccessfully) on quantum lower bounds. Ashwin Nayak was also working with Lov that summer; from Ashwin I found out about Umesh Vazirani’s group at Berkeley and how all the cool people were there.

After that, the main questions in my mind were whether I could get accepted to Berkeley, whether Umesh would take me on as a student, and whether I was good enough to do anything original in this field. I emailed Umesh and he never responded, which I took as an extremely bad sign — how little I knew back then! Luckily I did get in to Berkeley, I did start working with Umesh, I did stumble on some new results, and I guess the rest is history.

Q: How many people work on the computer science side of quantum computing?

A: Probably the best way to measure that is by how many people attend the annual QIP conference (for if they don’t go to QIP, do they really exist?) Last year’s QIP drew almost 200 attendees.

Q: Would you be willing to supervise grad students in classical theoretical computer science?

A: Willing is an understatement! I would love to supervise talented grad students in derandomization, circuit lower bounds, learning theory, or any of the other classical areas that I try hard to keep up with and occasionally even work on. Admittedly, when it comes to (say) list decoding, extractors, approximation algorithms, or PCP, the students would first have to teach me what’s going on, but after that I’d be happy to supervise them.

Q: What would you say if I told you that I think quantum computing is like postmodern literary criticism, just a way for people to churn out one paper after another by switching words around, citing each other in a circular way, recycling the same few mathematically trivial ideas over and over — and indeed, that the whole field of theoretical computer science has no real ideas and no connections to anything outside itself?

A: I’d say thank you very much for your opinion, and you’ve got me for — let’s see, 25 more minutes, so what can I do for you?

The most trivial theorem I’ve ever written up

May 2nd, 2007

Theorem: Suppose NP-complete problems are efficiently solvable by quantum computers. Then either the polynomial hierarchy collapses, or else BQP ⊄ AM (that is, quantum computations can’t be efficiently simulated by Arthur-Merlin protocols).

Proof: Suppose NP ⊆ BQP and BQP ⊆ AM. Then coNP ⊆ BQP ⊆ AM, and hence the polynomial hierarchy collapses to the second level by a result of Boppana, Håstad, and Zachos.
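For readers who want the steps spelled out (my elaboration; the closure of BQP under complement follows directly from its bounded-error definition):

```latex
\begin{align*}
\mathrm{NP} \subseteq \mathrm{BQP} &\implies \mathrm{coNP} \subseteq \mathrm{BQP}
  && \text{(BQP is closed under complement)} \\
&\implies \mathrm{coNP} \subseteq \mathrm{AM}
  && \text{(assuming } \mathrm{BQP} \subseteq \mathrm{AM}\text{)} \\
&\implies \mathrm{PH} = \Sigma_2^{\mathrm{p}}
  && \text{(Boppana--H\aa stad--Zachos)}
\end{align*}
```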

Note: If only we could delete the weasel phrase “or else BQP ⊄ AM” from my Most Trivial Theorem, we would’ve achieved a long-sought breakthrough in quantum computing theory. In particular, we would’ve shown that any fast quantum algorithm to solve NP-complete problems would imply an unlikely collapse of classical complexity classes. But while the weasel phrase is weaselly enough to make the Most Trivial Theorem a triviality, I don’t think it’s infinitely weaselly. The reason is my growing suspicion that BQP ⊆ AM in the unrelativized world.

Second Note: When I call this my “Most Trivial Theorem,” obviously I’m excluding homework exercises.

Refilling your RSS glass before the entrée arrives

May 2nd, 2007

A reader points me to this recent Topology paper by Nabutovsky and Weinberger, which probably contains the biggest numbers to have ever arisen naturally in mathematics. Specifically, the authors show that, if we maximize the kth Betti number (for k≥3) over all groups whose presentation has size N (while keeping the number finite), then it grows like the “super-duper Busy Beaver function” (that is, Busy Beaver with an oracle for the halting problem with an oracle for the halting problem).

The spiked magazine survey I blogged about earlier has finally been published. (Warning: Spouters ahead.)

Purely out of intellectual duty

May 1st, 2007

Alright, alright … two separate readers pointed me to this story (from today’s New York Times), about recent research into defense mechanisms in female duck genitalia (“not quite biting, but it sure looks unpleasant,” as one of them says).

Even I have gotten bored of this topic.

Ecoprocrastination

April 22nd, 2007

While the reasons I haven’t updated this blog for a week are complex and multifaceted, the fact that I’ve been flying to another university every 2-3 days, waking up at 7 (AM, not PM) each morning, defending quantum computing research all day including mealtimes, and collapsing in my hotel room during rare free intervals is undoubtedly one of the contributing factors.

And so it is, alas, that I don’t have time to share anything nontrivial today. Instead, in honor of Earth Day, I’ll just link to the text of the landmark US Supreme Court ruling three weeks ago, which forced Bush’s emasculated EPA to either regulate CO₂ emissions or else give scientific reasons for refusing to do so. If you have the time (and who doesn’t?), I’d also recommend reading the oral arguments, wherein you can enjoy the acidic barbs of Justice Scalia, surely one of the most interesting and articulate assholes of our time.

As with intelligent design cases, it’s not the science that’s on trial here but rather the legal system itself. Is a system set up to decide which farmer was grazing his cows on which other farmer’s land capable of weighing the origin and future of eukaryotic life on Earth? In this particular case, the legal system eked out a 5-4 victory; it could easily have gone the other way.

And yes, I know that Massachusetts v. EPA wasn’t “really” about global warming: it was about whether Massachusetts had standing to sue, the definition of the word “pollutant” in the Clean Air Act, whether the EPA can decline to regulate based on foreign-policy considerations, and so on. Similarly, Plessy v. Ferguson wasn’t “really” about racism, Griswold v. Connecticut wasn’t “really” about contraception, etc. In each case, it was just a happy coincidence, p≈1/512, that all nine justices found that the legal technicalities lined up perfectly with how they felt about the underlying issue.

For those who don’t want to read the whole decision, here are a few key passages:

When a State enters the Union, it surrenders certain sovereign prerogatives. Massachusetts cannot invade Rhode Island to force reductions in greenhouse gas emissions, it cannot negotiate an emissions treaty with China or India … These sovereign prerogatives are now lodged in the Federal Government, and Congress has ordered EPA to protect Massachusetts (among others) by prescribing standards applicable to the “emission of any air pollutant… which may reasonably be anticipated to endanger public health or welfare.”

The harms associated with climate change are serious and well recognized … That these climate-change risks are “widely shared” does not minimize Massachusetts’ interest in the outcome of this litigation … According to petitioners’ unchallenged affidavits, global sea levels rose somewhere between 10 and 20 centimeters over the 20th century as a result of global warming … These rising seas have already begun to swallow Massachusetts’ coastal land … The severity of that injury will only increase over the course of the next century: If sea levels continue to rise as predicted, one Massachusetts official believes that a significant fraction of coastal property will be “either permanently lost through inundation or temporarily lost through periodic storm surge and flooding events.”

EPA does not dispute the existence of a causal connection between man-made greenhouse gas emissions and global warming. At a minimum, therefore, EPA’s refusal to regulate such emissions “contributes” to Massachusetts’ injuries. EPA nevertheless maintains that its decision not to regulate greenhouse gas emissions from new motor vehicles contributes so insignificantly to petitioners’ injuries that the agency cannot be haled into federal court to answer for them … But EPA overstates its case. Its argument rests on the erroneous assumption that a small incremental step, because it is incremental, can never be attacked in a federal judicial forum. Yet accepting that premise would doom most challenges to regulatory action.

Unlike EPA, we have no difficulty reconciling Congress’ various efforts to promote interagency collaboration and research to better understand climate change with the agency’s pre-existing mandate to regulate “any air pollutant” that may endanger the public welfare … Collaboration and research do not conflict with any thoughtful regulatory effort; they complement it.

EPA no doubt has significant latitude as to the manner, timing, content, and coordination of its regulations with those of other agencies. But once EPA has responded to a petition for rulemaking, its reasons for action or inaction must conform to the authorizing statute. Under the clear terms of the Clean Air Act, EPA can avoid taking further action only if it determines that greenhouse gases do not contribute to climate change or if it provides some reasonable explanation as to why it cannot or will not exercise its discretion to determine whether they do … To the extent that this constrains agency discretion to pursue other priorities of the Administrator or the President, this is the congressional design.

EPA has refused to comply with this clear statutory command. Instead, it has offered a laundry list of reasons not to regulate. For example, EPA said that a number of voluntary executive branch programs already provide an effective response to the threat of global warming … that regulating greenhouse gases might impair the President’s ability to negotiate with “key developing nations” to reduce emissions … and that curtailing motor-vehicle emissions would reflect “an inefficient, piecemeal approach to address the climate change issue” …

Although we have neither the expertise nor the authority to evaluate these policy judgments, it is evident they have nothing to do with whether greenhouse gas emissions contribute to climate change. Still less do they amount to a reasoned justification for declining to form a scientific judgment. In particular, while the President has broad authority in foreign affairs, that authority does not extend to the refusal to execute domestic laws.

Nor can EPA avoid its statutory obligation by noting the uncertainty surrounding various features of climate change and concluding that it would therefore be better not to regulate at this time. … If the scientific uncertainty is so profound that it precludes EPA from making a reasoned judgment as to whether greenhouse gases contribute to global warming, EPA must say so.

In short, EPA has offered no reasoned explanation for its refusal to decide whether greenhouse gases cause or contribute to climate change. Its action was therefore “arbitrary, capricious, … or otherwise not in accordance with law.”

Physics for Doofuses: Understanding Electricity

April 15th, 2007

Welcome to an occasional new Shtetl-Optimized series, where physicists get to amuse themselves by watching me struggle to understand the most basic concepts of their discipline. I’ll consider my post on black hole singularities to be retroactively part of this series.

Official motto: “Because if I talked about complexity, you wouldn’t understand it.”

Unofficial motto: “Because if I talked about climate change, I’d start another flamewar — and as much as I want to save civilization, I want even more for everyone to like me.”

Today’s topic is Understanding Electricity. First of all, what makes electricity confusing? Well, besides electricity’s evil twin magnetism (which we’ll get to another time), what makes it confusing is that there are six things to keep track of: charge, current, energy, power, voltage, and resistance, which are measured respectively in coulombs, amps, joules, watts, volts, and ohms. And I mean, sure you can memorize formulas for these things, but what are they, in terms of actual electrons flowing through a wire?

Alright, let’s take ’em one by one.

Charge is the q in kqq/r². Twice as many electrons, twice as much charge. ‘Nuff said.

Current is charge per unit time. It’s how many electrons are flowing through a cross-section of the wire every second. If you’ve got 100 amps coming out, you can send 50 this way and 50 that way, or π this way and 100-π that way, etc.

Energy … Alright, even I know this one. Energy is what we fight wars to liberate. In our case, if you have a bunch of electrons going through a wire, then the energy scales like the number of electrons times the speed of the electrons squared.

Power is energy per unit time: how much energy does your appliance consume every second? Duh, that’s why a 60-watt light bulb is environmentally-friendlier than a 100-watt bulb.

Voltage is the first one I had trouble with back in freshman physics. It’s energy per charge, or power per current. Intuitively, voltage measures how much energy gets imparted to each individual electron. Thus, if you have a 110-volt hairdryer and you plug it into a 220-volt outlet, then the trouble is that the electrons have twice as much energy as the hairdryer expects. This is what transformers are for: to ramp voltages up and down.

Incidentally, the ability to transform voltages is related to why what comes out of your socket is alternating current (AC) instead of direct current (DC). AC, of course, is the kind where the electrons switch direction 60 times or so per second, while DC is the kind where they always flow in the same direction. For computers and other electronics, you clearly want DC, since logic gates are unidirectional. And indeed, the earliest power plants did transmit DC. In the 1890’s, Thomas Edison fought vigorously against the adoption of AC, going so far as to electrocute dogs, horses, and even an elephant using AC in order to “prove” that it was unsafe. (These demonstrations proved about as much as D-Wave’s quantum computer — since needless to say, one can also electrocute elephants using DC. To draw any conclusions a comparative study is needed.)

So why did AC win? Because it turns out that it’s not practical to transmit DC over distances of more than about a mile. The reason is this: the longer the wire, the more power gets lost along the way. On the other hand, the higher the voltage, the less power gets lost along the way. This means that if you want to send power over a long wire and have a reasonable amount of it reach its destination, then you want to transmit at high voltages. But high voltages are no good for household appliances, for safety and other reasons. So once the power gets close to its destination, you want to convert back down to lower voltages.
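To see why high voltage helps, here is a back-of-the-envelope sketch (the numbers are made up for illustration): delivering a fixed power P at voltage V means carrying current I = P/V, and the line dissipates I²R. So ten times the voltage means a hundred times less loss.

```python
# Illustrative sketch (made-up numbers): resistive loss in a transmission line
# delivering fixed power P at voltage V. Current I = P / V; loss = I^2 * R.

def line_loss_watts(power_w: float, volts: float, line_resistance_ohms: float) -> float:
    current = power_w / volts                    # I = P / V
    return current ** 2 * line_resistance_ohms   # P_loss = I^2 * R

P = 1_000_000.0  # 1 MW delivered (hypothetical)
R = 5.0          # line resistance in ohms (hypothetical)

print(line_loss_watts(P, 10_000.0, R))   # at 10 kV:  50000.0 W lost
print(line_loss_watts(P, 100_000.0, R))  # at 100 kV: 500.0 W lost
```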

Now, the simplest way to convert high voltages to low ones was discovered by Michael Faraday, and relies on the principle of electromagnetic induction. This is the principle according to which a changing electric current creates a changing magnetic field, which can in turn be used to drive another current. (Damn, I knew we wouldn’t get far without bumping into electricity’s evil and confusing magnetwin.) And that gives us a simple way to convert one voltage to another — analogous to using a small, quickly-rotating gear to drive a big, slowly-rotating gear.

So to make a long story short: while in principle it’s possible to convert voltages with DC, it’s more practical to do it with AC. And if you don’t convert voltages, then you can only transmit power for about a mile — meaning that you’d have to build millions of tiny power plants, unless you only cared about urban centers like New York.

Resistance is the trickiest of the six concepts. Basically, resistance is the thing you need to cut in half, if you want to send twice as much current through a wire at the same voltage. If you have two appliances hooked up in series, the total resistance is the sum of the individual resistances: R_tot = R₁ + R₂. On the other hand, if you have two appliances hooked up in parallel, the reciprocal of the total resistance is the sum of the reciprocals of the individual resistances: 1/R_tot = 1/R₁ + 1/R₂. If you’re like me, you’ll immediately ask: why should resistance obey these identities? Or to put it differently, why should the thing that obeys one or both of these identities be resistance, defined as voltage divided by current?
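The two combination rules are easy to play with in code (a minimal sketch of the series and parallel identities stated above):

```python
# Series and parallel combination of resistances (values in ohms).

def series(*rs: float) -> float:
    """R_tot = R1 + R2 + ... for resistors in series."""
    return sum(rs)

def parallel(*rs: float) -> float:
    """1/R_tot = 1/R1 + 1/R2 + ... for resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(100.0, 200.0))    # 300.0
print(parallel(100.0, 100.0))  # 50.0: equal resistors in parallel halve the resistance
```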

Well, as it turns out, the identities don’t always hold. That they do in most cases of interest is just an empirical fact, called Ohm’s Law. I suspect that much confusion could be eliminated in freshman physics classes, were it made clear that there’s nothing obvious about this “Law”: a new physical assumption is being introduced. (Challenge for commenters: can you give me a handwaving argument for why Ohm’s Law should hold? The rule is that your argument has to be grounded in terms of what the actual electrons in a wire are doing.)

Here are some useful formulas that follow from the above discussion:

Power = Voltage²/Resistance = Current² × Resistance = Voltage × Current
Voltage = Power/Current = Current × Resistance = √(Power × Resistance)
Resistance = Voltage/Current = Power/Current² = Voltage²/Power
Current = Power/Voltage = Voltage/Resistance = √(Power/Resistance)
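All twelve entries in that table follow from just two relations, Ohm’s Law (V = IR) and the definition of power (P = VI). A quick sketch checking this, with hypothetical household-ish numbers:

```python
import math

# Pick any two quantities, say voltage and resistance (made-up values):
V, R = 120.0, 60.0
I = V / R   # Ohm's Law: Current = Voltage/Resistance
P = V * I   # Power = Voltage x Current

# Every other entry in the table should now agree automatically:
assert math.isclose(P, V**2 / R)          # Power = Voltage^2/Resistance
assert math.isclose(P, I**2 * R)          # Power = Current^2 x Resistance
assert math.isclose(V, P / I)             # Voltage = Power/Current
assert math.isclose(V, math.sqrt(P * R))  # Voltage = sqrt(Power x Resistance)
assert math.isclose(R, P / I**2)          # Resistance = Power/Current^2
assert math.isclose(R, V**2 / P)          # Resistance = Voltage^2/Power
assert math.isclose(I, P / V)             # Current = Power/Voltage
assert math.isclose(I, math.sqrt(P / R))  # Current = sqrt(Power/Resistance)
```

Each assertion is just algebraic substitution of V = IR and P = VI, so the table contains no independent physics beyond those two equations.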

Understand? Really? Take the test!

Update (4/16): Chad Orzel answers my question about Ohm’s Law.

New comment policy

April 10th, 2007

If you reject an overwhelming consensus on some issue in the hard sciences — whether it’s evolution, general relativity, climate change, or anything else — this blog is an excellent place to share your concerns with the world. Indeed, you’re even welcome to derail discussion of completely unrelated topics by posting lengthy rants against the academic orthodoxy — the longer and angrier the better! However, if you wish to do this, I respectfully ask that you obey the following procedure:

  1. Publish a paper in a peer-reviewed journal setting out the reasons for your radical departure from accepted science.
  2. Reference the paper in your rant.

If you attempt to skip to the “rant” part without going through this procedure, your comments may be deleted without warning. Repeat offenders will be permanently banned from the blog. Life is short. I make no apologies.

Scott Aaronson
Rebel for the Scientific Consensus

Update (4/11): I am, of course, under no illusions whatsoever that my requirement of having published a relevant peer-reviewed paper will eliminate all tinfoil-hat rants from the comments section. My hope, rather, is that it will make those rants that I do receive more interesting and original.

The wisdom of Gian-Carlo Rota (1932-1999)

April 9th, 2007

From www.rota.org:

Graph theory, like lattice theory, is the whipping boy of mathematicians in need of concealing their feelings of insecurity.

Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say ‘We’ve discovered a great new law of nature. Give us a billion dollars.’ And if it doesn’t change the world, then they say, ‘There’s an even deeper thing. Give us another billion dollars.’

When an undergraduate asks me whether he or she should major in mathematics rather than in another field that I will simply call X, my answer is the following: “If you major in mathematics, you can switch to X anytime you want to, but not the other way around.”

Flakiness is nowadays creeping into the sciences like a virus through a computer system, and it may be the greatest present threat to our civilization. Mathematics can save the world from the invasion of the flakes by unmasking them, and by contributing some hard thinking. You and I know that mathematics, by definition, is not and never will be flaky.

Note: Quotation here does not necessarily imply endorsement by Shtetl-Optimized LLC or any of its subsidiary enterprises.

D-Wave Easter Spectacular

April 7th, 2007

Look, I promise this will be the last D-Wave post in a while. But there have been two developments that, as Planet Earth’s primary D-Wave skepticism clearinghouse, I feel a duty to report.

First, Jason Pontin’s article in the Sunday New York Times has appeared. It’s not perfect, but to get in a description of quantum computing that was even somewhat accurate required a long, word-by-word and phrase-by-phrase battle with the editors of the business section.

Second, Umesh Vazirani sent me a document summarizing the skeptical case against D-Wave, which anyone coming to this blog from the Tech Review or New York Times might find helpful. (Hey, as long as you’re here, stick around for a bit!) I’ve posted Umesh’s criticisms below.

Finally, Happy Easter from all of us here in the shtetl!

Reasons To Be Skeptical About D-Wave’s Claims

by Guest Blogger Umesh Vazirani

1. An Unconvincing Demo: D-Wave’s demo consisted of a computer in a box that could solve simple problems. We have no way of knowing whether the computer in the box was an ordinary classical computer or a quantum computer. For the problem the computer solves — finding ground states for 16-bit Ising problems — a classical computer would work just as quickly. This demo is the only public evidence D-Wave has presented in support of its claims.

2. A Physics Breakthrough?: Achieving 16 coherent superconducting quantum bits would be quite a breakthrough. Physicists working on superconducting qubits have not been able to achieve more than two coherent quantum bits in the lab. In the absence of evidence from D-Wave that their 16 qubits are coherent, scientists are understandably skeptical. If D-Wave’s qubits are not coherent, as many scientists suspect, their computer would be classical, not quantum. This would still be consistent with the results of the demo, since the decohering qubits would act like classical random bits, and the adiabatic computer would act like a classical computer implementing simulated annealing, which would be quite fast for a small 16-bit Ising problem. It is possible to test the quantum states of D-Wave’s computer for coherence, but Geordie Rose’s statements suggest that no such tests have been made.
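To make the “classical simulated annealing handles 16 bits instantly” point concrete, here’s a minimal sketch (my own illustration, not anyone’s actual benchmark) that anneals a random 16-spin Ising instance and checks it against brute force — which is itself trivial, since there are only 2¹⁶ = 65,536 states:

```python
import math
import random

random.seed(1)
N = 16
# A random 16-spin Ising instance: couplings J[i][j] (i < j) and fields h[i]
J = [[random.choice([-1, 1]) if i < j else 0 for j in range(N)] for i in range(N)]
h = [random.choice([-1, 1]) for _ in range(N)]

def energy(s):
    # Ising energy: E = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
    e = sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(i + 1, N))
    return e + sum(h[i] * s[i] for i in range(N))

def anneal(steps=20000, T0=3.0):
    s = [random.choice([-1, 1]) for _ in range(N)]
    e = energy(s)
    best, best_e = s[:], e
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9  # linear cooling schedule
        i = random.randrange(N)
        s[i] = -s[i]                     # propose flipping one spin
        e_new = energy(s)
        if e_new <= e or random.random() < math.exp((e - e_new) / T):
            e = e_new                    # accept the move (Metropolis rule)
            if e < best_e:
                best, best_e = s[:], e
        else:
            s[i] = -s[i]                 # reject: flip the spin back
    return best, best_e

state, e = anneal()
# Exhaustive search over all 2**16 spin configurations, also near-instant:
brute = min(energy([1 if (m >> i) & 1 else -1 for i in range(N)])
            for m in range(2 ** N))
print(e, brute)
```

At this problem size even the brute-force loop finishes in seconds on a laptop, which is the point: nothing about a 16-bit demo distinguishes a quantum annealer from ordinary classical heuristics.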

3. Claims of Big Algorithmic Breakthrough Without Evidence: 16-bit quantum computers are useless from a practical standpoint because they can only solve very small problems that could just as easily be solved using a classical computer. Thus, D-Wave’s demo, even if it really was a quantum computer, will only be practically useful if the technology will scale to the larger problems that cannot be solved with a classical computer. Unless D-Wave has made a major algorithmic breakthrough as well as a major practical one, however, D-Wave’s computer, even if implemented with thousands of qubits, will not provide a speedup over classical computers. D-Wave does not implement a general purpose quantum computer, only one that can implement adiabatic optimization. They wish to use it to solve the Ising model, which is thought to be beyond the reach of classical computers, but there is no known efficient algorithm for solving the Ising model using this adiabatic approach. It is possible to achieve a quadratic speedup for unstructured search problems using adiabatic optimization, but that result requires an ability to tune the rate of the adiabatic process — something which appears to researchers to be extremely hard if not impossible for the Ising problem. Geordie Rose’s public statements suggest that he doesn’t understand this issue, which makes computer scientists skeptical that any breakthrough has been made.

To summarize: For D-Wave to achieve a practically useful quantum computer using their technology, they would have to have made a breakthrough in physics, as well as a breakthrough in the design of their algorithm. Scientists are skeptical both because D-Wave has failed to provide any supporting evidence, and also because their public statements suggest a lack of understanding of the issues involved.

You might ask why researchers are putting so much energy into debunking the D-Wave hype. One reason is that QC researchers feel a responsibility to the public to not overhype quantum computers. Quantum computing is an exciting field that has caught the imagination of the public. This is a good thing. But if the quantum computing effort starts to mingle fact with fiction, then the entire effort loses its credibility.

Another reason is that D-Wave’s unsupported claims are undermining the efforts of the researchers who are working very hard on these problems. It’s as if there was a new biotech company claiming to be at the brink of a revolutionary cure for cancer. If it is true, it is great, but if it’s not, then it undermines the efforts of the legitimate cancer researchers.

D-Wave: Still propagating

April 5th, 2007

Last night Jason Pontin, the Editor-in-Chief of MIT’s Technology Review, published a hard-hitting article about D-Wave, the Vancouver startup that claims to have built “the world’s first commercial quantum computer.” A condensed (and, alas, considerably mangled and dumbed-down) version of his article will appear in Sunday’s New York Times.

Jason wrote to me a couple weeks ago and said that, while he knew that most journalists had gotten the D-Wave story wrong, he was determined to get it right, and wanted my help in understanding the computer-science issues. He didn’t have to ask me twice.

Since I come across as pretty harsh on D-Wave in the article, I think it’s worth recalling how we got to this point. When the D-Wave story broke, my first reaction was to give them some benefit of the doubt (as you can see from the mild tone of my “Orion Anti-Hype FAQ”). So what changed? Well, four things:

  1. I asked Geordie Rose (the founder of D-Wave and sometime commenter on this blog) twice for actual information about the speedup D-Wave claimed to be seeing over classical algorithms, and he never provided any.
  2. Instead of answering my and others’ technical questions, D-Wave decided to attack academic science itself. Here’s what Herb Martin, the CEO of D-Wave, told an interviewer from ITworld: “Businesses aren’t too fascinated about the details of quantum mechanics, but academics have their own axes to grind. I can assure you that our VCs look at us a lot closer than the government looks at the academics who win research grants.” The track record of companies that engage in this sort of behavior is not good.
  3. I read article after article claiming that quantum computers can solve NP-complete problems in polynomial time. I reflected on the fact that a single one of these articles will probably reach more people than every quantum complexity paper ever written.
  4. It became clear, both from published interviews and from talking to Jason, that D-Wave was doing essentially nothing to correct journalists’ misconceptions about what sorts of problems are believed to be efficiently solvable by quantum computers.

Update (4/6): Geordie Rose has responded to me. A few responses to his response:

  • I apologize for getting the CEO’s name wrong. I’d originally written Herb Martin, but then noticed that the ITworld article referred to him as “Ed Martin” and therefore changed it. This seems like another case where D-Wave has to work harder to correct journalists’ misconceptions!
  • In a discussion with Geordie on my blog, following my Orion Anti-Hype FAQ, I asked:

    You say you’re seeing a speedup in your experiments — but (1) how big of a speedup, and (2) do you mean compared to the best-known classical algorithms (like simulated annealing), or compared to brute-force search?

    Then, in another discussion with Geordie following my Speaking truth to parallelism post, I asked him again:

    I’m certainly glad that you’re not claiming an exponential speedup. But at least based on the evidence I know about, whether you’re seeing a quadratic speedup — or indeed any asymptotic speedup — is very much open to question. Hence the question I asked you earlier: have you compared the empirical scaling for your adiabatic algorithm to the empirical scaling for the best classical algorithms like simulated annealing? If so, what were the results?

  • Geordie now says that “the only way scaling can be extracted is empirically, and we can’t build big enough systems (yet) to answer scaling questions.” Thanks; that’s actually extremely helpful to me. I must have gotten a wrong impression from some of D-Wave’s previous statements. For example, here’s from D-Wave’s recently-released “Introduction to Orion” document (which now seems to be available for “premium subscribers” only):

    Q: Are D-Wave processors quantum computers?
    A: Yes. We have determined that quantum effects are being harnessed to accelerate computation in our processors.

    And here’s from a comment on Dave Bacon’s blog (blog comments seem to be D-Wave’s preferred venue for announcing scientific results):

    While the jury is still not in, our studies of these systems seem to indicate that AQCs, in the presence of thermal and spin bath environments, can still provide O(sqrt(N)) scaling even though the central QC system is definitely NOT a “system that’s globally phase coherent over the entire calculation”.

  • The core of Geordie’s response is the following paragraph:

    This is worth emphasizing, because I thought it was obvious, but it turns out alot of people don’t get this. Most of the poorly thought out comments related to what we’re trying to do have come from theoretical computer scientists, who assume that the things they hold dear are likewise treasured by everyone else. Because they worship efficiency, they have assumed that’s the objective of our projects, when I have repeatedly said it’s not.

    When I read this to Umesh Vazirani over the phone, he sardonically replied, “it will be interesting to find out what’s left of this field after you’ve removed the notion of efficiency…”