Update (6/20/2009): If you agree about Mahmoud deserving his vacation, please read and sign this petition (courtesy of Elham Kashefi). I have no doubt that if enough Shtetl-Optimized readers sign, it will force the ayatollahs to reconsider.
I haven’t heard from my pal Mahmoud in years, but some mutual friends told me that he’s been pretty stressed about his job lately. They said you’re supposed to turn your blog’s background green if you agree with some concerned folks who’ve been marching around Tehran encouraging him to take a much-needed breather.
This was a tough call for me. On the one hand, the voters clearly want Mahmoud at his desk by spectacular margins:
On the other hand, it seemed hypocritical for me to deny a close friend his vacation, given how much procrastinating I’ve been doing myself lately. For example, I’ve barely been blogging—and when I have, it’s often just bargain-basement fare you could get anywhere else on the Internet! Ultimately, then, I decided I had to go green out of a sort of Kantian blogegorical imperative—regardless of all the complex ways my editing a WordPress stylesheet might reverberate through history.
So, what was I doing these past few weeks that could possibly take precedence over writing ill-considered blog entries that I’d probably regret for the rest of my life?
1. On the gracious invitation of Renato Renner, I visited one of Al Einstein’s old stomping-grounds: ETH Zürich. There I gave a physics colloquium called How Much Information Is In A Quantum State?, as well as a talk on my paper Quantum Copy-Protection and Quantum Money, which has been more than three years in the procrastinating. Though I was only in Switzerland for three days, I found enough time to go hiking in the Swiss Alps, if by “Swiss Alps” you mean a 200-foot hill outside the theoretical physics building. I’m quite proud of having made it through this entire trip—my first to Switzerland—without once yodeling or erupting into cries of “Riiiiiiicola!” On the other hand, what with the beautiful architecture, excellent public transportation, and wonderful hosts, it was a struggle to maintain my neutrality.
2. On the plane to and from Switzerland, I had the pleasure of perusing Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak, which has just been published after floating around the interweb for many years. If you’re a hardcore complexity lover, I can recommend buying a copy in the strongest terms. The book lives up to its subtitle, concentrating almost entirely on developments within the last twenty years. Classical complexity theorists should pay particular attention to the excellent quantum computing chapter, neither of whose authors has the slightest background in the subject. You see, people, getting quantum right isn’t that hard, is it? The book’s only flaw, an abundance of typos, is one that can and should be easily fixed in the next edition.
3. I then visited the National Institute of Standards and Technology—proud keepers of the meter and the kilogram—at their headquarters in Gaithersburg, MD. There I gave my talk on Quantum Complexity and Fundamental Physics, a version of the shtick I did at the QIS workshop in Virginia. Afterwards, I got to tour some of the most badass experimental facilities I’ve seen in a while. (Setting standards and making precision measurements: is there anything else that sounds so boring but turns out to so not be?) A highlight was the Center for Neutron Research, which houses what’s apparently the largest research reactor still operating in the US. This thing has been operating since 1967, and it shoots large numbers of slow-moving neutrons in all directions so that archaeologists, chemists, physicists, etc. can feed off the trough and do their experiments. The basic physics that’s been done there recently has included setting bounds on possible nonlinearities in the Schrödinger equation (even though any nonlinearity, no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time), as well as observing the photons that the Standard Model apparently predicts are emitted 2% of the time when a neutron decays. I also got to see one of the world’s least jittery floors: using dynamical feedback, they apparently managed to make this floor ~10⁷ times less jittery than a normal floor, good enough that they can run a double-slit experiment with slow neutrons on top of it and see the interference pattern. (Before you ask: yes, I wanted to jump on the floor, but I didn’t. Apparently I would’ve messed it up for a day.)
I have to add: the few times I’ve toured a nuclear facility, I felt profoundly depressed by the “retro” feel of everything around me: analog dials, safety signs from the 60s… Why are no new reactors being built in the US, even while their value as stabilization wedges becomes increasingly hard to ignore? Why are we unwilling to reprocess spent fuel rods like France does? Why do people pin their hopes on the remote prospect of controlled fusion, ignoring the controlled fission we’ve had for half a century? Why, like some horror-movie character unwilling to confront an evil from the past, have we decided that a major technology possibly crucial to the planet’s survival must remain a museum piece, part of civilization’s past and not its future? Of course, these are rhetorical questions. While you can be exposed to more radiation flying cross-country than working at a nuclear reactor for months, while preventing a Chernobyl is as easy as using shielding and leaving on the emergency cooling system, human nature is often a more powerful force than physics.
4. Next I went to STOC’2009 in Bethesda, MD. Let me say something about a few talks that are impossible not to say something about. First, in what might or might not turn out to be the biggest cryptographic breakthrough in decades, Craig Gentry has proposed a fully homomorphic encryption scheme based on ideal lattices: that is, a scheme that lets you perform arbitrary computations on encrypted data without decrypting it. Currently, Gentry’s scheme is not known to be breakable even by quantum computers—despite a 2002 result of van Dam, Hallgren, and Ip, which said that if a fully homomorphic encryption scheme existed, then it could be broken by a quantum computer. (The catch? Van Dam et al.’s result applied to deterministic encryption schemes; Gentry’s is probabilistic.)
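(If you’ve never seen homomorphic encryption in action, here’s a toy Python illustration of the interface, using the classic Paillier cryptosystem with laughably small parameters. To be clear, this is not Gentry’s scheme, and it’s only additively homomorphic; “fully” homomorphic means you can evaluate arbitrary circuits on ciphertexts, not just sums.)

```python
from math import gcd
import random

# Toy Paillier: additively homomorphic encryption, with make-believe key sizes.
p, q = 293, 433                                  # toy primes; real ones are ~1024 bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n               # equals m*lam mod n
    return (L * pow(lam, -1, n)) % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42             # multiplying ciphertexts adds plaintexts
```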
Second, Chris Peikert (co-winner of the Best Paper Award) announced a public-key cryptosystem based on the classical worst-case hardness of the Shortest Vector Problem. Previously, Regev had given such a cryptosystem based on the assumption that there’s no efficient quantum algorithm for SVP (see also here for a survey). The latter was a striking result: even though Regev’s cryptosystem is purely classical, his reduction from SVP to breaking the cryptosystem was a quantum reduction. What Peikert has now done is to “dequantize” Regev’s security argument by thinking very hard about it. Of course, one interpretation of Peikert’s result is that classical crypto people no longer have to learn quantum mechanics—but a better interpretation is that they do have to learn QM, if only to get rid of it! I eagerly await Oded Goldreich’s first paper on quantum computing (using it purely as an intellectual tool, of course).
Third, Robin Moser (co-winner of the Best Paper Award and winner of the Best Student Paper Award) gave a mindblowing algorithmic version of the Lovász Local Lemma. Or to put it differently, Moser gave a polynomial-time algorithm that finds a satisfying assignment for a k-SAT formula, assuming that each clause intersects at most 2^{k-2} other clauses. (It follows from the Local Lemma that such an assignment exists.) Moser’s algorithm is absurdly simple: basically, you repeatedly pick an unsatisfied clause, and randomly set its variables so that it’s satisfied. Then, if doing that has made any of the neighboring clauses unsatisfied, you randomly set their variables so that they’re satisfied, and so on, recursing until all the damage you’ve caused has also been fixed. The proof that this algorithm actually halts in polynomial time uses a compression argument that, while simple, seemed so completely out of left field that when it was finished, the audience of theorists sort of let out a collective gasp, as if a giant black “QED” box were hovering in the air.
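To see just how simple, here’s a toy Python rendering of the algorithm as described above (the clause encoding and helper names are my own inventions, with no attempt at efficiency):

```python
import random

# A clause is a list of literals; a literal is a pair (variable_index, negated).

def satisfied(clause, assignment):
    return any(assignment[v] != negated for (v, negated) in clause)

def neighbors(clause, formula):
    vars_c = {v for (v, _) in clause}
    return [d for d in formula
            if d is not clause and vars_c & {v for (v, _) in d}]

def fix(clause, formula, assignment):
    # Resample the clause's variables uniformly at random...
    for (v, _) in clause:
        assignment[v] = random.random() < 0.5
    # ...then repair the clause itself and any neighbor the resampling broke.
    # (Moser's compression argument bounds the expected recursion depth.)
    while True:
        broken = next((d for d in [clause] + neighbors(clause, formula)
                       if not satisfied(d, assignment)), None)
        if broken is None:
            return
        fix(broken, formula, assignment)

def moser_solve(formula, num_vars):
    assignment = [random.random() < 0.5 for _ in range(num_vars)]
    for clause in formula:
        if not satisfied(clause, assignment):
            fix(clause, formula, assignment)
    return assignment

# Example: (x0 or x1) and (not x0 or x2), encoded as (index, negated) pairs.
print(moser_solve([[(0, False), (1, False)], [(0, True), (2, False)]], 3))
```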
Fourth, Babai, Beals, and Seress showed that if G is a matrix group over a finite field of odd order, then the membership problem for G can be solved in polynomial time, assuming an oracle for the discrete logarithm problem. This represents the culmination of about 25 years of work in computational group theory. I was all pumped to announce an important consequence of this result not noted in the abstract—that the problem is therefore solvable in quantum polynomial time, because of Shor’s discrete log algorithm—but Laci, alas, scooped me on this highly nontrivial corollary in his talk.
5. Finally, I took the train up to Princeton, for a workshop on “Cryptography and Complexity: Status of Impagliazzo’s Worlds”. (For the insufficiently nerdy: the worlds are Algorithmica, where P=NP; Heuristica, where P≠NP but the hard instances of NP-complete problems are hard to find; Pessiland, where the hard instances are easy to find but none of them can be used for cryptographic one-way functions; Minicrypt, where one-way functions do exist, enabling private-key cryptography, but not the trapdoor one-way functions needed for public-key cryptography; and Cryptomania, where trapdoor one-way functions exist, and cryptography can do pretty much anything you could ask.) I gave a talk on Impagliazzo’s worlds in arithmetic complexity, based on ongoing joint work with Andy Drucker (where “ongoing” means we’re pretty sure more of our results are correct than would be expected by random guessing).
Tell you what: since it’s been a long time, feel free to ask whatever you feel like in the comments section, whether related to my journeys or not. I’ll try to answer at least a constant fraction of questions.
One thing I’ve always liked about theoretical computer science is the number of proofs that are patently ridiculous—whose concluding steps seem to call more for a gong or cymbal than a “QED” box. This is a property that CS theory inherited from logic, and that it shares with several other mathematical fields (though by no means all of them). The titans of the comedy-proof genre are of course Gödel and Turing’s undecidability results, the latter of which arguably found its best expression as a poem. But there are other examples all over the place, and many aren’t as well known as they should be.
I was reminded of this side of theory when my student Andy Drucker and I came up with yet another absurd proof: basically, a theorem that’s true for one reason if a certain algebraic problem is hard, and true for a completely different reason if the problem is easy. We’re still writing it up, so at Andy’s request I won’t spill the joke yet. For now, please content yourself with the following tried-and-true komedy klassics.
Theorem 1 (folklore): E (that is, the class of problems solvable in 2^{O(n)} time) does not equal PSPACE, the class of problems solvable in polynomial space. (Though we have no idea how to prove which one is bigger than the other one—or that they’re incomparable, as seems most likely.)
Proof: Suppose E=PSPACE. Then E=EXP by padding, where EXP is the class of problems solvable in 2^{poly(n)} time. But that would contradict the Time Hierarchy Theorem.
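Spelled out a little, with my own notation for the padded language:

```latex
\begin{align*}
&\text{Take any } L \in \mathsf{EXP}\text{, decidable in time } 2^{n^c}\text{, and let }
  L' := \{\, x\,0\,1^{|x|^c} : x \in L \,\}. \\
&\text{Then } L' \in \mathsf{E}
  \quad\text{(inputs now have length } m > n^c\text{, and } 2^{n^c} \le 2^{O(m)}\text{)} \\
&\Longrightarrow\ L' \in \mathsf{PSPACE}
  \quad\text{(by the assumption } \mathsf{E} = \mathsf{PSPACE}\text{)} \\
&\Longrightarrow\ L \in \mathsf{PSPACE} = \mathsf{E}
  \quad\text{(write the padding yourself, in polynomial space)} \\
&\Longrightarrow\ \mathsf{EXP} \subseteq \mathsf{E}\text{, i.e., } \mathsf{E} = \mathsf{EXP}\text{,
  contradicting the Time Hierarchy Theorem.}
\end{align*}
```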
Theorem 2 (classic, attributed to Levin): One can give a fixed, explicit algorithm, which finds satisfying assignments to Boolean formulas in polynomial time whenever they exist, assuming P=NP.
Proof: Let M₁, M₂, … be a list of Turing machines that take a SAT instance φ as input. The algorithm is as follows: dovetail (that is, run a step of M₁, then another step of M₁ and a step of M₂, then another step of M₁ and M₂ and a step of M₃, etc.), halting when one of the machines has output a valid satisfying assignment for φ. If P=NP, then there’s some Turing machine Mᵢ in the list that solves SAT, and that causes the whole algorithm to work in polynomial time assuming φ was satisfiable. (The fact that you’re also simulating quadrillions of other machines merely slows things down by a “polynomial factor,” independent of the input size n.)
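For concreteness, here’s a minimal Python sketch of the dovetailer, with two hypothetical helpers abstracted away: enumerate_machines(i, phi) returns machine Mᵢ started on input phi, as a generator that yields None on each step until it halts with a candidate assignment, and check(phi, a) verifies a candidate in polynomial time (the easy direction of NP):

```python
from itertools import count

def levin_search(phi, enumerate_machines, check):
    running = []
    for stage in count(1):
        running.append(enumerate_machines(stage, phi))   # start machine M_stage
        for M in running:                                # one step of each
            candidate = next(M, None)
            if candidate is not None and check(phi, candidate):
                return candidate     # a verified satisfying assignment

# If M_i solves SAT in time p(n), the dovetailer returns after roughly
# (i + p(n))^2 simulated steps: polynomial in n, since i doesn't depend on n.
# (If phi is unsatisfiable, it runs forever, exactly as in the theorem.)
```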
Theorem 3 (Gutfreund, Shaltiel, Ta-Shma): Let A be an algorithm that’s supposed to solve SAT in polynomial time (that is, find a satisfying assignment whenever one exists), but that actually fails on some SAT instance of size n. Then if someone gives you the source code of A, you can, in time polynomial in n, find a specific SAT instance that actually witnesses A’s failure.
Proof: By the Cook-Levin Theorem, you can create a SAT instance φ(x) which encodes the statement, “x is a SAT instance of size n on which A fails (that is, either there’s a satisfying assignment A fails to find, or A outputs an assignment for x that isn’t satisfying).” Then feed φ as input to A. There are two cases: on the one hand, if A succeeds, then it’s helpfully provided you with a SAT instance on which it itself fails. On the other hand, if A fails on φ, then φ itself is the SAT instance you were looking for.
Theorem 4 (Barak et al.): There exist programs that can’t be obfuscated—that is, for which having the actual code of the program lets you do something that you couldn’t do if you could only run the program as a subroutine.
Proof: Let P be a program that takes a string x as input, and does the following. First, if x=a, where a is some n-bit “secret string” hardwired into P’s source code, then P(a) outputs another n-bit secret string b. Second, if x is the source code of a program Q such that Q(a) outputs b (after some fixed number of steps, say t=O(n)), then P outputs a third secret string c. Third, if x satisfies neither constraint, then P(x) outputs “FAIL.” Now, given the source code of P, it’s easy to find c: just run P with its own code as input. On the other hand, if you can only run P as a subroutine, then (unless you get extremely lucky) it will take exponential time to find any x for which P(x) outputs anything besides “FAIL.” Hence it’s infeasible to find c by running P, and yet there’s no way to obfuscate P’s source code so as to hide c.
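Here’s a toy Python rendering of P; the secrets are made-up stand-ins, and Python’s exec stands in, with no actual step bound, for “run Q for t steps”:

```python
SECRET_A = "1011001110001101"   # the hardwired secret string a
SECRET_B = "0110100101011010"   # the secret string b
SECRET_C = "1110001011010011"   # the third secret string c

def P(x):
    if x == SECRET_A:                 # Case 1: x = a, so output b.
        return SECRET_B
    try:                              # Case 2: x is code for a Q with Q(a) = b.
        ns = {}
        exec(x, ns)                   # (toy stand-in for a t-step simulation)
        if ns["Q"](SECRET_A) == SECRET_B:
            return SECRET_C
    except Exception:
        pass
    return "FAIL"                     # Case 3: everything else.

# Holding P's source, you can read off a and b and write such a Q directly,
# which is morally the same as running P on its own code:
q_source = f'def Q(x): return "{SECRET_B}" if x == "{SECRET_A}" else None'
assert P(q_source) == SECRET_C
# A black-box user, by contrast, sees "FAIL" on all but a ~2^(-n) fraction of queries.
```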
Theorem 5 (attributed by Razborov and Rudich to Wigderson): No natural proof can prove better than a half-exponential lower bound on the circuit complexity of the discrete logarithm problem. (Here half-exponential refers to a function f—which exists, but can’t be described analytically—such that f(f(n)) grows exponentially with n.)
Proof Sketch: Suppose we can prove an f(n) lower bound on the circuit complexity of discrete log, using a natural proof. Then by the definition of natural proof, there exists a 2^{O(n)}-time algorithm to distinguish a truly random function g:{0,1}^n→{0,1} from a function with circuit size f(n). This means that any efficiently-computable pseudorandom function family (PRFF), with seed length m=f(n), can be distinguished from a random function in exp(f^{-1}(m)) time. By standard equivalence theorems in cryptography, this means that any purported one-way function—so for example, the modular exponentiation function—can be inverted in exp(f^{-1}(n)) time. In other words, to prove a natural f(n) lower bound for the discrete logarithm problem, you must also discover an exp(f^{-1}(n))-time algorithm for the discrete logarithm problem. As you show the discrete log problem to be harder, you simultaneously show it to be easier! And when f is greater than half-exponential, the lower bound becomes greater than the upper bound.
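To see exactly where the two bounds collide:

```latex
\[
  \underbrace{f(n)}_{\text{lower bound}} \;>\;
  \underbrace{\exp\!\left(f^{-1}(n)\right)}_{\text{upper bound}}
  \quad\Longleftrightarrow\quad
  f(f(m)) \;>\; \exp(m) \qquad \text{(substituting } n = f(m)\text{)},
\]
```

which is precisely the statement that f grows faster than half-exponentially.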
What is it that makes these theorems funny? (Alright, maybe not ha-ha funny, but at least snort-snort funny?) This is one case where a few readers might want me to break the rule about never explaining a joke. Theorems 1 and 2 are sort of like “you’re lost,” as an answer to the balloonist’s plea of “where am I?”: they’re logically impeccable yet tell you nothing whatsoever about what you wanted to know. Theorems 3 and 4 are like someone who’s so hungry he devours the menu at a restaurant, not even realizing that the menu itself was listed on the menu. They seem to involve a category mistake: a reference gluttonously repurposed as a referent only to become a reference again. (This, of course, is the joke behind Gödel’s theorem as well.) Theorem 5 is like a literary critic proving there’s no ‘reality’ separate from race, class, and gender biases, using arguments that are so well-grounded, even a white male plutocrat would have to concede their truth. The case is a self-immolating one: every argument that makes it stronger necessarily makes it weaker as well.
I tried to be cynical—really I did. But despite my best efforts, somehow I went home more excited about quantum than I’ve been in a long time.
The highlight of the workshop was of course the closed, invitation-only, late-night meeting in the basement of NSF headquarters, at which a group of us hidebound quantum computing reactionaries plotted to keep the field focused on irrelevant mathematical abstractions, and to ostracize the paradigm-smashing entrepreneurial innovators who threaten our status. I don’t think I’ve ever heard so much cackling in the space of a single evening, or so much clinking of bone goblets. Stuff like that is why I entered the field in the first place.
1. In his talk on quantum algorithms with polynomial speedups, Andris Ambainis called attention to a spectacular recent paper by Ben Reichardt, which characterizes the quantum query complexity of any partial or total Boolean function f (up to a logarithmic factor) as the optimal witness size of a span program for f, and also as the negative-weight quantum adversary lower bound for f. Assuming this result is correct, it seems possible that the remaining open problems in quantum query complexity will be pulverized, one after another, by solving the associated SDPs for the optimal span programs. (Incidentally, using Reichardt’s result, it must be possible to prove, e.g., an Ω(n^{1/3}/log n) lower bound for the quantum query complexity of the collision problem using the adversary method. This was a longstanding open problem. Can one say, explicitly, what the adversary matrices are in this case?) Alas, it also seems possible that span programs will turn out to be almost as hard to analyze as quantum algorithms were…
(1+√5)/2. Despite the obvious danger to the future funding of the entire field, by some clerical error I was released from my padded cell to speak about “Quantum Complexity and Fundamental Physics”. My “talk,” if it can be called that, was in my opinion neither rational nor integral to the workshop.
2. In her talk on blind quantum computation, Anne Broadbent (who’s also visiting MIT this week) described some beautiful new results that partly answer my Aaronson $25.00 Challenge from a year and a half ago. The Challenge, if you recall, was whether a quantum computer can always “prove its work” to a classical skeptic who doesn’t believe quantum mechanics—or more formally, whether every problem in BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP. Anne, Joe Fitzsimons, and Elham Kashefi haven’t quite answered this question, but in a recent paper they’ve come close: they’ve shown that a quantum computer can prove its work to someone who’s almost completely classical, her only “quantum” power being to prepare individual polarized photons and send them over to the quantum computer. Furthermore, their protocol has the amazing property that the quantum computer learns nothing whatsoever about which particular quantum computation it’s performing! (Aharonov, Ben-Or, and Eban independently gave a protocol with the same amazing properties, except theirs requires the “classical” verifier to have a constant-sized quantum computer.) Anne et al. also show that two quantum computers, who share entanglement but can’t communicate with each other, can prove their work to a completely classical verifier (while, again, remaining completely oblivious to what they computed).
On top of everything else, these results appear to be the first complexity-theoretic application of the measurement-based quantum computing paradigm, as well as the first “inherently quantum” non-relativizing results. (Admittedly, we don’t yet have an oracle relative to which the blind quantum computing protocols don’t work—but the protocols rely essentially on the gate structure of the quantum circuits, and I conjecture that such an oracle exists.)
Rereading my Challenge, I noticed that “the [one-member] Committee may also choose to award smaller prizes for partial results.” And thus, yesterday I had the pleasure of awarding Anne a crumpled $10 bill, with an additional $5 contributed by Seth Lloyd, for a grand total of $15.00 to be shared equally among Anne, Joe, and Elham. (Update: Since I wrote that, Anne has elected to trade in for three signed and doodled-upon $5 bills.) (Another Update: A $12, or $15-$O(1), prize shall be awarded to Dorit Aharonov, Michael Ben-Or, and Elad Eban the next time I see them.) This is, I believe, the first time a monetary reward offered on Shtetl-Optimized has actually been paid out.
3. In a talk that was so good, you almost forgot it involved chemistry, Alán Aspuru-Guzik discussed applications of quantum complexity theory to understanding photosynthesis and the design of efficient solar cells (!). To give you a sense of how mindblowing that is, it briefly made me wonder whether I should reread some of John Sidles’ cheerful ramblings about the coming merger of quantum systems engineering with biology in the 21st century (of which, I predict, this very sentence will inspire dozens more).
So what then is the connection between quantum complexity theory and photosynthesis? Well, a few of you might remember my post “Low-Hanging Fruit from Two Conjoined Trees” from years ago, which discussed the lovely result of Childs et al. that a quantum walk on two conjoined binary trees can reach a designated end vertex exponentially faster than a classical walk on the same graph. That result interested me, among other things, because it can be shown to lead to an oracle relative to which BQP ⊄ SZK, which at the time I didn’t know how to find otherwise. But especially given the bizarre nature of the graph needed to produce the oracle separation, I thought of this result as pretty much the prototype of an irrelevant complexity-theoretic curiosity (which, naturally, made me like it all the more).
You can probably guess where this is going.
Shown above is a light-harvesting molecule (image snagged from Alán’s slides), which apparently is efficient at concentrating light at its center for essentially the same reason the Childs et al. quantum walk reaches the target vertex exponentially faster than a classical walk: namely, because of destructive interference between the paths that point backward, toward the leaves. According to Alán, what plants do to harvest sunlight is not entirely unrelated either (it also involves quantum coherence), and fully understanding these mechanisms in quantum information terms might conceivably be useful in designing better solar cells. To be fair, a part of me always did suspect that quantum oracle separations would turn out to be the key to solving the world energy crisis. I’ll point you here or here if you want to know more.
Incidentally, Alán’s talk had another, also extremely interesting part, which was about coming up with precise numerical estimates of the number of qubits you’d need to simulate the wavefunctions of (say) benzene, caffeine, and cholesterol. (Many of us have long thought that simulating physics and chemistry will be the real application for scalable quantum computers if we ever build them, practical long before breaking RSA and ultimately more useful too. But it’s not something we often talk about—ostensibly for lack of meaty things to say, really because we don’t know chemistry.)
4. In her talk, Dorit Aharonov posed an open problem that I now have no choice but to inflict on others, if I don’t want to feel forced to think about it myself. So here’s her problem: how hard is it to find the ground state of a local Hamiltonian H=H₁+…+Hₘ (that is, a sum of k-qubit interactions, for some constant k), if we impose the constraint that the Hᵢ’s all commute with each other? Clearly it’s somewhere between NP and QMA. It might seem obvious that this problem should be in NP—to which I can only respond, prove it!
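To make the setup concrete, here’s a toy numpy example of mine, using ZZ couplings, which is a deliberately easy case: diagonal terms automatically commute and their ground states are classical basis states, and the whole question is whether general commuting terms can be any harder.

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

H1 = kron(Z, Z, I)   # 2-local term on qubits 1,2
H2 = kron(I, Z, Z)   # 2-local term on qubits 2,3
assert np.allclose(H1 @ H2, H2 @ H1)     # the terms commute

energies = np.linalg.eigvalsh(H1 + H2)
print("ground energy:", energies[0])     # -2.0, achieved by classical basis
                                         # states such as |010>
```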
There were also lots of great talks by the experimentalists. Having attended them, I can report with confidence that (1) they’re still trying to build a quantum computer but (2) decoherence is still a big problem. If you want to know even more detail than I’ve just provided—or you want to know about the theory talks I didn’t mention, or more about the ones I did mention—ask away in the comments. I can’t promise that no one will know the answer.
Dear Scott: Please keep your blog focused. Lately you’ve been drifting away from science and blogging about various loosely related things. One of the reasons I subscribed to your blog was that your articles were very computation-oriented; now you no longer keep to that theme. And as you might have heard, shifting topics on your blog will lose you readers.
So today I noticed something bizarre. A celebrated result in cryptography, due to Goldreich, Goldwasser, and Micali, states that any pseudorandom generator gives rise to a pseudorandom function family. See Luca’s notes or the original GGM paper for more.
Now I’d always assumed, without thinking about it, that the GGM result “obviously” carries over to the quantum case—so that any pseudorandom generator secure against quantum attack would give rise to a pseudorandom function family secure against quantum attack. But now that I’m writing a paper that actually relies on this “fact,” I realized I have no idea why it’s true.
Look: in the GGM argument, you start with a pseudorandom generator G:{0,1}^n→{0,1}^{2n}, and you apply it recursively to produce a family of functions fₛ:{0,1}^n→{0,1}^n, where s is the seed. You then consider a hypothetical polynomial-time algorithm A that distinguishes fₛ from a truly random function. You show how you could use A to create a polynomial-time algorithm that distinguishes the output of G from a truly random 2n-bit string—thereby contradicting the starting assumption that G was pseudorandom.
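In code, the GGM construction looks roughly like this (a toy sketch in which SHA-256 stands in for the length-doubling PRG, which of course it is not):

```python
import hashlib

def G(seed: bytes):
    # The two n-bit halves of G(seed), with SHA-256 as a stand-in PRG.
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def f(s: bytes, x: str) -> bytes:
    """Evaluate f_s(x): walk root-to-leaf down the binary tree, applying G
    at each node and keeping the half selected by the next bit of x."""
    current = s
    for bit in x:
        left, right = G(current)
        current = right if bit == "1" else left
    return current

# Evaluating f_s at a single input touches one path: |x| calls to G.
print(f(b"seed", "0110").hex())
```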
The trouble is, the argument relies crucially on the fact that A examines only a polynomial number of outputs of fₛ—intuitively so that you can run a hybrid argument, changing the outputs that A actually examines one by one into truly random strings. But if A is a quantum algorithm, then (duh) it can examine all 2^n outputs of fₛ in superposition! So any argument that depends on “watching A to see which inputs it queries” is toast.
But maybe we can recover the same conclusion in a fancier way? For at least seven years, I’ve been going around conjecturing the following:
Conjecture (*): Let Q be a quantum algorithm that makes T queries to a Boolean input X∈{0,1}^N. Then for all ε,δ>0, there exists a deterministic classical algorithm that makes poly(T,1/ε,log(1/δ)) queries to X, and that approximates Q’s acceptance probability to within ε on a 1-δ fraction of inputs.
My motivation for Conjecture (*) had nothing to do with cryptography. I was interested in whether we could rule out the possibility that P=BQP relative to a random oracle with probability 1. If Conjecture (*) holds—and if the classical algorithm is anything like I think it is—then we can’t rule it out, at least not without proving P≠PSPACE or an even stronger separation in the unrelativized world.
It now occurs to me that, if we knew how to prove Conjecture (*), then maybe we could push through a quantum GGM argument using similar ideas—that is, by identifying a tiny subset of inputs to fₛ that the quantum algorithm’s acceptance probability “really” depends on. Alas, I have good reason to believe that Conjecture (*) is hard.
So the task remains: prove a quantum GGM theorem. Or maybe I’m missing something completely obvious?
PS. The promised report on the QIS conference in Virginia is coming tomorrow. Take that, future self!
Update (5/3): An anonymous commenter points out that we can use a simpler hybrid argument of Razborov and Rudich—which doesn’t break down in the quantum case—to show that if there exists a PRG that’s secure against 2^{n^{Ω(1)}}-time quantum adversaries, then there also exists a PRF with polynomial seed length that’s secure against exponential-time quantum adversaries. That somehow hadn’t occurred to me, and it’s good enough for my purposes. (Masked cryptographer: emerge ye from the shadows, and claim thy rightful honour in my Acknowledgments!) On the other hand, the extremely interesting question still stands of whether one can prove a “strong,” GGM-style reduction: from PRGs secure against f(n)-time quantum adversaries to PRFs with linear seed length secure against f(n)^{Ω(1)}-time quantum adversaries, for any superpolynomial f.
By giving me a free blog post. From his address to the National Academy of Sciences (full text here):
A few months after a devastating defeat at Fredericksburg, before Gettysburg would be won and Richmond would fall, before the fate of the Union would be at all certain, President Lincoln signed into law an act creating the National Academy of Sciences. Lincoln refused to accept that our nation’s sole purpose was merely to survive. He created this academy, founded the land grant colleges, and began the work of the transcontinental railroad, believing that we must add “the fuel of interest to the fire of genius in the discovery … of new and useful things” …
At such a difficult moment, there are those who say we cannot afford to invest in science. That support for research is somehow a luxury at a moment defined by necessities. I fundamentally disagree…
I am here today to set this goal: we will devote more than three percent of our GDP to research and development … This represents the largest commitment to scientific research and innovation in American history…
The fact is, an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not. That’s why the private sector under-invests in basic science – and why the public sector must invest in this kind of research. Because while the risks may be large, so are the rewards for our economy and our society…
We double the budget of key agencies, including the National Science Foundation, a primary source of funding for academic research, and the National Institute of Standards and Technology, which supports a wide range of pursuits – from improving health information technology to measuring carbon pollution, from testing “smart grid” designs to developing advanced manufacturing processes. And my budget doubles funding for the Department of Energy’s Office of Science which builds and operates accelerators, colliders, supercomputers, high-energy light sources, and facilities for making nano-materials…
Our future on this planet depends upon our willingness to address the challenge posed by carbon pollution. And our future as a nation depends upon our willingness to embrace this challenge as an opportunity to lead the world in pursuit of new discovery…
On March 9th, I signed an executive memorandum with a clear message: Under my administration, the days of science taking a back seat to ideology are over. Our progress as a nation – and our values as a nation – are rooted in free and open inquiry. To undermine scientific integrity is to undermine our democracy…
We know that the quality of math and science teachers is the most influential single factor in determining whether a student will succeed or fail in these subjects. Yet, in high school, more than twenty percent of students in math and more than sixty percent of students in chemistry and physics are taught by teachers without expertise in these fields…
My budget also triples the number of National Science Foundation graduate research fellowships. This program was created as part of the Space Race five decades ago. In the decades since, it’s remained largely the same size – even as the numbers of students who seek these fellowships has skyrocketed. We ought to be supporting these young people who are pursuing scientific careers, not putting obstacles in their path…
I had only one quibble with the speech. The President says: “The calculations of today’s GPS satellites are based on the equations that Einstein put to paper more than a century ago.” True enough—but they depend not only on SR but even on GR, which was “put to paper” around 1916, i.e., less than a century before the speech.
Predictably, coverage of this speech has concentrated on (1) some remarks about swine flu, and (2) a trivial incident where Obama got ahead of his TelePrompter. Clearly, he has a ways to go before matching the flawless delivery of our previous leader.
I’m back in Boston, having returned from my trip to Berkeley and to the Quantum Information Science Workshop in Virginia. I understand that the slides from the QIS workshop will be available any day now, and I’ll blog about the workshop once they are. (Sneak preview: it turns out that more quantum algorithms should be discovered, battling decoherence is important, and interdisciplinary insights are needed—but there were actually some pretty spectacular results and open problems that I hadn’t heard before.)
I’d also like to blog about two books I’m reading: Outliers by Malcolm Gladwell, and First Principles by Howard Burton (about the founding of the Perimeter Institute, and the first scientific history I’ve ever read for which I was there when a lot of it happened). Then again, if enough people discuss these books in the comments section, I won’t have to.
Da MP3, as recently recorded by “Homage the Halfrican Cracker.”
(Stage name of Dustin Lee, a singer and dance instructor based in Calgary, Canada. Homage is currently a finalist for Best Song at the Calgary Folk Festival. Here is his YouTube channel, and here are previews of his music. Hey, you sing the greatest CS theory rap of all time, you get a free plug on Shtetl-Optimized.)
Lyrics by Aaron Sterling, 23 June 2007.
Inspired by Weird Al Yankovic’s “White & Nerdy.”
Original music and words by Chamillionaire, “Riding.”
They see me proving my theorem.
I know they’re all thinking I just do theory.
Think I just do theory.
Think I just do theory.
Can’t you see I just do theory?
Look at me, I just do theory!
I wanna code with the hackers
But so far they all think I just do theory.
Think I just do theory.
I just do theory.
I just do theory.
Really, truly, just do theory.
I wrote a program that solved TSP
Superquasipolynomially.
Ain’t no such thing as lunch for free
When you’re digesting P-NP.
Unnatural proofs are my favorite vice
When I dream of solver’s paradise.
But my poor construction won’t suffice,
Even when I add Karp-Lipton advice.
Yo! There’s more to life than just systems!
Just too mathy? Quit your grumping.
I may not get the joint jumping
But my lemmas can do some pumping.
I declare to all my detractors
To exchange keys you need extractors.
You can’t improve with blind refactors.
You need me, not ten contractors.
Don’t know how to start an IDE
But I always win at compIP.
I’m a wizard bounding MA-E,
Playing games in PPAD.
My languages are always acceptable.
My LaTeX skills? They are impeccable.
My proofs are probabilistically checkable.
But what I compile just isn’t respectable.
You see, I just do theory.
They’re on RA, while I’m teaching.
That’s how they know that I just do theory.
Know I just do theory
Know I just do theory
I admit it, I just do theory.
Look at me, I just do theory.
I’d like to code with the hackers
Although it’s apparent I just do theory
Yes, I just do theory
Right, I just do theory
I just do theory.
Why is it I can just do theory?
I aced math classes in school.
One-Ten is my favorite rule.
Intractability’s really cool.
I’ve been unplugging while you were debugging.
Your Windows crashed, your hard disk’s whirring,
But my platforms all are Turing.
Not a lot of exceptions get thrown
Approximating Diophantines with twelve unknowns.
I’m the department’s main instructor.
When they need a course taught, who do they ask?
I’m always up to the task.
It beats sitting on my ass.
I’m trying to cold-start my social network
Saying “Busy Beaver” with a smirk.
In galleries I troll, in weblogs I lurk.
But it’s hard to reach Big O if you won’t tell the world hello.
My grandest conceit is that my brain is PSPACE-complete.
My calculus is lambda and my math is discrete.
The only problem that ever made me halt
Was whether Samson or Delilah won by default.
My theorem statements are ungrounded.
All my measures are resource-bounded.
They see me struggling at runtime.
They feel sorry because I just do theory.
Yes, it’s true, I just do theory.
Yes, it’s true, I just do theory.
All because I just do theory.
BQP, I just do theory.
I wanna code with the hackers
But oh well, they can tell I just do theory.
I just do theory.
I just do theory.
Yes, I just do theory.
QED, I just do theory.
(everybody shout) Box!
[Here’s the PDF. Thanks so much to Aaron and Homage for the permission. After this song goes viral, and gets ten times more hits than Susan Boyle, just remember: you heard it here first. Peace out, BQP-dawg]
Now, I’m not much of a farming type. But for some reason, about a year ago I became intensely curious about three cereal grains—corn, rice, and wheat—and the central role they played in getting civilization off the ground. And so, on this Passover holiday, when Ashkenazi Jews are supposed to avoid not only leavened bread, but corn and rice as well (the reason? apparently some 13th-century rabbi feared that a grain of wheat might fall in undetected), I thought I’d “go against the grain,” and ask “Four Questions” about all three of these strange plants.
Question I. How did hunter-gatherers ever get the idea to breed these grains? Of course, we know today that whether or not they’re labeled “organic” at Whole Foods, cereal grains aren’t much like anything found in nature, but are the result of thousands of years of selective breeding: massive genetic-engineering projects of the ancient world. The trouble is that, if you ran into one of their wild ancestors, there probably wouldn’t be anything appetizing about it. Corn’s ancestor, for example, seems to have been a barely-edible grass called teosinte. Does the only explanation we can ever hope for rely on anthropic postselection: eventually some cave-dwellers stumbled on the idea of breeding grain, and we’re all living in the aftermath of the resulting population explosion? But the fact that it happened not once, not twice, but three times independently—with wheat in the Middle East, rice in Asia, and corn in the Americas—suggests that it couldn’t have been all that unlikely. Which brings us to…
Question II. What other plants could similarly be used as the basis for a large civilization? The one other plant I can think of that’s played a similar role is the yam, in parts of Africa. Has there ever been a culture that used the potato as its main food source—maybe in Russia or Eastern Europe? (Update, 4/12: Duhhhhhhh, the Irish, of course, hence the Irish Potato Famine. Thanks to several commenters for pointing this out.) OK, what about oats, barley, rye, or sorghum?
Question III. Corn, rice, wheat: which one is best? Is there one such that, if we all switched to it, we’d be ten times healthier and also save the planet? Or, on the tiny chance that we can’t settle that question via blog comments, can we at least elucidate the salient differences? (Corn does seem like the outlier among the three, much as I enjoy grilled rice and wheat on the cob…)
Question IV. Should we still be eating these grains today? It seems clear that corn, rice, and wheat were directly responsible for a human population explosion, and that even today, the planet couldn’t support most of its inhabitants without them. But for those who can afford to, the promoters of “hunter-gatherer diets” advocate returning to foods that were available in the ancestral environment, such as nuts, berries, and roasted mammoth leg. The underlying question here is actually an interesting one: did the switch to agriculture cause some sort of massive change in human health? The most surprising answer would seem to be that it didn’t.
Despite the staggering amount of research I did for this post, it remains conceivable that there are readers who know more about these topics than I do. And so, having thrown out a few seeds, I look forward to reaping a bounteous harvest of grain-related comments.
As a general rule, I don’t post workshop announcements on this blog: if I did it for one, I’d have to do it for all, etc. etc. But I’ve decided that an exception can be made, if the requesting party has won a bet against Stephen Hawking. And so it is that I, on behalf of John Preskill, hereby encourage you to attend the Quantum Information Science Workshop in Vienna, VA, from April 23-25, which has been hastily called in response to the report A Federal Vision for Quantum Information Science. The whole quantum information community is invited, but the deadline for the workshop hotel rate is today! The future of our entire field will be decided at this workshop:
Should more quantum algorithms be discovered, or not?
Is battling decoherence important, or unimportant?
Are interdisciplinary insights needed from CS, physics, and other fields, or will a single discipline suffice?
If you’re as hungry for the answers as I am, you won’t want to miss this.