My long, complexity-theoretic journey

So, what was I doing these past few weeks that could possibly take precedence over writing ill-considered blog entries that I’d probably regret for the rest of my life?

1. On the gracious invitation of Renato Renner, I visited one of Albert Einstein’s old stomping grounds: ETH Zürich.  There I gave a physics colloquium called How Much Information Is In A Quantum State?, as well as a talk on my paper Quantum Copy-Protection and Quantum Money, which has been more than three years in the procrastinating.  Though I was only in Switzerland for three days, I found enough time to go hiking in the Swiss Alps, if by “Swiss Alps” you mean a 200-foot hill outside the theoretical physics building.  I’m quite proud of having made it through this entire trip—my first to Switzerland—without once yodeling or erupting into cries of “Riiiiiiicola!”  On the other hand, what with the beautiful architecture, excellent public transportation, and wonderful hosts, it was a struggle to maintain my neutrality.

2. On the plane to and from Switzerland, I had the pleasure of perusing Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak, which has just been published after floating around the interweb for many years.  If you’re a hardcore complexity lover, I can recommend buying a copy in the strongest terms.  The book lives up to its subtitle, concentrating almost entirely on developments within the last twenty years.  Classical complexity theorists should pay particular attention to the excellent quantum computing chapter, neither of whose authors has the slightest background in the subject.  You see, people, getting quantum right isn’t that hard, is it?  The book’s only flaw, an abundance of typos, is one that can and should be easily fixed in the next edition.

3. I then visited the National Institute of Standards and Technology—proud keepers of the meter and the kilogram—at their headquarters in Gaithersburg, MD.  There I gave my talk on Quantum Complexity and Fundamental Physics, a version of the shtick I did at the QIS workshop in Virginia.  Afterwards, I got to tour some of the most badass experimental facilities I’ve seen in a while.  (Setting standards and making precision measurements: is there anything else that sounds so boring but turns out to so not be?)  A highlight was the Center for Neutron Research, which houses what’s apparently the largest research reactor still operating in the US.  This thing has been operating since 1967, and it shoots large numbers of slow-moving neutrons in all directions so that archaeologists, chemists, physicists, etc. can feed off the trough and do their experiments.  The basic physics that’s been done there recently has included setting bounds on possible nonlinearities in the Schrödinger equation (even though any nonlinearity, no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time), as well as observing the photons that the Standard Model apparently predicts are emitted 2% of the time when a neutron decays.  I also got to see one of the world’s least jittery floors: using dynamical feedback, they apparently managed to make this floor ~10^7 times less jittery than a normal floor, good enough that they can run a double-slit experiment with slow neutrons on top of it and see the interference pattern.  (Before you ask: yes, I wanted to jump on the floor, but I didn’t.  Apparently I would’ve messed it up for a day.)

I have to add: the few times I’ve toured a nuclear facility, I felt profoundly depressed by the “retro” feel of everything around me: analog dials, safety signs from the 60s…   Why are no new reactors being built in the US, even while their value as stabilization wedges becomes increasingly hard to ignore?  Why are we unwilling to reprocess spent fuel rods like France does?  Why do people pin their hopes on the remote prospect of controlled fusion, ignoring the controlled fission we’ve had for half a century?  Why, like some horror-movie character unwilling to confront an evil from the past, have we decided that a major technology possibly crucial to the planet’s survival must remain a museum piece, part of civilization’s past and not its future?  Of course, these are rhetorical questions.  While you can be exposed to more radiation flying cross-country than working at a nuclear reactor for months, while preventing a Chernobyl is as easy as using shielding and leaving on the emergency cooling system, human nature is often a more powerful force than physics.

4. Next I went to STOC’2009 in Bethesda, MD.  Let me say something about a few talks that are impossible not to say something about.  First, in what might or might not turn out to be the biggest cryptographic breakthrough in decades, Craig Gentry has proposed a fully homomorphic encryption scheme based on ideal lattices: that is, a scheme that lets you perform arbitrary computations on encrypted data without decrypting it.  Currently, Gentry’s scheme is not known to be breakable even by quantum computers—despite a 2002 result of van Dam, Hallgren, and Ip, which said that if a fully homomorphic encryption scheme existed, then it could be broken by a quantum computer.  (The catch?  Van Dam et al.’s result applied to deterministic encryption schemes; Gentry’s is probabilistic.)
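Gentry’s actual construction is far beyond a blog aside, but the homomorphic property itself is easy to demo with textbook RSA, which is homomorphic for multiplication only (a fully homomorphic scheme handles arbitrary circuits on ciphertexts); the sketch below is mine, with toy-sized, wildly insecure parameters:

```python
# Toy illustration of the homomorphic property -- NOT Gentry's scheme.
# Textbook RSA is homomorphic for multiplication only; Gentry's result
# is that *arbitrary* computations can be done on ciphertexts.
p, q, e = 61, 53, 17                    # absurdly small toy parameters
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
# Multiplying the *ciphertexts* multiplies the underlying plaintexts:
c_prod = (enc(a) * enc(b)) % n
print(dec(c_prod))                      # 84, i.e. a*b, computed "blind"
```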

Second, Chris Peikert (co-winner of the Best Paper Award) announced a public-key cryptosystem based on the classical worst-case hardness of the Shortest Vector Problem.  Previously, Regev had given such a cryptosystem based on the assumption that there’s no efficient quantum algorithm for SVP (see also here for a survey).  The latter was a striking result: even though Regev’s cryptosystem is purely classical, his reduction from SVP to breaking the cryptosystem was a quantum reduction.  What Peikert has now done is to “dequantize” Regev’s security argument by thinking very hard about it.  Of course, one interpretation of Peikert’s result is that classical crypto people no longer have to learn quantum mechanics—but a better interpretation is that they do have to learn QM, if only to get rid of it!  I eagerly await Oded Goldreich‘s first paper on quantum computing (using it purely as an intellectual tool, of course).

Third, Robin Moser (co-winner of the Best Paper Award and winner of the Best Student Paper Award) gave a mindblowing algorithmic version of the Lovász Local Lemma.  Or to put it differently, Moser gave a polynomial-time algorithm that finds a satisfying assignment for a k-SAT formula, assuming that each clause intersects at most 2^(k-2) other clauses.  (It follows from the Local Lemma that such an assignment exists.)  Moser’s algorithm is absurdly simple: basically, you repeatedly pick an unsatisfied clause, and randomly set its variables so that it’s satisfied.  Then, if doing that has made any of the neighboring clauses unsatisfied, you randomly set their variables so that they’re satisfied, and so on, recursing until all the damage you’ve caused has also been fixed.  The proof that this algorithm actually halts in polynomial time uses a communication argument that, while simple, seemed so completely out of left field that when it was finished, the audience of theorists sort of let out a collective gasp, as if a giant black “QED” box were hovering in the air.
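In code, the procedure fits on one screen.  The sketch below is my own transcription of the verbal description above (resample a clause until it’s satisfied, then recursively repair its neighbors); that it halts in expected polynomial time under the neighborhood bound is exactly what Moser’s argument establishes:

```python
import random

def satisfied(clause, assign):
    # A literal +v asks variable v to be True; -v asks it to be False.
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def moser_solve(clauses, nvars, rng=random):
    # Start from a uniformly random assignment.
    assign = {v: rng.random() < 0.5 for v in range(1, nvars + 1)}

    def neighbors(c):
        vs = {abs(l) for l in clauses[c]}
        return [d for d in range(len(clauses))
                if d != c and vs & {abs(l) for l in clauses[d]}]

    def fix(c):
        # Randomly reset the clause's variables until it is satisfied...
        while not satisfied(clauses[c], assign):
            for lit in clauses[c]:
                assign[abs(lit)] = rng.random() < 0.5
        # ...then recursively repair any damage done to the neighbors.
        for d in neighbors(c):
            if not satisfied(clauses[d], assign):
                fix(d)

    while True:
        bad = [c for c in range(len(clauses))
               if not satisfied(clauses[c], assign)]
        if not bad:
            return assign
        fix(bad[0])

# Each clause below intersects at most 2 = 2^(3-2) others, so the Local
# Lemma condition holds and a satisfying assignment is guaranteed to exist.
clauses = [[1, 2, 3], [-1, 4, 5], [2, -4, 6]]
assignment = moser_solve(clauses, 6, random.Random(0))
assert all(satisfied(c, assignment) for c in clauses)
```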

Fourth, Babai, Beals, and Seress showed that if G is a matrix group over a finite field of odd order, then the membership problem for G can be solved in polynomial time, assuming an oracle for the discrete logarithm problem.  This represents the culmination of about 25 years of work in computational group theory.  I was all pumped to announce an important consequence of this result not noted in the abstract—that the problem is therefore solvable in quantum polynomial time, because of Shor’s discrete log algorithm—but Laci, alas, scooped me on this highly nontrivial corollary in his talk.

5. Finally, I took the train up to Princeton, for a workshop on “Cryptography and Complexity: Status of Impagliazzo’s Worlds”.  (For the insufficiently nerdy: the worlds are Algorithmica, where P=NP; Heuristica, where P≠NP but the hard instances of NP-complete problems are hard to find; Pessiland, where the hard instances are easy to find but none of them can be used for cryptographic one-way functions; Minicrypt, where one-way functions do exist, enabling private-key cryptography, but not the trapdoor one-way functions needed for public-key cryptography; and Cryptomania, where trapdoor one-way functions exist, and cryptography can do pretty much anything you could ask.)  I gave a talk on Impagliazzo’s worlds in arithmetic complexity, based on ongoing joint work with Andy Drucker (where “ongoing” means we’re pretty sure more of our results are correct than would be expected by random guessing).

Tell you what: since it’s been a long time, feel free to ask whatever you feel like in the comments section, whether related to my journeys or not.  I’ll try to answer at least a constant fraction of questions.

56 Responses to “My long, complexity-theoretic journey”

  1. Sean Carroll Says:

    And yet, Scott has been awesome enough to find time to read a book draft for me and offer excellent comments. Which is to say, very awesome.

  2. Brian Says:

    Good to see you’re back, Scott. It’s always fun to see what you have to say – and occasionally I understand some of it!

  3. asterix Says:

    Why was Gentry’s work less significant than Peikert’s work? I don’t mean this in a competitive way. I am not a crypto expert, but to me they both sound like astounding results, and I’m wondering why one is considered more of a breakthrough than the other. I assume Peikert’s may have some easy-to-explain idea behind it (like Moser’s) whereas Gentry’s is more technical? Is homomorphic encryption less important than using worst-case SVP problems to get crypto systems? Thanks.

  4. John Sidles Says:

    Scott asserts: Any nonlinearity [in the Schrödinger equation], no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time.

    Gee whiz … Scott … for this to be true … don’t you have to add a pretty lengthy conditional clause: “in the Schrödinger equation on vector state-spaces having exponentially large dimension.”

    One reason for focusing on this stipulation is that in the world of practical calculations (meaning, PSPACE and PTIME resources) and also in the world of practical experiments (meaning, finite-temperature and/or noisy laboratories), it is commonplace to compute on tensor network state-spaces … which definitely are not vector spaces, but rather are Kähler manifolds.

    In working through this manifold-oriented framework for QIS/QIT, our QSE Group translated chapters 2 and 8 of Nielsen and Chuang’s textbook into the language of Kähler manifolds as contrasted with Hilbert space. This was a fun exercise (in which Ashtekar and Schilling preceded us by a decade, I will mention).

    The resulting QIS/QIT mathematical framework proved sufficiently compact that we could summarize it on one page …

    … and this compactness proves to be very convenient for organizing large-scale QIS/QIT calculations.

    Now, it is true that when QIS/QIT is formulated as a manifold theory, the mathematical focus naturally shifts from the (linear) Hamiltonian dynamics of superposition to the (nonlinear) concentration dynamics that is generic to Lindbladians. But isn’t this a good thing? … when it helps us to efficiently simulate large-scale quantum systems?

    As far as our QSE Group knows, there is nothing in orthodox Lindbladian quantum dynamics that permits “sending superluminal signals and solving NP-complete problems.” Because isn’t Lindbladian dynamics so constructed as to rule-out these possibilities? Even when the Lindbladian dynamics concentrates quantum trajectories onto a non-vector manifold?

    The point being, noise and nonlinearity are valuable QIS/QIT resources—to be treasured, not scorned! 🙂

  5. Domenic Denicola Says:

    Wow, thanks for enlightening us bystanders; those results from point #4 are really cool!

  6. Arne Peeters Says:

    Unrelated (but you said it’s ok): It’s now 4 years since, so what’s your updated opinion on “waste papers”? Would you still write that post (given a sufficient reason to procrastinate 😉 ) and if not, what would you write instead?

  7. Carl Says:

    “On the other hand, what with the beautiful architecture, excellent public transportation, and wonderful hosts, it was a struggle to maintain my neutrality.”

    If I die… Tell my wife… “Hello…”.

  8. Scott Says:

    Why was Gentry’s work less significant than Peikert’s work?

    asterix: Please be assured, you’re not the only one to have asked that question! Since I wasn’t privy to the decision, and have no strong personal feelings about it, I suppose it’s OK for me to say what I know (and if it isn’t, and I end up using this blog to blab about something I shouldn’t one more time … well, who’s counting? 🙂 ). My understanding is that the PC had some concerns about the correctness of at least part of Gentry’s paper (and maybe about the underlying assumptions—though I’m just guessing there), and didn’t want to risk looking foolish by (e.g.) giving a Best Paper Award for a cryptosystem that might be broken half a year later. The trouble, of course, is that in a situation like this one, the PC runs that risk no matter what it does! 🙂

    What I can say with confidence is the following:

    (1) Both Peikert’s and Moser’s papers would be clear contenders for the Best Paper Award in an ordinary year.

    (2) Gentry’s might someday be seen as the best paper ever to have been passed over for the Best Paper Award.

  9. Scott Says:

    Gee whiz … Scott … for this to be true … don’t you have to add a pretty lengthy conditional clause: “in the Schrödinger equation on vector state-spaces having exponentially large dimension.”

    John: You’re right, of course; I was talking about adding a nonlinear term to the Schrödinger equation while keeping the “rest” of QM (the state space, the measurement rule, etc.) unchanged. In reality, though, my view is that any nonlinear term (no matter how small) would amount to a complete collapse of QM—so that conditioned on finding such a term, the state space and everything else would seem like fair game as well.

  10. anon Says:

    Yeah, what’s the deal? Could some crypto experts give their opinion on the extent to which the Gentry paper is correct? If it’s 100% correct then it’s completely amazing, right?

  11. harrison Says:

    Scott, I haven’t read either of the papers, so forgive me if I’m missing something trivial, but it seems like if you used a sufficiently strong PRNG with Gentry’s cryptosystem, wouldn’t it then be (quantum) breakable by van Dam et al.? And therefore, either no PRNG exists which can fool a quantum computer, or Gentry’s argument is flawed? What am I missing here?

  12. Aspiring Blogger Says:

    Scott, I love your blog and have been following it for a long time. Sometimes I entertain the thought of creating a forum as entertaining and intellectually rich as this one, but I ask myself — how much effort would it take?

    Since you opened the floor for questions, can you comment on this? How much time do you spend (or how much sleep do you lose!) on your blog? Any tips or admonitions for aspiring bloggers?


  13. Scott Says:

    Harrison, as I understand it, the main issue is whether you can efficiently recognize an encryption of the all-0 string. Van Dam et al. assume you can (as would be the case, in particular, for any deterministic encryption scheme), while in Gentry’s scheme you presumably can’t. In which case, if you replaced the randomness in Gentry’s scheme by the output of a cryptographic PRG (with random seed), it would still be hard to recognize an encryption of the all-0 string (since otherwise, you’d get a polynomial-time algorithm for distinguishing the PRG output from true randomness, contrary to assumption). I trust others will correct me if I’m wrong.

  14. Scott Says:

    Aspiring Blogger: Like many questions I’ve gotten over the years and never answered, yours really deserves a post of its own! Briefly: right now the blog doesn’t take much of my time, since I hardly ever update it. Back when I updated it every other day, it took maybe half my time. But that’s a statement more about me than about blogging: I understand many other bloggers are able to dash off a decent entry in 20 minutes; I’m more than an order of magnitude less efficient.

    Now to watch Colbert…

  15. Bram Cohen Says:

    That paper should be called ‘Walksat finds the Lovasz Local Minima in polynomial time’. That result has of course been known for the special case of 2-clauses for a long time.

    There’s another possible world – in which P=NP in circuit complexity but there’s no finite TM that solves arbitrarily large NP-complete problems in polynomial time.

  16. Martin Schwarz Says:

    Hi Scott,

    I’m glad you’re back blogging! By the way, why didn’t you make it to your Vienna, Austria, talk this week? I would have enjoyed watching one of your terrific talks live and meeting you in person.

    best regards,

  17. Scott Says:

    Bram: There’s actually a huge number of possible worlds not covered by Russell’s classification (the one where P=NP but the algorithm takes n^10000 time; the one where there’s a uniform algorithm that solves SAT in polynomial time on particular input lengths only; the one where P≠NP but NP⊆BQP; the one where public-key crypto is possible using “lossy” trapdoor one-way functions, but ordinary TDOWFs don’t exist…). Indeed, Russell pointed out at the workshop that for the foreseeable future, the worlds are in far more danger of proliferating than they are of collapsing!

    Incidentally, I can easily imagine an alternative history of theoretical computer science, where instead of using complexity classes as our basic concept, we directly used the Impagliazzo-worlds (which are basically possibilities for collapses of complexity classes). Of course it might get cumbersome, as in principle there could be exponentially more of the latter than of the former. So maybe people who complain about the size of the current Complexity Zoo should count their blessings! On the other hand, I conjecture that Cryptomania, Pessiland, etc. would be a much easier public-relations sell than BQP and PSPACE.

  18. Nagesh Says:

    Hi Scott,

    Nice to hear a summary of your recent travels. I myself wanted to summarize my own travels I have been doing lately, even though most of it was for family reasons 🙂

    So, as for the questions: can I ask what you proposed to do in your NSF CAREER proposal? Can you share the title (and proposal) if it’s ok?


  19. Scott Says:

    Nagesh, the title of my CAREER proposal was “Limits on Efficient Computation in the Physical World” (same as my thesis title). The main things I proposed to work on, besides education, outreach, and diversifying, were (1) BQP vs. the polynomial hierarchy, (2) the need for structure in quantum speedups (e.g., quantum query complexity lower bounds for almost-total Boolean functions), and (3) non-relativizing techniques in quantum complexity theory. I understand these things are generally not made public, but email me if you want a copy.

  20. Aspiring Complexity Researcher Says:

    Dear Scott,

    I am an advanced doctoral student in a systems area at a small school but have fallen in love with topics in complexity, tcs, discrete math and such. (And I am not a natural genius, but I am creative and enthusiastic about tcs!)

    What is your best suggestion for me as to ways and means by which I could make valuable contributions to any of the areas I mentioned above? So far, I have made tiny contributions and have been busy trying to squeeze time to read more about these things. But, for example, isn’t having a theorist to interact with / mentor essential? After graduation, what could I do to achieve my goal best? How could I, say, become a postdoc with a theory group without having done a lot of theory work?

    Right now, with the economy and what not, the future seems bleak for my love interest.

    Any thoughtful (as usual) guidance would be welcome.
    Of course, I enjoy your blog and all the remarkable work you are doing for tcs. I have also heard you talk at Harvard once. You inspire many like me. Please know that I am always wishing the very best for you.

    Thank you!

  21. Scott Says:

    Martin: My apologies! I’d never been to Vienna, and had really wanted to go. Alas, I ended up having back-to-back travel in the two weeks prior, and urgently needed to get back to MIT as I had five summer students to meet and get started on their projects. I hope the workshop went well!

    Warning to All Workshop Organizers: For whatever reason, I have an extraordinarily hard time saying ‘no’ to anyone, until I’m forced to by circumstances.

  22. Scott Says:

    Aspiring Complexity Researcher: Given how far you’ve already gone in your studies, it sounds to me like your best bet is to pursue whatever career you were going to pursue in systems (whether that’s in industry or academia), and then look for connections with theory and for theorists to talk with. Quite a few scientists gradually change areas over time, so that what they eventually end up doing might be completely unrelated to what they got their PhD in. But they usually start by doing what they got their PhD in. 🙂 And the paths between different parts of CS happen to be particularly well-trodden ones: we really do talk to each other!

  23. Nagesh Says:

    Thanks a lot for responding! I will email you now 🙂

  24. Bobby Says:

    Regarding result #4, how is that different than having the input data encrypted with a public key, and the “computation” being the process of appending a message that encapsulates the computation to the end of the input data?

    The decryption process in this case would include both decrypting the input data and applying the computation message.

    I can see that it’s possible that this asymmetric encryption + messages process would be susceptible to “easy” quantum cracking, or that it may be provable that the method given in paper #4 is quicker to decrypt. Possibly that’s all that’s new that the paper is illuminating.

    However, given that the computation can easily add information to the encrypted message, it must be the case that the output data after the computation can be larger than the input data, so the process of computation must be able to lengthen the time it takes to decrypt the data. Even worse, if you can’t derive information about the encrypted data by the computation process, if the computation conditionally would add information to the encrypted data depending on the encrypted values, it must always add information to the encrypted data *even if the computation has no actual effect on the original data*.

    I.e. if the computation is “f(y) is y if the y

  25. Bobby Says:

    Hrm, last post got truncated. Continuation:

    I.e. if the computation is “f(y) is y if y < 40, otherwise f(y) is the result of a lookup in some arbitrary table using y as the index”, then the process of applying the computation to the encrypted data must encode all the information in the table into the encrypted result, even if the encrypted y is 39.

    Did I misunderstand what the paper is demonstrating?

    (BTW, I think the truncation was because I included a < that I didn’t encode as &lt;)

  26. komponisto Says:

    Would you consider doing a Bloggingheads diavlog with Eliezer Yudkowsky?

  27. Scott Says:

    Komponisto: LOL! By exploiting my backwards-in-time causal powers, I just did a diavlog with Eliezer this afternoon. It will be up shortly.

  28. Responder Says:

    Hi Bobby,

    The point is that you don’t trust the person doing the computation, so you don’t want him to be able to decrypt the input, compute, then append the answer, because then he’d see the input. Imagine cloud computing. I have confidential data that I’d like to process, but I have very little computing power. I could pay Amazon, send them the data encrypted, and have them process it on their million computers and send me the results, while being assured that Amazon hasn’t learned anything about my confidential data.

  29. d Says:

    Ha ha funny:

  30. Anon Says:

    How many of the “ten most annoying questions in quantum computing” are still unsolved?

  31. Scott Says:

    Anon, here’s the status of questions 1-9 (10 not being a real question):

    1. Given an n-qubit pure state, is there always a way to apply Hadamard gates to some subset of the qubits, so as to make all 2^n computational basis states have nonzero amplitudes?

    Solved by Montanaro and Shepherd.
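    For small n the statement can also be checked by brute force. Here is a quick sketch (a plain state-vector simulation; the function names are mine, not from the Montanaro–Shepherd paper) that finds a working subset of Hadamards for the 3-qubit GHZ state, which starts out with zero amplitudes on six of its eight basis states:

```python
import itertools, math

def apply_h(state, q, n):
    """Apply a Hadamard to qubit q of a 2^n-entry amplitude vector."""
    s = 1 / math.sqrt(2)
    out = state[:]
    for i in range(1 << n):
        if not (i >> q) & 1:          # pair index i (bit q = 0) with j (bit q = 1)
            j = i | (1 << q)
            out[i] = s * (state[i] + state[j])
            out[j] = s * (state[i] - state[j])
    return out

def hadamard_subset_works(state, n, tol=1e-12):
    """Return a subset of qubits whose Hadamards make every amplitude nonzero."""
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(n), r) for r in range(n + 1)):
        cur = state[:]
        for q in subset:
            cur = apply_h(cur, q, n)
        if all(abs(a) > tol for a in cur):
            return subset
    return None

# GHZ state (|000> + |111>)/sqrt(2): six of its amplitudes are zero.
ghz = [0.0] * 8
ghz[0] = ghz[7] = 1 / math.sqrt(2)
print(hadamard_subset_works(ghz, 3))   # (0, 1): two Hadamards suffice
```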

    2. Can we get any upper bound on QMIP (quantum multi-prover interactive proofs with unlimited prior entanglement)?

    Under the conjecture that the provers only need to share a finite-dimensional Hilbert space, Doherty et al. prove a recursive upper bound, which is already highly nontrivial. Without that conjecture, no upper bound is known.

    3. Can any QMA(2) (QMA with two unentangled yes-provers) protocol be amplified to exponentially small error probability?

    In my paper with Beigi et al., we solved this problem assuming a weak version of the Additivity Conjecture from quantum information theory (the general Additivity Conjecture is now known to be false, but our version still seems plausible). No unconditional result is known.

    4. If a unitary operation U can be applied in polynomial time, then can some square root of U also be applied in polynomial time?

    Solved by Sheridan, Maslov, and Mosca (contrary to my conjecture, the answer is yes).

    5. Suppose Alice and Bob are playing n parallel CHSH games, with no communication or entanglement. Is the probability that they’ll win all n games at most p^n, for some p bounded below 0.853?

    I’m not sure whether this particular question is still open or not—does anyone else know? What I know is that Anup Rao solved a closely related problem, by proving a concentration bound for parallel repetition of the CHSH game.
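    As a sanity check on the numbers in question 5: the 0.853 is the single-game quantum value cos²(π/8), while brute force over the 16 deterministic classical strategy pairs (a quick sketch of my own, not from any of the cited papers) recovers the classical optimum of 3/4:

```python
from itertools import product
from math import cos, pi

# In the CHSH game the referee sends uniform bits x to Alice and y to Bob;
# they win iff their output bits satisfy a XOR b == x AND y.  A deterministic
# classical strategy is just a pair of functions a(x), b(y), each a 2-tuple.
best = max(
    sum((fa[x] ^ fb[y]) == (x & y) for x in (0, 1) for y in (0, 1)) / 4
    for fa, fb in product(product((0, 1), repeat=2), repeat=2)
)
print(best)                 # 0.75, the classical optimum
print(cos(pi / 8) ** 2)     # ~0.8536, the quantum (Tsirelson) value
```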

    6. Forget about an oracle relative to which BQP is not in PH. Forget about an oracle relative to which BQP is not in AM. Is there an oracle relative to which BQP is not in SZK?

    I realized shortly after posing this problem that an affirmative answer follows from, e.g., the Childs et al. conjoined tree problem.

    7. Given any n-qubit unitary operation U, does there exist an oracle relative to which U can be (approximately) applied in polynomial time?

    Still open

    8. How many mutually unbiased bases are there in non-prime-power dimensions?

    Still open, as far as I know

    9. Is there an n-qubit pure state that can be prepared by a circuit of size n^3, and that can’t be distinguished from the maximally mixed state by any circuit of size n^2?

    Still open, though I have a plausible strategy for closing it that I haven’t pursued—thanks for reminding me! 🙂

  32. milkshake Says:

    Nuclear reactors: an awesome one-stop source for all questions nuclear is Garwin’s Archive. He explains why re-processing currently does not make any economic sense (it saves less than 20% of uranium at a ridiculous cost, it actually increases the volume of radioactive waste without much reducing its radioactivity, the possibility of separated reactor plutonium theft is a serious proliferation risk, etc.). France and Japan got into the re-processing business in anticipation of breeder reactors, which never really took off. Even if you have pure, already-separated weapons-grade plutonium surplus that you want to dispose of, it is actually cheaper and less problematic to mix it with some highly radioactive waste and bury it in a mine depository rather than blend it into a reactor fuel to save uranium.

    Retro-looking reactors: Freeman Dyson has a wonderful reminiscence in Disturbing the Universe about his time in the ’50s with Teller and Freddy de Hoffmann at General Atomics, designing reactors. I think the NIST facility uses a different (heavy-water) design than the TRIGA and the graphite-moderated reactors that Dyson worked on, but his fond memories of the little red-brick schoolhouse where they ran reactor calculations over the summer capture well the initial momentum, the enthusiasm that gradually evaporated as the accountants, MBAs and government regulators took over the industry.

    Nuclear accidents: safety is expensive, and private companies operating nuclear reactors in the US and Japan were cutting corners and skimping on upgrades in the past. Plus there is normal human stupidity and complacency. There were a few horribly close calls, not just the Three Mile Island accident.

  33. Anon Says:

    Thanks for the update!

    What does Question 7 mean? What sort of oracle are we looking for?

  34. computational simplicity Says:

    Hi, a general question:

    Can you give us a list of 10 weblogs that you truly enjoy and read regularly? They don’t have to be CS-oriented, just what you enjoy the most. Also, they don’t have to be *blogs* (a news site, webcomic, or anything like that is okay). You can give more than 10 if you feel like it.

  35. Scott Says:

    Anon, we’re looking for a classical oracle: that is, one that maps each classical basis state |x⟩ to (−1)^f(x)|x⟩, for some Boolean function f. There’s certainly a quantum oracle, namely U itself!

  36. Scott Says:

    Computational Simplicity: Look at the blogroll to the right!

    A few others that I occasionally read: Andrew Sullivan, FiveThirtyEight, Lubos Motl, Bitch PhD, I Blame the Patriarchy.

  37. John Sidles Says:

    Scott, your above-linked Zurich talk How Much Information Is In A Quantum State? is really excellent, and the numerical experiments you are doing with Eyal Dechter provide a striking example of Terry Tao’s maxim that “progress generally starts in the concrete and then flows to the abstract.”

    This inspires us to flex the narrative to be even more concrete … without (AFAICT) changing any of the fundamental mathematics. We can accomplish this by (1) altering Alice’s motive from conveying information to concealing her activities (which is always more fun!) and (2) altering the quantum informatic framework from informatic/algebraic to stochastic/geometric (which provides a broader perspective on how these ideas work).

    We suppose that Bob has in his laboratory an ion-trap containing (say) 100 trapped ions. Every morning, Bob performs just one tomographic measurement on these ions (it’s a long-running experiment). And every afternoon, Bob prepares (the same) quantum state for the following day’s tomographic measurements. Thus Bob’s life is pretty boring — every afternoon the (identical) state preparation, every morning a tomographic measurement of the previous day’s state.

    Alice is spying on Bob. Every night, Alice sneaks into Bob’s lab and measures his carefully-prepared state (thereby destroying it). To conceal her activities, Alice performs covert measurement-and-control operations on Bob’s ions, leaving behind a state ρ that Bob will measure the following morning. Alice’s goal is that Bob’s daytime tomographic measurements reliably yield (as your lecture puts it) “Tr(Eρ) for most measurements E drawn from some probability measure D.”

    So Alice is secretly “dry-labbing” Bob’s experiment … leaving behind states that are informatically indistinguishable (by Bob) from undisturbed states.

    Of course, Alice has finite resources in information and time … she has to be done preparing the ions before Bob arrives the following morning. Obviously it’s a challenging task — she has 100 ions to entangle! To make Alice’s life harder, we assume that she does not know Bob’s experimental protocol (otherwise she could just duplicate it), but instead only knows the specified outcome distribution D.

    How can Alice achieve her quantum deception goal? Or is it impossible?

    Equally interesting—and not addressed in the lecture—is whether Alice really has to physically restore Bob’s quantum state. Could Alice instead install a (classical) “root kit” on Bob’s tomographic measurement software—a root kit that could reliably simulate, with classical resources, any tomographic measurement Bob might specify on a given morning?
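    The quantity at the heart of this scenario—Bob estimating Tr(Eρ) from repeated yes/no outcomes—can be sketched numerically. Below is a minimal toy of my own construction (4 qubits standing in for the 100 ions; the state, effects, and function names are all illustrative assumptions, not anything from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    """Haar-random pure state, returned as a density matrix rho = |psi><psi|."""
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def random_effect(d):
    """A random two-outcome measurement effect E with 0 <= E <= I:
    the projector onto a Haar-random half-dimensional subspace."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    p = q[:, : d // 2]
    return p @ p.conj().T

d = 16  # 4 qubits -- a toy stand-in for Bob's 100 ions
rho = random_state(d)

# Bob's mornings: estimate Tr(E rho) for effects E drawn from a measure D,
# using finitely many repeated preparations per effect.
for _ in range(3):
    E = random_effect(d)
    exact = np.trace(E @ rho).real       # Born-rule probability
    shots = rng.random(2000) < exact     # simulated yes/no outcomes
    print(f"Tr(E rho) = {exact:.3f},  empirical = {shots.mean():.3f}")
```

    Alice’s task, in this language, is to leave behind *some* state whose Born-rule statistics match ρ’s for most E drawn from D, without knowing Bob’s preparation protocol.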

    — End of Part 1 —

  38. Bobby Says:

    Responder: I apologize—I meant to mention in my post that it seemed theoretically possible that the method given in the paper would allow the decryption process to perform better, in that the computational work would already have been done by the other computers.

    However, without something explicit in the paper asserting that the complexity of decryption is bounded—and, moreover, that the growth in size of the encrypted package during computation is bounded—it seems to me that there are no guarantees.

    Also, honestly, my initial reaction to seeing the paper’s description is that it would have some profound implications, beyond letting you play performance games. After I recognized that you could do the same thing modulo performance by conventional methods, I thought I would see if someone saw a flaw in my reasoning.

  39. Bram Cohen Says:

    If you really want your head to explode, go read up on liquid fluoride thorium reactors. There are vastly safer and cheaper ways of getting nuclear energy than the ones we have now, but they have the unfortunate downsides of being radically different from what’s currently used (so no one’s an expert in them) and far too inherently safe to have any weapons use at all (so the military won’t fund them)—and politics has basically killed them for the last fifty years.

  40. Jonathan Vos Post Says:

    “If you really want your head to explode” — and who wouldn’t want that?

  41. Jr Says:

    What do you think is the most important open mathematical problem outside of TCS?

    In science, outside of math and computer science?

  42. Jr Says:

    Also, why do you think religion exists?

  43. Scott Says:

    What do you think is the most important open mathematical problem outside of TCS?

    I was going to say the Riemann hypothesis, but we all know that’s just a derandomization problem. 🙂 So maybe the Langlands conjectures? But those, too, could conceivably turn out to be relevant to circuit lower bounds via Mulmuley’s program…

    Whatever the answer is, “important” presumably means that a significant fraction of mathematicians would need to agree. So old standbys like 3x+1, twin primes, and the transcendence of π+e are presumably out… 🙂

    In science, outside of math and computer science?

    In fundamental science, here are the first four things that popped into my head:

    1. Why sex, sleep, and homosexuality exist

    2. Extraterrestrial life (or even “earth-like” extrasolar planets, or non-DNA/RNA-based life on earth)

    3. Physics beyond the Standard Model (wherever progress turns out to happen—electroweak symmetry breaking, Λ, ultra-high-energy cosmic rays, the Pioneer anomaly?)

    4. Not clear whether there’s anything new and compelling with actual technical content to say about consciousness, free will, the anthropic principle, or the quantum measurement problem, but if there were, that would certainly count

    In applied science (similarly, first four things that popped into my head):

    1. Cheaper, more efficient solar cells (likewise, cheaper, safer nuclear reactors)

    2. Mass manufacturing of wacky materials like carbon nanotubes, so we can haz SPACE ELEVATORS!

    3. The ability to google and edit your own genome

    4. Batteries that last

  44. Scott Says:

    Also, why do you think religion exists?

    Once you accept that for almost all of history, and in most of the world today, the “purpose of life” has been to maintain cohesive tribes in which the men valiantly fight the rival tribes, the women stay faithful and raise children, etc.—and that uncovering the true nature of the physical universe only ever enters the picture insofar as it directly advances those goals—the question becomes, why shouldn’t religion exist?

  45. John Sidles Says:

    A Google search for “space elevator elastic energy” will find a literature replete with sobering engineering realities …

    … that’s why I work in quantum spin microscopy instead … where the realities are sobering, but not as sobering.

    As Pope put it: “Shallow draughts intoxicate the brain, but drinking largely sobers us again.”

    Here “largely” has the eighteenth-century sense given in Samuel Johnson’s dictionary: “amply, widely, copiously.”

  46. Jonathan Vos Post Says:

    I like Scott’s Comment #43 (half of a twin prime pair).

    Massively compressing my impressions:

    1. Why sex, sleep, and homosexuality exist — as problems in reconstructing path-dependent models embedded in Evolution by Natural Selection. Otherwise we can take the myth that humans were once 4-armed, 4-legged, and unisexual, and were bifurcated, ever since seeking our other halves.

    2. Extraterrestrial life (or even “earth-like” extrasolar planets, or non-DNA/RNA-based life on earth) — interesting recent publications extrapolating back to BEFORE the RNA World. And a Strong Gaia hypothesis suggests doubling the Drake Equation approximation.

    3. Physics beyond the Standard Model — and the Cosmology that derives from that, via Quantum Cosmology arguments.

    4. consciousness — I’ve been speaking with Christof Koch, whose 20 years of work with Francis Crick have yielded some amazing experimental results on multiple processes competing in the human brain’s visual/semantic subsystems, with the interference nicely measurable, but below conscious awareness. Again, old Bayesian rationalists deny the demonstrable circuitry of the human brain.

    Crick & Koch asked what the neural correlates of consciousness are, including: is there a minimum complexity a system must have to be a substrate for consciousness? And why, if our immune system, or enteric nervous system, or genome exceeds that threshold, is the immune, gut, or genetic network NOT conscious?

  47. Jonathan Vos Post Says:

    Likewise, educated first impressions:

    In applied science (similarly, first four things that popped into my head):

    1. Cheaper, more efficient solar cells [I keep in touch with Dr. Geoffrey Landis, a real expert; and with the IdeaLab solar companies] (likewise, cheaper, safer nuclear reactors [also: smaller, down to scale of individual business or home])

    2. Mass manufacturing of wacky materials like carbon nanotubes, so we can haz SPACE ELEVATORS! [some stranger molecules being investigated; meanwhile Space Elevators already feasible for the Moon]

    3. The ability to google and edit your own genome [“The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature. The greatest evils are poverty, underdevelopment, unemployment, disease and hunger, all the conditions that deprive people of opportunities and limit their freedoms.” — Freeman Dyson, “Heretical Thoughts About Science and Society”]

    4. Batteries that last [Cowan’s Heinlein Concordance, s.v. “Shipstone”:
    1. Common power source. It involved intensive solar collection and energy storage but was not otherwise described. It apparently replaced almost all other sources of energy. The name also applied to the conglomerate that apparently owned most of the corporations on and off Earth…. In effect, Shipstone controlled the entire economy. A feud among different factions resulted in the overthrow and disruption of many Earth governments, particularly in North America. (Friday)
    2. [mentioned in passing] Power source used for automobiles (and probably other devices). (To Sail Beyond the Sunset)
    Compare D. D. Harriman’s extensive holdings and economic influence in earlier stories, and the more benevolent depiction of an unlimited power source in “Let There Be Light”.]

  48. Bram Cohen Says:

    Scott, does survey propagation constitute a full-blown exhaustive search algorithm, as opposed to just a stochastic search algorithm (I think the answer is ‘yes’, but just checking)? If so, would it apply naturally to Algorithm X-type problems, and if it does, do you think it would on some instances be faster than dancing links in practice?

  49. coder Says:

    The homomorphic cryptosystem reminds me of McEliece and other coding cryptosystems. These too resist quantum attacks but require copious entropy.

    Poor entropy sources are the bane of a cryptographer’s existence; the concerns in #11 are relevant but well understood. Many computers have hardware entropy sources these days…

  50. Zack Says:

    Is there a way to construct quantum money that allows one to make change? That is, I have a quantum banknote worth $A, I want to be able to convert it into two banknotes worth $X and $Y, where X + Y = A is enforced, without communicating with the issuing authority. Conversely, I would also like to be able to merge banknotes worth $X and $Y into a single banknote worth $(X+Y), again without communication.

  51. Raoul Ohio Says:

    While of course your religion is the one true religion, it is interesting to speculate about what’s the deal with all the wrong ones.

    One of the wisenheimer columnists in Scientific American has an interesting model: the standard human mental kit fails to include a working BS detector. If true, this explains lots of other curious things. He fleshes the model out into a mini-theory by speculating about how evolution produced this state of affairs. His guess is that higher brain functions are largely pattern recognition, and that overreacting to any plausible threat pays off when there are lions about. This is an advantage, natural-selection-wise, leading to most people not thinking critically about whatever.

    The virus theory, which I first read about in “Gödel, Escher, Bach”, is also good for a few laughs. An entertaining variation has religion as a kind of Ponzi scheme, where the various priesthoods may or may not be in on the joke. Putting on your optimization hat, can anyone think of a better scam than trading infinite bliss in the next life for money and obedience right now?

  52. Scott Says:

    Zack: That’s an extremely interesting question!

    “Merging” two banknotes can in some sense be trivially done, by just putting the banknotes side-by-side. It’s “splitting” a banknote that’s the problem—at least, assuming you don’t want the number of qubits in a banknote to grow linearly with its value (in which case we could just let an $n banknote consist of n $1 notes).

    Another simple observation is that ordinary cash does not provide the functionality you ask for, and yet we seem to make do anyway, mostly by choosing denominations ($100, $20, $10, $5, $1…) in such a way that if there are enough people at the restaurant table, then w.h.p. it’s possible to make change.
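    The denomination trick is easy to make concrete: for a “canonical” system like ($100, $20, $10, $5, $1), the greedy algorithm always makes exact change as long as a $1 note is available. A minimal sketch (function name and interface are my own, purely illustrative):

```python
def make_change(amount, denominations=(100, 20, 10, 5, 1)):
    """Greedy change-making: take as many of the largest note as possible,
    then recurse down. Optimal for 'canonical' systems like US bills."""
    change = {}
    for d in sorted(denominations, reverse=True):
        count, amount = divmod(amount, d)
        if count:
            change[d] = count
    assert amount == 0, "denominations must include 1 to always succeed"
    return change

print(make_change(187))  # -> {100: 1, 20: 4, 5: 1, 1: 2}
```

    For arbitrary denomination sets greedy can fail to be optimal, which is one reason real currencies stick to canonical ones.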

    Probably the first step should be to find one quantum money scheme with reasonable evidence for its security, that at least provides the same functionalities as ordinary cash! Then we can worry about additional functionalities like the one you ask for.

    [Note: Using the ideas from my CCC paper, it shouldn’t be hard to construct a quantum oracle relative to which a quantum money scheme with the “splitting” and “merging” functionalities exists. The hard part, as usual, is to find an explicit scheme, one that works even with no oracle.]

  53. Scott Says:

    Bram: Survey propagation is basically a local search algorithm; it doesn’t find refutations for unsatisfiable instances. I’m sorry I don’t know the answers to your other questions.
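    The asymmetry here—local search can exhibit a satisfying assignment but can never certify unsatisfiability—is easy to see in a toy solver. The sketch below is WalkSAT-style incomplete search, not survey propagation itself (which uses message passing), but it sits in the same incomplete family:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, seed=0):
    """Toy WalkSAT-style local search over literals in DIMACS convention
    (positive int = variable true, negative = variable false).
    Returns a satisfying assignment if one is found; a timeout returns
    None, which is a 'don't know', never a proof of unsatisfiability."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]
        clause = rng.choice(unsat)
        lit = rng.choice(clause)  # (real WalkSAT picks the flip more cleverly)
        assign[abs(lit)] = not assign[abs(lit)]
    return None

# A small satisfiable 3-SAT instance:
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
print(walksat(clauses, 3))
```

    A complete solver (DPLL/resolution-based) would instead return a refutation on unsatisfiable inputs; that is the functionality survey propagation lacks.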

  54. Bram Cohen Says:

    Hrmph, my reading of survey propagation was way off. Is there an intro to it for non-mathematicians? I can read reference code, but not speak math.

  55. Zack Says:

    It’s true that regular old cash is not splittable, but I observe that cash is falling into disuse (replaced by debit and credit cards) and speculate that not having to deal with change is a major reason for this. Debit and credit cards, of course, do require communication with the bank.

    If quantum banknotes could be split and merged, then they would solve a practical problem with cash (unsplittability) as well as one that’s not really a problem in practice (unforgeability)…

  56. Bobby Says:

    Can someone answer my question from comment #24?

    In short:
    Regarding result #4, how is that different than having the input data encrypted with a public key, and the “computation” being the process of appending a message that encapsulates the computation to the end of the input data?

    I understand that the paper does demonstrate a system with at least one property that the classical system I give above doesn’t have. Excerpted from comment 24:
    I can see that it’s possible that this asymmetric encryption + messages process would be susceptible to “easy” quantum cracking, or that it may be provable that the method given in paper #4 is quicker to decrypt. Possibly that’s all that’s new that the paper is illuminating.

    My question is, does the method discussed in the paper provide anything else that a classical system of concatenating messages listing the operations to be performed wouldn’t provide?
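    The “trivial” classical scheme Bobby describes can be sketched in a few lines (a toy of my own, with an XOR pad standing in for real public-key encryption): evaluation merely appends the requested operation to a log, and all actual work is deferred to decryption. The point of contrast is that here the package and the decryption time grow with the number of operations, which a genuinely homomorphic scheme avoids.

```python
# Toy "append the computation as a message" scheme -- a hypothetical sketch,
# NOT Gentry's construction. XOR with a secret pad stands in for encryption.

SECRET_PAD = 0b1010_1010  # known only to the decryptor

def encrypt(x: int) -> dict:
    return {"ct": x ^ SECRET_PAD, "log": []}

def eval_op(package: dict, op: str, operand: int) -> dict:
    # "Evaluation" does no real work: it just appends the requested
    # operation, so the package grows linearly with the circuit size.
    package["log"].append((op, operand))
    return package

def decrypt(package: dict) -> int:
    # All the actual computation happens here, at decryption time.
    x = package["ct"] ^ SECRET_PAD
    for op, operand in package["log"]:
        x = x + operand if op == "add" else x * operand
    return x

pkg = encrypt(5)
pkg = eval_op(pkg, "add", 3)
pkg = eval_op(pkg, "mul", 2)
print(decrypt(pkg))        # -> 16, i.e. (5 + 3) * 2
print(len(pkg["log"]))     # grows with the number of operations
```

    So Bobby’s question boils down to whether the paper bounds ciphertext size and decryption complexity independently of the evaluated circuit—the property this toy visibly lacks.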