Black-hole complexity theory: from joke to science
One of the most fun arXiv preprints I’ve read in a while:
Black holes as mirrors: quantum information in random subsystems
by Patrick Hayden and John Preskill

Abstract: We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the “half-way” point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole’s information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis.
Many of this paper’s arguments depend on speculative assumptions about quantum gravity, and might very well be wrong. What’s nice is simply that they’re not “not even wrong”! This is an 18-page paper about the Planck-scale dynamics of black hole horizons where I never once found myself wondering what the authors were trying to say, or how their ideas would in principle be tested.
When I used to spout off about the complexity of retrieving information from a black hole as a function of the Schwarzschild radius rS, people assumed I was just fishing for laughs. And they were basically right. But even then, I felt sure that actual physicists would eventually say something real about this question, unapologetically using the language of quantum computing and information. Like Alice with her k qubits, I didn’t have to wait as long as I’d thought.
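The “half-way point” in the abstract is the peak of the Page curve. As a rough numerical illustration (my own sketch, not from the paper, using a Haar-random pure state as a toy model of a fully scrambled black hole plus radiation), the entanglement entropy of a k-qubit “radiation” subsystem climbs until half the qubits have been emitted and falls afterward:

```python
import numpy as np

def subsystem_entropy(n_qubits, k, rng):
    """Entanglement entropy (in bits) of a k-qubit subsystem
    of a Haar-random pure state on n_qubits qubits."""
    dim_a, dim_b = 2**k, 2**(n_qubits - k)
    # Haar-random pure state: normalized complex Gaussian vector,
    # reshaped as a (subsystem A) x (subsystem B) matrix.
    psi = rng.standard_normal((dim_a, dim_b)) + 1j * rng.standard_normal((dim_a, dim_b))
    psi /= np.linalg.norm(psi)
    # Squared Schmidt coefficients (via SVD) give the entanglement spectrum.
    p = np.linalg.svd(psi, compute_uv=False) ** 2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
n = 10
curve = [subsystem_entropy(n, k, rng) for k in range(1, n)]
# Page's result: S(k) is close to min(k, n - k) bits, peaking at k = n/2.
print([round(s, 2) for s in curve])
```

The peak at k = n/2 is the toy-model analogue of the half-way point: before it, the small radiation subsystem is nearly maximally mixed and reveals essentially nothing; after it, the radiation purifies rapidly.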
Comment #1 September 3rd, 2007 at 8:34 am
Many of this paper’s arguments depend on speculative assumptions about quantum gravity, and might very well be wrong. What’s nice is simply that they’re not “not even wrong”! This is an 18-page paper about the Planck-scale dynamics of black hole horizons where I never once found myself wondering what the authors were trying to say, or how their ideas would in principle be tested.
If what Scott said is true (sorry, my physics is not good enough to understand their paper), then it is in the same spirit as some papers about Schrödinger equations that include nonlinear terms, or about quantum mechanics that takes nonlinear effects into account. I don’t know how quantum mechanics could take up nonlinear effects. Their results include that such hybrid nonlinear quantum mechanics would enable us to make a quantum computer that solves NP problems in polynomial time.
Their assumptions may be wrong or even absurd. So, is that so bad? They may stimulate other thoughts that lead to other breakthroughs. I really can’t say it must be so bad!
Comment #2 September 3rd, 2007 at 10:44 am
Hayden is a computer scientist, no?
Comment #3 September 3rd, 2007 at 11:49 am
Hayden is a computer scientist, no?
Yes, honorarily. But he studied physics in undergrad and grad school.
Comment #4 September 3rd, 2007 at 2:20 pm
But he studied physics in undergrad… And math! Which, I’m told, is kind of like CS theory 🙂
Comment #5 September 3rd, 2007 at 3:24 pm
Dave Bacon says: But he studied physics in undergrad. And math! Which I’m told, is kind of like CS theory. 🙂
LOL! Very funny, Dave! It put me in mind of a quote (pretty much everything puts me in mind of a quote).
Here is John von Neumann on the need to periodically reinvigorate mathematics via the reinjection of what von Neumann called “empirical ideas”, which IMHO includes the invigorating Hayden/Preskill idea that a black hole might usefully be regarded as a coding device:
IMHO, the Hayden/Preskill article positively contributes (in an exceptionally enjoyable way) to the conjoined “freshness and vitality” of physics, QIT, and mathematics.
Comment #6 September 3rd, 2007 at 4:54 pm
Hey Scott –
Thanks for the positive comments. This is better than an invitation to appear on Oprah!
Now, as for your kickback, I assume you’re still accepting payment in backdated D-Wave stock options?
Comment #7 September 3rd, 2007 at 7:02 pm
Patrick: Nah, the only kickback I need is for you to keep up the good work (plus cite me whenever possible). The Oprah reference reminds me that I should start a monthly Scott’s Paper Club [note: unaffiliated with the toilet paper brand]. “This paper touched my heart … and it will touch yours as well.”
John: Thanks for the John quote — I don’t think I’d ever seen it in its entirety, and it’s terrific!
Comment #8 September 3rd, 2007 at 8:01 pm
Scott says: Thanks for the John quote — I don’t think I’d ever seen it in its entirety, and it’s terrific!
Thanks Scott … it is indeed a wonderful essay … except (puts on Jack Nicholson voice) … “Entirety? You can’t handle the entirety!”
Here is another passage from the same essay, which was written by von Neumann in 1947. It seems pretty likely that in the following statistics, von Neumann is referring to himself. Because, who else? 🙂
What are the fractions today? I dunno, but I am pretty sure that exponential notation is needed! 🙂
Comment #9 September 3rd, 2007 at 10:39 pm
I’m wondering … Maybe it would be better to be on Oprah. I’ll have to think about it.
Comment #10 September 4th, 2007 at 6:54 am
I think number theory has done alright without any infusion of empirical ideas, for say the last 2000 years.
Comment #11 September 4th, 2007 at 7:11 am
Hasn’t number theory had a shot in the arm from the development of digital computing – e.g. from its relation to algorithmics and cryptography?
Comment #12 September 4th, 2007 at 7:33 am
Johan Richter Says: I think number theory has done alright without any infusion of empirical ideas, for say the last 2000 years.
The historical record seems to indicate that number theory is the most empirical of the mathematical disciplines; hence its enduring vitality. See (e.g.) Polya’s Heuristic Reasoning in the Theory of Numbers.
Purely on the evidence, isn’t it true that number theory is far more of an experimental discipline than string theory? 🙂
Comment #13 September 4th, 2007 at 10:26 am
Will somebody give me a prize if I prove that quantum computing is impossible?
Comment #14 September 4th, 2007 at 11:32 am
At least for the Grover algorithm, in the gate model.
Comment #15 September 4th, 2007 at 11:46 am
Yes, I will give you a prize of $2 if you prove that quantum computing is impossible.
Comment #16 September 4th, 2007 at 12:07 pm
That’s pretty funny, but aren’t those LOL expressions supposed to be superimposed on a picture of (Schrödinger’s) cat?
Comment #17 September 4th, 2007 at 12:38 pm
csrster, even if that were the case it still would be 2000 years without any input from the empirical sciences. And I don’t think it is. As far as I can tell, cryptography uses number-theory results but hasn’t given any ideas in return. And algorithmic number theory is a fairly peripheral branch of number theory. (I’m not saying it should be, just that it is.)
Comment #18 September 4th, 2007 at 12:50 pm
By reading this article
http://www.ddj.com/architect/184404559
I find that Lov Grover claims that amplitudes have magnitudes equal to the square root of probabilities, and that this is because amplitudes interfere constructively and destructively. Grover says that if we want to mark one of the searched states, then we must change the sign of its amplitude; in other words, we have to change the phase of the amplitude (a phase shift). So to mark a state we need to change its phase. How can we change a phase? By rotating a qubit… And we do this rotation (the phase change) with a classical computer.
Can a classical computer rotate a phase with 100% precision? NO! This is impossible, because phase is analog. So we can rotate a phase (change the sign of a qubit) only with some precision below 100% (for example 99.9% precision). So there is now an error in the quantum computation whenever we change a phase. These errors interfere constructively and destructively! So if your quantum computer has 1000 qubits and the error rate is 0.1 (10%), then the square root of 0.1 is about 0.316 (31.6%). Errors grow quadratically with the number of qubits! This is why you will never build a useful quantum computer that uses Grover’s algorithm. If you try to “cure” the errors with error correction (as described by Peter Shor), then you must increase the number of qubits 5-10 times, and the errors then grow 25-100 times. This is why nothing can help a quantum computer running Grover’s algorithm.
So do I receive my $2?
Comment #19 September 4th, 2007 at 1:43 pm
Nope. Threshold theorem: look it up. If the error rate is less than some threshold, error correction is a win.
Better luck next time.
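For what it’s worth, the specific claim is easy to test numerically. A minimal sketch (my own, not from any of the cited papers): run Grover’s algorithm with an oracle that over-rotates the marked state’s phase by a tiny amount instead of applying an exact sign flip, and compare the success probability with the exact case:

```python
import numpy as np

def grover_success(n_qubits, marked, phase_error_deg=0.0):
    """Run Grover's algorithm on a state vector and return the
    probability of measuring the marked item. The oracle applies
    exp(i*(pi + error)) instead of an exact sign flip (-1)."""
    dim = 2**n_qubits
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform superposition
    oracle_phase = np.exp(1j * (np.pi + np.deg2rad(phase_error_deg)))
    iterations = int(np.floor(np.pi / 4 * np.sqrt(dim)))
    for _ in range(iterations):
        psi[marked] *= oracle_phase          # imprecise oracle
        psi = 2 * psi.mean() - psi           # diffusion operator: 2|s><s| - I
    return abs(psi[marked])**2

exact = grover_success(6, marked=17)
noisy = grover_success(6, marked=17, phase_error_deg=0.0001)
print(exact, noisy)
```

Both probabilities come out above 0.99, and the difference from a 0.0001-degree phase error is far below anything measurable: small coherent over-rotations accumulate at most linearly in the number of iterations, they do not compound quadratically per qubit.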
Comment #20 September 4th, 2007 at 2:02 pm
Is error correction analyzed under the assumption that the error rate grows quadratically, or that it grows linearly?
Because in my opinion the error rate (noise) grows quadratically with the number of qubits. And what do you say: does the error rate grow linearly or quadratically with the number N of searched states?
Comment #21 September 4th, 2007 at 2:52 pm
John, a textbook that IMHO you ought to read is Michael Nielsen and Isaac Chuang’s Quantum Computation and Quantum Information.
If you read it carefully, this textbook will answer most of the questions that you are asking.
But don’t worry! This fine textbook will inspire in you a large number of even better questions! 🙂
Comment #22 September 5th, 2007 at 5:19 am
Now I will correct my previous proof a little. Suppose your quantum computer with 1000 qubits is very far (10^10000000000000000000000 light-years) from the center of the Universe. In this way there can be no decoherence, because no photon has flown that far since the beginning of the Big Bang. So (suppose) you don’t need error correction for your quantum computer running Grover’s algorithm (because there is no noise and no decoherence).
So you start running your quantum computer, and you now want to mark some state with a negative amplitude (sign); you want to do a phase shift of 180 degrees. So what do you then have to do? You perform this 180-degree phase shift with a CLASSICAL computer. But because the nature of phase is analog, your classical computer makes a small error when doing the phase shift. Say you change the phase with 179.9999-degree precision. Because you can’t change the phase by exactly 180 degrees, errors rapidly occur in the quantum computation. After the first Grover iteration the error rate increases quadratically, and so on. And, what is worse, you cannot even repair these errors. This is a different type of error from the ones that come from noise and decoherence. These errors you can’t “cure”, and they grow quadratically… And I repeat one more time: these errors occur because you can’t do a phase shift with 100% precision.
Comment #23 September 5th, 2007 at 5:37 am
Please study the threshold theorem before continuing to make a fool of yourself.
Comment #24 September 5th, 2007 at 8:08 am
Scott: A kickback would have been easier, but I’ll see what I can do on the research front. No guarantees.
John S.: Great von Neumann quote. I think the jury will be out for a long time on whether our ideas are “empirical” or merely “aesthetic” but I appreciate your vote for the former!
Comment #25 September 5th, 2007 at 8:54 am
Scott,
I think the new blog arrangements are just fine. Registering was very easy, the ads are not too intrusive and the layout seems clear enough. As always, the content will be what matters 🙂
Comment #26 September 5th, 2007 at 9:02 am
Threshold theorem:
“In particular, quantum computers would resemble analog rather than digital devices if the number of bits of precision scaled as an inverse polynomial in L, since then physical parameters such as rotation angles or pulse lengths determining the accuracy of each quantum gate would have to be specified to precision that increased as an exponential in L.” ( http://arxiv.org/pdf/quant-ph/0703230 )
I don’t understand. What increases? Do the errors increase exponentially, or does the good answer?
If the errors increase exponentially, then that is what I’m saying. And if the errors increase exponentially (or quadratically), then how can you correct them? It is impossible! Then quantum computing is impossible!
Comment #27 September 5th, 2007 at 10:32 am
“In any physical implementation of a quantum computation, the elementary quantum gates will be executed with some limited accuracy; e.g., the rotation angle in a beam splitter or the length of a pulse can be specified only to some finite precision. In order for the quantum circuit model to be realistic, it is important to show that any desired accuracy for the computation outcome can be achieved if the accuracy of each gate in the quantum circuit scales at most as an inverse polynomial in the size, L, of the computation. Equivalently, the number of bits of precision specifying the matrix elements of each quantum gate will need to scale “slowly” as at most an inverse polynomial in the logarithm of L. In particular, quantum computers would resemble analog rather than digital devices if the number of bits of precision scaled as an inverse polynomial in L, since then physical parameters such as rotation angles or pulse lengths determining the accuracy of each quantum gate would have to be specified to precision that increased as an exponential in L.”
Loosely speaking, do the errors increase exponentially in L, increase linearly, or decrease exponentially?
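The standard answer is that gate errors accumulate at most linearly in the number of gates, which is exactly why a per-gate accuracy scaling as an inverse polynomial in L suffices. A toy check of that bound (my own sketch, using commuting single-axis rotations, where the linear bound is essentially tight):

```python
import numpy as np

def rotation(theta):
    """2x2 real rotation matrix by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

L, theta, delta = 1000, 0.3, 1e-5  # L gates, each over-rotated by delta radians

exact = np.linalg.matrix_power(rotation(theta), L)
noisy = np.linalg.matrix_power(rotation(theta + delta), L)

# Operator-norm (spectral-norm) distance between ideal and noisy circuits.
err = np.linalg.norm(noisy - exact, 2)
print(err)
```

The distance comes out at about L * delta = 0.01: the standard telescoping bound ||U_L…U_1 − V_L…V_1|| ≤ Σ ||U_i − V_i|| guarantees linear (not exponential, and not quadratic-per-qubit) growth, and that is the regime in which the threshold theorem applies.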
Comment #28 September 9th, 2007 at 7:17 am
OK, it seems that such errors decrease exponentially with the number of qubits…
Just a thought: does there exist anywhere in Nature a set of entangled qubits in a superposition with 2^1000 states? I think NO. Then my thought is that if you want to create a superposition of 2^1000 states, you need an exponential amount of energy. In an optical implementation this energy is taken from the beam splitters; each beam splitter doubles the amount of energy, taking it from the beam splitter’s atoms.