Archive for the ‘Metaphysical Spouting’ Category

Quantum Computing Since Democritus Lecture 10.5: Penrose

Thursday, March 15th, 2007

You’ve eaten your polynomial-time meatloaf and your BQP brussels sprouts. So now please enjoy a special dessert lecture, which I didn’t even deliver in class except as a brief coda to Lecture 10. Watch me squander any remaining credibility, as I pontificate about Roger Penrose’s Gödel argument, strong AI, the No-Cloning Theorem, and whether or not the brain is a quantum computer. So gravitationally collapse your microtubules to the basis state |fun〉, because even a Turing machine could assent to the proposition that you’re in for a wild ride!

(Important Note: If you belong to a computer science department hiring committee, there is nothing whatsoever in this lecture that could possibly interest you.)

Long-awaited God post

Tuesday, January 16th, 2007

This morning, a reader named Bill emailed me the following:

I stumbled upon [Quantum Computing Since Democritus Lecture 9] by accident and it seemed quite interesting but I was ultimately put off (I stopped reading it) by all the references to god. As a scientist (and athiest) I think personal religious beliefs should be left out of scientific papers/lectures, you shouldn’t assume your readers/listeners have the same beliefs as yourself…..it just alienates them.

Dear Bill,

I’m impressed — you seem to know more about my personal religious beliefs than I do! If you’d asked, I would’ve told you that I, like yourself, am what most people would call a disbelieving atheist infidel heretic. I became one around age fourteen, shortly after my bar mitzvah, and have remained one ever since.

Admittedly, though, “atheist” isn’t exactly the right word for me, nor even is “agnostic.” I don’t have any stance toward the question of God’s existence or nonexistence that involves the concept of belief. For me, beliefs are for things that might eventually have some sort of observable consequence for someone. So for example, I believe P is different from NP. I believe I’d like some delicious Peanut Chews today. I believe the weather this January isn’t normal for planet Earth over the last 10,000 years, and that we and our Ford Escorts are not entirely unimplicated. I believe eating babies and voting for Republicans is wrong. I believe neo-Darwinism and the SU(3)xSU(2)xU(1) Standard Model (though not its supersymmetric extensions, at least until I see the evidence). I believe that if the God of prayer couldn’t get off His lazy ass during the Holocaust, or the Rwandan or Cambodian genocides, then He must not be planning to do so anytime soon — and hence, “trusting in faith” is utter futility.

But when it comes to the more ethereal questions — the nature of consciousness and free will, the resolution of the quantum measurement problem, the validity of the cosmological anthropic principle or the Continuum Hypothesis, the existence of some sort of intentionality behind the laws of physics, etc. — I don’t have any beliefs whatsoever. I’m not even unsure about these questions, in the same Bayesian sense that I’m unsure about next week’s Dow Jones average (or for that matter, this week’s Dow Jones average). All I have regarding the metaphysical questions is a long list of arguments and counterarguments — together with a vague hope that someone, someday, will manage to clarify what the questions even mean.

To me, the most remarkable thing you said was that, despite being otherwise interested in my lecture, you literally stopped reading it because of some tongue-in-cheek references to an Einsteinian God. That reminds me of a funny story. When I was a student at Berkeley, my mom kept pestering me to go to the campus Hillel for Friday night dinners. And to be honest, despite all the pestering, I was tempted to go. My temptation was largely driven by two factors that, for want of more refined terminology, I will call “free food” and “females.” For some reason, both factors, but particularly the second, were in short supply in the computer science department.

And yet, I couldn’t bring myself to go. Every time I passed the Hillel, I had this vision of a translucent Richard Dawkins (sometimes joined by Bertrand Russell) floating before me on the front steps, demanding that I justify the absurd Bronze Age myths that, by entering the Hillel building, I would implicitly be endorsing. “Come now, Scott,” Richard and Bertrand would say, with their elegant Oxbridge accents. “You don’t really believe that tosh, do you?”

“No, most assuredly not, good Sirs,” I would reply, and shuffle back to the dorm to work on my problem set. (The thought of spending Friday night at, say, a beer party never even occurred to me.)

Then, one Friday, I had a revelation: if God doesn’t exist, then in particular, He doesn’t give a shit where I go tonight. There’s no vengeful sky-Dawkins, measuring my every word and deed against some cosmic code of atheism. There’s no Secular-Humanist Yahweh who commanded His infidel flock at Sci-nai not to believe in Him. So if I want to go to the Hillel, then as long as I’m not hurting anyone or lying about my beliefs, I should go. If I don’t want to go, I shouldn’t go. To do otherwise wouldn’t merely be silly; it would actually be irrational.

(Incidentally, once I went, I found that the other secularists there greatly outnumbered the believers. I did stop going after a year or two, but only because I’d gotten bored with it.)

What I’m trying to say, Bill, is this: you can go ahead and indulge yourself. If some of the most brilliant unbelievers in history — Einstein, Erdős, Twain — could refer to a being of dubious ontological status as they would to a smelly old uncle, then why not the rest of us? For me, the whole point of scientific rationalism is that you’re free to ask any question, debate any argument, read anything that interests you, use whatever phrase most colorfully conveys your meaning, all without having to worry about violating some taboo. You won’t endanger your immortal soul, since you don’t have one.

If the trouble is just that the G-word leaves a bad taste in your mouth, then I invite you to try the following experiment. Every time you encounter the word “God” in my lecture, mentally substitute “Flying Spaghetti Monster.” So for example: “why would the Flying Spaghetti Monster, praise be to His infinite noodly appendages, have made the quantum-mechanical amplitudes complex numbers instead of reals or quaternions?”

Well, why would He? Any ideas?

RAmen, and may angel-hair watch over you,
Scott

Quantum Computing Since Democritus Lecture 4: Minds and Machines

Tuesday, October 3rd, 2006

Bigger, longer, wackier. The topic: “Minds and Machines.”

Quantum Computing Since Democritus Lecture 3: Gödel, Turing, and Friends

Wednesday, September 27th, 2006

Gödel, Turing, and Friends. Another whole course compressed into one handwaving lecture. (This will be a recurring theme.)

Quantum Computing Since Democritus Lecture 2: Sets

Tuesday, September 19th, 2006

Cardinals, ordinals, and more. A whole math course compressed into one handwaving lecture, and a piping-hot story that’s only a century old.

PHYS771 Quantum Computing Since Democritus

Thursday, September 14th, 2006

That, for better or worse, is the name of a course I’m teaching this semester at the University of Waterloo. I’m going to post all of the lecture notes online, so that you too can enjoy an e-learning cyber-experience in my virtual classroom, even if you live as far away as Toronto. I’ve already posted Lecture 1, “Atoms and the Void.” Coming up next: Lecture 2.

A Euclidean theater of misery

Monday, February 13th, 2006

As winner of the Best Umeshism Contest (remember that?), Peter Brooke earned the right to ask me any question and have me answer it on this blog. Without further ado, here is Peter’s question:

If it is assumed that God exists, what further, reasonable, conclusions can be made, or is that where logical inquiry must end? Reasonable means in the light and inclusive of present scientific understanding. Defend any assumptions and conclusions you make.

At least Peter was kind enough not to spring “Is there a God?” on me. Instead, like a true complexity theorist, he asks what consequences follow if God’s existence is assumed.

Alas, Peter didn’t say which God he has in mind. If it were Allah, or Adonai, or Zeus, or the Flying Spaghetti Monster, then I’d simply refer Peter to the requisite book (or in the case of the Spaghetti Monster, website) and be done. As it is, though, I can’t assume anything about God, except that

  1. He exists,
  2. He created the universe (if He didn’t, then it’s not He we’re talking about), and
  3. He’s a He.

(Note for Miss HT Psych: the third assumption is a joke.)

So the only way I see to proceed is to start from known facts, and then ask what sort of God would be compatible with those facts. Though others might make different choices, the following facts seem particularly relevant to me.

  • About 700,000 children each year die of malaria, which can easily be prevented by such means as mosquito nets and the spraying of DDT. That number will almost certainly grow as global warming increases the mosquitoes’ range. As with most diseases, praying to God doesn’t seem to lower one’s susceptibility or improve one’s prognosis.
  • According to our best theories of the physical world, it’s not enough to talk about the probability of some future event happening. Instead you have to talk about the amplitude, which could be positive, negative, or even complex. To find the probability of a system ending up in some state, first you add the amplitudes for all the ways the system “could” reach that state. Then you take the absolute value of the sum, and lastly you take the square of the absolute value. For example, if a photon could reach a detector one way with amplitude i/2, and another way with amplitude -i/2, then the probability of it reaching the detector is |i/2 + (-i/2)|² = 0. In other words, it never reaches the detector, since the two ways it could have reached it “interfere destructively” and cancel each other out. If we required the amplitudes to be positive or negative reals rather than complex numbers, there would be some subtle differences — for example, we could just square to get probabilities, instead of taking the absolute value first. But in most respects the story would be the same.
  • From 1942 to 1945, over a million men, women, and children died in one of four extermination complexes at Birkenau, or “Auschwitz II” (Auschwitz I was the smaller labor camp). Each complex could process about 2,500 prisoners at a time. The prisoners were ordered to strip and leave their belongings in a place where they could find them later. They were then led to an adjacent “shower room,” containing shower heads that were never connected to any water supply. Once they were locked inside, guards dropped pellets from small openings in the ceiling or walls. The pellets contained Zyklon B, a cyanide-based pesticide invented in the 1920s by the German Jewish chemist Fritz Haber. The guards then waited for the screams to stop, which took 3-15 minutes, depending on humidity and other factors. Finally, Sonderkommandos (prisoners who were sent to the gas chambers themselves at regular intervals) disposed of the bodies in the adjacent crematoria. With the arrival of 438,000 Hungarian Jews in 1944, the crematoria could no longer keep up, so the bodies were burned in open pits instead. Besides those killed at Auschwitz, another 1.6 million were killed at the four other death camps (Sobibor, Belzec, Treblinka, and Chelmno). In the USSR and Poland, another 1.4 million were shot in front of outdoor pits by the Einsatzgruppen; still others died through forced starvation and other means. Judged on its own terms, the extermination program was a spectacular success: it wiped out at least 2/3 of Russian and European Jewry and changed the demography of Europe. The Americans and British declined numerous opportunities to take in refugees, or to bomb the camps or the train tracks leading to them. Most of the perpetrators, except for a few top ones, returned to civilian life afterward and never faced trial. Millions of people today remain committed to the goal of a Judenrein planet; some, like my friend Mahmoud, are working to acquire nuclear weapons.
  • According to our best description of space and time, the faster an object is moving relative to you, the shorter that object will look in its direction of motion, and the slower time will pass for it as observed by you. In particular, if the object is moving at a fraction f of the speed of light, then it will contract, and time will slow down for it, by a factor of 1/(1-f²)^(1/2). This does not mean, as some people think, that concepts like “distance” have no observer-independent meaning — only that we were using the wrong definition of distance. In particular, suppose an observer judges two events to happen r light-years apart in space and t years apart in time. Then the interval between the events, defined as r²-t², is something that all other observers will agree on, even if they disagree about r and t themselves. The interval can also be defined as r²+(it)²: in other words, as the squared Euclidean distance in spacetime between the events, provided we reinterpret time as an imaginary coordinate. (This is known as “Wick rotation.”)
  • When I was younger, my brother and I went to an orthodontist named Jon Kraut. Dr. Kraut was a jovial guy, who often saw me on weekends when I was home from college even though his office was officially closed. He was also an aviation enthusiast and licensed pilot. About a week ago, Kraut was flying a twin-engine plane to South Carolina with his wife, Robin, their three kids (ages 2, 6, and 8), and the kids’ babysitter. Kraut reported to the control tower that he was having problems with his left engine. The plane made one approach to the airport and was coming back to try to land again when it crashed short of the runway, killing the whole family along with the babysitter. On the scale of history, this wasn’t a remarkable event; I only mention it because I knew and liked some of the victims.
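Two of the bullets above make quantitative claims: that amplitudes i/2 and -i/2 cancel to give probability zero, and that the interval between two events is the same in every frame even though the separate space and time coordinates are not. Both can be checked numerically; here is a minimal sketch (the separation of 5 light-years and 3 years, and the boost speed 0.6c, are illustrative values of my choosing, not from the post):

```python
# Check 1: quantum interference. Amplitudes add; probability = |sum|^2.
# The two paths to the detector have amplitudes i/2 and -i/2.
amp = 0.5j + (-0.5j)
print(abs(amp) ** 2)  # 0.0 -- destructive interference: the photon never arrives

# Check 2: invariance of the interval r^2 - t^2 (units where c = 1).
def boost(r, t, f):
    """View the same pair of events from a frame moving along the
    separation axis at a fraction f of the speed of light."""
    gamma = 1.0 / (1.0 - f ** 2) ** 0.5
    return gamma * (r - f * t), gamma * (t - f * r)

r, t = 5.0, 3.0            # 5 light-years apart in space, 3 years apart in time
rb, tb = boost(r, t, 0.6)  # the same events, seen from a frame moving at 0.6c
print(r ** 2 - t ** 2)     # 16.0
print(rb ** 2 - tb ** 2)   # 16.0 -- r and t each change, but the interval doesn't
```

In the boosted frame the events are 4 light-years apart and simultaneous, yet the interval comes out to 16 either way, which is exactly the point of the bullet.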

Now, based on the facts above, plus many others I didn’t mention, and “in the light … of present scientific understanding,” what can we say about God, assuming He exists? I think we can say the following.

First, that He’s created Himself a vale of tears, a theater of misery beyond the imagination of any horror writer. That He’s either unaware of all the undeserved suffering He’s wrought, or else unable or unwilling to prevent it. That in times of greatest need, He’s nowhere to be found. That He doesn’t answer the prayers of the afflicted, or punish evildoers in any discernible way. That He most likely doesn’t intervene in human affairs at all — though I wouldn’t want to argue with those who say He does intervene, but only for the worse.

Second, that He apparently prefers complex numbers to real numbers, and the L2 norm to the L1 norm.

Dude, it’s like you read my mind

Friday, November 11th, 2005

Newcomb’s Problem, for those of you with social lives, is this. A superintelligent “Predictor” puts two opaque boxes on a table. The first contains either $1,000,000 or nothing, while the second contains $1,000. You have a choice: you can either open the first box or both boxes. Either way, you get to keep whatever you find.

But (duhhh…) there’s a catch: the Predictor has already predicted what you’ll do. If he predicted you’ll open both boxes, then he left the first box empty; if he predicted you’ll open the first box only, then he put $1,000,000 in the first box. Furthermore, the Predictor has played this game hundreds of times before, with you and other people, and has never once been wrong.

So what do you do? As Robert Nozick wrote, in a famous 1969 paper:

“To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

Actually, people confronted with Newcomb’s Problem tend to split into three camps: the one-boxers, the two-boxers, and the Wittgensteins.

The one-boxers figure they might as well trust the Predictor: after all, he’s never been wrong. According to the prediction, if you open the first box you’ll get $1,000,000, while if you open both you’ll only get $1,000. So it’s a no-brainer: you should open only the first box.

“But that’s stupid!” say the two-boxers. “By the time you’re making the choice, the $1,000,000 is either in the first box or it isn’t. Your choice can’t possibly change the past. And whatever you’d get by opening the first box, you’ll get $1,000 more by opening both. So obviously you should open both boxes.”

(Incidentally, don’t imagine you can wiggle out of this by basing your decision on a coin flip! For suppose the Predictor predicts you’ll open only the first box with probability p. Then he’ll put the $1,000,000 in that box with the same probability p. So your expected payoff is 1,000,000p² + 1,001,000p(1-p) + 1,000(1-p)² = 1,000,000p + 1,000(1-p), and you’re stuck with the same paradox as before.)
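As a sanity check on the algebra in that parenthetical, here's a quick numerical sketch (the function name is mine, not from the post). Treating your choice and the Predictor's stocking decision as independent events, each with probability p, the quadratic expression really does collapse to the linear mix of the two pure-strategy payoffs, so randomizing buys you nothing:

```python
def expected_payoff(p):
    # one-box and stocked:  probability p * p,             payoff $1,000,000
    # two-box and stocked:  probability (1 - p) * p,       payoff $1,001,000
    # two-box and empty:    probability (1 - p) * (1 - p), payoff $1,000
    # one-box and empty:    probability p * (1 - p),       payoff $0
    return (1_000_000 * p * p
            + 1_001_000 * (1 - p) * p
            + 1_000 * (1 - p) ** 2)

# The p^2 terms cancel, leaving 1,000,000p + 1,000(1-p) for every p.
for p in (0.0, 0.3, 0.5, 0.7, 1.0):
    assert abs(expected_payoff(p) - (1_000_000 * p + 1_000 * (1 - p))) < 1e-6
print("identity holds: the coin flip gains nothing over a pure strategy")
```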

The Wittgensteins take a third, boring way out. “The whole setup is contradictory!” they say. “It’s like asking what happens if an irresistible force hits an immovable object. If the ‘Predictor’ actually existed, then you wouldn’t have free will, so you wouldn’t be making a choice to begin with. Your very choice implies that the Predictor can’t exist.”

I myself once belonged to the Wittgenstein camp. Recently, however, I came up with a new solution to Newcomb’s Problem — one that I don’t think has ever been discussed in the literature. (Please correct me if I’m wrong.) As I see it, my solution lets me be an intellectually-fulfilled one-boxer: someone who can pocket the $1,000,000, yet still believe the future doesn’t affect the past. I was going to write up my solution for a philosophy journal, but what fun is that? Instead, I hereby offer it for the enlightenment and edification of Shtetl-Optimized readers.

We’ll start with a definition:

“You” are anything that suffices to predict your future behavior.

I know this definition seems circular, but it has an important consequence: that if some external entity could predict your future behavior as well as you could, then we’d have to regard that entity as “instantiating” another copy of you. In other words, just as a perfect simulation of multiplication is multiplication, I’m asserting that a perfect simulation of you is you.

Now imagine you’re standing in front of the boxes, agonizing over what to do. As the minutes pass, your mind wanders:

I wonder what the Predictor thinks I’ll decide? “Predictor”! What a pompous asshole. Thinks he knows me better than I do. He’s like that idiot counselor at Camp Kirkville — what was his name again? Andrew. I can still hear his patronizing voice: “You may not believe me now, but someday you’ll realize you were wrong to hide those candy bars under the bed. And I don’t care if you hate the cafeteria food! What about the other kids, who don’t have candy bars? Didn’t you ever think of them?” Well, you know what, Predictor? Let’s see how well you can track my thoughts. Opening only one box would be rather odd, wouldn’t you say? Camp Kirkville, Andrew, candy bar – that’s 27 letters in total. An odd number. So then that settles it: one box.

What’s my point? That reliably predicting whether you’ll take one or both boxes is “you-complete,” in the sense that anyone who can do it should be able to predict anything else about you as well. So by definition, the Predictor must be running a simulation of you so detailed that it’s literally a copy of you. But in that case, how can you possibly know whether you’re the “real” you, or a simulated version running inside the Predictor’s mind?

“But that’s silly!” you interject. “Here, I’ll prove I’m the ‘real’ me by pinching myself!” But of course, your simulated doppelganger says and does exactly the same thing. Let’s face it: the two of you are like IP and PSPACE, water and H2O, Mark Twain and Samuel Clemens.

If you accept that, then the optimal strategy is clear: open the first box only. Sure, you could make an extra $1,000 by opening both boxes if you didn’t lead a double life inside the Predictor’s head, but you do. That, and not “backwards-in-time causation,” is what explains how your decision can affect whether or not there’s $1,000,000 in the first box.

An important point about my solution is that it completely sidesteps the “mystery” of free will and determinism, in much the same way that an NP-completeness proof sidesteps the mystery of P versus NP. What I mean is that, while it is mysterious how your “free will” could influence the output of the Predictor’s simulation, it doesn’t seem more mysterious than how your free will could influence the output of your own brain! It’s six of one, half a dozen of the other. Or at least, that’s what the neural firings in my own brain have inexorably led me to believe.