Archive for the ‘Nerd Interest’ Category

What every math talk should be like

Friday, October 19th, 2007

Watch a sphere get turned inside out with no cuts or creases. Hat tip: John Baez.

On Deutsches, Shors, and Vaziranis

Friday, September 28th, 2007

My friend Robin Hanson kvetches that scientific contrarians don’t get no respect: even if they ultimately turn out to be right, it’s the more cautious, Johnny-come-lately “conservative radicals” who get the lion’s share of the credit. (Like many of Robin’s posts, this one is written in a purely descriptive mode that nevertheless leaves no doubt where his sympathies lie.) And so, as a firmly-entrenched pillar of the hidebound scientific establishment, I thought I’d tell you our side of the story.

In the US, there are companies whose business it is to patent (or buy patents for) every obvious idea they can think of, then sit around and do nothing with those ideas, wait for some other company to build a successful business around one of them, and sue that company for patent infringement. (The textbook example is NTP’s lawsuit against Research In Motion.)

In science, one occasionally sees the intellectual equivalent of these patent-holding companies: people who publish one flaky idea after another with no data or calculations to back them up; and then if, after years of painstaking research, one of their speculations turns out to be right (or even 10% right), scream “they stole my idea!”

But in my experience — and happily for all concerned — the truth usually lies between this extreme and the opposite extreme described in Robin’s post. To illustrate, let’s consider the example of Robin’s that I know best: that of David Deutsch and quantum computing.

Unlike the patent-holding firms, David Deutsch really was a scientific pioneer, thinking deeply about quantum physics and the Church-Turing Thesis back when basically no one else was. His philosophical insights led him to define the quantum Turing machine model, prove its universality, and realize it might have implications for complexity theory. But his one concrete example of a quantum algorithm — how shall I say? — sucked. In particular, he gave an algorithm to compute the XOR of two bits (and know that one has done so) using one quantum query and with success probability 1/2. (Later it was realized that success probability 1 is achievable, but that’s still only a factor-2 speedup compared to classical computers.) If this were all you’d seen of quantum computing, you would rightly file it away with dozens of other promising ideas that hadn’t led anywhere.
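For the curious, here’s a minimal numpy sketch of the later, success-probability-1 version of Deutsch’s algorithm (not his original probability-1/2 one). The function name `deutsch` and the two-qubit encoding are my own choices; the point is just that a single oracle query suffices to learn f(0) XOR f(1):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def deutsch(f):
    """Decide f(0) XOR f(1) with a single query to the oracle for f."""
    # Oracle U_f: |x, y> -> |x, y XOR f(x)>, as a 4x4 permutation matrix.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.zeros(4)
    state[0b01] = 1                     # start in |0>|1>
    state = np.kron(H, H) @ state       # Hadamard both qubits
    state = U @ state                   # the one and only oracle query
    state = np.kron(H, I2) @ state      # Hadamard the query qubit
    # First qubit measures 1 with certainty iff f(0) != f(1).
    p1 = state[2] ** 2 + state[3] ** 2
    return int(round(p1))

assert deutsch(lambda x: 0) == 0    # constant f: XOR is 0
assert deutsch(lambda x: x) == 1    # balanced f: XOR is 1
```

Classically, computing the XOR requires querying f twice; the phase-kickback trick above does it in one.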

Unless, that is, you were Ethan Bernstein and Umesh Vazirani. These “conservative radicals” from Berkeley decided to put quantum computing under the microscope of theoretical computer science. The result of their labor — besides a bounteous harvest of complexity theorems like BPP ⊆ BQP ⊆ P^#P — was the first example of a black-box problem for which quantum computers gave a superpolynomial speedup over classical randomized ones. Shortly afterward, another conservative, Dan Simon, set out to prove that the speedup of quantum computing was illusory — and ended up with strong evidence (now called Simon’s algorithm) for exactly the opposite conclusion. A year later, yet another conservative — an expert on combinatorics and discrete geometry by the name of Peter Shor — took a close look at Simon’s algorithm, and realized that if you changed the underlying group from (Z_2)^n to the cyclic group Z_N, then you could efficiently compute the period of a black-box function, and thereby factor integers, and thereby break the RSA cryptosystem, and thereby change the world.
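The classical skeleton of Shor’s reduction — from factoring to period-finding — fits in a few lines. In the sketch below (names illustrative), a brute-force, exponential-time `period` stands in for the one step Shor’s quantum algorithm performs in polynomial time:

```python
from math import gcd
from random import randrange

def period(a, N):
    """Smallest r > 0 with a^r = 1 mod N. Brute force here;
    this is the step Shor does in quantum polynomial time."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    """Find a nontrivial factor of (odd, composite, non-prime-power) N
    via the period-finding reduction."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                          # lucky: a already shares a factor
        r = period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            # a^(r/2) is a nontrivial square root of 1 mod N,
            # so gcd(a^(r/2) - 1, N) splits N.
            return gcd(pow(a, r // 2, N) - 1, N)

assert shor_factor(15) in (3, 5)
```

Everything above runs in classical polynomial time except `period`; replace that one call with the quantum Fourier transform routine and you have Shor’s algorithm.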

A Hansonian might downplay these later achievements — arguing that, were it not for Shor, some other “mainstream mathematician” (a strange description of him!) would’ve sooner or later discovered the factoring algorithm. But it’s equally true that, were it not for Deutsch, some other “renegade physicist” would have come up with quantum Turing machines (and indeed Feynman and Benioff were close). My own judgment is that Deutsch and Shor both made creative scientific contributions of the highest order, and are both deservedly celebrated for them. Indeed, if anyone gets short shrift in the usual popular accounts, I think it’s the people in between — like Bernstein, Vazirani, and Simon.

So yes, let’s remember the first person who struck gold, but also the first to realize it wasn’t fool’s gold and the first to figure out how to mine it. Science is a big place; there’s plenty of room for Deutsches, Shors, and even a Vazirani or two.

Checkers solved

Friday, July 20th, 2007

From Science and NYT.

FOCS’36 notification

Wednesday, July 4th, 2007

Dear Mr. Turing,

We regret to inform you that your submission

"On Computable Numbers, With an Application to the Entscheidungsproblem"

was not accepted to appear in FOCS 1936. The Program Committee received a record 4 submissions this year, many of them of high quality, and scheduling constraints unfortunately made it impossible to accept all of them.

Below please find some reviews on your submission. The reviews are *not* intended as an explanation for why your paper was rejected. This decision depended on many factors, including discussions at the PC meeting and competition from other papers.

Best wishes,
FOCS 1936 Program Committee

---------------------------------------- review 1 ----------------------------------------

seems like a trivial modification of godel's result from STOC'31

---------------------------------------- review 2 ----------------------------------------

The author shows that Hilbert's Entscheidungsproblem (given a mathematical statement, decide whether it admits a formal proof) is unsolvable by any finite means. While this seems like an important result, I have several concerns/criticisms:

1. The author defines a new "Turing machine" model for the specific purpose of proving his result. This model was not defined in any previous papers; thus, the motivation is unclear.

2. I doubt Hilbert's goal of "automating mathematical thought" was ever really taken seriously by anyone (including Hilbert himself). Given this, the negative result comes as no surprise -- a positive result would have been much more interesting.

3. It's hard to find any technical "meat" in this paper. Once the author sets up the problem, the main result follows immediately by a standard diagonalization argument.

4. The whole philosophical discussion in Section 9, about what it means to compute something, is out of place (even slightly embarrassing) and should be deleted entirely.

Summary: While this paper deserves to be published somewhere -- SODA? ICALP? FSTTCS? -- it certainly isn't FOCS caliber.

---------------------------------------- review 3 ----------------------------------------

merge with alonzo church's submission?

---------------------------------------- review 4 ----------------------------------------

while i agree with the other reviewers' concerns about triviality, i confess to liking this paper anyway. one reason is that, along the way to the main result, the author proves a lemma stating that there exists a "universal machine" (a machine able to simulate any other machine given a suitable choice of input). the claim that this lemma could have "practical" applications is clearly exaggerated -- but even so, it seems like it could be a useful ingredient for other results.

Recommendation: Borderline Accept.

America’s nerdiest cities

Sunday, June 24th, 2007

From Money Magazine, a list of American cities with the highest percentage of residents with graduate degrees. Cambridge, MA (26.3%) narrowly edges out Palo Alto, CA (25.4%) and Berkeley, CA (24.5%) — but beating them both by a long shot is Arlington, VA, the winner at 35.7%. It takes a much more educated crowd to unify Iraq and 9/11 than to unify relativity and quantum mechanics.

The groupies of science

Tuesday, June 5th, 2007

A friend sent me this Stanford Daily article about the strange tale of Elizabeth Okazaki, who

[f]or the last four years … has attended graduate physics seminars, used the offices reserved for doctoral and post-doctoral physics students and for all intents and purposes made the Varian Physics Lab her home. The only problem is that Okazaki appears to have no affiliation with Stanford and, according to physics professors and students, no real reason to be there.

The article quotes two people I know: Lenny Susskind (“as far as I can tell, she has a very limited knowledge of physics itself”) and Alessandro Tomasiello (“I feel really bad for her … I don’t want to have a conversation with her that will actually hurt her”). From both the article and the many impassioned comments, it’s clear that opinions in the physics department were mixed. Of course, by now Stanford has predictably reacted by banning Okazaki from campus.

Here’s the thing: while Okazaki is admittedly an extreme case, she reminds me of people I’ve known throughout my academic career. These are the groupies of science: those non-scientists who, for one reason or another, choose to build their whole social lives around science and scientists. When asked about their “research,” such people usually mention some vague interdisciplinary project that never seems to come to fruition.

After long deliberation, I’ve reached the following conclusion: generally speaking, SCIENCE NEEDS MORE GROUPIES, NOT FEWER.

And no, not just for the obvious reason. At their best, groupies perform a vital role in the socially-impoverished scientific ecosystem, by serving as the conveyors of gossip, the organizers of parties, the dispensers of advice, and the matchmakers of lonely nerds with eligible humanists.

Furthermore, science needs a freewheeling culture to function, a point that seems lost on many of the Stanford Daily commenters. There we find enraged alumni wondering how anyone could possibly get away with this, and declaring that they certainly won’t be sending their kids to any school that tolerates such inanity. We find bigots comparing Okazaki to the Virginia Tech shooter Cho Seung-Hui (the common thread being, apparently, that both of them are Asian). And we find people asking rhetorically whether any corporation or government agency would tolerate a freeloader hanging around its offices for years. (My answer: probably not, and that’s one reason why I’m happy not to work at such places!)

On the other hand, we also find commenters denouncing the spoiled bourgeoisie capitalists at Stanford, who would deny a poor homeless woman the right to sleep in their physics building. Unless the critics are Mother Teresas themselves, that doesn’t seem fair to me either.

I have no desire to pass judgment on someone I’ve never met; any decision on Okazaki ought to rest with the people who actually work in Varian and know the specifics of her case. But I’d like to offer a general suggestion to any department that finds itself in a similar situation in the future: unless the groupie is insane or incompetent, find her some low-paying job as a lab assistant or “social programming director” or something like that. When we discover a stowaway on the great Ship of Science, why throw her overboard when we could make her swab the decks?

Update (6/6): Peter Woit now has his own post on this affair, with several entertaining comments. I’m skeptical of the idea that Okazaki had no real interest in science or scientists and only wanted free digs. Even in the insane housing market of Palo Alto, surely there must be ways to get a roof over your head that don’t require sitting in on theoretical physics seminars?

I also found the following comment priceless:

I think Scott Aaronson’s opinion is quite shallow … Scott wants groupies, and he wants to hire them to “swab the decks”. Only someone who thinks he is so special he should have serfs to serve him would think that way. College Professors already have a bunch of poorly paid workers(graduate students) who write papers for them. Do these aristocrats need an additional class of poorly paid servants

It always amuses me when those looking for an “elite” to rail against pick people who strive for a decade against staggering odds to have ideas that no one in the history of the world ever had before, in order that they might possibly qualify for a stressful, ~90-hour-a-week job offering the same money, power, and prestige that would accrue automatically to a mid-level insurance salesman.

Wanna bet?

Sunday, June 3rd, 2007

A commenter on my previous post writes:

What all these scientists who are crying about the teaching of evolution should do is propose bets to creationists based on the outcomes of experiments … You think that these D-wave guys won’t be able to do something they’re claiming to be able to do? It might be a good exercise to make that statement precise … If someone has a conjecture of the form “There should exist a theory that explains X”, people roll their eyes, essentially because there’s no way of deciding the implicit bet.

Alright, imagine the following conversation:

Layperson: I just heard on the radio about this new Yood d’Shnood Theory of the Universe. What do you think the odds are that it’ll turn out to be true?

Scientist: Well, so far I haven’t seen any good evidence that…

Layperson: Sure, but what’s your prediction?

Scientist: As I said, the evidence seems to be explained a lot more easily by…

Layperson: But what if you had to bet?

Scientist: Well, there are two ways to think about this. What the Yood d’Shnood proponents argue is that…

Layperson: No, don’t give me a dissertation, just give me a number!

Here’s the thing: when my PhD diploma arrived in the mail, it didn’t imbue me with some sort of supernatural power to predict the outcomes of future quantum computing experiments, unmediated by the evidence and arguments of the temporal world. (This despite the fact that my diploma was signed by a time-travelling cyborg, in his official capacity as Governor of California and Regent of the UC system.)

Of course, the reason scientists worry about evidence is that ultimately, we want our theories to cohere with reality and our predictions to come out right. The experience of the last four centuries suggests this hope is far from futile. The trouble is that, once you’ve decided to adopt the evidence-centric strategy that’s worked so well in the past, you have to forget temporarily about betting odds. For the mindset of the scientist toying with rival explanations, and that of the Bayesian handicapping horses in a race, are (at least in my experience) simply too incompatible to inhabit the same brain at the same time.

If you’ll forgive the metaphor, asking for gambling odds on every scientific question is like asking a woman to sleep with you on the first date. Of course it’s in the back of your mind (and possibly not only yours), but it tends to be counterproductive even to bring it up. If you’re ever going to reach the summit, then you have to act like all that really matters to you is the climb, and the only reliable way to act like it is to remake yourself into the sort of person for whom it’s true. Such is the paradox of science and of life.

So, did D-Wave succeed in using the quantum adiabatic algorithm to solve Sudoku puzzles in fewer steps than classical simulated annealing would need for those same puzzles? I don’t know. To repeat, I don’t know. What I know is that I haven’t seen the evidence, and that the burden of providing such evidence rests with the people making the claim.

The Myth of the Ivory Tower

Wednesday, May 30th, 2007

I know I promised no more posts about D-Wave and its “commercial” “quantum” computer for a while. But will you look at the bait that D-Wave founder Geordie Rose has been dangling in front of me on his blog?

People tend to approach problems and form opinions through the lens of their expertise. This happens all the time when disciplines are close … but it also happens in wierder [sic] situations, where the area of expertise is entirely disjoint from the situation being analyzed — like when theoretical computer scientists have opinions about real computers for example.

In Geordie’s comments section, the message is clearer still. One commenter writes that “the Professors didn’t get there first and they are angry; all truth must first come from them.” Another imagines “the Aaronsons of the world” fervently hoping that “their fragile self-created self-contained ecosystem can be re-built just the way they like it.”

For commenters like these, it would seem that the issue has nothing to do with decoherence rates or scalability, or with what the evidence is that D-Wave is actually harnessing quantum effects to obtain a computational speedup. So in this post, I want to step back and try to understand what the real issue is.

I propose that more than a few technology enthusiasts — not just the D-Wave supporters quoted above — are in the thrall of The Myth of the Ivory Tower. According to this Myth, the basic function of academic scientists is to sit around in their armchairs, pompously declaring to be impossible what plucky inventors like Thomas Edison or the Wright Brothers then roll up their sleeves and do. Now, I might be an academic myself, but I’m also a proud American (currently residing in the 51st state), and I won’t deny that this most American of myths has a certain resonance even for me. In the end, though, I believe that the Myth tells us more about our Zeitgeist, or our collective psyche, or something like that, than it does about the actual history of technology.

The “evidence” for the Myth (when such is offered) usually consists of famous last words from distinguished scientific authorities. You know the sort of thing I’m talking about:

Heavier-than-air flying machines are impossible.
Radio has no future.
X-rays will prove to be a hoax.
-William Thomson (Lord Kelvin)

I think there is a world market for maybe five computers.
-Thomas Watson

There is no reason anyone would want a computer in their home.
-Ken Olsen

(Watson and Olsen were of course CEO’s, but for the purposes of the Myth they stand in here as “academics.”)

However, as soon as we think about these predictions and what they’re supposed to demonstrate, we notice some glaring problems. The first one is confirmation bias. No one compiles lists of pessimistic technological forecasts made by experts that turned out to be right — where would you even start?

The second problem is that many of the juiciest predictions come from a single individual: Lord Kelvin. Furthermore, they come from the twilight of his career, when he was considered to have lost his vortices even by most of his colleagues. Seeking to better understand this great physicist of the 19th century who was so wrong about the technologies of the 20th, I just read an excellent biography called Degrees Kelvin. One thing I learned is that, if the selective historians chose to focus on the first half of Kelvin’s career rather than the second, they could find equally exquisite anecdotes illustrating the reliability of academic opinions.

In the laying of the first transatlantic telegraph cable in the 1850’s, there were two colorful personalities: Kelvin and Wildman Whitehouse. Whitehouse, the “practical” man, detested any math or physics he couldn’t understand, and insisted that a transatlantic cable would just be a longer version of existing cables. Kelvin, the “theorist,” said that while a transatlantic cable was certainly possible, it would need thicker insulation, a different kind of receiver, etc. than previous cables to work reliably, and that more testing and research was needed. As it happened, after laying a cable that was every bit as unreliable as Kelvin said it would be, Whitehouse (1) had to use Kelvin’s receiver to get any signal through at all, (2) faked the transcripts to make it look like he used his own receiver, (3) fatally damaged the cable by sending 2,000 volts through it in a desperate attempt to get it to work properly, and then (4) insisted the cable was still fine after it had permanently gone silent. Eventually the cable companies learned their lesson.

Despite this and other successes (e.g., the Second Law of Thermodynamics), Kelvin’s doofus predictions in later life do illustrate two important points. The first is that, if you’re going to make skeptical pronouncements, you’d better distinguish clearly between the provably impossible, the presumably impossible, and the merely difficult and not yet achieved. The second is that, if you’re going to claim something’s impossible, you’d better have an argument, and you’d better understand what assumptions it rests on.

Alright, so let’s move on to Watson and Olsen’s predictions about the computer industry. The funny thing is, these predictions weren’t nearly as stupid as they sound! Why? Because there’s nothing inevitable about the concept of a personal computer. Instead of billions of home PC’s, we could just as easily imagine most of the world’s computing power concentrated in a few servers, accessible remotely to anyone who wanted it. In this alternate universe, your desktop PC would be little more than a glorified information portal — a “browser,” if you will — while most of the actual application software (email, calendars, maps, etc.) ran elsewhere. I admit that this is just a fanciful, hypothetical scenario, but what does that matter to a theorist like me?

Speaking of which, the Internet was of course the child of DARPA and NSF, raised to adolescence in university CS departments. (DARPA has since reoriented itself toward projects with shorter-term payoff, its previous funding model having failed so disastrously.) The Web was created by Tim Berners-Lee at CERN, and the first popular web browser by Marc Andreessen at the University of Illinois. (And yes, Al Gore had a nontrivial role in funding this work.) R, S, and A were all at MIT. If you’re going to argue for the irrelevance of academic research, the Internet is not the place to start.

But what about some of the other spectacular inventions of the last fifty years: the laser, the transistor, the fiber-optic cable, the communications satellite? Didn’t those come from the private sector? As it happens, they came from Bell Labs, which is interesting as the sort of mammoth exception that proves the rule. Because of AT&T’s government-sanctioned monopoly, for much of the 20th century Bell Labs was able to function like the world’s largest university, devoting billions of dollars to “irrelevant” research. So in the 1980’s, when Congress decided to deregulate the phone system, many people predicted that Bell Labs would die a slow, agonizing death — a prediction that’s been borne out over the last 25 years.

But surely other companies must have picked up the slack? No, not really. While Microsoft, IBM, NEC, Xerox, and a few others all provide welcome support for basic research, none of them do so on the old Ma Bell’s scale. From a CEO’s perspective, the problem with basic research is obvious: a rising tide lifts all boats, your competitors’ as well as yours. (The famous cautionary example here is Xerox PARC, which made the “mistake” of giving the world the windowing system, the mouse, and the laser printer.)

For those who adhere to the religion of capitalism, have the Arrow-Debreu Theorem tattooed across their chests, etc., it might be difficult to understand how a system based on peer review rather than the free market could lead so consistently to technological breakthroughs. I mean, all those ivory-tower academics growing fat off government grants: what incentive could they possibly have to get the right answers? Without picky customers or venture capitalists breathing down their necks, what’s the penalty for being wrong?

I’m lucky enough to be friends with Robin Hanson, a brilliant economist and futurist who starts where Ayn Rand would’ve suffered a loss of nerve and keeps going from there. Robin has long argued that the scientific peer review process is broken, and ought to be supplanted by a futures market that would reward scientists for making correct predictions. As he writes:

The pace of scientific progress may be hindered by the tendency of our academic institutions to reward being popular, rather than being right … Academia is still largely a medieval guild, with a few powerful elites, many slave-like apprentices, and members who hold a monopoly on the research patronage of princes and the teaching of their sons …

Imagine that academics are expected to “put up or shut up” and accompany claims with at least token bets, and that statistics are collected on how well people do. Imagine that funding agencies subsidize pools on questions of interest to them, and that research labs pay for much of their research with winnings from previous pools. And imagine that anyone could play, either to take a stand on an important issue, or to insure against technological risk.

Personally, I hope that Robin’s science futures market gets tried on a significant scale, and I can’t wait to see the results. (Naturally, even the marketplace of ideas has to compete in the marketplace of ideas!) I agree with Robin that academic science is often tradition-bound to the point of absurdity, and that its institutions ought to be as open to scrutiny and replacement as its theories. But I don’t go as far as he apparently does in the direction of the Myth of the Ivory Tower. For me, the interesting thing about science is not that it’s broken, but rather that it’s about the least broken enterprise in the whole sorry history of our species.

A Woitian links, links, links post (slightly stale but still edible)

Sunday, May 20th, 2007

Razborov and Rudich won the Gödel Prize for “Natural Proofs”, which probably did as much as any single paper to elucidate the nature of the P vs. NP problem. (More from the Bearded One and the Pontiff.) Loosely speaking, R&R showed that any circuit lower bound satisfying certain extremely broad criteria would “bite its own tail,” and lead to efficient algorithms to distinguish random from pseudorandom functions — the very sort of thing that we wanted to prove was hard. This doesn’t by any means imply that a P≠NP proof is impossible, but it does show how the problem has a strange, self-referential character that’s not quite like anything previously encountered in mathematics, including in the work of Gödel and Turing. Technically simple but conceptually profound, the paper is also a masterpiece of clear, forceful exposition. When I first came across it as an undergrad at Cornell, I knew complexity was my subject.

Following on the heels of the New Yorker, the New York Times ran its own epic on the Large Hadron Collider. So science writers can do a decent job when they feel like it. Why can’t they write about P vs. NP the same way? Oh, right … them big machines …

Andy Drucker poses the following problem: suppose there are n blog posts, and for each post bi, you’re told only that it was posted during the time interval [ti,ui]. Is there an efficient algorithm to count how many orderings of the blog posts are compatible with that information? Alternatively, is the problem #P-complete? Let me stress that Andy doesn’t know the answer to this question, and neither do I.
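Since Andy’s problem is stated abstractly, here’s a brute-force sketch to make it concrete (function names mine). It counts compatible orderings by checking all n! permutations, assuming posting times are real-valued so that ties can be perturbed away; whether anything better than brute force exists is exactly the open question:

```python
from itertools import permutations

def count_orderings(intervals):
    """Count orderings of posts compatible with intervals [t_i, u_i],
    by brute force over all n! permutations (fine only for small n)."""
    count = 0
    for perm in permutations(range(len(intervals))):
        # perm is feasible iff we can pick times x in increasing order,
        # one per interval; greedily take the earliest feasible time.
        x, ok = float("-inf"), True
        for i in perm:
            t, u = intervals[i]
            x = max(x, t)         # earliest time post i could have appeared
            if x > u:
                ok = False
                break
        count += ok
    return count

# Disjoint intervals force one order; identical intervals allow both.
assert count_orderings([(0, 1), (2, 3)]) == 1
assert count_orderings([(0, 1), (0, 1)]) == 2
```

An efficient algorithm would have to beat this factorial-time enumeration; a #P-completeness proof would suggest none exists.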

A certain MIT undergrad of my acquaintance sent the following letter to MIT’s DMCA enforcement office.

Dear MIT DMCA Agent,

After viewing Scoop and receiving your notice, I was more than happy to comply with NBC’s request to destroy it. Rest assured that I will no longer be downloading or sharing any post-Manhattan Woody Allen films.

Religion’s rules of inference

Saturday, May 12th, 2007

Besides defending quantum computing day and night, having drinks with Cosmic Variance‘s Sean Carroll, and being taken out to dinner at lots of restaurants with tablecloths, the other highlight of my job interview tour was meeting a friendly, interesting, articulate divinity student on the flight from San Francisco to Philadelphia, who tried to save my soul from damnation.

Here’s how it happened: the student (call him Kurt) was reading a Christian theological tract, while I, sitting next to him, was reading Russell on Religion. (This is true.) I sheepishly covered the spine of my book, trying to delay the inevitable conversation — but it finally happened, when Kurt asked me how I was liking ol’ Bert. I said I was liking him just fine, thank you very much.

Kurt then made some comment about the inadequacy of a materialistic worldview, and how, without God as the basis of morality, the whole planet would degenerate into what we saw at Virginia Tech. I replied that the prevention of suffering seemed like a pretty good basis for morality to me.

“Oh!” said Kurt. “So then suffering is bad. How do you know it’s bad?”

“How do you know it’s bad?”

“Because I believe the word of God.”

“So if God said that suffering was good, that would make it good?”

I can’t remember Kurt’s response, but I’m sure it was eloquent and well-practiced — nothing I said really tripped him up, nor did I expect it to. Wanting to change the subject, I asked him about his family, his studies, his job, what he’d been doing in the vipers’ den of San Francisco, etc. I told him a little about quantum computing and my job search. I mused that, different though we were, we both valued something in life more than money, and that alone probably set us apart from most people on the plane. Kurt said it was fitting that I’d gone to grad school at Berkeley. I replied that, as a mere Democrat, I was one of the most conservative people there.

Finally I blurted out the question I really wanted to ask. In his gentle, compassionate way, Kurt made it clear to me that yes, I was going to roast in hell, and yes, I’d still roast in hell even if I returned to the religion of my ancestors (that, of course, being at best a beta version of the true religion). In response, I told Kurt that when I read Dante’s Inferno in freshman English, I decided that the place in the afterlife I really wanted to go was the topmost layer of hell: the place where Dante put the “righteous unbaptized” such as Euclid, Plato, and Aristotle. There, these pre-Christian luminaries could carry on an eternal intellectual conversation — cut off from God’s love to be sure, but also safe from the flames and pitchforks. How could angels and harps possibly compete with infinite tenure at Righteous Unbaptized University? If God wanted to lure me away from that, He’d probably have to throw in the Islamic martyr package.

San Francisco to Philadelphia is a five-hour flight, and the conversation ranged over everything you might expect: the age of the earth (Kurt was undecided but leaning toward 6,000 years), whether the universe needs a reason for its existence external to itself, etc. With every issue, I resolved not to use the strongest arguments at my disposal, since I was more interested in understanding my adversary’s reasoning process — and ideally, in getting him to notice inconsistencies within his own frame of reference. Alas, in that I was to be mostly disappointed.

Here’s an example. I got Kurt to admit that certain Bible passages — in particular, the ones about whipping your slaves — reflected a faulty, limited understanding of God’s will, and could only be understood in the historical context in which they were written. I then asked him how he knew that other passages — for example, the ones condemning homosexuality — didn’t also reflect a limited understanding of God’s will. He replied that, in the case of homosexuality, he didn’t need the Bible to tell him it was immoral: he knew it was immoral because it contradicted human beings’ biological nature, gay couples being unable to procreate. I then asked whether he thought that infertile straight couples should similarly be banned from getting married. Of course not, he replied, since marriage is about more than procreation — it’s also about love, bonding, and so on. I then pointed out that gay and lesbian couples also experience love and bonding. Kurt agreed that this was true, but then said the reason homosexuality was wrong went back to the Bible.

What fascinated me was that, with every single issue we discussed, we went around in a similar circle — and Kurt didn’t seem to see any problem with this, just so long as the number of 2SAT clauses that he had to resolve to get a contradiction was large enough.

In the study of rationality, there’s a well-known party game: everyone throws a number from 0 to 100 into a hat, and the player whose number is closest to two-thirds of the average of everyone’s numbers wins. It’s easy to see that the only Nash equilibrium of this game — that is, the only possible outcome if everyone is rational, knows that everyone is rational, knows everyone knows everyone is rational, etc. — is for everyone to throw in 0. Why? For simplicity, consider the case of two people: one can show that I should throw in 1/2 of what I think your number will be, which is 1/2 of what you think my number will be, and so on ad infinitum until we reason ourselves down to 0.

On the other hand, how should you play if you actually want to win this game? The answer, apparently, is that you should throw in about 20. Most people, when faced with a long chain of logical inferences, will follow the chain for one or two steps and then stop. And, here as elsewhere in life, “being rational” is just a question of adjusting yourself to everyone else’s irrationalities. “Two-thirds of 50 is 33, and two-thirds of that is 22, and … OK, good enough for me!”
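The chain of reasoning above can be followed mechanically; a purely illustrative two-liner (starting point and step count are my own choices):

```python
def iterated_best_response(start=50.0, steps=10, factor=2 / 3):
    """Repeatedly best-respond to the previous guess in the
    2/3-of-the-average game, starting from a naive guess of 50."""
    history = [start]
    for _ in range(steps):
        history.append(history[-1] * factor)
    return history

h = iterated_best_response()
# Two steps of inference: 50 -> 33.3 -> 22.2, near the ~20 that wins in practice.
# Keep cranking and the chain converges to the Nash equilibrium of 0.
```

The winning strategy, in other words, is to model how many steps of the chain everyone else will bother to follow, not to follow the chain to its end.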

I’ve heard it said that the creationists are actually perfectly rational Bayesians; they just have prior probabilities that the scientifically-minded see as perverse. Inspired by conversations with Kurt and others, I hereby wish to propose a different theory of fundamentalist psychology. My theory is this: fundamentalists use a system of logical inference wherein you only have to apply the inference rules two or three times before you stop. (The exact number of inferences can vary, depending on how much you like the conclusion.) Furthermore, this system of “bounded inference” is actually the natural one from an evolutionary standpoint. It’s we — the scientists, mathematicians, and other nerdly folk — who insist on a bizarre, unnatural system of inference, one where you have to keep turning the modus ponens crank whether you like where it’s taking you or not.
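A toy forward-chaining engine makes the “bounded inference” theory concrete (the encoding of literals and rules is my own invention): with the circular rules A→B, B→C, C→¬A, a reasoner who stops after two applications of modus ponens never notices the contradiction.

```python
def forward_chain(facts, rules, max_steps):
    """Apply modus ponens at most max_steps times ('bounded inference').
    facts: set of literals like 'A' or '~A'; rules: (premise, conclusion) pairs."""
    facts = set(facts)
    for _ in range(max_steps):
        new = {c for (p, c) in rules if p in facts and c not in facts}
        if not new:
            break
        facts |= new
    caught = any(('~' + f) in facts for f in facts if not f.startswith('~'))
    return facts, caught

rules = [('A', 'B'), ('B', 'C'), ('C', '~A')]
# Two inference steps: derive B, then C -- everything still looks consistent.
_, caught = forward_chain({'A'}, rules, max_steps=2)
assert not caught
# One more turn of the modus ponens crank and the contradiction appears.
_, caught = forward_chain({'A'}, rules, max_steps=3)
assert caught
```

On this model, the disagreement with Kurt wasn’t over priors or even over rules of inference, but over how many times one is obliged to apply them.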

Kurt, who looked only slightly older than I am, is already married with two kids, and presumably more on the way. In strict Darwinian terms, he’s clearly been more successful than I’ve been. Are those of us who can live with A→B or B→C or C→not(A) but not all of them at once simply evolutionary oddities, like people who have twelve fingers or can’t stand sunlight?