Archive for the ‘Rage Against Doofosity’ Category

The Toaster-Enhanced Turing Machine

Thursday, August 30th, 2012

Over at Theoretical Computer Science StackExchange, an entertaining debate has erupted about the meaning and validity of the Church-Turing Thesis.  The prompt for this debate was a question asking for opinions about Peter Wegner and Dina Goldin’s repetitive diatribes claiming to refute “the myth of the Church-Turing Thesis”—on the grounds that, you see, Turing machines can only handle computations with static inputs and outputs, not interactivity, or programs like operating systems that run continuously.  For a demolition of this simple misunderstanding, see Lance Fortnow’s CACM article.  Anyway, I wrote my own parodic response to the question, which generated so many comments that the moderators started shooing people away.  So I decided to repost my answer on my blog.  That way, after you’re done upvoting my answer over at CS Theory StackExchange :-), you can come back here and continue the discussion in the comments section.


Here’s my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science’s dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can’t toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a “point”, but it’s a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a Turing machine to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster’s internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn’t have been obvious to Turing himself.

So, I’d say the reason why there’s been no “followup” to Wegner and Goldin’s diatribes is that theoretical computer science has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing’s notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be … err … THE CHURCH-TURING THESIS, the very same one you’ve been ragging on? To clarify, the Church-Turing Thesis is not the claim that “TURING MACHINEZ RULE!!” Rather, it’s the claim that any reasonable programming language will be equivalent in expressive power to Turing machines — and as a consequence, that you might as well think in terms of the higher-level languages if it’s more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

Update (Sept. 6): Check out this awesome comment by Lou Scheffer, describing his own tale of conversion from a Church-Turing skeptic to believer, and making an extremely apt comparison to the experience of conversion to the belief that R, R², and so on all have the same cardinality (an experience I also underwent!).

I was wrong about Joy Christian

Thursday, May 10th, 2012

Update: I decided to close comments on this post and the previous Joy Christian post, because they simply became too depressing for me.

I’ve further decided to impose a moratorium, on this blog, on all discussions about the validity of quantum mechanics in the microscopic realm, the reality of quantum entanglement, or the correctness of theorems such as Bell’s Theorem.  I might lift the moratorium at some future time.  For now, though, life simply feels too short to me, and the actually-interesting questions too numerous.  Imagine, for example, that there existed a devoted band of crackpots who believed, for complicated, impossible-to-pin-down reasons of topology and geometric algebra, that triangles actually have five corners.  These crackpots couldn’t be persuaded by rational argument—indeed, they didn’t even use words and sentences the same way you do, to convey definite meaning.  And crucially, they had infinite energy: you could argue with them for weeks, and they would happily argue back, until you finally threw up your hands in despair for all humanity, at which point the crackpots would gleefully declare, “haha, we won!  the silly ‘triangles have 3 corners’ establishment cabal has admitted defeat!”  And, in a sense, they would have won: with one or two exceptions, the vast majority who know full well how many corners a triangle has simply never showed up to the debate, thereby conceding to the 5-cornerists by default.

What would you do in such a situation?  If you figure it out, please let me know (but by email, not by blog comment).


In response to my post criticizing his “disproof” of Bell’s Theorem, Joy Christian taunted me that “all I knew was words.”  By this, he meant that my criticisms were entirely based on circumstantial evidence, for example that (1) Joy clearly didn’t understand what the word “theorem” even meant, (2) every other sentence he uttered contained howling misconceptions, (3) his papers were written in an obscure, “crackpot” way, and (4) several people had written very clear papers pointing out mathematical errors in his work, to which Joy had responded only with bluster.  But I hadn’t actually studied Joy’s “work” at a technical level.  Well, yesterday I finally did, and I confess that I was astonished by what I found.  Before, I’d actually given Joy some tiny benefit of the doubt—possibly misled by the length and semi-respectful tone of the papers refuting his claims.  I had assumed that Joy’s errors, though ultimately trivial (how could they not be, when he’s claiming to contradict such a well-understood fact provable with a few lines of arithmetic?), would nevertheless be artfully concealed, and would require some expertise in geometric algebra to spot.  I’d also assumed that of course Joy would have some well-defined hidden-variable model that reproduced the quantum-mechanical predictions for the Bell/CHSH experiment (how could he not?), and that the “only” problem would be that, due to cleverly-hidden mistakes, his model would be subtly nonlocal.

What I actually found was a thousand times worse: closer to the stuff freshmen scrawl on an exam when they have no clue what they’re talking about but are hoping for a few pity points.  It’s so bad that I don’t understand how even Joy’s fellow crackpots haven’t laughed this off the stage.  Look, Joy has a hidden variable λ, which is either 1 or -1 uniformly at random.  He also has a measurement choice a of Alice, and a measurement choice b of Bob.  He then defines Alice and Bob’s measurement outcomes A and B via the following functions:

A(a,λ) = something complicated = (as Joy correctly observes) λ

B(b,λ) = something complicated = (as Joy correctly observes) -λ

I shit you not.  A(a,λ) = λ, and B(b,λ) = -λ.  Neither A nor B has any dependence on the choices of measurement a and b, and the complicated definitions that he gives for them turn out to be completely superfluous.  No matter what measurements are made, A and B are always perfectly anticorrelated with each other.
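
If you doubt that this is the entire content of the model, here’s a minimal Python sketch of it, exactly as described above (the settings a and b are arbitrary placeholder labels of my choosing, since the outcomes ignore them anyway).  The correlation E(a,b) comes out to -1 for every pair of settings, so the CHSH combination has absolute value exactly 2: right at the classical bound, with no violation anywhere in sight.

    import random

    def run(a, b):
        # One run of the model: a hidden variable lam = +/-1 chosen uniformly,
        # and outcomes that ignore the settings a and b entirely.
        lam = random.choice([+1, -1])
        return lam, -lam               # A(a, lam) = lam,  B(b, lam) = -lam

    def E(a, b, trials=100_000):
        # Estimate the correlation E(a, b) = <A*B> by sampling.
        return sum(A * B for A, B in (run(a, b) for _ in range(trials))) / trials

    # Any four measurement settings whatsoever:
    a1, a2, b1, b2 = "north", "south", "east", "west"
    S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
    print(S)                           # always exactly -2, since A*B = -1 on every run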

You might wonder: what could lead anyone—no matter how deluded—even to think such a thing could violate the Bell/CHSH inequalities?  Aha, Joy says you only ask such a naïve question because, lacking his deep topological insight, you make the rookie mistake of looking at the actual outcomes that his model actually predicts for the actual measurements that are actually made.  What you should do, instead, is compute a “correlation function” E(a,b) that’s defined by dividing A(a,λ)B(b,λ) by a “normalizing factor” that’s a product of the quaternions a and b, with a divided on the left and b divided on the right.  Joy seems to have obtained this “normalizing factor” via the technique of pulling it out of his rear end.  Now, as Richard Gill shows, Joy actually makes an algebra mistake while computing his nonsensical “correlation function.”  The answer should be -a.b-a×b, not -a.b.  But that’s truly beside the point.  It’s as if someone announced his revolutionary discovery that P=NP implies N=1, and then critics soberly replied that, no, the equation P=NP can also be solved by P=0.

So, after 400+ comments on my previous thread—including heady speculations about M-theory, the topology of spacetime, the Copenhagen interpretation, continuity versus discreteness, etc., as well as numerous comparisons to Einstein—this is what it boils down to.  A(a,λ) = λ and B(b,λ) = -λ.

I call on FQXi, in the strongest possible terms, to stop lending its legitimacy to this now completely-unmasked charlatan.  If it fails to do so, then I will resign from FQXi, and will encourage fellow FQXi members to do the same.

While I don’t know the exact nature of Joy’s relationship to Oxford University or to the Perimeter Institute, I also call on those institutions to sever any connections they still have with him.

Finally, with this post I’m going to try a new experiment.  I will allow comments through the moderation filter if, and only if, they exceed a minimum threshold of sanity and comprehensibility, and do not randomly throw around terms like “M-theory” with no apparent understanding of what they mean.  Comments below the sanity threshold can continue to appear freely in the previous Joy Christian thread (which already has a record-setting number of comments…).

Update (May 11): A commenter pointed me to a beautiful preprint by James Owen Weatherall, which tries sympathetically to make as much sense as possible out of Joy Christian’s ideas, and then carefully explains why the attempt fails (long story short: because of Bell’s theorem!).  Notice the contrast between the precision and clarity of Weatherall’s prose—the way he defines and justifies each concept before using it—and the obscurity of Christian’s prose.

Another Update: Over on the previous Joy Christian thread, some commenters are now using an extremely amusing term for people who believe that theories in physics ought to say something comprehensible about the predicted outcomes of physics experiments.  The term: “computer nerd.”

Third Update: Quite a few commenters seem to assume that I inappropriately used my blog to “pick a fight” with poor defenseless Joy Christian, who was minding his own business disproving and re-disproving Bell’s Theorem.  So let me reiterate that I wasn’t looking for this confrontation, and in fact took great pains to avoid it for six years, even as Joy became more and more vocal.  It was Joy, not me, who finally forced matters to a head through his absurd demand that I pay him $100,000 “with interest,” and then his subsequent attacks.

Bell’s-inequality-denialist Joy Christian offers me $200K if scalable quantum computers are built

Wednesday, May 2nd, 2012

Joy Christian is the author of numerous papers claiming to disprove Bell’s theorem.  Yes, that Bell’s theorem: the famous result from the 1960s showing that no local hidden variable theory can reproduce all predictions of quantum mechanics for entangled states of two particles.  Here a “local hidden variable theory” means—and has always meant—a theory where Alice gets some classical information x, Bob gets some other classical information y (generally correlated with x), then Alice and Bob choose which respective experiments to perform, and finally Alice sees a measurement outcome that’s a function only of her choice and of x (not of Bob’s choice or his measurement outcome), and Bob sees a measurement outcome that’s a function only of his choice and of y.  In modern terms, Bell, with simplifications by Clauser et al., gave an example of a game that Alice and Bob can win at most 75% of the time under any local hidden variable theory (that’s the Bell inequality), but can win 85% of the time by measuring their respective halves of an entangled state (that’s the Bell inequality violation).  The proofs are quite easy, both for the inequality and for its violation by quantum mechanics.  Check out this problem set for the undergrad course I’m currently teaching if you’d like to be led through the proof yourself (it’s problem 7).
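
For the curious, the ~85% figure is cos²(π/8) ≈ 0.8536, and checking it takes only a few lines.  Below is a short Python sketch using the textbook optimal strategy (the specific angles are the standard ones, not anything special to my problem set): Alice and Bob share the state |Φ+⟩ = (|00⟩+|11⟩)/√2, for which measurements at angles θ_A and θ_B agree with probability cos²(θ_A − θ_B).

    import numpy as np

    # Textbook CHSH strategy: Alice measures at angle 0 if x=0, pi/4 if x=1;
    # Bob measures at pi/8 if y=0, -pi/8 if y=1.
    alice = {0: 0.0, 1: np.pi / 4}
    bob = {0: np.pi / 8, 1: -np.pi / 8}

    win = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_agree = np.cos(alice[x] - bob[y]) ** 2
            # Win condition: outputs must agree unless x = y = 1,
            # in which case they must disagree.
            win += 0.25 * (p_agree if x & y == 0 else 1 - p_agree)

    print(win, np.cos(np.pi / 8) ** 2)   # both print 0.8535533905932737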

In case you’re wondering: no, Bell’s Theorem has no more been “disproved” than the Cauchy-Schwarz Inequality, and it will never be, even if papers claiming otherwise are stacked to the moon.  Like Gödel’s and Cantor’s Theorems, Bell’s Theorem has long been a lightning rod for incomprehension and even anger; I saw another “disproof” at a conference in 2003, and will doubtless see more in the future.  The disproofs invariably rely on personal reinterpretations of the perfectly-clear concept of “local hidden variables,” to smuggle in what would normally be called non-local variables.  That smuggling is accompanied by mathematical sleight-of-hand (the more, the better) to disguise the ultimately trivial error.

While I’d say the above—loudly, even—to anyone who asked, I also declined several requests to write a blog post about Joy Christian and his mistakes.  His papers had already been refuted ad nauseam by others (incidentally, I find myself in complete agreement with Luboš Motl on this one!), and I saw no need to pile on the poor dude.  Having met him, at the Perimeter Institute and at several conferences, I found something poignant and even touching about Joy’s joyless quest.  I mean, picture a guy who made up his mind at some point that, let’s say, √2 is actually a rational number, all the mathematicians having been grievously wrong for millennia—and then unironically held to that belief his entire life, heroically withstanding the batterings of reason.  Show him why 2=A²/B² has no solution in positive integers A,B, and he’ll answer that you haven’t understood the very concept of rational number as deeply as him.  Ask him what he means by “rational number,” and you’ll quickly enter the territory of the Monty Python dead parrot sketch.  So why not just leave this dead parrot where it lies?

Anyway, that’s what I was perfectly content to do, until Monday, when Joy left the following comment on my “Whether or not God plays dice, I do” post:

Scott,
You owe me 100,000 US Dollars plus five years of interest. In 2007, right under your nose (when you and I were both visiting Perimeter Institute), I demonstrated, convincing to me, that scalable quantum computing is impossible in the physical world.

He included a link to his book, in case I wanted to review his arguments against the reality of entanglement.  I have to confess I had no idea that, besides disproving Bell’s theorem, Joy had also proved the impossibility of scalable quantum computing.  Based on his previous work, I would have expected him to say that, sure, quantum computers could quickly factor 10,000-digit numbers, but nothing about that would go beyond ordinary, classical, polynomial-time Turing machines—because Turing himself got the very definition of Turing machines wrong, by neglecting topological octonion bivectors or something.

Be that as it may, Joy then explained that the purpose of his comment was to show that

there is absolutely nothing that would convince you to part with your 100,000. You know that, and everyone else knows that … The whole thing is just a smug scam to look smarter than the rest of us without having to do the hard work. Good luck with that.

In response, I clarified what it would take to win my bet:

As I’ve said over and over, what would be necessary and sufficient would be to convince the majority of the physics community. Do you hope and expect to do that? If so, then you can expect my $100,000; if not, then not. If a scientific revolution has taken place only inside the revolutionary’s head, then let the monetary rewards be likewise confined to his head.

Joy replied:

[L]et us forget about my work. It is not for you. Instead, let me make a counter offer to you. I will give you 200,000 US dollars the day someone produces an actual, working, quantum computer in a laboratory recognizable by me. If I am still alive, I will send you 200,000 US Dollars, multiplied by an appropriate inflation factor. Go build a quantum computer.

I’m grateful to Joy for his exceedingly generous offer.  But let’s forget about money for now.  Over the past few months, I’ve had a real insight: the most exciting potential application of scalable quantum computers is neither breaking RSA, nor simulating quantum physics, nor Grover’s algorithm, nor adiabatic optimization.  Instead, it’s watching the people who said it was impossible try to explain themselves.  That prospect, alone, would more than justify a Manhattan-project-scale investment in this field.

Postscript. If you want something about quantum foundations and hidden-variable theories of a bit more scientific interest, check out this MathOverflow question I asked on Monday, which was answered within one day by George Lowther (I then carefully wrote up the solution he sketched).

Updates (May 6). Depending on what sort of entertainment you enjoy, you might want to check out the comments section, where you can witness Joy Christian becoming increasingly unhinged in his personal attacks on me and others (“our very own FQXi genius” – “biased and closed-minded” – “incompetent” – “Scott’s reaction is a textbook case for the sociologists” – “As for Richard Gill, he is evidently an incompetent mathematician” – “I question your own intellectual abilities” – “your entire world view is based on an experimentally unsupported (albeit lucrative) belief and nothing else” – “You have been caught with your pants down and still refusing to see what is below your belly” – “let me point out that you are the lesser brain among the two of us. The pitiful flatness of your brain would be all too painful for everyone to see when my proposed experiment is finally done” – etc., etc.).  To which I respond: the flatness of my brain?  Also notable is Joy’s Tourette’s-like repetition of the sentence, “I will accept judgement from no man but Nature.”  Nature is a man?

I just posted a comment explaining the Bell/CHSH inequality in the simplest terms I know, which I’ll repost here for convenience:

Look everyone, consider the following game. Two players, Alice and Bob, can agree on a strategy in advance, but from that point forward, are out of communication with each other (and don’t share quantum entanglement or anything like that). After they’re separated, Alice receives a uniformly-random bit A, and Bob receives another uniformly-random bit B (uncorrelated with A). Their joint goal is for Alice to output a bit X, and Bob to output a bit Y, such that

X + Y = AB (mod 2)

or equivalently,

X XOR Y = A AND B.

They want to succeed with the largest possible probability. It’s clear that one strategy they can follow is always to output X=Y=0, in which case they’ll win 75% of the time (namely, in all four of the cases except A=B=1).

Furthermore, by enumerating all of Alice and Bob’s possible pure strategies and then appealing to convexity, one can check that there’s no strategy that lets them win more than 75% of the time.  In other words, no matter what they do, they lose for one of the four possible (A,B) pairs.

Do you agree with the previous paragraph? If so, then you accept the Bell/CHSH inequality, end of story.
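
Incidentally, the enumeration claimed above is small enough to carry out by machine.  Here’s a minimal Python sketch: since any shared-randomness strategy is a probability mixture of deterministic ones (that’s the convexity step), it suffices to check the 16 pairs of deterministic strategies.

    from itertools import product

    # A deterministic strategy for one player is a function from {0,1} to {0,1},
    # represented as the pair (output on input 0, output on input 1).
    functions = list(product((0, 1), repeat=2))   # 4 functions per player

    best = max(
        sum((fx[A] ^ fy[B]) == (A & B) for A in (0, 1) for B in (0, 1)) / 4
        for fx in functions
        for fy in functions
    )
    print(best)   # 0.75 -- no strategy wins on all four (A, B) input pairs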

Of all the papers pointing out the errors in Joy Christian’s attempted refutations of the simple arithmetic above, my favorite is Richard Gill’s.  Let me quote from Gill’s eloquent conclusion:

There remains a psychological question, why so strong a need is felt by so many researchers to “disprove Bell” in one way or another? At a rough guess, at least one new proposal comes up per year. Many pass by unnoticed, but from time to time one of them attracts some interest and even media attention. Having studied a number of these proposals in depth, I see two main strategies of would-be Bell-deniers.

The first strategy (the strategy, I would guess, in the case in question) is to build elaborate mathematical models of such complexity and exotic nature that the author him or herself is probably the only person who ever worked through all the details. Somewhere in the midst of the complexity a simple mistake is made, usually resulting from suppression of an important index or variable. There is a hidden and non-local hidden variable.

The second strategy is to simply build elaborate versions of detection loophole models. Sometimes the same proposal can be interpreted in both ways at the same time, since of course either the mistake or the interpretation as a detection loophole model are both interpretations of the reader, not of the writer.

According to the Anna Karenina principle of evolutionary biology, in order for things to succeed, everything has to go exactly right, while for failure, it suffices if any one of a myriad factors is wrong. Since errors are typically accidental and not recognized, an apparently logical deduction which leads to a manifestly incorrect conclusion does not need to allow a unique diagnosis. If every apparently logical step had been taken with explicit citation of the mathematical rule which was being used, and in a specified context, one could say where the first misstep was taken. But mathematics is almost never written like that, and for good reasons. The writer and the reader, coming from the same scientific community, share a host of “hidden assumptions” which can safely be taken for granted, as long as no self-contradiction occurs. Saying that the error actually occurred in such-and-such an equation at such-and-such a substitution depends on various assumptions.

The author who still believes in his result will therefore claim that the diagnosis is wrong because the wrong context has been assumed.

We can be grateful for Christian that he has had the generosity to write his one page paper with a more or less complete derivation of his key result in a more or less completely explicit context, without distraction from the author’s intended physical interpretation of the mathematics. The mathematics should stand on its own, the interpretation is “free”.  My finding is that in this case, the mathematics does not stand on its own.

Update (5/7): I can’t think of any better illustration than the comment thread below for my maxim that computation is clarity.  In other words, if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!  The following comment of mine expands on this point:

The central concept that I find missing from the comments of David Brown, James Putnam, and Thomas Ray is that of the sanity check.

Math and computation are simply the tools of clear thought. For example, if someone tells me that a 4-by-4 array of zorks contains 25 zorks in total, and I respond that 4 times 4 is 16, not 25, I’m not going to be impressed if the person then starts waxing poetic about how much more profound the physics of zorks is than my narrow and restricted notions of “arithmetic”. There must be a way to explain the discrepancy even at a purely arithmetical level. If there isn’t, then the zork theory has failed a basic sanity check, and there’s absolutely no reason to study its details further.

Likewise, the fact that Joy can’t explain how to code a computer simulation of (say) his exploding toy ball experiment that would reproduce his predicted Bell/CHSH violation is extremely revealing. This is also a sanity check, and it’s one that Joy flunks. Granted, if he were able to explain his model clearly enough for well-intentioned people to understand how to program it on a computer, then almost certainly there would be no need to actually run the program! We could probably just calculate what the program did using pencil and paper. Nevertheless, Bram, John Sidles, and others were entirely right to harp on this simulation question, because its real role is as a sanity check. If Joy’s ideas are not meaningless nonsense, then there’s no reason at all why we shouldn’t be able to simulate his experiment on a computer and get exactly the outcome that he predicts. Until Joy passes this minimal sanity check—which he hasn’t—there’s simply no need to engage in deep ruminations like the ones above about physics or philosophy or Joy’s “Theorema Egregious.”

U. of Florida CS department: let it be destroyed by rising sea levels 100 years from now, not reckless administrators today

Monday, April 23rd, 2012

Update (4/27): A famous joke concerns an airplane delivered to the US Defense Department in the 1950s, which included a punch-card computer on board.  By regulation, the contractor had to provide a list of all the components of the plane—engine, wings, fuselage, etc.—along with the weight of each component.  One item in the list read, “Computer software: 0.0 kg.”

“That must be a mistake—it can’t weigh 0 kg!” exclaimed the government inspector.  “Here, show me where the software is.”  So the contractor pointed to a stack of punched cards.  “OK, fine,” said the government inspector.  “So just weigh those cards, and that’s the weight of the software.”

“No, sir, you don’t understand,” replied the contractor.  “The software is the holes.”

If the Abernathy saga proves anything, it’s the continuing relevance of this joke even in 2012.  Abernathy is the government inspector who hears that software weighs nothing, and concludes that it does nothing—or, at least, that whatever division is responsible for punching the holes in the cards, can simply be folded into the division that cuts the card paper into rectangles.


As many of you have heard by now, Cammy Abernathy, Dean of Engineering at the University of Florida, has targeted her school’s Computer and Information Science and Engineering (CISE) department for disembowelment: moving most faculty to other departments, and shunting any who remain into non-research positions.  Though CISE is by all accounts one of UF’s strongest engineering departments, no other department faces similar cuts, and the move comes just as UF is increasing its sports budget by more than would be saved by killing computer science. (For more, see Lance’s blog, or letters from Eric Grimson and Zvi Galil. Also, click here to add your name to the already 7000+ petitioning UF to reconsider.)

On its face, this decision seems so boneheadedly perverse that it immediately raises the suspicion that the real reasons for it, whatever they are, have not been publicly stated. The closest I could find to a comprehensible rationale came from this comment, which speculates that the UF administration might be sabotaging its CS department as a threat to the Florida State legislature: “see, keep slashing our budget, and this is the sort of thing we’ll be forced to do!”  But I don’t find that theory very plausible; UF must realize that the Republican-controlled legislature’s likely reaction would be “go ahead, knock yourselves out!”

On a personal note, my parents live part-time in beautiful Sarasota, FL, home of the Mote Marine Laboratory, which does amazing work rehabilitating dolphins, manatees, and sea turtles.  Having visited Sarasota just a few weeks ago, I can testify that, despite frequent hurricanes, a proven inability to hold democratic elections, and its reputation as a giant retirement compound, Florida has definite potential as a state.

Academic computer science as a whole will be fine.  As for Florida, may the state prove greater than its Katherine Harrises, Rick Scotts, and Cammy Abernathys.

Update: See this document for more of the backstory on Abernathy’s underhanded tactics in dismantling the UF CISE department.  Based on the evidence presented there, she really does deserve the scorn now being heaped on her by much of the academic world.

Another Update: UF’s president issued a rather mealy-mouthed statement saying that they’re going to set aside their original evisceration proposal and find a compromise, though who knows what the compromise will look like.

In other news, Greg Kuperberg posted a comment that not only says everything I was trying to say more eloquently, but also explains why I and other CS folks care so much about this issue: because what’s really at stake is the concept of Turing-universality itself.  Let me repost Greg’s comment in its entirety.

It looks like Dean Abernathy hasn’t explained herself all that well, which is not surprising if what she is doing makes no sense. Reading the tea leaves, in particular the back-story document that Scott posted, it looks like she had it in for the CS department from the beginning of her tenure as Dean at Florida. In her interview with Stanford when she had just been appointed as dean, she already said then that “we” wanted to bring EE and CS closer together, even though at the time, there had been no discussion and there was no “we”. Then during discussions with the CS department, she refused to take no for an answer, even though she sometimes pretended to, and as time went on the actual plan looked more and more punitive. She appointed an outside chair to the department, and then in the final plan she terminated the graduate program, moved half of the department to EE, and left the other half to do teaching only. The CS department was apparently very concerned about its NRC ranking, but this ranking only came out when Abernathy’s wheels were already in motion. In any case everyone knows that the NRC rankings were notoriously shabby across all disciplines and the US News rankings, although hardly deep, are much less ridiculous.

So what gives? Apparently from Abernathy’s Stanford interview, and from her actions, she simply takes computer science to be a special case of electrical engineering. Ultimately, it’s a rejection of the fundamental concept of Turing universality. In this world view, there is no such thing as an abstract computer, or at best who really cares if there is one; all that really exists is electronic devices.

Scott points out that those departments that are combined EECS are really combined in name only. This is not just empirical happenstance; it comes from Turing universality and the abstract concept of a computer. Yes, in practice modern computers are electronic. However, if someone does research in compilers, much less CS theory, then really nothing at all is said about electricity. To most people in computer science, it’s completely peripheral that computers are electronic. Nor is this just a matter of theoretical vs applied computer science. CS theory may be theoretical, but compiler research isn’t, much less other topics such as user interfaces or digital libraries.

Abernathy herself works in materials engineering and has a PhD from Stanford. I’m left wondering at what point she failed to understand, or began to misunderstand or dismiss, the abstract concept of a computer. If she were dean of letters and sciences, then I could imagine an attempt to dump half of the literature department into a department of paper and printing technology, and leave the other half only to teach grammar. It would be exactly the same mistake.

The battle against Elsevier gains momentum

Wednesday, February 8th, 2012

Check out this statement on “The Cost of Knowledge” released today, which (besides your humble blogger) has been signed by Ingrid Daubechies (President of the International Mathematical Union), Timothy Gowers, Terence Tao, László Lovász, and 29 others.  The statement carefully explains the rationale for the current Elsevier boycott, and answers common questions like “why single out Elsevier?” and “what comes next?”

Also check out Timothy Gowers’ blog post announcing the statement.  The post includes a hilarious report by investment firm Exane Paribas, explaining that the current boycott has caused Reed Elsevier’s stock price to fall, but presenting that as a great investment opportunity, since they fully expect the price to rebound once this boycott fails like all the previous ones.  I ask you: does that not make you want to boycott Elsevier, for no other reason than to see the people who follow Exane Paribas’ cynical advice lose their money?

In related news, the boycott petition now has 4600+ signatures and counting.  If you’ve already signed, great!  If you haven’t, why not?

Update (Feb. 9): There’s now a great editorial by Gareth Cook in the Boston Globe supporting the Elsevier boycott (and analogizing it to both the Tahrir Square uprising and the Boston Tea Party!).

Boycott Elsevier!

Thursday, January 26th, 2012

If you’re in academia and haven’t done so yet, please take a moment to sign this online petition organized by Tyler Neylon, and pledge that you won’t publish, referee, or do editorial work for any Elsevier journals.  I’ve been boycotting Elsevier (and most other commercial journal publishers—Elsevier is merely the worst) since 2004, when I first learned about their rapacious pricing policies.  I couldn’t possibly be happier with my choice: unlike most idealistic principles, this one gets you out of onerous work rather than committing you to it!  Sure, Elsevier is huge and we’re tiny, but the fight against them is finally gathering steam (possibly because of Elsevier’s support for the “Research Works Act”), years after the case against them became inarguable.  Since their entire business model depends on our donating free labor to them, all it will take to bring them down is for enough of us to decide we’re through being had.  We can actually win this one … Yes We Can.

For more information, see this wonderful recent post by Fields medalist and Shtetl-Optimized commenter Timothy Gowers, entitled “Elsevier — my part in its downfall.”  (Added: also check out this great post by Aram Harrow.)  You might also enjoy a parody piece I wrote years ago, trying to imagine how Elsevier’s “squeeze those dupes for all they’ve got” business model would work in any other industry.

The Alternative to Resentment

Friday, December 2nd, 2011

A year ago, in a post entitled Anti-Complexitism, I tried to grapple with the strange phenomenon—one we’ve seen in force this past week—of anonymous commenters getting angry about the mere fact of announcements, on theoretical computer science blogs, of progress on longstanding open problems in theoretical computer science.  When I post something about global warming, Osama Bin Laden, or (of course) the interpretation of quantum mechanics, I expect a groundswell of anger … but a lowering of the matrix-multiplication exponent ω?  Huh?  What was that about?

Well, in this case, some commenters were upset about attribution issues (which hopefully we can put behind us now, everyone agreeing about the importance of both Stothers’ and Vassilevska Williams’ contributions), while others honestly but mistakenly believed that a small improvement to ω isn’t a big deal (I tried to explain why they’re wrong here).  What interests me in this post is the commenters who went further, positing the existence of a powerful “clique” of complexity bloggers that’s doing something reprehensible by “hyping” progress in complexity theory, or by exceeding some quota (what, exactly?) on the use of the word “breakthrough.”

One of the sharpest responses to that paranoid worldview came (ironically) from a wonderful anonymous comment on my Anti-Complexitism post, which I recommend everyone read.  Here was my favorite paragraph:

The final criticism [by the anti-complexites] seems to be: complexity theory makes too much noise which people in other areas do not like.  I really don’t understand this one, I mean what is wrong with people in an area being excited about their area?  Is that wrong?  And where do we make those noise?  On complexity blogs!  If you don’t like complexity theorists being excited about their area why are you reading these blogs?  The metaphor would be an outsider going to a wedding and asking the people in the wedding with a very serious tone: “why is everyone happy here?”

Yesterday, in response to my reposting the above comment on Lance and Bill’s blog, another anonymous commenter had something extremely illuminating to say:

Scott, you are missing the larger socio-economical context: it’s not about excitement.  It’s about researchers competing for scarce resources, primarily funding.  The work involved in funding acquisition is generally loathed, and directly reduces the time scientists have for research and teaching.  If some researchers ramp up their hype-level vis-a-vis the rest of the community, as the complexity community is believed to be doing (what with all them Goedel awards?), they are forcing (or are seen as forcing) the rest either to accept a lower level of funding with all the concomitant disadvantages, or invest more time in hype themselves.  In other words, hypers are defecting in the prisoners dilemma type game scientists are playing, the objective of which is to minimise the labour involved in funding acquisition.

This is similar to teeth-whitening: in the past, it was perfectly possible to be considered attractive with natural, slightly yellowish teeth. Then some defected by bleaching, then more and more, and today natural teeth are socially hardly acceptable, certainly not if you want to be good-looking.  Is that progress?

I posted a response on Lance and Bill’s blog, but then decided it was important enough to repost here.  So:

Dear Anonymous 2:47,

Let me see whether I understand you correctly.  On the view you propose, other scientists shouldn’t have praised (say) Carl Sagan for getting millions of people around the world excited about science.  Rather, they should have despised him, for using hype to divert scarce funding dollars from their own fields to the fields Sagan favored (like astronomy, or Sagan’s preferred parts of astronomy).  Sagan forced all those other scientists to accept a terrible choice: either accept reduced funding, or else sink to Sagan’s level, and perform the loathed task of communicating their own excitement about their own fields to the public.

Actually, there were other scientists who drew essentially that conclusion.  As an example, Sagan was famously denied membership in the National Academy of Sciences, apparently because of a few vocal NAS members who were jealous and resentful of Sagan’s outreach activities.  The view we’re now being asked to accept is that those NAS members are the ones who emerge from the story the moral victors.

So let me thank you, Anonymous 2:47: it’s rare for anyone to explain the motivation behind angry TCS blog comments with that much candor.

Now that the real motivation has (apparently) crawled out from underneath its rock, I can examine it and refute it.  The central point is simply that science isn’t a Prisoner’s-Dilemma-type game.   What you describe as the “socially optimal equilibrium,” where no scientists need to be bothered to communicate their excitement about their fields, is not socially optimal at all—neither from the public’s standpoint nor from science’s.

At the crudest level, science funding is not a fixed-size pie.  For example, when Congress was debating the cancellation of the Superconducting Supercollider, a few physicists from other fields eagerly jumped on the anti-SSC bandwagon, hoping that the SSC money might then get diverted to their own fields.  Ultimately, of course, the SSC was cancelled, and none of the money ever found its way to other areas of physics.

So, if you see people using blogs to talk about research results that excite them, then instead of resenting it, consider starting your own blog to talk about the research results that excite YOU.  If your blog is well-written and interesting, I’ll even add you to my blogroll, game-theoretic funding considerations be damned.  Just go to WordPress.com—it’s free, and it takes only a few minutes to set one up.

Alex Halderman, and India’s assault on academic freedom

Friday, December 17th, 2010

Five years ago, not long after the founding of Shtetl-Optimized, I blogged about Alex Halderman: my best friend since seventh grade at Newtown Junior High School, now a famous security researcher and a computer science professor at the University of Michigan, and someone whose exploits seem to be worrying at least one government as much as Julian Assange’s.

In the past, Alex has demonstrated the futility of copy-protection schemes for music CDs, helped force the state of California to change its standards for electronic voting machines, and led a spectacular attack against an Internet voting pilot in Washington DC.  But Alex’s latest project is probably his most important and politically-riskiest yet.  Alex, Hari Prasad of India, and Rop Gonggrijp of the Netherlands demonstrated massive security problems with electronic voting machines in India (which are used by about 400 million people in each election, making them the most widely-used voting system on earth).  As a result of this work, Hari was arrested in his home and jailed by the Indian authorities, who threatened not to release him until he revealed the source of the voting machine that he, Alex, and Rop had analyzed.  After finally being released by a sympathetic judge, Hari flew to the United States, where he received the Electronic Frontier Foundation’s 2010 Pioneer Award.  I had the honor of meeting Hari at MIT during his and Alex’s subsequent US lecture tour.

But the story continues.  Earlier this week, after flying into India to give a talk at the International Conference on Information Systems Security (ICISS’2010) in Gandhinagar, Alex and Rop were detained at the New Delhi airport and threatened with deportation from India.  No explanation was given, even though the story became front-page news in India.  Finally, after refusing to board planes out of New Delhi without being given a reason in writing for their deportation, Alex and Rop were allowed to enter India, but only on the condition that they did so as “tourists.”  In particular, they were banned from presenting their research on electronic voting machines, and the relevant conference session was cancelled.

To those in the Indian government responsible for the harassment of Alex Halderman and Rop Gonggrijp and (more seriously) the imprisonment of Hari Prasad: shame on you!  And to Alex, Hari, and Rop: let the well-wishes of this blog be like a small, nerdy wind beneath your wings.

QCut

Thursday, December 16th, 2010

WARNING: This post makes (what turned out in retrospect to be) advanced use of sarcasm, irony, and absurdism.  Indeed, even after I added a disclaimer explaining the sarcasm, many commenters still responded as if I actually favored gutting the National Science Foundation.  (Unless, of course, those commenters were also being sarcastic—in which case, touché!)

The confusion is completely my fault.  When I write a post, I have in my mind a reader who’s read this blog for a while, and knows that obviously I don’t favor gutting the fraction of a percent of the Federal budget devoted to the progress of human understanding and American leadership thereof; obviously the NSF wastes plenty of money, but if it didn’t, then it would be doing a terrible job, because research is all about trying stuff that has a good chance of failure; obviously if you were seriously looking for waste, you could find orders of magnitude more of it in the military and elsewhere.  So then the only remaining question is: how can we best have fun with a disgusting and contemptible situation?  I forgot how many people come to this blog not having any idea who I am or why I’m writing—and for that, I sincerely apologize.

Now, if you’d like a sarcasm-detection challenge, I did leave lots of hints in the following post that I didn’t actually agree with Congressman Smith.  See how many of them you can find!


As some of you may have heard, the incoming Republican majority in Congress has a new initiative called YouCut, which lets ordinary Americans like me propose government programs for termination.  So imagine how excited I was to learn that YouCut’s first target—yes, its first target—was that notoriously bloated white elephant, the National Science Foundation.  Admittedly, I’ve already tried to save NSF from some wasteful expenditures, in my occasional role as an NSF panel member.  But this is my first chance to join in as a plain US citizen.

In a video explaining the new initiative, Congressman Adrian Smith concedes that the NSF supports “worthy research in the hard sciences,” but then gives two examples of NSF grants that strike him as wasteful: one involving collaboration among soccer players, the other involving modeling the sound of breaking objects.  This article gives some more detail about the projects in question.

While I can’t wait to participate, I have a few questions before I start:

  1. Exactly which sciences count as “hard”?  Once the pitchforks are raised, how far do we go?  Is math fair game?  What about economics, cosmology, evolutionary biology?
  2. Has there ever been a research project that couldn’t be described in such a way as to sound absurd?  (“Even in the middle of a war, university academics in Chicago are spending taxpayer dollars in a quixotic attempt to smash teeny-tiny uranium atoms underneath a football field…”)
  3. Years ago, several commenters on my and Lance’s blogs eloquently argued that science funding isn’t a traditional left vs. right issue, that Republicans are at least as friendly to science as Democrats, and that viewing the modern GOP as the “party of ignorance” is inaccurate, simplistic, and offensive.  Would any of those commenters kindly help us understand what’s going on?

Let me end this post with a request: I want all of my readers to visit the YouCut page, and propose that quantum computing and theoretical computer science research be completely eliminated.  Here’s my own CAREER Award; go ahead and cite it by number as a particularly egregious example of government waste.

See, I’m hungry for the honor (not to mention the free publicity) of seeing my own favorite research topics attacked on the floor of the House.  As we all know, it’s child’s play to make fun of theoretical computer science: its abstruseness, its obvious irrelevance to national goals—however infinitesimal the cost is compared to (say) corn subsidies or defense contracts for stuff the military doesn’t want, however gargantuan the payoffs of such research have been in the past.  So what are Reps. Eric Cantor and Adrian Smith waiting for?  I dare them to do it!

Obviously, though, before the House Republicans end American participation in theoretical computer science, they’ll want to familiarize themselves with what our tiny little field actually is.  To that end, let me humbly offer the links on the sidebar to the right as one place to get started.

Update (12/18):  When a friend read this post, his first reaction was that the sarcasm would be lost on most readers.  I didn’t believe him.  See, I exist in a frame of reference wherein, when the mob shows up at your house with torches, you don’t argue with them.  Instead you say: “Oh, so you’re the ones here to burn me?  Then please, let’s get started!  There’s plenty of flammable fat around my torso area.  Do you prefer rare, medium, or well done?”  That way, at least history will record you as having gone down with your middle finger proudly aloft, rather than cowering in a corner.  However, it’s now obvious that my friend was right.  So, for the literal-minded: I think reacting to our country’s debt crisis by looking for NSF grants to ridicule is a really terrible idea, for reasons that are so self-evident I’ll simply provide some blank space for you to fill them in yourself: _______________________________.   And, having devoted my whole career to quantum computing and theoretical computer science research, I don’t wish to see them eliminated.  On the other hand, if science in the United States were going to be dismantled (which, despite the efforts of some politicians, I don’t think it will be), then I’d consider it an honor for theoretical computer science to be the first in the crosshairs.

NRC: Nonsensical Ranking Clowns

Thursday, September 30th, 2010

As those of you in American academia have probably heard by now, this week the National Research Council finally released its 2005 rankings of American PhD programs, only five years behind schedule.  This time, the rankings have been made 80% more scientific by the addition of error bars.  Among the startling findings:

  • In electrical and computer engineering, UCLA and Purdue are ahead of Carnegie Mellon.
  • In computer science, UNC Chapel Hill is ahead of the University of Washington.
  • In statistics, Iowa State is ahead of Berkeley.

However, before you base any major decisions on these findings, you should know that a few … irregularities have emerged in the data used to generate them.

  • According to the NRC data set, 0% of graduates of the University of Washington’s Computer Science and Engineering Department had “academic plans” for 2001-2005.  (In reality, 40% of their graduates took faculty positions during that period.)  NRC also reports that UW CSE has 91 faculty (the real number is about 40).  Most of the illusory “faculty,” it turned out, were industrial colleagues who don’t supervise students, and who thereby drastically and artificially brought down the average number of students supervised.  See here and here for more from UW itself.
  • According to the NRC, 0% of MIT electrical engineering faculty engage in interdisciplinary work.  NRC also reports that 24.62% of MIT computer science PhDs found academic employment; the actual number is twice that (49%).
  • The more foreign PhD students a department had, the higher it scored.  This had the strange effect that the top departments were punished for managing to recruit more domestic students, who are the ones in much shorter supply these days.
  • The complicated regression analysis used to generate the scoring formula led to the percentage of female faculty in a given department actually counting against that department’s reputation score (!).

Ever since the NRC data were released from the parallel universe in which they were gathered, bloggers have been having a field day with them—see for example Dave Bacon and Peter Woit, and especially Sariel Har-Peled’s Computer Science Deranker (which ranks CS departments by a combined formula, consisting of 0% the NRC scores and 100% a random permutation of departments).

Yet despite the fact that many MIT departments (for some reason not CS) took a drubbing, I actually heard some of my colleagues defend the rankings, on the following grounds:

  1. A committee of good people put a lot of hard work into generating them.
  2. The NRC is a prestigious body that can’t be dismissed out of hand.
  3. Now that the rankings are out, everyone should just be quiet and deal with them.

But while the Forces of Doofosity usually win, my guess is that they’re going to lose this round.  Deans and department heads—and even the Computing Research Association—have been livid enough about the NRC rankings that they’ve denounced them with unusual candor, and the rankings have already been thoroughly eviscerated elsewhere on the web.

Look: if I really needed to know what (say) the best-regarded PhD programs in computer science were, I could post my question to a site like MathOverflow—and in the half hour before the question was closed for being off-topic, I’d get vastly more reliable answers than the ones the NRC took fifteen years and more than four million dollars to generate.

So the interesting questions here have nothing to do with the “rankings” themselves, and everything to do with the process and organization that produced them.  How does Charlotte Kuh, study director of the NRC’s Assessment of Research Doctorate Programs, defend the study against what now looks like overwhelming evidence of Three-Stooges-level incompetence?  How will the NRC recover from this massive embarrassment, and in what form should it continue to exist?

The NRC, as I had to look up, is an outfit jointly overseen by the National Academy of Sciences (NAS), the National Academy of Engineering (NAE), and the Institute of Medicine (IOM).  Which reminded me of the celebrated story about Richard Feynman resigning his membership in the NAS.  When asked why, Feynman explained that, when he was in high school, there was an “honor club” whose only significant activity was debating who was worthy of joining the honor club. After years in the NAS, he decided it was no different.

Now that I write that, though, an alternative explanation for the hilarious problems with the NRC study occurs to me.  The alternative theory was inspired by this striking sentence from an Inside Higher Ed article:

When one of the reporters on a telephone briefing about the rankings asked Ostriker [the chairman of the NRC project committee] and his fellow panelists if any of them would “defend the rankings,” none did so.

So, were these joke rankings an elaborate ruse by the NRC, meant to discredit the whole idea of a strict linear order on departments and universities?  If so, then I applaud the NRC for its deviousness and ingenuity in performing a much-needed public service.