Archive for the ‘Nerd Interest’ Category

Force multiplier

Thursday, July 28th, 2011

We live in perilous times.  Within a few days, the United States might default on its debt, plunging the country into an unprecedented catastrophe.  Meanwhile, the tragedy in Norway (a country I’ll visit for the first time next month) reminds us that the civilized world faces threats from extremists of every ideology.  All this news, of course, occurs against the backdrop of record-breaking heatwaves, the decimation of worldwide fish stocks, the dwindling supply of accessible oil, and the failure of the Large Hadron Collider to find supersymmetry.

But although the future may have seldom seemed bleaker, I want people to know that we in MIT’s complexity theory group are doing everything we can to respond to the most pressing global challenges.  And nothing illustrates that commitment better than a beautiful recent paper by my PhD student Andy Drucker (whom many of you will recognize from his years of insightful contributions to Shtetl-Optimized: most recently, solving an open problem raised by my previous post).

Briefly, what Andy has done is to invent—and demonstrate—a breakthrough method by which anyone, including you, can easily learn to multiply ten-digit numbers in your head, using only a collection of stock photos from Flickr to jog your memory.

Now, you might object: “but isn’t it cheating to use a collection of photos to help you do mental math—just like it would be cheating to use pencil and paper?”  However, the crucial point is that you’re not allowed to modify or rearrange the photos, or otherwise use them to record any information about the computation while you’re performing it.  You can only use the photos as aids to your own memory.

By using his method, Andy—who has no special mental-math training or experience whatsoever—was able to calculate 9883603368 x 4288997768 = 42390752785149282624 in his head in a mere seven hours.  I haven’t tried the method myself yet, but hope to do so on my next long plane flight.

Crucially, the “Flickr method” isn’t limited to multiplication.  It works for any mental memorization or calculation task—in other words, for simulating an arbitrary Boolean circuit or Turing machine.  As I see it, this method provides probably the most convincing demonstration so far that the human brain, unaided by pencil and paper, can indeed solve arbitrary problems in the class P (albeit thousands of times more slowly than a pocket calculator).  In his paper, Andy discusses possible applications of the method for cognitive science: most notably, using it to test conjectures about the working of human memory.  If that or other applications pan out, then—like many other research projects that seem explicitly designed to be as useless as possible—Andy’s might end up failing at that goal.

Rosser’s Theorem via Turing machines

Thursday, July 21st, 2011

(Thanks to Amit Sahai for spurring me to write this post!)

The Background

We all remember Gödel’s First Incompleteness Theorem from kindergarten.  This is the thing that, given a formal system F, constructs a sentence G(F) that’s a mathematical encoding of

“This sentence is not provable in F.”

If F proves G(F), then F proves both that F proves G(F) and that F doesn’t prove G(F), so F is inconsistent (and hence also unsound).  Meanwhile, if F proves Not(G(F)), then it “believes” there’s a proof of G(F).  So either that proof exists (in which case it would render F inconsistent, by the previous argument), or else it doesn’t exist (in which case F is unsound).  The conclusion is that, assuming F is powerful enough to express sentences like G(F) in the first place, it can’t be both sound and complete (that is, it can’t prove all and only the true arithmetical statements).

OK, but the way most people like to state Gödel’s Theorem is stronger than that: no sufficiently-powerful formal system F can be both complete and consistent.  Note that soundness implies consistency, but not vice versa.  (If I believe that there’s a giant purple boogeyman on the moon, then presumably my belief is unsound, but it might be perfectly consistent with my various other beliefs about boogeymen.)

Unfortunately, neither Gödel’s original proof, nor computer scientists’ favorite alternative proofs, actually give you the stronger statement about completeness and consistency.  And this has been a persistent problem when I teach Gödel in my undergraduate computability and complexity class.

Where’s the catch in Gödel’s argument?  It’s in the case where F proves Not(G(F)) (i.e., that G(F) is provable), even though in reality G(F) is true (i.e., G(F) isn’t provable).  In that situation, F would clearly be unsound, but it might not contain any contradiction—basically because, no matter how long you looked, you could never rule out F’s (false) belief that G(F) is provable.  Indeed, F would be what I like to call a self-hating theory: a theory, like F+Not(Con(F)), that pessimistically believes in its own inconsistency, even though in fact it’s perfectly consistent.  (By contrast, if F arrogantly believes in its own consistency, then it can’t be consistent by the Second Incompleteness Theorem!  There’s a lesson there somewhere…)

The way Gödel himself solved this problem was by introducing a new concept called ω-consistency, which is intermediate between consistency and soundness.  He then showed that F can’t be both complete and ω-consistent.  (Why didn’t Gödel simply talk about soundness?  Because unlike consistency or ω-consistency, soundness is a “metatheoretic” concept that’s impossible to formalize in F.  So, if he used soundness, then the First Incompleteness Theorem couldn’t even be stated, let alone proved, in F itself, and that in turn would create problems for the proof of his Second Incompleteness Theorem.)

Anyway, from the standpoint of an undergrad class, the fear is that, once you start talking about “ω-consistency,” all the romance and self-referential magic of Gödel will vanish in a morass of technicality.

So surely we can dot the i’s here (or rather, put the umlauts on the ö’s), and prove the stronger, cleaner statement that F can’t be both complete and consistent?

Yes we can—but to do so we need Rosser’s Theorem: a 1936 strengthening of Gödel’s Theorem that’s much less well-known among the nerd public than it ought to be.  In Rosser’s proof, we replace G(F) by a new sentence R(F), which is a mathematical encoding of the following:

“For every proof of this sentence in F, there’s a shorter disproof.”

If F proves R(F), then it also proves that there’s a disproof of R(F) that’s shorter than the proof of R(F) whose existence we just assumed.  So we can look for that disproof (since there are only finitely many strings of symbols to check), and either we’ll find it or we won’t—but in either case, we’ll have revealed F to be inconsistent.  Meanwhile, if F proves Not(R(F)), then it proves that there is a proof of R(F) with no shorter disproof.  So in particular, it proves that there’s a proof of R(F) that’s no longer than the proof of Not(R(F)) whose existence we just assumed.  But once again, we can look for that proof (there are only finitely many strings to check), and either we’ll find it or we won’t, and in either case, F is revealed to be inconsistent.

Notice what the Rosser sentence accomplishes: it creates a symmetry between the cases that R(F) is provable and R(F) is disprovable, correcting the asymmetry between the two cases in Gödel’s original argument.

Alright, so then why don’t I just teach Rosser’s Theorem in my undergrad class, instead of Gödel’s?

I’ll tell you why: because, when I teach Gödel to computer scientists, I like to sidestep the nasty details of how you formalize the concept of “provability in F.”  (From a modern computer-science perspective, Gödel numbering is a barf-inducingly ugly hack!)

Instead, I simply present Gödel’s Theorem as a trivial corollary of what I see as its conceptually-prior (even though historically-later) cousin: Turing’s Theorem on the unsolvability of the halting problem.

For those of you who’ve never seen the connection between these two triumphs of human thought: suppose we had a sound and complete (and recursively-axiomatizable, yadda yadda yadda) formal system F, which was powerful enough to reason about Turing machines.  Then I claim that, using such an F, we could easily solve the halting problem.  For suppose we’re given a Turing machine M, and we want to know whether it halts on a blank tape.  Then we’d simply have to enumerate all possible proofs in F, until we found either a proof that M halts, or a proof that M runs forever.  Because F is complete, we’d eventually find one or the other, and because F is sound, the proof’s conclusion would be true.  So we’d decide whether M halts.  But since we already know (thanks to Mr. T) that the halting problem is undecidable, we conclude that F can’t have existed.
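For the computer scientists following along, here’s that reduction as a toy Python sketch.  The proof checker `toy_proves` is a made-up stand-in (a two-statement “formal system” that’s sound and complete by fiat), since an actual checker for F is exactly the formalization we’re sidestepping:

```python
from itertools import count, product

ALPHABET = "01"

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for n in count(0):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def decide_halting(machine, proves):
    """Decide whether `machine` halts, given a checker proves(p, s) for a
    sound and complete system F: enumerate all candidate proofs until one
    settles the question one way or the other."""
    for p in all_strings():
        if proves(p, f"{machine} halts"):
            return True
        if proves(p, f"{machine} runs forever"):
            return False

# Hypothetical stand-in for F's proof checker, sound and complete by fiat
# over a two-machine universe (purely to make the sketch executable).
TRUE_FACTS = {"M1 halts", "M2 runs forever"}
def toy_proves(proof, statement):
    return proof == "0" and statement in TRUE_FACTS
```

If F failed to be complete, the loop above could run forever on some input; that’s the toy version of why completeness (plus soundness) is exactly what lets you outdo Turing.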

“Look ma, no Gödel numbers!”

The “New” Observation

The above discussion leads to an obvious question:

Is there also a proof of Rosser’s Theorem that uses only Turing machines—analogous to the computer-scientist-friendly proof we just gave for the “original” Incompleteness Theorem?

My initial worry was that the answer would be no.  For not only is Rosser’s sentence more complicated than Gödel’s, but it depends essentially on an ordering of proofs—and it’s not clear what such an ordering would correspond to in the world of Turing machines.

Why did this worry me?  Because it threatened my conviction that computer programs are “really” at the core of Gödel’s Theorem, whatever any tradition-minded philosopher or logician might claim to the contrary.  If even the modest step from Gödel to Rosser required abandoning the computability perspective, then my faith in the Almighty Turing Machine would be shaken.

But never fear!  A few months ago, I found a short, simple, Turing-machine-based proof of Rosser’s Theorem.  While this seemed too small to write up as a paper, I’d never seen it before (please let me know if you have!), so I thought I’d share it here.  (Update: Makoto Kanazawa points me to a basically-similar argument in Kleene’s 1967 textbook.  So, you can consider what follows to be a “popularization” of Kleene.)

The first step is to define the following variant of the halting problem:

The Consistent Guessing Problem

Given as input a description of a Turing machine M:

  1. If M accepts on a blank tape, then accept.
  2. If M rejects on a blank tape, then reject.
  3. If M runs forever on a blank tape, then either accept or reject, but in any case, halt!

It’s easy to show that there’s no Turing machine to solve the Consistent Guessing Problem, by a modification (arguably, even a simplification) of Turing’s original argument for the halting problem.  Indeed, I put the unsolvability of the Consistent Guessing Problem on last semester’s midterm, and at least half the students got it.  (Damn!  I guess I can’t use that one again.)

See it yet?  No?  Alright, so let P be a Turing machine that solves the Consistent Guessing Problem.  Then we can easily modify P to produce a new Turing machine Q that, given as input a description ⟨M⟩ of another Turing machine M:

  • Rejects if M(⟨M⟩) accepts.
  • Accepts if M(⟨M⟩) rejects.
  • Halts (either accepting or rejecting) if M(⟨M⟩) runs forever.

Now run Q on its own description ⟨Q⟩.  If Q(⟨Q⟩) accepts, then it rejects; if Q(⟨Q⟩) rejects, then it accepts.  So the only remaining option is that Q(⟨Q⟩) runs forever, violating the third condition.
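The same diagonalization can be acted out in Python, with a blown recursion stack standing in for “running forever.”  The candidate solver `naive_p` below is deliberately doomed (it just runs the machine on itself, violating clause 3), but any purported solver would be refuted the same way:

```python
def make_q(p):
    """Given a purported Consistent Guessing solver p (which, fed a machine,
    always halts with True or False), build Q: on input f, Q answers the
    opposite of whatever p predicts about f(f)."""
    def q(f):
        return not p(f)
    return q

# A doomed candidate solver: just run the machine on itself and report its
# answer.  It violates clause 3 of the problem: on non-halting inputs it
# never halts (Python's recursion limit plays the role of "forever" here).
def naive_p(f):
    return f(f)

q = make_q(naive_p)

# Run Q on its own description: accepting or rejecting would each
# contradict itself, so the only option left is to run forever.
try:
    q(q)
    outcome = "halted"  # unreachable: any halting answer refutes itself
except RecursionError:
    outcome = "ran forever"
```

(With a genuine solver P, Q(⟨Q⟩) would halt with a self-contradictory answer; in this toy, the contradiction shows up as non-termination instead, which is exactly the clause-3 violation.)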

From the unsolvability of the Consistent Guessing Problem, I claim that Rosser’s Theorem follows.  For suppose we had a complete and consistent (but not necessarily sound!) formal system F, which was powerful enough to talk about Turing machines.  Then using F, we could solve the Consistent Guessing Problem, as follows.  Given as input a Turing machine description ⟨M⟩, start enumerating all possible proofs and disproofs of the statement “M accepts on a blank tape.”  Accept as soon as you find a proof, or reject as soon as you find a disproof.

Because F is complete, you must eventually find either a proof or a disproof (and therefore halt, either accepting or rejecting).  Also, because F is consistent, if M really rejects then F can’t prove that M accepts, while if M really accepts then F can’t prove that M doesn’t accept (since in either case, the contradiction could be discovered in finite time).  So you’ll accept if M accepts and reject if M rejects.  But this means that you’re solving Consistent Guessing!  Since we already showed there’s no Turing machine to solve Consistent Guessing, we conclude that F can’t have existed.

QED: the moral order of the universe is restored, and the Turing machine’s exalted position at the base of all human thought reaffirmed.

(Incidentally, you might wonder whether Gödel’s Second Incompleteness Theorem, on the impossibility of a consistent F proving its own consistency, can also be proved in a Turing-machine-centric way.  To anticipate your question, the answer is yes—and better yet, it even involves Kolmogorov complexity!  See, for example, this beautiful recent paper by Shira Kritchman and Ran Raz.)

So, will Gödel’s Theorem always and forevermore be taught as a centerpiece of computability theory, and will the Gödel numbers get their much-deserved retirement?  I don’t see a reason why that shouldn’t happen—but alas, the consistency of my prediction isn’t enough to imply its metatheoretic truth.

What Alan T. did for his PhD

Tuesday, June 28th, 2011

We’ve all been there before: by the time you start graduate school in Princeton, you’ve already invented the Turing machine, pioneered the concept of computational universality, and proved the unsolvability of Hilbert’s Entscheidungsproblem.  A few years from now, you’re going to return to England to make decisive contributions to the breaking of the Enigma and the winning of World War II.  Your problem is, what do you do for the couple years in between?  (Keep in mind that you have a PhD thesis to submit, and the Turing machine is already old hat by now!)

The answer, apparently, is to tackle a neat problem in logic, one version of which was asked three weeks ago by a Shtetl-Optimized commenter named Schulz.  Not knowing the answer, I posted Schulz’s problem to MathOverflow.  There, François Dorais and Philip Welch quickly informed me that Turing had already studied the problem in 1939, and Timothy Chow pointed me to Torkel Franzén’s book Inexhaustibility: A Non-Exhaustive Treatment, which explains Turing’s basic observation and the background leading up to it in a crystal-clear way.

The problem is this: given any formal system F that we might want to take as a foundation for mathematics (for example, Peano Arithmetic or Zermelo-Fraenkel set theory), Gödel tells us that there are Turing machines that run forever, but that can’t be proved to run forever in F.  An example is a Turing machine M that enumerates all the proofs in F one by one, and that halts if it ever encounters a proof of 0=1.  The claim that M doesn’t halt is equivalent to the claim that F is consistent—but if F is indeed consistent, then the Second Incompleteness Theorem says that it can’t prove its own consistency.

On the other hand, if we just add the reasonable axiom Con(F) (which asserts that F is consistent), then our new theory, F+Con(F), can prove that M runs forever.  Of course, we can then construct a new Turing machine M’, which runs forever if and only if F+Con(F) is consistent.  Then by the same argument, F+Con(F) won’t be able to prove that M’ runs forever: to prove that, we’ll need a yet stronger theory, F+Con(F)+Con(F+Con(F)).  This leads inevitably to considering an infinite tower of theories F0, F1, F2, …, where each theory asserts the consistency of the ones before it:

F0 = F

Fi = Fi-1 + Con(Fi-1) for all i≥1
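As a toy illustration, treating theories purely as named strings (no actual proof theory involved), a few levels of this tower can be generated mechanically:

```python
def tower(base="F", levels=4):
    """Return the names of F_0, ..., F_levels, where each theory is the
    previous one plus an axiom asserting the previous one's consistency.
    Theories are just strings here; this only illustrates the recursion
    (and how quickly the naive names blow up in length)."""
    names = [base]
    for _ in range(levels):
        prev = names[-1]
        names.append(f"({prev})+Con({prev})")
    return names
```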

But there’s no reason not to go further, and define another theory that asserts the consistency of every theory in the above list, and then another theory that asserts the consistency of that theory, and so on.  We can formalize this using ordinals:

Fω = F + Con(F0) + Con(F1) + Con(F2) + …

Fω+i = Fω+i-1 + Con(Fω+i-1) for all i≥1

Fω·2 = Fω + Con(Fω) + Con(Fω+1) + Con(Fω+2) + …

and so on, for every ordinal α that we can define in the language of F.  For every such ordinal α, we can easily construct a Turing machine Mα that runs forever, but that can’t be proved to run forever in Fα (only in the later theories).  The interesting question is, what happens if we reverse the quantifiers? In other words:

Given a Turing machine M that runs forever, is there always an ordinal α such that Fα proves that M runs forever?

This is the question Turing studied, but I should warn you that his answer is disappointing.  It turns out that the theories Fα are not as well-defined as they look.  The trouble is that, even to define a theory with infinitely many axioms (like Fω or F), you need to encode the axioms in some systematic way: for example, by giving a Turing machine that spits out the axioms one by one.  But Turing observes that the power of Fα can depend strongly on which Turing machine you use to spit out its axioms!  Indeed, he proves the following theorem:

Given any Turing machine M that runs forever, there is some “version” of Fω+1 (i.e., some way of encoding its axioms) such that Fω+1 proves that M runs forever.

The proof is simple.  Assume for simplicity that F itself has only finitely many axioms (removing that assumption is straightforward).  Then consider the following Turing machine P for outputting the axioms of Fω, which gives rise to a “version” of Fω that we’ll call FP:

Output the axioms of F

For t=0,1,2,…

If M halts in t steps or fewer, then output “Con(FP)”; otherwise output “Con(Ft)”

Next t

You might notice that our description of P involves the very theory FP that we’re defining!  What lets us get away with this circularity is the Recursion Theorem, which says (informally) that when writing a program, we can always assume that the program has access to its own code.

Notice that, if P ever output the axiom “Con(FP)”, then FP would assert its own consistency, and would therefore be inconsistent, by the Second Incompleteness Theorem.  But by construction, P outputs “Con(FP)” if and only if M halts.  Therefore, if we assume FP’s consistency as an axiom, then we can easily deduce that M doesn’t halt.  It follows that the theory Fω+1 := FP + Con(FP) proves that M runs forever.
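Here’s a Python sketch of P as a generator.  The Recursion Theorem’s self-reference is faked by simply letting P emit the name of the theory F_P it defines; `m_halts_within` is a hypothetical halting predicate for M, and `t_max` truncates the enumeration so the sketch terminates:

```python
def P_axioms(m_halts_within, t_max=5):
    """Enumerate the axioms of the "version" F_P of F_omega defined by P.
    m_halts_within(t) tells whether M halts in t steps or fewer."""
    yield "axioms of F"
    for t in range(t_max):
        if m_halts_within(t):
            yield "Con(F_P)"   # emitted iff M halts: a self-consistency axiom
        else:
            yield f"Con(F_{t})"

# If M never halts, F_P looks just like an ordinary version of F_omega:
never = list(P_axioms(lambda t: False))
# If M halts by step 3, F_P brazenly asserts its own consistency:
halts = list(P_axioms(lambda t: t >= 3))
```

The design mirrors the pseudocode above: the suspicious axiom “Con(F_P)” appears among F_P’s axioms exactly when M halts, which is what makes assuming Con(F_P) tantamount to assuming M runs forever.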

One question that the above argument leaves open is whether there’s a Turing machine M that runs forever, as well as a system S of ordinal notations “extending as far as possible”, such that if we use S to define the theories Fα, then none of the Fα’s prove that M runs forever.  If so, then there would be a clear sense in which iterated consistency axioms, by themselves, do not suffice to solve the halting problem.  Alas, I fear the answer might depend on exactly how we interpret the phrase “extending as far as possible” … elucidation welcome!

Update (June 29, 2011): In a comment, François Dorais comes to the rescue once again:

In connection with your last paragraph, Feferman has shown that there are paths through O such that the resulting theory proves all true Π⁰₁ statements. [JSL 27 (1962), 259-316]  Immediately after, Feferman and Spector showed that not all paths through O do this. [JSL 27 (1962), 383-390]  In particular, they show that any good path must be more complicated than O itself: the path cannot be Π¹₁.  In other words, there is no simple way to form a well-ordered iterated consistency extension that captures all true Π⁰₁ statements.

A personal post

Sunday, June 5th, 2011

Here’s an interview with me by math grad student Samuel Hansen, as part of a podcast he runs called Strongly Connected Components.  (Also check out the interviews with Steven Rudich, Steven Rudich a second time, Lance Fortnow, Doron Zeilberger, and your other favorite stars of the nerdosphere!)  In the interview, I talk about my passion for baseball stats, what you don’t know about llama-breeding, the use of color in Matisse’s later works … oh all right, it’s mostly about quantum computing and P vs. NP.

Here’s a story I told for an event called Story Collider, which was back-to-back with a superb production of Breaking the Code (Hugh Whitemore’s acclaimed play about the life of Alan Turing) in Cambridge’s Central Square Theater.  I was honored to serve as a “scientific consultant” to the Breaking the Code production, and to do audience Q&A before and after a couple performances.  In the Story Collider, I talk about the “Turing phase” I went through as a teenager and Alan T.’s impact on my life.

(Note: For the past couple years, I’ve avoided talking much about my personal life on this blog, since I pride myself on being someone who learns from experience and adjusts his behavior accordingly.  But two months ago, something truly happy occurred in my life, and if you listen to the end of the Story Collider, you’ll find out what it was…)

One last personal note: I’m at the Federated Computing Research Conference in San Jose all week.  If you read Shtetl-Optimized, are here at FCRC, see me, and wouldn’t do so otherwise, come and say hi!

CS timeline voting: the results are in!

Thursday, April 28th, 2011

The top ten:

1. Euclid’s Elements: 116 votes
2. Turing’s “On Computable Numbers”: 110 votes
3. Gödel’s Incompleteness Theorem: 107 votes
4. Gödel’s P vs. NP Letter to von Neumann: 106 votes
5. George Boole’s Logic: 88 votes
6. Shor’s Algorithm: 88 votes
7. Wikipedia: 85 votes
8. Claude Shannon’s Digital Logic: 82 votes
9. PRIMES in P: 82 votes
10. Cook-Levin Theorem: 80 votes

The rest:

Al-Khwarizmi’s “On the Calculation with Hindu Numerals”: 79 votes
Bardeen, Brattain, and Shockley Invent Transistor: 79 votes
Babbage’s Analytical Engine: 77 votes
Tim Berners-Lee Invents WWW: 75 votes
Fast Fourier Transform: 73 votes
Brin and Page Create Google: 73 votes
von Neumann Architecture: 71 votes
RSA: 70 votes
Hilbert Calls for Mechanization of Mathematical Reasoning: 69 votes
Simplex Algorithm: 69 votes
Claude Shannon Formalizes Cryptography: 68 votes
Dijkstra’s Algorithm: 68 votes
Gaussian Elimination Described in Ancient China: 67 votes
Quicksort: 65 votes
UNIX and C: 65 votes
Newton’s Method: 64 votes
Leibniz Describes Binary Notation, Calculus Ratiocinator: 64 votes
First Program written by Ada Lovelace: 64 votes
Gauss’s Disquisitiones Arithmeticae: 62 votes
Monte Carlo Method: 62 votes
“Bit” Coined: 62 votes
TeX Typesetting: 62 votes
Ginsparg Creates arXiv: 61 votes
Kleene Invents Regular Expressions: 61 votes
McCarthy Invents LISP: 59 votes
“The Art of Computer Programming”: 59 votes
TCP/IP Protocol: 58 votes
Strassen’s Algorithm: 58 votes
PCP Theorem: 56 votes
Turing Test: 55 votes
Randomized Primality Testing: 55 votes
IP=PSPACE: 55 votes
Scott and Rabin’s Paper on Nondeterminism: 54 votes
Jacquard Loom: 54 votes
Colossus Begins Operation at Bletchley Park: 53 votes
Integrated Circuit: 53 votes
Chomsky Hierarchy: 52 votes
Pascal Builds Arithmetic Machine: 51 votes
First Genome Sequenced: 51 votes
Reed-Solomon Codes: 50 votes
Time Hierarchy Theorem: 50 votes
ARPAnet: 49 votes
Four Color Map Theorem Proved: 49 votes
Linux: 49 votes
Diophantine Equations Proved Undecidable: 46 votes
Feynman Suggests Quantum Computing: 46 votes
Deep Blue Defeats Kasparov: 46 votes
Solomonoff-Kolmogorov-Chaitin Complexity: 44 votes
Lempel-Ziv Data Compression: 43 votes
GPS: 42 votes
Marian Rejewski’s “Bombe” + Alan Turing’s Improvements: 41 votes
Diffie-Hellman Public Key Exchange Protocol: 41 votes
Zuse’s Z1: 40 votes
Viterbi Algorithm: 40 votes
First Email Message: 38 votes
Pseudorandom Generators: 37 votes
Oughtred Invents Slide Rule: 36 votes
FORTRAN: 36 votes
ENIAC: 35 votes
Semaphores: 35 votes
Gottlob Frege’s “Begriffsschrift”: 34 votes
Grace Murray Hopper Creates A-O Compiler: 34 votes
Conway’s Game of Life: 34 votes
Xerox Parc’s Alto With First GUI: 33 votes
Kuttaka Algorithm from Ancient India: 32 votes
Scientific Computing During Manhattan Project: 30 votes
Wilkes, Wheeler, and Gill Define Closed Subroutines: 29 votes
Stroustrup creates C++: 28 votes
Zimmermann creates PGP: 28 votes
Dartmouth Conference Popularizes Term “AI”: 27 votes
Moore’s Law: 27 votes
Boosting in Machine Learning: 27 votes
Codd Proposes Relational Databases: 26 votes
Ethernet Invented: 26 votes
Valiant Proposes PAC-Learning: 26 votes
Stallman Writes GNU Manifesto: 25 votes
Wiesner Proposes Quantum Money and Multiplexing: 24 votes
Antikythera Mechanism: 23 votes
BitTorrent: 23 votes
Low-Density Parity Check Codes: 23 votes
McCulloch and Pitts’ “A Logical Calculus Immanent in Nervous Activity”: 22 votes
Engelbart and English Invent Mouse: 22 votes
Dijkstra’s “Go To Statement Considered Harmful”: 22 votes
Back-Propagation: 22 votes
MIT SAGE Creates First Large-Scale Computer Network: 21 votes
Vannevar Bush Creates First Large-Scale Analog Calculator: 20 votes
IBM Introduces Hard Drive: 20 votes
Checkers Solved: 20 votes
First Packet-Switching Network: 20 votes
Atanasoff and Berry’s Vacuum-tube Computer: 19 votes
Vannevar Bush’s “As We May Think”: 19 votes
Hollerith’s Electromechanical Counting Machine: 18 votes
MIT Builds First Time-Sharing System: 18 votes
First Computer Virus: 18 votes
IEEE Floating-Point Standard: 18 votes
IBM PC: 18 votes
“Spacewar!”, First Computer Game: 17 votes
RISC Architecture: 17 votes
Intel’s 8086: 17 votes
al-Jazari’s Water Clocks and Musical Automata: 17 votes
Edward Lorenz (Re)discovers Chaos Theory: 16 votes
Apollo Guidance Computer: 16 votes
CAPTCHAs: 16 votes
VC Dimension: 16 votes
Macsyma Computer Algebra System: 15 votes
Amazon.com: 15 votes
UNIVAC I: 13 votes
DaVinci Surgical Robot: 13 votes
Mark II Incident Popularizes Word “Bug”: 12 votes
Weizenbaum Creates ELIZA: 12 votes
ASCII: 11 votes
TI Handheld Calculator: 11 votes
Simula 67: 11 votes
MIT Whirlwind I Displays Graphics: 10 votes
Sketchpad, First CAD Software: 10 votes
NCSA Mosaic: 10 votes
Robert Morris’ Computer Worm: 9 votes
Pixar Releases “Toy Story”: 9 votes
Stuxnet Worm: 9 votes
IBM System/360: 8 votes
Mac Hack Chess Program: 7 votes
Microsoft Windows: 7 votes
Sojourner on Mars: 7 votes
BASIC: 6 votes
Apple Macintosh: 6 votes
SETI@home: 6 votes
IBM’s Watson Wins At Jeopardy!: 5 votes
Atari’s Pong: 4 votes
Atlas Computer in Manchester: 4 votes
Norbert Wiener Founds Cybernetics: 3 votes
First ATM in Tokyo: 3 votes
Youtube Launched: 3 votes
VisiCalc: 2 votes
Jevons’ Logic Piano: 1 vote
Apple II: 1 vote
Adobe PostScript: 1 vote
SABRE Travel Reservation System: 0 votes
Fischer-Lynch-Paterson Theorem: 0 votes
Facebook, Twitter Use in Egypt Revolution: 0 votes
First Machine Translation Demonstration: -1 vote
Usenet: -1 vote
Akamai: -2 votes
TX-0: -3 votes
CDC 6600: -3 votes
Compact Disc Invented: -3 votes
Aiken’s Mark I: -4 votes
CM-1 Connection Machine: -4 votes
Whirlwind I Displays Graphics: -5 votes
Floppy Disk Invented: -6 votes
MITS Altair Microcomputer and Microsoft BASIC: -6 votes
Axelrod’s “The Evolution of Cooperation”: -7 votes
Microsoft Office: -7 votes
Pentium FDIV Bug: -7 votes
EDSAC: -8 votes
UNIMATE, First Industrial Robot: -9 votes
CLU Programming Language: -9 votes
1ESS Switching System: -11 votes
UNIVAC Predicts Presidential Election: -12 votes
Stanford Arm: -13 votes
“2001 A Space Odyssey” Introduces HAL: -15 votes
“Spam” Coined: -16 votes
First Denial-of-Service Attack: -17 votes
Y2K Bug: -18 votes
Facebook Launched: -18 votes
Nintendo’s Donkey Kong: -19 votes
“Robot” Coined: -21 votes
CSIRAC: -21 votes
Apple’s iPhone: -21 votes
Slashdot: -27 votes
Godwin’s Law: -29 votes
Asimov’s Three Laws of Robotics: -32 votes
Match.com: -34 votes
de Vaucanson’s Mechanical Duck: -39 votes
von Kempelen’s Mechanical Turk: -52 votes

A few comments:

  1. It’s (just-barely) conceivable that the results could have been slightly skewed by the quantum- and complexity-loving readership of this blog.
  2. Voters really didn’t like fiction/pop-culture references, mechanical contrivances, or anything that sounded like a publicity stunt.  They were much keener on conceptual advances (even to the extent of putting Gödel well ahead of the transistor).

I need to catch a plane to give the Buhl Lecture at Carnegie Mellon tomorrow, so I’ll leave you to draw any further conclusions.

Three museum reviews

Thursday, April 21st, 2011

The American Museum of Natural History has two temporary exhibits that are drawing large crowds.  One, Brain: The Inside Story, I can attest is worth a visit the next time you’re in NYC.  From the New York Times review, I’d been worried that the exhibit would be full of la-de-da generalities: “how marvelously complicated is the brain!  how little we understand about it!”  But it turned out that was just the review.   The exhibit itself does a pretty good job of summarizing what’s known about how the brain is organized, how it develops, how various drugs affect it, and more.  One highlight for me was a model brain that you can take apart to see how the brain stem, limbic system, and cerebral cortex fit together—something that 2D images had never successfully conveyed to me.  The other exhibit, The World’s Largest Dinosaurs, was sold out for the entire day when we tried to go there, so we had to content ourselves with the smaller dinosaurs in the rest of the museum.

The Mark Twain House and Museum in Hartford, Connecticut, should be avoided at all costs.  On a recent visit, my family of Twain fans and I were snidely turned away because we hadn’t booked a tour—a requirement buried in the website, which someone googling for the opening hours would almost certainly miss.  (This despite the fact that the museum wasn’t crowded, and we could easily have joined a tour that was starting as we arrived.)  So don’t suffer the petty bureaucrats who curate Twain’s legacy, and treat the town of Hartford the way they’d apparently like you to: as a bathroom stop along the highway from New York to Boston.  Twain would’ve been amused.  Update: Jeffrey Nichols, Executive Director of the Mark Twain House, left me a personal apology in the comments section.  I thank him warmly for that, and maybe I will visit again sometime—though it will help if I have some way of knowing I won’t just be turned away again! 🙂

The Yad Vashem Holocaust History Museum in Jerusalem has been redesigned since the last time I was there, in 2002.  In the old Yad Vashem, you walked around more-or-less randomly looking at the exhibits; in the new one, you proceed in a more linear order (similar to the US Holocaust Memorial Museum in Washington DC): from the rise of Nazism to the first anti-Jewish laws to the ghettoes to the gas chambers and crematoria.  The tour ends powerfully, with the Hall of Names (a large circular room with photos of victims and bookshelves of data about 3.8 million of them), followed by a balcony with a spectacular view of West Jerusalem—as if the building itself is trying to explain why the country it’s in exists.  I recommend a visit, even if you’ve been to Yad Vashem before its redesign in 2005.  But be careful to check the opening hours: the first time my family and guests tried to visit, the museum was closing, we were turned away, and we ended up going instead to a rest stop full of Elvis statues, where people lined up to use the bathroom and bought Elvis t-shirts.  (I thought that belonged in some anthology of Jewish humor.)

Summary: While the world’s museums have a great deal to teach us, they ought to devote more of their attention to the fundamental tasks of being open and letting people in.  People turned away from a museum are not just lost customers: they’ve often spent hours getting to an unusual place, and may be so annoyed by the wasted trip that they won’t want to return, even if they have the opportunity to do so.  In two of the cases above, I checked the website beforehand and that didn’t suffice, since the key information I needed wasn’t there or was buried.  Yeah, I suppose I could call ahead before every museum visit, but I hate doing that.  If someone wants to start CanIActuallyGetInToTheMuseum.com, it could be a fantastic way to not make any money.

Top 150 computer science events to be decided once and for all

Monday, April 11th, 2011

Today I break Shtetl-Optimized’s long radio silence with a relatively-exciting announcement: you remember my timeline of computer science history?  Well, MIT students Jason Zhu and Ammar Ammar have now kindly created a website where you can vote on each of the entries, as well as new entries suggested by commenters.  It’s pretty simple: you just register (by entering an email address, username, and password), then upvote each entry you like and downvote each entry you dislike (you can also abstain on any entry).

The voting site arrives just in time for the MIT symposium “Computation and the Transformation of Practically Everything”, which is happening today and tomorrow.

For reference, here are the 17 new contenders added by popular demand:

150BC Chinese text describes Gaussian elimination
499 Indian mathematician Aryabhata describes the “kuttaka” algorithm for solving Diophantine equations
1206 al-Jazari builds elaborate water clocks and musical automata
1801 The Jacquard loom uses punched cards to control textile manufacturing
1951 Wilkes, Wheeler, and Gill describe the concept of closed subroutines
1956 Stephen Kleene invents regular expressions
1962 The Atlas computer begins operation in Manchester
1962 Robert Gallager introduces low-density parity check codes
1968 First deployed packet-switching network
1969 Strassen’s algorithm for fast matrix multiplication
1969 Stephen Wiesner conceives of quantum money and multiplexing
1971 Vapnik and Chervonenkis introduce VC dimension
1982 PostScript
1992 The PCP Theorem
1999 SETI@home
2006 DaVinci surgical robot performs the first unaided operation
2007 Checkers solved

Update: A new feature has been added that lets you rank four randomly-selected entries—click “Done” on the bottom of the page to access it.

Update: You can now undo a vote by clicking twice on the same arrow.

Science journalism: good and hilarious

Saturday, March 5th, 2011

On Wednesday, Larry Hardesty of the MIT News Office published a nice article about my work with Alex Arkhipov on the computational complexity of linear optics.  Although the title—“The quantum singularity”—made me wince a little, I was impressed by the effort Larry put into getting the facts right, and especially laying out the problems that still need to be solved.

Less successful was a story in PC Magazine based on MIT’s press release, which contained the following sentence (let me know if you can decipher what the author meant—I couldn’t):

Aaronson says that he and Arkhipov have not successfully proven that designing a device capable of testing the theory is impossible—which is an important first step, whether to eventually building a quantum computer, or even just laying the initial framework for using the microscopic secrets of the universe to let humans better understand the world that surrounds them.

However, in the competition for Popular Science Article Sentence of the Year, the sentence above will have to contend with a now-classic sentence from the New York Times article about Watson:

More than anything, the contest was a vindication for the academic field of computer science, which began with great promise in the 1960s with the vision of creating a thinking machine and which became the laughingstock of Silicon Valley in the 1980s, when a series of heavily financed start-up companies went bankrupt.

To the NYT’s credit, they quickly posted a correction:

An article last Thursday about the I.B.M. computer Watson misidentified the academic field vindicated by Watson’s besting of two human opponents on “Jeopardy!” It is artificial intelligence — not computer science, a broader field that includes artificial intelligence.

Timeline of computer science

Friday, February 11th, 2011

Update (Feb. 14): Thanks so much to the many commenters who offered suggestions—I’ve implemented a large fraction of them!  In addition to many clarifications and corrections of existing entries, I’ve added entries for:

Al-Khwarizmi
The slide rule
William Stanley Jevons’s “logic piano”
The Atanasoff-Berry computer
Claude Shannon’s cryptographic work
Reed-Solomon codes (replacing the Hamming code)
Solomonoff-Kolmogorov-Chaitin complexity
The 1ESS switching system
Semaphores
Viterbi’s algorithm
Simula 67
Back-propagation (now sharing an entry with the book “Perceptrons”)
The Solovay-Strassen primality test
Lempel-Ziv compression
PAC-learning
Microsoft Office
Global Positioning System
Slashdot (replacing the entry for the word “blog”)
CAPTCHAs
BitTorrent
Egypt’s “Twitter revolution”

The trouble is that I now have 165 entries, whereas I was told to deliver 150.  So, after realizing the infeasibility of simply “choosing” items for deletion, the MIT150 folks and I reached a decision to set up a voting site for the top 150 entries, which should be available within a couple weeks.  In the meantime, if you want to suggest even more entries, you can go ahead and do so … thanks!


This year, MIT is celebrating its 150th anniversary—and as part of the birthday festivities, I somehow got roped into creating a timeline of “150 major events in computer science history” (i.e., in the world at large, not just at MIT).  I understand that the timeline will go up on a wall somewhere.

My first thought was that singling out the “most important events in CS history” was an exercise in futility, and not a particularly original one either.  But then, as I spent weekends reading about Konrad Zuse, the UNIVAC, and the IBM 360, I started really getting into my assigned exercise in futility.  Sure, there were great CS timelines already on the web—some of which I stole from shamelessly—but none of them had exactly the focus I was looking for.  For example, the founding of a company didn’t strike me as a milestone in itself: the question was, what qualitatively new things did the company do?  So I decided to take as a starting point the words and concepts that populate the mental landscape of CS: floating-point, compiler, graphics, network, bug.  Which events were decisive in loosing these concepts upon the Earth?  To clarify, I was just as interested in the demonstration, popularization, and commercialization of existing concepts as in the discovery of new ones: the release of Windows in 1985 brought few new ideas into the world, but it seems hard to argue that Windows hasn’t carved out its place on the mental map.  But so, in my opinion, has the Time Hierarchy Theorem, even if only 0.01% as many people have heard of it.

I’m including my current draft list of 152 events below, so that Shtetl-Optimized readers can critique it, nitpick it, and tell me all about the important events I’ve left out and the events I’ve included that shouldn’t be there.  I’ll carefully consider all of your suggestions, and will implement the ones that I like.

While you’re sharpening your nitpick-knives, though, let me explain the ground rules I decided to follow:

  1. I adopted a strict policy against listing general trends (“word processing gains widespread acceptance”).  To earn inclusion in the list, an event had to be something that happened at a particular time: for example, the release of a product, the publication of a book, or the beginning of a project.  Of course, I often chose a specific event to illustrate a trend.
  2. I also adopted a strict policy against arbitrary numerical milestones (“millionth website,” “first processor to break the 1GHz barrier”), since otherwise such milestones would proliferate endlessly.
  3. Even though this was a list of events, I cared a great deal about representing people who played major roles in CS.  When—as often happened—the same person was involved in too many significant events to list, I wasn’t shy about picking one or two events to highlight the person.
  4. I wasn’t interested in external “recognition” bestowed on computers, such as TIME Magazine naming the computer its 1982 “Machine of the Year.”  Nor was I interested in what you might call “internal housekeeping milestones”: the founding of the ACM or the first CS departments, the establishment of the Turing Award, etc.  I felt that, if we’re going to engage in self-congratulation, then at least let it not be self-congratulation over self-congratulation!

Before we get to the list itself, let me give you a breakdown of number of milestones per time period:

Pre-1600s: 2
1600s: 3
1700s: 2
1800s: 7
1900s (i.e., 1900-1909): 1
1910s: 0
1920s: 2
1930s: 4
1940s: 16
1950s: 20
1960s: 29
1970s: 23
1980s: 15
1990s: 19
2000s: 9

From the data, it would appear that the level of intellectual excitement in computer science peaked in the 1960s, and has steadily declined ever since, except for a bump in the 1990s coinciding with the Internet boom.  The past decade has been a particularly arid one, producing little of note besides Facebook, Wikipedia, YouTube, and the iPhone.  If you don’t like that message—and I’m guessing that many of the MIT150 organizers won’t—then hey, don’t shoot the messenger!  The biased, misleading, and unscientific data is what it is.
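For the skeptical, my per-decade breakdown is easy to check mechanically.  Here’s a quick tally (in Python) of the counts exactly as listed above; note that they do sum to the 152 draft entries:

```python
# Per-decade milestone counts, transcribed from the breakdown above.
counts = {
    "pre-1600s": 2, "1600s": 3, "1700s": 2, "1800s": 7,
    "1900s": 1, "1910s": 0, "1920s": 2, "1930s": 4,
    "1940s": 16, "1950s": 20, "1960s": 29, "1970s": 23,
    "1980s": 15, "1990s": 19, "2000s": 9,
}

# Which decade has the most milestones, and how many entries total?
peak = max(counts, key=counts.get)
total = sum(counts.values())
print(peak, counts[peak])   # → 1960s 29
print(total)                # → 152
```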

Without further ado:

300BC Euclid’s Elements describes nontrivial algorithms (for problems such as Greatest Common Divisor) that are still used today
150-100BC The Antikythera mechanism, an astronomical computer, is built in ancient Greece
825  Abu Abdallah Muhammad ibn Musa al-Khwarizmi writes “On the Calculation with Hindu Numerals,” the work primarily responsible for spreading decimal notation in the West.  The word “algorithm” will be named for al-Khwarizmi
1622 William Oughtred invents the slide rule, which will remain in use until the 1970s
1642 Blaise Pascal builds an addition and subtraction machine
1669 Isaac Newton describes Newton’s method, an early numerical algorithm for finding roots of equations
1679 Gottfried Wilhelm Leibniz develops binary notation; Leibniz’s writings about a “Calculus Ratiocinator” are among the first to envision a general-purpose computer
1737 Jacques de Vaucanson builds a mechanical duck able to flap its wings, eat grain, and “defecate”
1770 Wolfgang von Kempelen unveils the Mechanical Turk, a “chess-playing automaton” secretly operated by a human. The hoax is only revealed 50 years later
1801 In his “Disquisitiones Arithmeticae,” Carl Friedrich Gauss discusses the computational complexity of factoring and primality testing
1837 Charles Babbage first describes plans for the Analytical Engine
1842 In her notes on the Analytical Engine, Ada Lovelace writes what’s generally considered the first computer program, to calculate Bernoulli numbers
1847 George Boole proposes Boolean logic
1869 William Stanley Jevons recasts Boole’s ideas in algebraic terms, and builds a wooden “logic piano” to construct the truth tables of small Boolean formulas
1879 Gottlob Frege publishes his “Begriffsschrift,” introducing first-order logic as a mathematically-precise language of thought
1890 Herman Hollerith builds the first electromechanical counting machine; the US government buys it to complete the census
1900 In a lecture to the International Congress of Mathematicians, David Hilbert asks for a way to mechanize all of mathematical reasoning
1921 Czech author Karel Capek popularizes the term ‘robot’ in his play “R.U.R. (Rossum’s Universal Robots)”
1925 Vannevar Bush and colleagues create the first large-scale analog calculator at MIT
1931 Kurt Gödel publishes his Incompleteness Theorem; the system of “Gödel numbering” used in the proof foreshadows computer programming
1936 Alan Turing publishes “On computable numbers,” often considered the founding document of computer science; the paper gives an explicit construction of a universal Turing machine. Alonzo Church and Emil Post arrive at similar ideas independently
1936 Working alone in Germany, Konrad Zuse begins building the Z1, the first programmable binary computer
1937 In his MIT master’s thesis—considered possibly the most influential master’s thesis in history—Claude Shannon proposes the application of Boolean algebra to electrical circuit design
1940 Building on earlier breakthroughs by Polish mathematician Marian Rejewski, Alan Turing builds an improved “Bombe” at Bletchley Park, to break the German Enigma code and help the Allies win WWII
1942 Isaac Asimov introduces his Three Laws of Robotics
1942 At Iowa State College, John Atanasoff and Clifford Berry successfully test a special-purpose vacuum-tube computer able to solve up to 29 simultaneous linear equations; Atanasoff will later spend decades in legal disputes to establish his computer’s priority over Eckert and Mauchly’s ENIAC
1943 Colossus, the world’s first programmable electronic computer, begins operation at Bletchley Park
1943 During the Manhattan Project, Richard Feynman and others pioneer large-scale scientific computing, using humans and later mechanical calculators
1943 In their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Warren McCulloch and Walter Pitts propose neural networks and finite automata
1944 The Mark I, designed by Howard Aiken, begins operations at Harvard
1945 In his 100-page “First Draft of a Report on the EDVAC,” John von Neumann describes the architecture of a stored-program computer (henceforth called “von Neumann architecture”)
1945 Vannevar Bush publishes “As We May Think” in the Atlantic Monthly, a now-famous article that foresees a global information network based on hypertext
1946 At the University of Pennsylvania, J. Presper Eckert and John Mauchly complete the ENIAC
1946 In Los Alamos, Stanislaw Ulam develops the Monte Carlo method to speed up calculations of neutron diffusion in nuclear weapons
1947 At Bell Labs, John Bardeen, Walter Brattain, and William Shockley invent the transistor
1947 Operators of the Mark II computer trace an error to a moth trapped in a relay. The incident, popularized by Grace Murray Hopper, stimulates wider adoption of the terms “bug” and “debugging”
1947 George Dantzig proposes the simplex algorithm for linear programming
1948 In his landmark paper “A Mathematical Theory of Communication,” Claude Shannon first uses the word “bit,” attributing it to John Tukey
1948 Norbert Wiener publishes “Cybernetics: Or Control and Communication in the Animal and the Machine”
1949 The EDSAC, built by Maurice Wilkes, begins operations at Cambridge University
1949 Claude Shannon initiates the rigorous mathematical study of cryptography, publishing his proof that the one-time pad is unbreakable and that any unbreakable encryption system has the same basic properties
1950 Alan Turing proposes the Turing Test for artificial intelligence; four years later, Turing will commit suicide after being prosecuted for homosexuality
1950 CSIRAC, in Australia, becomes the first computer to play music
1951 J. Presper Eckert and John Mauchly release UNIVAC I, the first commercial electronic computer
1951 Grace Murray Hopper creates A-0, considered the first compiler
1951 MIT’s Whirlwind I computer goes online. Designed by Jay Forrester, Whirlwind features magnetic core memory and vacuum tubes that last 1000 times longer than those previously available
1952 Based on analysis of early returns, a UNIVAC computer borrowed by CBS News predicts that Dwight Eisenhower will defeat Adlai Stevenson in the presidential election
1954 Researchers at Birkbeck College perform the first demonstration of machine translation, with a rudimentary translation of English into French
1955 MIT’s Whirlwind I becomes the first computer to display graphics on a video console
1956 Dartmouth hosts the first conference on artificial intelligence, bringing the term AI into use
1956 In a letter to John von Neumann, Kurt Gödel first poses what will later become known as the P versus NP problem
1956 Edsger Dijkstra conceives his shortest-path algorithm, the basis for modern trip-planning software (Dijkstra also may have been the first person to list his profession as “programmer”)
1956 Noam Chomsky proposes the Chomsky Hierarchy, linking the theory of computing to formal languages
1956 MIT Lincoln Laboratories builds the TX-0, the first general-purpose computer to be built with transistors
1956 Reynold Johnson at IBM introduces the hard drive
1957 A team led by John Backus at IBM delivers a compiler for FORTRAN, the first high-level programming language
1958 John McCarthy proposes the LISP family of functional programming languages
1958 As part of the air-defense project SAGE, MIT Lincoln Labs links hundreds of radar stations in the first large-scale computer network
1958 Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor propose the integrated circuit
1959 In the seminal paper “Finite Automata and Their Decision Problems,” Dana Scott and Michael Rabin introduce the concept of nondeterminism into computer science
1960 Tony Hoare invents the Quicksort algorithm, while a visiting student at Moscow State University
1960 Irving Reed and Gustave Solomon give a systematic construction of error-correcting codes; later, Elwyn Berlekamp and James Massey will discover an efficient decoding procedure
1960 Ray Solomonoff proposes measuring the complexity of a string by the length of the shortest program that generates it; Andrey Kolmogorov and Gregory Chaitin will later arrive at similar ideas independently
1961 UNIMATE, the first industrial robot, begins work at General Motors
1961 A team led by MIT professor Fernando Corbato demonstrates the first computer time-sharing system
1962 A team led by MIT student Steve Russell creates Spacewar!, the first computer game
1962 While studying computer models of the weather, MIT professor Edward Lorenz discovers the phenomenon that would later be popularized as “chaos theory”
1963 At MIT, Joseph Weizenbaum develops ELIZA, the now-famous program that simulates conversation between a Rogerian psychotherapist and a patient
1963 MIT graduate student Ivan Sutherland develops Sketchpad, the first Computer-Aided Design (CAD) software
1963 The first edition of ASCII (American Standard Code for Information Interchange) is published
1964 IBM releases the System/360, one of the first computers based on integrated circuits. Fred Brooks, the lead developer, later describes the lessons learned in his book “The Mythical Man-Month”
1964 At Dartmouth, John Kemeny and Thomas Kurtz create the BASIC programming language
1964 Seymour Cray releases the CDC 6600, the first of many record-breaking supercomputers
1964 American Airlines and IBM debut SABRE, the first computer-based travel reservation system
1965 AT&T debuts 1ESS, the first electronic telephone switching system, in Succasunna, New Jersey
1965 Intel cofounder Gordon Moore enunciates Moore’s Law, that the number of transistors per integrated circuit doubles roughly every two years
1965 At IBM, James Cooley and John Tukey rediscover and popularize the Fast Fourier Transform (versions of which were known to Gauss and others)
1965 Juris Hartmanis and Richard Stearns prove the existence of an infinite hierarchy of harder and harder computational problems
1965 MIT student Richard Greenblatt begins developing Mac Hack, the first computer chess program to succeed in tournament play. Mac Hack also defeats the AI skeptic Hubert Dreyfus, who had famously claimed that computers would never play high-quality chess
1965 Edsger Dijkstra introduces semaphores, which allow multiple concurrently-running programs to share the same resource
1966 Texas Instruments unveils the first electronic handheld calculator
1966 The first cash-dispensing ATM is installed in Tokyo
1967 At SRI, Douglas Engelbart and Bill English apply for a patent for the first computer mouse
1967 Andrew Viterbi invents the Viterbi algorithm for maximum-likelihood estimation in Hidden Markov Models; the algorithm will find major applications in speech recognition, bioinformatics, and both the CDMA and GSM cellular-phone standards
1967 In Norway, Ole-Johan Dahl and Kristen Nygaard develop Simula 67, considered the first object-oriented programming language and a major influence on Smalltalk and later C++
1968 Donald Knuth publishes Volume 1 of “The Art of Computer Programming”
1968 Edsger Dijkstra publishes his now-famous article “Go To Statement Considered Harmful,” igniting an acrimonious debate about programming practices
1968 The movie “2001: A Space Odyssey” introduces the world to HAL
1968 Work begins at MIT on Macsyma, the first computer algebra system
1969 Victor Scheinman builds the Stanford Arm, the first electronic computer-controlled robotic arm
1969 ARPAnet, the precursor of the Internet, links UCLA, SRI, Santa Barbara, and Utah (MIT joins in 1970)
1969 At Bell Labs, Ken Thompson and Dennis Ritchie create UNIX, and begin developing the C programming language
1969 Arthur Bryson and Yu-Chi Ho introduce back-propagation, a learning technique for neural networks that is largely ignored until the 1980s.  Meanwhile, Marvin Minsky and Seymour Papert publish “Perceptrons,” a book that introduces important mathematical techniques into computer science, but has also been accused of “killing” neural-net research for more than a decade
1969 The Apollo Guidance Computer plays a crucial role in steering Neil Armstrong and Buzz Aldrin to the lunar surface
1970 John Horton Conway invents the Game of Life cellular automaton; it is estimated that millions of dollars of computer time are wasted watching Lifeforms evolve
1970 E. F. Codd proposes the relational database management system (RDBMS)
1970 Yuri Matiyasevich, building on work of Julia Robinson, Martin Davis, and Hilary Putnam, shows the nonexistence of an algorithm to solve Diophantine equations, negatively resolving Hilbert’s Tenth Problem and demonstrating Turing-universality in one of the oldest parts of mathematics
1971 Stephen Cook and Leonid Levin independently prove that the Satisfiability problem is NP-complete; a paper by Richard Karp a year later demonstrates the pervasiveness of the NP-completeness phenomenon
1971 IBM commercially releases the floppy disk
1971 Ray Tomlinson sends the first email message on ARPANET
1971 Bob Thomas at BBN Technologies creates the first experimental computer virus
1972 Atari releases Pong
1973 At Xerox PARC, Alan Kay and collaborators create the Alto, featuring the first graphical user interface (GUI) with windows, icons, and menus
1973 Robert Metcalfe at Xerox PARC creates Ethernet
1974 MIT professor Barbara Liskov and students begin work on CLU, a predecessor of modern object-oriented programming languages
1975 Bill Gates and Paul Allen adapt BASIC to the MITS Altair microcomputer
1975 Inspired by work of Ralph Merkle, Stanford researchers Whitfield Diffie and Martin Hellman announce the first protocol for public key exchange (which had been discovered previously at the British GCHQ, but kept classified)
1975 Robert Kahn and Vint Cerf test the new TCP/IP protocol between Stanford and University College London
1975 At IBM, John Cocke advocates RISC processor architecture, and begins work on the 801 to implement it
1976 Kenneth Appel and Wolfgang Haken prove the Four-Color Map Theorem, the first major theorem to be proved using a computer
1977 Ronald Rivest, Adi Shamir, and Leonard Adleman develop the public-key encryption system that they call RSA, and announce it via Martin Gardner’s Mathematical Games column in Scientific American
1977 Robert Solovay and Volker Strassen publish an efficient randomized algorithm for primality testing, demonstrating both the feasibility of RSA and the power of randomized algorithms; shortly afterward, Gary Miller and Michael Rabin find a more efficient such algorithm
1977 Steve Jobs and Steve Wozniak release the Apple II
1977 William Kahan puts forward a draft proposal for the IEEE floating-point standard
1977 In Israel, Abraham Lempel and Jacob Ziv propose the data compression algorithm that (after improvements by Terry Welch and others) becomes the basis for PKZIP and most other general-purpose compression software
1978 Intel releases the 8086, the first in the line of x86 processors
1978 Donald Knuth begins developing the TeX computer typesetting system, which will eventually become the lingua franca of science
1979 Dan Bricklin and Bob Frankston release VisiCalc, the first personal spreadsheet program and “killer app” for the Apple II
1980 Duke University graduate students Tom Truscott and Jim Ellis create Usenet
1981 Nintendo achieves its first video game success with Donkey Kong, designed by Shigeru Miyamoto
1981 The IBM PC is released, running Microsoft’s MS-DOS
1981 Richard Feynman points out the exponential difficulty of simulating quantum physics with a conventional computer, and speculates that a quantum-mechanical computer would do better; a few years later, David Deutsch will publish his construction of a universal quantum Turing machine
1982 Work on pseudorandom generators by Manuel Blum, Silvio Micali, Shafi Goldwasser, and Andy Yao begins the modern era of complexity-based cryptography, which will lead over the next decade to notions such as zero-knowledge, interactive proofs, and probabilistically checkable proofs
1982 Sony and Philips commercially release the compact disc
1982 Michael Fischer, Nancy Lynch, and Michael Paterson prove that it’s impossible to reach consensus in an asynchronous distributed system if there’s even a single faulty processor
1983 Bjarne Stroustrup at Bell Labs develops C++, which is to become the dominant object-oriented programming language
1984 With its iconic Super Bowl commercial, Apple announces the Macintosh, the first consumer machine with a mouse/windows interface
1984 Robert Axelrod publishes “The Evolution of Cooperation,” a classic book that uses computer experiments with the Iterated Prisoners’ Dilemma to shed light on human nature
1984 Leslie Valiant at Harvard proposes PAC (Probably Approximately Correct) learning, which becomes the central mathematical framework for machine learning, and helps to explain why Occam’s Razor “works”
1985 Microsoft releases Windows
1985 Richard Stallman publishes his GNU Manifesto, setting out the principles of the free-software movement
1986 Thinking Machines begins shipping the CM-1 Connection Machine, considered the first massively-parallel supercomputer
1988 While a graduate student at Cornell, Robert Morris creates a computer worm that cripples the Internet (though that wasn’t Morris’s intention)
1989 Mike Godwin formulates Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1”
1990 At CERN in Geneva, Switzerland, Tim Berners-Lee creates the World Wide Web
1990 The IP=PSPACE Theorem, proved by Carsten Lund, Lance Fortnow, Howard Karloff, Noam Nisan, and Adi Shamir, shows the unexpected power of interactive protocols and opens a new era in the theory of computing
1990 The discovery of boosting—a technique for combining the predictions of different learning algorithms—by Michael Kearns, Rob Schapire, and Yoav Freund, starts a revolution in the theory and practice of machine learning
1990 Microsoft releases its first Office application suite, consisting of Word, Excel, and PowerPoint
1991 At Los Alamos National Laboratory, Paul Ginsparg founds the arXiv, ushering in the era of rapid online dissemination of scientific research
1991 The Linux kernel is released by Finnish student Linus Torvalds
1991 Phil Zimmermann releases Pretty Good Privacy (PGP), which makes “military-grade” public-key encryption easily available. After the Customs Service opens a criminal investigation, Zimmermann becomes an Internet folk hero
1993 A team headed by Marc Andreessen at the University of Illinois Urbana-Champaign releases NCSA Mosaic, the browser credited with popularizing the Web
1993 Joel Furr first uses the word “spam” to mean “excessive multiple postings” (in that case, to Usenet newsgroups)
1993 With 24 satellites in orbit, the Global Positioning System achieves “initial operational capability.”  Originally designed by the US Department of Defense for military applications, GPS quickly finds numerous civilian uses including computerized car navigation
1994 Peter Shor proves that a quantum computer, if built, would be able to factor numbers efficiently, launching quantum computing as an active research field
1994 Thomas Nicely discovers the Pentium FDIV bug; the ensuing controversy and costly recall by Intel underscore the need for hardware verification
1995 Pixar releases Toy Story, the first feature film made entirely with computer-generated imagery
1995 Gary Kremen launches Match.com, the first major online dating service
1995 Amazon.com is launched by Jeff Bezos and sells its first book
1995 The Institute for Genomic Research uses whole-genome shotgun sequencing—a technique dependent on massive computer power—to sequence the genome of the first free-living organism, the bacterium Haemophilus influenzae
1996 Stanford graduate students Larry Page and Sergey Brin begin developing Google
1997 IBM’s Deep Blue computer defeats human world champion Garry Kasparov
1997 Rob “Commander Taco” Malda founds Slashdot, which hosts freewheeling web-based conversations of the sort that would later be commonplace on blogs
1997 NASA’s Sojourner robotic rover moves semi-autonomously across the surface of Mars for 83 days, sending spectacular images back to Earth
1999 Widespread fear of the “Y2K millennium bug” underscores society’s dependence on legacy computer systems. Later, many will argue the fears were overblown
2000 The first major denial-of-service (DoS) attack is launched against CNN, Yahoo, and eBay
2000 Putting an unexpected practical twist on Alan Turing’s ideas from 1950, Manuel Blum, Luis von Ahn, and John Langford at Carnegie Mellon articulate the notion of CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the challenges—usually involving reading distorted text—that are used by numerous websites to discourage spam-bots
2001 Larry Sanger and Jimmy Wales launch Wikipedia
2001 Bram Cohen develops BitTorrent, the controversial peer-to-peer file sharing protocol now estimated to account for 27-55% of all Internet traffic
2001 In the wake of the 9/11 attacks, news websites continue running smoothly in part because of routing algorithms designed by Akamai Technologies. Meanwhile, former MIT student Danny Lewin, who cofounded Akamai with Professor Tom Leighton, is on American Airlines flight 11 when it crashes into the World Trade Center
2002 At IIT Kanpur, Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena announce a deterministic polynomial-time algorithm for primality testing
2004 Harvard sophomore Mark Zuckerberg launches Thefacebook.com
2005 YouTube is launched, beginning an era of online video-sharing
2007 Apple releases the iPhone
2010 Some of Iran’s centrifuges for uranium enrichment are destroyed by the Stuxnet worm, in the first known large-scale cyberattack to target physical infrastructure
2011 Making essential use of Facebook and Twitter as organizing tools, protesters in Egypt force the resignation of President Hosni Mubarak after 30 years of rule
2011 IBM’s Watson computer defeats all-time human champions Ken Jennings and Brad Rutter on Jeopardy
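It’s worth pausing on just how literally the list’s very first entry holds up: Euclid’s Greatest Common Divisor algorithm, essentially as described in the Elements, still runs today.  As a minimal sketch, here it is in a few lines of Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm (Elements, Book VII): repeatedly replace the
    pair (a, b) by (b, a mod b) until the remainder vanishes."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # → 21
```

Twenty-three centuries later, the same idea sits inside every modern cryptography library.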

Alex Halderman, and India’s assault on academic freedom

Friday, December 17th, 2010

Five years ago, not long after the founding of Shtetl-Optimized, I blogged about Alex Halderman: my best friend since seventh grade at Newtown Junior High School, now a famous security researcher and a computer science professor at the University of Michigan, and someone whose exploits seem to be worrying at least one government as much as Julian Assange’s.

In the past, Alex has demonstrated the futility of copy-protection schemes for music CDs, helped force the state of California to change its standards for electronic voting machines, and led a spectacular attack against an Internet voting pilot in Washington DC.  But Alex’s latest project is probably his most important and politically-riskiest yet.  Alex, Hari Prasad of India, and Rop Gonggrijp of the Netherlands demonstrated massive security problems with electronic voting machines in India (which are used by about 400 million people in each election, making them the most widely-used voting system on earth).  As a result of this work, Hari was arrested in his home and jailed by the Indian authorities, who threatened not to release him until he revealed the source of the voting machine that he, Alex, and Rop had analyzed.  After finally being released by a sympathetic judge, Hari flew to the United States, where he received the Electronic Frontier Foundation’s 2010 Pioneer Award.  I had the honor of meeting Hari at MIT during his and Alex’s subsequent US lecture tour.

But the story continues.  Earlier this week, after flying into India to give a talk at the International Conference on Information Systems Security (ICISS’2010) in Gandhinagar, Alex and Rop were detained at the New Delhi airport and threatened with deportation from India.  No explanation was given, even though the story became front-page news in India.  Finally, after refusing to board planes out of New Delhi without being given a reason in writing for their deportation, Alex and Rop were allowed to enter India, but only on the condition that they did so as “tourists.”  In particular, they were banned from presenting their research on electronic voting machines, and the relevant conference session was cancelled.

To those in the Indian government responsible for the harassment of Alex Halderman and Rop Gonggrijp and (more seriously) the imprisonment of Hari Prasad: shame on you!  And to Alex, Hari, and Rop: let the well-wishes of this blog be like a small, nerdy wind beneath your wings.