Today I break Shtetl-Optimized’s long radio silence with a relatively-exciting announcement: you remember my timeline of computer science history? Well, MIT students Jason Zhu and Ammar Ammar have now kindly created a website where you can vote on each of the entries, as well as new entries suggested by commenters. It’s pretty simple: you just register (by entering an email address, username, and password), then upvote each entry you like and downvote each entry you dislike (you can also abstain on any entry).
For reference, here are the 17 new contenders added by popular demand:
150BC Chinese text describes Gaussian elimination
499 Indian mathematician Aryabhata describes the “kuttaka” algorithm for solving Diophantine equations
1206 al-Jazari builds elaborate water clocks and musical automata
1801 The Jacquard loom uses punched cards to control textile manufacturing
1951 Wilkes, Wheeler, and Gill describe the concept of closed subroutines
1956 Stephen Kleene invents regular expressions
1962 The Atlas computer begins operation in Manchester
1962 Robert Gallager introduces low-density parity check codes
1968 First deployed packet-switching network
1969 Strassen’s algorithm for fast matrix multiplication
1969 Stephen Wiesner conceives of quantum money and multiplexing
1971 Vapnik and Chervonenkis introduce VC dimension
1982 PostScript
1992 The PCP Theorem
1999 SETI@home
2006 da Vinci surgical robot performs the first unaided operation
2007 Checkers solved
Update: A new feature has been added that lets you rank four randomly-selected entries—click “Done” on the bottom of the page to access it.
Update: You can now undo a vote by clicking twice on the same arrow.
One of the crown jewels of complexity theory is Valiant’s 1979 theorem that computing the permanent of an n×n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing—and in particular, a universality theorem due to Knill, Laflamme, and Milburn—one can give a different and arguably more intuitive proof of this theorem.
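For readers who haven’t met it: the permanent is defined exactly like the determinant, except that every term in the sum over permutations gets a plus sign. Here’s a minimal brute-force sketch (my own illustration, not anything from the paper):

```python
from itertools import permutations
from math import prod

def permanent(A):
    # Sum over all permutations sigma of prod_i A[i][sigma(i)] --
    # identical to the determinant formula, but with no alternating signs.
    # Brute force takes ~n * n! steps, in line with Valiant's result that
    # no efficient algorithm is expected to exist (the problem is #P-hard).
    n = len(A)
    return sum(prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10 (the determinant is -2)
```

The contrast with the determinant—computable in polynomial time by Gaussian elimination—is exactly what makes the permanent-vs-determinant question so striking.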
For decades, Harvard’s Leslie Valiant has obviously deserved a Turing Award—and today, the ACM most excellently announced its agreement with the obvious. I have little to add to the prize citation (see also Lance’s post): from launching new fields whose reach extends beyond theory (PAC-learning), to proving epochal results (#P-completeness of the permanent), to asking hugely influential questions (permanent vs. determinant), Valiant has been a creative powerhouse of theoretical computer science for longer than I’ve been alive.
One thing the prize citation doesn’t mention is that Valiant is now the third Turing Award winner (after Andy Yao and Len Adleman) to have made a major contribution to quantum computing theory. Valiant’s 2001 paper Quantum Computers that can be Simulated Classically in Polynomial Time introduced the beautiful computational model that computer scientists now know as “matchgates,” and that physicists know as “noninteracting fermions.” It still amazes me that Valiant proposed this model for purely mathematical reasons—hitting physical relevance straight between the eyes despite (as far as I can tell) not having that target anywhere in his sights.
To put the point in terms that my physicist friends will understand, that Valiant himself would probably dispute, but that I would defend:
Valiant’s work has shown that, even if our universe hadn’t been made of bosons and fermions, theoretical computer scientists would have had compelling reasons of their own to invent those particles or something equivalent to them—and furthermore, that at least one theoretical computer scientist would have had the imagination to do so.
On Wednesday, Larry Hardesty of the MIT News Office published a nice article about my work with Alex Arkhipov on the computational complexity of linear optics. Although the title—“The quantum singularity”—made me wince a little, I was impressed by the effort Larry put into getting the facts right, and especially laying out the problems that still need to be solved.
Less successful was a story in PC Magazine based on MIT’s press release, which contained the following sentence (let me know if you can decipher what the author meant—I couldn’t):
Aaronson says that he and Arkhipov have not successfully proven that designing a device capable of testing the theory is impossible—which is an important first step, whether to eventually building a quantum computer, or even just laying the initial framework for using the microscopic secrets of the universe to let humans better understand the world that surrounds them.
However, in the competition for Popular Science Article Sentence of the Year, the sentence above will have to contend with a now-classic sentence from the New York Times article about Watson:
More than anything, the contest was a vindication for the academic field of computer science, which began with great promise in the 1960s with the vision of creating a thinking machine and which became the laughingstock of Silicon Valley in the 1980s, when a series of heavily financed start-up companies went bankrupt.
To the NYT’s credit, they quickly posted a correction:
An article last Thursday about the I.B.M. computer Watson misidentified the academic field vindicated by Watson’s besting of two human opponents on “Jeopardy!” It is artificial intelligence — not computer science, a broader field that includes artificial intelligence.
Wouldn’t Jeopardy! be better without those stupid buzzers? Even if the contestants just, y’know, took turns? In a game focused solely on question-answering (OK, OK, answer-questioning) rather than buzzing, Watson would still have done amazingly well and reflected credit on its developers, but the man/machine competition would have been much closer and much more interesting to watch. No one needs a repeated demonstration that computers have faster reaction times than humans.
Inspired by the timeline discussion: could something like Watson have been built in, say, 2000? If not, then which developments of the past decade played important roles?
Back when Deep Blue beat Kasparov, IBM made a big to-do about the central role played by its large, specially-designed mainframe with custom “chess chips”—but then it wasn’t long before programs like Deep Fritz running on desktop PCs produced similar (and today, probably superior) performance. How long before we can expect a computer Jeopardy! champion that fits behind the podium?
A month ago, Caltech hosted a daylong event called “TEDxCaltech / Feynman’s Vision: the Next 50 Years”, which was attended by about a thousand people. Celebrity participants included Stephen Hawking, Carl and Michelle Feynman (Carl told me he’s a fan of the blog—hi Carl!), and Ondar, a Tuvan throat-singer who pretty much stole the show.
Videos are finally being posted on YouTube; my talk is here. My goal was to cover the P versus NP question, quantum computing, conceptual issues in quantum mechanics, and Feynman’s relation to all three, while also cracking crude masturbation jokes (in a talk like this, one has to bring out the heavy humor cannons), and to finish in 15 minutes. I don’t know how well I succeeded—but if I die tomorrow, then at least Stephen Hawking was in the audience when I made my case for P and NP being as big a deal as anything in physics.
Two explanatory comments:
By far my most successful joke was a reference to “prime numbers, such as 3, 5, 1…” Before the lunch break, the emcee had told everyone to be back by 1:35, “which I’m sure you nerds will remember since it’s the first three prime numbers.”
Yeah, I know the current upper bound on the matrix multiplication exponent is 2.376, not 2.736! It was correct on the slides I submitted, but got transposed when the slides were converted into “TED format.”
If you think my talk stinks, my only defense is that showing up to give it was already an accomplishment: my flight (from Tel Aviv to LA through Newark) was canceled because of a snowstorm, so I arrived at Caltech exhausted and barely conscious, via a 36-hour alternate route through Frankfurt and London.
I have to confess that I was skeptical of this event’s entire premise. Richard Feynman was famous for his contempt of pomp and authority; would he really have enjoyed a heavily-scripted day extolling him as a secular saint? In the end, though, the quality of many of the talks made the event more than worthwhile for me, even without counting Ondar’s throat-singing as a “talk.” I particularly enjoyed the cosmology talk of fellow-blogger Sean Carroll (yo, Sean), the Feynman stories of Lenny Susskind, a demonstration of the mind-blowing WorldWide Telescope by Curtis Wong of Microsoft, and a “Who Wants to be a Millionaire?” parody skit put on by the three “black hole bettors” (John Preskill, Kip Thorne, and Hawking, the last of whom wheeled into the auditorium to thunderous applause and the opening fanfare of Thus Spake Zarathustra). I understand that all the talks will eventually be on YouTube here.
Thanks to Michael Roukes, Mary Sikora, John Preskill, Ann Harvey, and the other organizers for putting this thing together and for inviting me.
Last night, the MIT Egyptian Club hosted a “What’s Going On In Egypt?” event, which included a lecture, a Q&A session with Egyptian students, Egyptian music, and free falafel and baklava. I went, not least because of the falafel.
The announcement that Mubarak was leaving came just a few hours before the event, which was planned as a somber discussion but hastily reconfigured as a celebration. As you’d imagine, the mood was ecstatic: some people came draped in Egyptian flags, and there was shouting, embracing, and even blowing of vuvuzelas. Building E51 wasn’t quite Tahrir Square, but it was as close as I was going to get.
About 300 people showed up. I’d expected an even bigger turnout—but then again, this was MIT, where the democratic awakening of the Arab world might have to wait if there’s a pset due next week. Many of the people who came were speaking Arabic, greeting each other with “salaam aleykum.” But only a minority were Egyptians: I met jubilant Syrians and Saudi Arabians, and pan-Arab pride was a major theme of the evening.
At one point, I overheard two guys speaking something that sounded like Arabic but wasn’t: “yesh khasa? eyn?” It was Hebrew, which I’m proud to say I now speak at almost the level of a 3-year-old. The Israelis were debating whether there was lettuce in the falafel (there wasn’t). Joining their conversation, I confirmed that we had come for basically the same reasons: first, to “witness” (insofar as one could without leaving campus) one of the great revolutions of our time; secondly, the falafel.
Two socialist organizations were selling newspapers, with headlines trumpeting the events in Egypt as the dawn of a long-awaited global workers’ revolt against capitalism. Buying a $1 newspaper (and politely turning down a subscription), I thought to myself that one has to admire these folks’ persistence, if not their powers of analysis.
Finally the main event started. An Egyptian student from Harvard presented a slideshow, which summarized both the events of the last three weeks and the outrages of the last 30 years that led to them (poverty, torture, suppression of opposition parties, indefinite detention without charges, arrests for things like having long hair). He said that this uprising wasn’t anything like Iran’s 30 years ago, that it was non-Islamic and led by the pro-democracy Facebook generation. Then there was half an hour for Q&A.
Someone asked about the protesters’ economic goals. One student panelist started to answer, but then another interjected: “Look, the people in Tahrir Square just overthrew the government. I don’t think they’ve had much time yet to think through their economic plan.”
Someone else asked about the role of the US. A student answered that it was “complicated, to say the least,” and that the Obama administration seemed internally divided.
Perhaps the most interesting question was whether the students themselves planned to return to Egypt, to help build the new democratic society. After a long silence, two students said yes.
No one asked about the future of Egypt/Israel relations, and the subject never came up. But it seemed obvious that, if the students I saw were running Egypt, they’d be too busy modernizing their country’s economy to spend much time denouncing Zionist iniquities.
In general, I agree with Natan Sharansky that, for the US and Israel, it would be incredibly shortsighted to see only danger and “instability” in the Great Egyptian Twitter Revolt of 2011. The variance is enormous, which makes it almost impossible to estimate the expectation, but there’s certainly large support on the positive half of the spectrum.
So, to my Egyptian readers: congratulations, best wishes, mazel tov, and mabrouk from the entire executive staff of Shtetl-Optimized. May your revolution be remembered with those of 1776 and 1989 and not with those of 1917 and 1979.
Update (Feb. 14): Thanks so much to the many commenters who offered suggestions—I’ve implemented a large fraction of them! In addition to many clarifications and corrections of existing entries, I’ve added entries for:
Al-Khwarizmi
The slide rule
William Stanley Jevons’s “logic piano”
The Atanasoff-Berry computer
Claude Shannon’s cryptographic work
Reed-Solomon codes (replacing the Hamming code)
Solomonoff-Kolmogorov-Chaitin complexity
The 1ESS switching system
Semaphores
Viterbi’s algorithm
Simula 67
Back-propagation (now sharing an entry with the book “Perceptrons”)
The Solovay-Strassen primality test
Lempel-Ziv compression
PAC-learning
Microsoft Office
Global Positioning System
Slashdot (replacing the entry for the word “blog”)
CAPTCHAs
BitTorrent
Egypt’s “Twitter revolution”
The trouble is that I now have 165 entries, whereas I was told to deliver 150. So, after realizing the infeasibility of simply “choosing” items for deletion, the MIT150 folks and I reached a decision to set up a voting site for the top 150 entries, which should be available within a couple weeks. In the meantime, if you want to suggest even more entries, you can go ahead and do so … thanks!
This year, MIT is celebrating its 150th anniversary—and as part of the birthday festivities, I somehow got roped into creating a timeline of “150 major events in computer science history” (i.e., in the world, not MIT). I understand that the timeline will go up on a wall somewhere.
My first thought was that singling out the “most important events in CS history” was an exercise in futility, and not a particularly original one either. But then, as I spent weekends reading about Konrad Zuse, the UNIVAC, and the IBM 360, I started really getting into my assigned exercise in futility. Sure, there were great CS timelines already on the web—some of which I stole from shamelessly—but none of them had exactly the focus I was looking for. For example, the founding of a company didn’t strike me as a milestone in itself: the question was, what qualitatively new things did the company do? So I decided to take as a starting point the words and concepts that populate the mental landscape of CS: floating-point, compiler, graphics, network, bug. Which events were decisive in loosing these concepts upon the Earth? To clarify, I was just as interested in the demonstration, popularization, and commercialization of existing concepts as in the discovery of new ones: the release of Windows in 1985 brought few new ideas into the world, but it seems hard to argue that Windows hasn’t carved out its place on the mental map. But so, in my opinion, has the Time Hierarchy Theorem, even if only 0.01% as many people have heard of it.
I’m including my current draft list of 152 events below, so that Shtetl-Optimized readers can critique it, nitpick it, and tell me all about the important events I’ve left out and the events I’ve included that shouldn’t be there. I’ll carefully consider all of your suggestions, and will implement the ones that I like.
While you’re sharpening your nitpick-knives, though, let me explain the ground rules I decided to follow:
I adopted a strict policy against listing general trends (“word processing gains widespread acceptance”). To earn inclusion in the list, an event had to be something that happened at a particular time: for example, the release of a product, the publication of a book, or the beginning of a project. Of course, I often chose a specific event to illustrate a trend.
I also adopted a strict policy against arbitrary numerical milestones (“millionth website,” “first processor to break the 1GHz barrier”), since otherwise such milestones would proliferate endlessly.
Even though this was a list of events, I cared a great deal about representing people who played major roles in CS. When—as often happened—the same person was involved in too many significant events to list, I wasn’t shy about picking one or two events to highlight the person.
I wasn’t interested in external “recognition” bestowed on computers, such as TIME Magazine naming the computer its 1982 “Machine of the Year.” Nor was I interested in what you might call “internal housekeeping milestones”: the founding of the ACM or the first CS departments, the establishment of the Turing Award, etc. I felt that, if we’re going to engage in self-congratulation, then at least let it not be self-congratulation over self-congratulation!
Before we get to the list itself, let me give you a breakdown of number of milestones per time period:
From the data, it would appear that the level of intellectual excitement in computer science peaked in the 1960s, and has steadily declined ever since, except for a bump in the 1990s coinciding with the Internet boom. The past decade has been a particularly arid one, producing little of note besides Facebook, Wikipedia, YouTube, and the iPhone. If you don’t like that message—and I’m guessing that many of the MIT150 organizers won’t—then hey, don’t shoot the messenger! The biased, misleading, and unscientific data is what it is.
Without further ado:
300BC Euclid’s Elements describes nontrivial algorithms (for problems such as Greatest Common Divisor) that are still used today
150-100BC The Antikythera mechanism, an astronomical computer, is built in ancient Greece
825 Abu Abdallah Muhammad ibn Musa al-Khwarizmi writes “On the Calculation with Hindu Numerals,” the work primarily responsible for spreading decimal notation in the West. The word “algorithm” will be named for al-Khwarizmi
1622 William Oughtred invents the slide rule, which will remain in use until the 1970s
1642 Blaise Pascal builds an addition and subtraction machine
1669 Isaac Newton describes Newton’s method, an early numerical algorithm for finding roots of equations
1679 Gottfried Wilhelm Leibniz develops binary notation; Leibniz’s writings about a “Calculus Ratiocinator” are among the first to envision a general-purpose computer
1737 Jacques de Vaucanson builds a mechanical duck able to flap its wings, eat grain, and “defecate”
1770 Wolfgang von Kempelen unveils the Mechanical Turk, a “chess-playing automaton” secretly operated by a human. The hoax is only revealed 50 years later
1801 In his “Disquisitiones Arithmeticae,” Carl Friedrich Gauss discusses the computational complexity of factoring and primality testing
1837 Charles Babbage first describes plans for the Analytical Engine
1842 In her notes on the Analytical Engine, Ada Lovelace writes what’s generally considered the first computer program, to calculate Bernoulli numbers
1847 George Boole proposes Boolean logic
1869 William Stanley Jevons recasts Boole’s ideas in algebraic terms, and builds a wooden “logic piano” to construct the truth tables of small Boolean formulas
1879 Gottlob Frege publishes his “Begriffsschrift,” introducing first-order logic as a mathematically-precise language of thought
1890 Herman Hollerith builds the first electromechanical counting machine; the US government buys it to complete the census
1900 In a lecture to the International Congress of Mathematicians, David Hilbert asks for a way to mechanize all of mathematical reasoning
1921 Czech author Karel Capek popularizes the term ‘robot’ in his play “R.U.R. (Rossum’s Universal Robots)”
1925 Vannevar Bush and colleagues create the first large-scale analog calculator at MIT
1931 Kurt Gödel publishes his Incompleteness Theorem; the system of “Gödel numbering” used in the proof foreshadows computer programming
1936 Alan Turing publishes “On computable numbers,” often considered the founding document of computer science; the paper gives an explicit construction of a universal Turing machine. Alonzo Church and Emil Post arrive at similar ideas independently
1936 Working alone in Germany, Konrad Zuse builds the Z1, the first working programmable computer
1937 In his MIT master’s thesis—considered possibly the most influential master’s thesis in history—Claude Shannon proposes the application of Boolean algebra to electrical circuit design
1940 Building on earlier breakthroughs by Polish mathematician Marian Rejewski, Alan Turing builds an improved “Bombe” at Bletchley Park, to break the German Enigma code and help the Allies win WWII
1942 Isaac Asimov introduces his Three Laws of Robotics
1942 At Iowa State College, John Atanasoff and Clifford Berry successfully test a special-purpose vacuum-tube computer able to solve up to 29 simultaneous linear equations; Atanasoff will later spend decades in legal disputes to establish his computer’s priority over Eckert and Mauchly’s ENIAC
1943 Colossus, the world’s first programmable electronic computer, begins operation at Bletchley Park
1943 During the Manhattan Project, Richard Feynman and others pioneer large-scale scientific computing, using humans and later mechanical calculators
1943 In their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Warren McCulloch and Walter Pitts propose neural networks and finite automata
1944 The Mark I, designed by Howard Aiken, begins operations at Harvard
1945 In a 100-page “draft report on the EDVAC”, John von Neumann describes the architecture of a stored-program computer (henceforth called “von Neumann architecture”)
1945 Vannevar Bush publishes “As We May Think” in the Atlantic Monthly, a now-famous article that foresees a global information network based on hypertext
1946 At the University of Pennsylvania, J. Presper Eckert and John Mauchly complete the ENIAC
1946 In Los Alamos, Stanislaw Ulam develops the Monte Carlo method to speed up calculations of neutron diffusion in nuclear weapons
1947 At Bell Labs, John Bardeen, Walter Brattain, and William Shockley invent the transistor
1947 Operators of the Mark II computer trace an error to a moth trapped in a relay. The incident, popularized by Grace Murray Hopper, stimulates wider adoption of the terms “bug” and “debugging”
1947 George Dantzig proposes the simplex algorithm for linear programming
1948 In his landmark paper “A Mathematical Theory of Communication,” Claude Shannon first uses the word “bit,” attributing it to John Tukey
1948 Norbert Wiener publishes “Cybernetics: Or Control and Communication in the Animal and the Machine”
1949 The EDSAC, built by Maurice Wilkes, begins operations at Cambridge University
1949 Claude Shannon initiates the rigorous mathematical study of cryptography, publishing his proof that the one-time pad is unbreakable and that any unbreakable encryption system has the same basic properties
1950 Alan Turing proposes the Turing Test for artificial intelligence; four years later, Turing will commit suicide after being prosecuted for homosexuality
1950 CSIRAC, in Australia, becomes the first computer to play music
1951 J. Presper Eckert and John Mauchly release UNIVAC I, the first commercial electronic computer
1951 Grace Murray Hopper creates A-0, considered the first compiler
1951 MIT’s Whirlwind I computer goes online. Designed by Jay Forrester, Whirlwind features magnetic core memory and vacuum tubes that last 1000 times longer than those previously available
1952 Based on analysis of early returns, a UNIVAC computer borrowed by CBS News predicts that Dwight Eisenhower will defeat Adlai Stevenson in the presidential election
1954 Researchers at Georgetown University and IBM perform the first public demonstration of machine translation, with a rudimentary translation of Russian into English
1955 MIT’s Whirlwind I becomes the first computer to display graphics on a video console
1956 Dartmouth hosts the first conference on artificial intelligence, bringing the term AI into use
1956 In a letter to John von Neumann, Kurt Gödel first poses what will later become known as the P versus NP problem
1956 Edsger Dijkstra conceives his shortest-path algorithm, the basis for modern trip-planning software (Dijkstra also may have been the first person to list his profession as “programmer”)
1956 Noam Chomsky proposes the Chomsky Hierarchy, linking the theory of computing to formal languages
1956 MIT Lincoln Laboratories builds the TX-0, the first general-purpose computer to be built with transistors
1956 Reynold Johnson at IBM introduces the hard drive
1957 A team led by John Backus at IBM delivers a compiler for FORTRAN, the first high-level programming language
1958 John McCarthy proposes the LISP family of functional programming languages
1958 As part of the air-defense project SAGE, MIT Lincoln Labs links hundreds of radar stations in the first large-scale computer network
1958 Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor independently invent the integrated circuit
1959 In the seminal paper “Finite Automata and Their Decision Problems,” Dana Scott and Michael Rabin introduce the concept of nondeterminism into computer science
1960 Tony Hoare invents the Quicksort algorithm, while a visiting student at Moscow State University
1960 Irving Reed and Gustave Solomon give a systematic construction of error-correcting codes; later, Elwyn Berlekamp and James Massey will discover an efficient decoding procedure
1960 Ray Solomonoff proposes measuring the complexity of a string by the length of the shortest program that generates it; Andrey Kolmogorov and Gregory Chaitin will later arrive at similar ideas independently
1961 UNIMATE, the first industrial robot, begins work at General Motors
1961 A team led by MIT professor Fernando Corbato demonstrates the first computer time-sharing system
1961 A team led by MIT student Steve Russell creates Spacewar!, the first computer game
1962 While studying computer models of the weather, MIT professor Edward Lorenz discovers the phenomenon that would later be popularized as “chaos theory”
1963 At MIT, Joseph Weizenbaum develops ELIZA, the now-famous program that simulates conversation between a Rogerian psychotherapist and a patient
1963 MIT graduate student Ivan Sutherland develops Sketchpad, the first Computer-Aided Design (CAD) software
1963 The first edition of ASCII (American Standard Code for Information Interchange) is published
1964 IBM releases the System/360, one of the first computers based on integrated circuits. Fred Brooks, the lead developer, later describes the lessons learned in his book “The Mythical Man-Month”
1964 At Dartmouth, John Kemeny and Thomas Kurtz create the BASIC programming language
1964 Seymour Cray releases the CDC 6600, the first of many record-breaking supercomputers
1964 American Airlines and IBM debut SABRE, the first computer-based travel reservation system
1965 AT&T debuts 1ESS, the first electronic telephone switching system, in Succasunna, New Jersey
1965 Intel cofounder Gordon Moore enunciates Moore’s Law, that the number of transistors per integrated circuit doubles roughly every two years
1965 At IBM, James Cooley and John Tukey rediscover and popularize the Fast Fourier Transform (versions of which were known to Gauss and others)
1965 Juris Hartmanis and Richard Stearns prove the existence of an infinite hierarchy of harder and harder computational problems
1965 MIT student Richard Greenblatt begins developing Mac Hack, the first computer chess program to succeed in tournament play. Mac Hack also defeats the AI skeptic Hubert Dreyfus, who had famously claimed that computers would never play high-quality chess
1965 Edsger Dijkstra introduces semaphores, which allow multiple concurrently-running programs to share the same resource
1966 Texas Instruments unveils the first electronic handheld calculator
1966 The first cash-dispensing ATM is installed in Tokyo
1967 At SRI, Douglas Engelbart and Bill English apply for a patent for the first computer mouse
1967 Andrew Viterbi invents the Viterbi algorithm for maximum-likelihood estimation in Hidden Markov Models; the algorithm will find major applications in speech recognition, bioinformatics, and both the CDMA and GSM cellular-phone standards
1967 In Norway, Ole-Johan Dahl and Kristen Nygaard develop Simula 67, considered the first object-oriented programming language and a major influence on Smalltalk and later C++
1968 Donald Knuth publishes Volume 1 of “The Art of Computer Programming”
1968 Edsger Dijkstra publishes his now-famous article “Go To Statement Considered Harmful,” igniting an acrimonious debate about programming practices
1968 The movie “2001: A Space Odyssey” introduces the world to HAL
1968 Work begins at MIT on Macsyma, the first computer algebra system
1969 Victor Scheinman builds the Stanford Arm, the first electronic computer-controlled robotic arm
1969 ARPAnet, the precursor of the Internet, links UCLA, SRI, Santa Barbara, and Utah (MIT joins in 1970)
1969 At Bell Labs, Ken Thompson and Dennis Ritchie create UNIX, and begin developing the C programming language
1969 Arthur Bryson and Yu-Chi Ho introduce back-propagation, a learning technique for neural networks that is largely ignored until the 1980s. Meanwhile, Marvin Minsky and Seymour Papert publish “Perceptrons,” a book that introduces important mathematical techniques into computer science, but has also been accused of “killing” neural-net research for more than a decade
1969 The Apollo Guidance Computer plays a crucial role in steering Neil Armstrong and Buzz Aldrin to the lunar surface
1970 John Horton Conway invents the Game of Life cellular automaton; it is estimated that millions of dollars of computer time are wasted watching Lifeforms evolve
1970 E. F. Codd proposes the relational database management system (RDBMS)
1970 Yuri Matiyasevich, building on work of Julia Robinson, Martin Davis, and Hilary Putnam, shows the nonexistence of an algorithm to solve Diophantine equations, negatively resolving Hilbert’s Tenth Problem and demonstrating Turing-universality in one of the oldest parts of mathematics
1971 Stephen Cook and Leonid Levin independently prove that the Satisfiability problem is NP-complete; a paper by Richard Karp a year later demonstrates the pervasiveness of the NP-completeness phenomenon
1971 IBM commercially releases the floppy disk
1971 Ray Tomlinson sends the first email message on ARPANET
1971 Bob Thomas at BBN Technologies creates the first experimental computer virus
1972 Atari releases Pong
1973 At Xerox PARC, Alan Kay and collaborators create the Alto, featuring the first graphical user interface (GUI) with windows, icons, and menus
1973 Robert Metcalfe at Xerox PARC creates Ethernet
1974 MIT professor Barbara Liskov and students begin work on CLU, a predecessor of modern object-oriented programming languages
1975 Bill Gates and Paul Allen adapt BASIC to the MITS Altair microcomputer
1975 Inspired by work of Ralph Merkle, Stanford researchers Whitfield Diffie and Martin Hellman announce the first protocol for public key exchange (which had been discovered previously at the British GCHQ, but kept classified)
1975 Robert Kahn and Vint Cerf test the new TCP/IP protocol between Stanford and University College London
1975 At IBM, John Cocke advocates RISC processor architecture, and begins work on the 801 to implement it
1976 Kenneth Appel and Wolfgang Haken prove the Four-Color Map Theorem, the first major theorem to be proved using a computer
1977 Ronald Rivest, Adi Shamir, and Leonard Adleman develop the public-key encryption system that they call RSA, and announce it via Martin Gardner’s Mathematical Games column in Scientific American
1977 Robert Solovay and Volker Strassen publish an efficient randomized algorithm for primality testing, demonstrating both the feasibility of RSA and the power of randomized algorithms; shortly afterward, Gary Miller and Michael Rabin find a more efficient such algorithm
1977 Steve Jobs and Steve Wozniak release the Apple II
1977 William Kahan puts forward a draft proposal for the IEEE floating-point standard
1977 In Israel, Abraham Lempel and Jacob Ziv propose the data compression algorithm that (after improvements by Terry Welch and others) becomes the basis for PKZIP and most other general-purpose compression software
1978 Intel releases the 8086, the first in the line of x86 processors
1978 Donald Knuth begins developing the TeX computer typesetting system, which will eventually become the lingua franca of science
1979 Dan Bricklin and Bob Frankston release VisiCalc, the first personal spreadsheet program and “killer app” for the Apple II
1980 Duke University graduate students Tom Truscott and Jim Ellis create Usenet
1981 Nintendo achieves its first video game success with Donkey Kong, designed by Shigeru Miyamoto
1981 The IBM PC is released, running Microsoft’s MS-DOS
1981 Richard Feynman points out the exponential difficulty of simulating quantum physics with a conventional computer, and speculates that a quantum-mechanical computer would do better; a few years later, David Deutsch will publish his construction of a universal quantum Turing machine
1982 Work on pseudorandom generators by Manuel Blum, Silvio Micali, Shafi Goldwasser, and Andy Yao begins the modern era of complexity-based cryptography, which will lead over the next decade to notions such as zero-knowledge, interactive proofs, and probabilistically checkable proofs
1982 Sony and Philips commercially release the compact disc
1982 Michael Fischer, Nancy Lynch, and Michael Paterson prove that it’s impossible to reach consensus in an asynchronous distributed system if there’s even a single faulty processor
1983 Bjarne Stroustrup at Bell Labs develops C++, which is to become the dominant object-oriented programming language
1984 With its iconic Super Bowl commercial, Apple announces the Macintosh, the first consumer machine with a mouse/windows interface
1984 Robert Axelrod publishes “The Evolution of Cooperation,” a classic book that uses computer experiments with the Iterated Prisoners’ Dilemma to shed light on human nature
1984 Leslie Valiant at Harvard proposes PAC (Probably Approximately Correct) learning, which becomes the central mathematical framework for machine learning, and helps to explain why Occam’s Razor “works”
1985 Microsoft releases Windows
1985 Richard Stallman publishes his GNU Manifesto, setting out the principles of the free-software movement
1986 Thinking Machines begins shipping the CM-1 Connection Machine, considered the first massively-parallel supercomputer
1988 While a graduate student at Cornell, Robert Morris creates a computer worm that cripples the Internet (though that wasn’t Morris’s intention)
1989 Mike Godwin formulates Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1”
1990 At CERN in Geneva, Switzerland, Tim Berners-Lee creates the World Wide Web
1990 The IP=PSPACE Theorem, proved by Carsten Lund, Lance Fortnow, Howard Karloff, Noam Nisan, and Adi Shamir, shows the unexpected power of interactive protocols and opens a new era in the theory of computing
1990 The discovery of boosting—a technique for combining the predictions of different learning algorithms—by Michael Kearns, Rob Schapire, and Yoav Freund, starts a revolution in the theory and practice of machine learning
1990 Microsoft releases its first Office application suite, consisting of Word, Excel, and PowerPoint
1991 At Los Alamos National Laboratory, Paul Ginsparg founds the arXiv, ushering in the era of rapid online dissemination of scientific research
1991 The Linux kernel is designed by Finnish student Linus Torvalds
1991 Phil Zimmermann releases Pretty Good Privacy (PGP), which makes “military-grade” public-key encryption easily available. After the Customs Service opens a criminal investigation, Zimmermann becomes an Internet folk hero
1993 A team headed by Marc Andreessen at the University of Illinois Urbana-Champaign releases NCSA Mosaic, the browser credited with popularizing the Web
1993 Joel Furr first uses the word “spam” to mean “excessive multiple postings” (in that case, to Usenet newsgroups)
1993 With 24 satellites in orbit, the Global Positioning System achieves “initial operational capability.” Originally designed by the US Department of Defense for military applications, GPS quickly finds numerous civilian uses including computerized car navigation
1994 Peter Shor proves that a quantum computer, if built, would be able to factor numbers efficiently, launching quantum computing as an active research field
1994 Thomas Nicely discovers the Pentium FDIV bug; the ensuing controversy and costly recall by Intel underscore the need for hardware verification
1995 Pixar releases Toy Story, the first feature film made entirely with computer-generated imagery
1995 Gary Kremen launches Match.com, the first major online dating service
1995 Amazon.com is launched by Jeff Bezos and sells its first book
1995 The Institute for Genomic Research uses whole-genome shotgun sequencing—a technique dependent on massive computer power—to sequence the genome of the first free-living organism, the bacterium Haemophilus influenzae
1996 Stanford graduate students Larry Page and Sergey Brin begin developing Google
1997 IBM’s Deep Blue computer defeats world chess champion Garry Kasparov
1997 Rob “Commander Taco” Malda founds Slashdot, which hosts freewheeling web-based conversations of the sort that would later be commonplace on blogs
1997 NASA’s Sojourner robotic rover moves semi-autonomously across the surface of Mars for 83 days, sending spectacular images back to Earth
1999 Widespread fear of the “Y2K millennium bug” underscores society’s dependence on legacy computer systems. Later, many will argue the fears were overblown
2000 The first major denial-of-service (DoS) attack is launched against CNN, Yahoo, and eBay
2000 Putting an unexpected practical twist on Alan Turing’s ideas from 1950, Manuel Blum, Luis von Ahn, and John Langford at Carnegie Mellon articulate the notion of CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the challenges—usually involving reading distorted text—that are used by numerous websites to discourage spam-bots
2001 Larry Sanger and Jimmy Wales launch Wikipedia
2001 Bram Cohen develops BitTorrent, the controversial peer-to-peer file sharing protocol now estimated to account for 27-55% of all Internet traffic
2001 In the wake of the 9/11 attacks, news websites continue running smoothly in part because of routing algorithms designed by Akamai Technologies. Meanwhile, former MIT student Danny Lewin, who cofounded Akamai with Professor Tom Leighton, is on American Airlines flight 11 when it crashes into the World Trade Center
2002 At IIT Kanpur, Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena announce a deterministic polynomial-time algorithm for primality testing
2004 Harvard sophomore Mark Zuckerberg launches Thefacebook.com
2005 YouTube is launched, beginning an era of online video-sharing
2007 Apple releases the iPhone
2010 Some of Iran’s centrifuges for uranium enrichment are destroyed by the Stuxnet worm, in the first known large-scale cyberattack to target physical infrastructure
2011 Making essential use of Facebook and Twitter as organizing tools, protesters in Egypt force the resignation of President Hosni Mubarak after 30 years of rule
2011 IBM’s Watson computer defeats all-time human champions Ken Jennings and Brad Rutter on Jeopardy
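A few of the timeline's entries describe algorithms compact enough to sketch in a few lines of code. For instance, the 1977 randomized primality tests: below is a minimal Python version of the Miller-Rabin test (the more efficient variant due to Gary Miller and Michael Rabin), which declares n "probably prime" if it survives repeated random witness checks. This is an illustrative sketch, not production number theory.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: False means n is definitely composite;
    True means n is prime with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # handle small cases deterministically
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness to n's compositeness
    return True

print(is_probable_prime(2**61 - 1))  # a Mersenne prime -> True
print(is_probable_prime(91))         # 7 * 13 -> False
```

The whole point of the 1977 entries is that this test runs in time polynomial in the number of digits of n, which is exactly what makes generating the large primes needed for RSA practical.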
This fall, for the second time, I taught my 6.845 Quantum Complexity Theory graduate course (see here for the lecture notes from the first iteration). Thanks so much to the students for making the course a success—I hope they enjoyed it at least half as much as I did!
A central part of 6.845 is the course project, which can be either a literature survey or original research in quantum complexity, and which can be done either individually or in pairs. The majority of the students chose to do original research—which surprised me, given how little time was available and how inherently unpredictable theorizing is. Yet all the projects ended up being good, and some ended up being spectacular—initiating new topics, making progress on open problems that I’d worked on without success, etc. So with the students’ kind permission, I decided to pick six outstanding projects for a “blog showcase.” (Obviously, inclusion in this showcase doesn’t preclude the projects being published “for real,” as I hope and expect they will be!)
Without further ado:
Alessandro Chiesa and Michael Forbes, A Note on QMA With Multiple Provers. Here Ale and Michael improve previous QMA(k) protocols for NP-complete problems due to Aaronson-Beigi-Drucker-Fefferman-Shor and Beigi—boosting the success probability by polynomial factors and showing how to verify a much wider range of problems than just 3SAT and 3-Coloring.
Paul Christiano, Toward Quantum Money Relative to a Classical Oracle. In a Complexity’09 paper (whose full version, alas, isn’t yet finished), I showed that there exists a “quantum oracle” relative to which quantum money, which anyone can verify but no one can efficiently counterfeit, is possible. Here Paul takes the next step, giving a candidate quantum money scheme that only requires a classical oracle. Unfortunately, there’s still a gap in the security proof for this scheme, but I’m optimistic that with new ideas the gap can be filled.
Alan Deckelbaum, Quantum Correlated Equilibria in Classical Complete Information Games. In this innovative paper, Alan defines a new concept of quantum correlated equilibria in quantum game theory (see here for the definition of classical correlated equilibria, due to Aumann), and studies its basic properties. In particular, he proves the nontrivial result that there exist equilibria that can be realized using classical correlation, but that can’t be realized using pure-state entanglement without one or more players having incentive to deviate. See here for some independent related work by Shengyu Zhang.
Shelby Kimmel, Quantum Adversary (Upper) Bound. (Also on the arXiv; two closely-related arXiv preprints are Speed from Repetition by Shelby, and Super-Polynomial Quantum Speed-ups for Boolean Evaluation Trees with Hidden Structure by Shelby along with Bohua Zhan and Avinatan Hassidim.) This work has to be read and understood to be believed—I too was skeptical at first! Basically, Shelby gives an example of a promise problem with a constant-query quantum algorithm—except she has no idea what the algorithm is! She can only prove its existence nonconstructively, by first giving a quantum algorithm for a composed version of the problem, and then appealing to Ben Reichardt’s breakthrough characterization of quantum query complexity in terms of span programs. For a special case of the problem, she’s able to give an explicit O(1)-query quantum algorithm by using the Haar wavelet transform.
Andy Lutomirski, On the Query Complexity of Counterfeiting Quantum Money. Independently of Paul Christiano, here Andy proposes a different quantum money scheme using a classical oracle, which again ought to work but is missing only a security proof. Along the way, Andy also proposes a beautiful new query complexity problem—the “Separate Components Problem”—which cries out for a quantum lower bound, and might also lead to a classical oracle separation between QMA and QCMA.
Raluca Ada Popa, Witness-Indistinguishability Against Quantum Adversaries. Building on John Watrous’s work on quantum zero-knowledge, here Raluca defines the new notion of quantum witness-indistinguishability, and proves many of its basic properties. For example, she shows that if quantum computationally-concealing commitment schemes exist, then all of NP has witness-indistinguishable proofs that are computationally secure against quantum adversaries. As with so much else in cryptography, even just getting the definitions right is a nontrivial affair!
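For readers unfamiliar with Aumann's notion of correlated equilibrium (the classical starting point for Alan's project above), a tiny numerical check may help. A mediator samples a joint action from a public distribution and privately recommends each player his component; the equilibrium condition is that no player can gain by deviating from his recommendation. The standard example is the game of Chicken, where putting probability 1/3 each on (Chicken, Chicken), (Chicken, Dare), and (Dare, Chicken) is a correlated equilibrium that is not a mixture of Nash equilibria. The sketch below verifies the incentive constraints; the specific payoff numbers are one conventional choice, not taken from the post.

```python
# Game of Chicken. Strategies: 0 = Dare, 1 = Chicken.
# payoff[(a0, a1)] = (row player's payoff, column player's payoff)
payoff = {
    (0, 0): (0, 0),
    (0, 1): (7, 2),
    (1, 0): (2, 7),
    (1, 1): (6, 6),
}
# Mediator's distribution: 1/3 each on (C,C), (C,D), (D,C); never (D,D)
dist = {(0, 0): 0.0, (0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}

def is_correlated_equilibrium(dist, payoff):
    """Check Aumann's constraints: conditioned on any recommendation
    with positive probability, no deviation yields a strictly higher
    expected payoff."""
    for player in (0, 1):
        for rec in (0, 1):          # the recommended action
            if sum(q for a, q in dist.items() if a[player] == rec) == 0:
                continue
            for dev in (0, 1):      # a candidate deviation
                gain = 0.0
                for a, q in dist.items():
                    if a[player] != rec:
                        continue
                    b = list(a)
                    b[player] = dev
                    gain += q * (payoff[tuple(b)][player] - payoff[a][player])
                if gain > 1e-12:
                    return False
    return True

print(is_correlated_equilibrium(dist, payoff))  # True
```

Note the key feature: a player told "Dare" knows the other was told "Chicken," while a player told "Chicken" faces uncertainty, and in both cases obeying the mediator is optimal. Alan's project asks what happens when the mediator's correlation is replaced by quantum entanglement.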
First, sadly, I’ll be going to neither ICS’2011 in Beijing nor QIP’2011 in Singapore this coming week—too much travel! If you’re going to either conference and would like to contribute a guest post, please let me know.
We show that any quantum algorithm to decide whether a function f:[n]→[n] is a permutation or far from a permutation must make Ω(n^(1/3)/w) queries to f, even if the algorithm is given a w-qubit quantum witness in support of f being a permutation. This implies that there exists an oracle A such that SZK^A ⊄ QMA^A, answering an eight-year-old open question of the author. Indeed, we show that relative to some oracle, SZK is not in the counting class A0PP defined by Vyalyi. The proof is a fairly simple extension of the quantum lower bound for the collision problem.
This result is neither hard nor surprising, but it does more-or-less solve a problem that’s bothered me since grad school (and which I mentioned a couple months ago on this blog) in a ridiculously simple-in-retrospect way, which is either nice or disappointing depending on how you look at it.
Third, some of you might have heard that the Carmel region in Israel recently suffered a terrible forest fire, which destroyed about 30 million trees and killed 44 people, and which required the assistance of many countries to put out. Yesterday, after giving a talk at the Technion in Haifa, I had a chance to tour some of the fire damage. While we were on the hike, a torrential downpour started (which caught me without coat or umbrella)—if only the rain had come a few weeks earlier! Anyway, here are some photos:
Disclaimer: The White House Office of Science and Technology Policy has asked me to clarify that, although this post will contain a photograph of me standing near the President of the United States, nothing in the post, or in Shtetl-Optimized more generally, is endorsed in any way by the White House or the President. You know, just in case you were wondering.
It’s a good thing that I chose a career in science rather than in public relations.
Within one century, government-sponsored scientific research radically changed the ways that human beings exist on this planet. Electronics are possible because of the quantum revolution of the 1920s, a revolution whose full implications many of us are still trying to understand. Benefiting from a government-sanctioned monopoly, Bell Labs was able to invent and/or commercialize the transistor, the laser, the fiber-optic cable, and the communications satellite. (As soon as Congress opened the telecom market to competitors, Bell Labs’ capacity to innovate was permanently crippled.) Computers, the Internet, cell phones, nuclear energy, DNA testing, and widespread vaccination are a reality today largely because of a partnership between academic scientists and their governments, in the US and elsewhere, that started in earnest during World War II and has continued to the present.
I sort of imagined that, if you were reading this blog, then you knew all of that, and also knew that I knew it. But I was mistaken. In writing about what seemed to me like a slam-dunk issue for any thinking person—namely, protecting the 0.18% of the United States federal budget that goes to the National Science Foundation—I somehow managed to make enemies not only of the NSF’s opponents, who skewered me as an ivory-tower elitist, but also of many of its supporters, who either didn’t understand or didn’t appreciate my attempts at gallows humor.
Fortunately, today I have a happy story involving the NSF. As Lance Fortnow kindly mentioned a month ago, I had the honor of being included in this year’s PECASE (Presidential Early Career Award for Scientists and Engineers) class. Here I followed in the footsteps of Adam Smith and Sean Hallgren, two theoretical computer scientists from Penn State (and very nice people) who won last year. The PECASE is given for a combination of research and outreach, so there’s little doubt this blog played a role, in addition (I hope!) to the research and teaching that I sometimes do in my spare time. There’s no money in the PECASE, just a fun trip to DC for ceremonies and a photo-op with the President.
The day (last Monday) started with a ceremony in the Department of Agriculture building. There was a Color Guard, then a beautiful live performance of the national anthem, then short speeches, then a presentation of awards that resembled a high-school graduation, then a reception where they served these really nice smoked-salmon wraps, as well as chocolate truffles that were on sticks like lollipops. The awardees’ families were all there with us, but unfortunately, only the awardees themselves were cleared to enter the White House complex for the presidential photo-op. There was no Air Force One pickup to get to the White House: we took the Metro. We arrived at the Eisenhower Executive Office Building, which is to the left of the White House, adjacent to the West Wing. There were Christmas decorations all around.
After going through a security check, we were ushered into a room that seemed specially designed for presidential photo-ops. It had staggered platforms for standing on, with curtains in the background.
I was allowed to bring my cellphone, but it didn’t work inside the White House. There was a strict no-photography rule.
We were called to pose for the photo in order of height: people over 6ft in the back row, then people over 5ft 10in in the next two rows, etc. I was lucky to be short enough to land a spot in the second-to-front row. We stood there for about fifteen minutes while waiting for the President to arrive.
The organizer from the Office of Science and Technology Policy warned the women in the front row that last year, the President put his arm around them for the photo—so they should be prepared!
At 1:55pm, we received word that the President would arrive at 2:05pm, and at 2pm, we received word that he was on his way over. Finally, at 2:05 on the dot, he bounded into the room and the PECASE awardees erupted into applause. My MIT colleague Manolis Kellis bellowed “Mr. President!”, which made the President laugh.
The President looked and sounded pretty much the same as on TV. I was happy to see that his lip looked fine. He shook hands with everyone in the front row, assuring everyone else that they’d get a chance to shake his hand later as well.
(I’m the one wearing a tie with a little drawing of the MIT Stata Center on the bottom.)
The President spoke for about five minutes, while Secret Service agents stood unobtrusively in the corners of the room. Here were his main points, as I remember them:
He couldn’t be more proud of us.
Science and technology are extremely important for the nation’s future.
He’s been fighting for more science funding. (At this, the PECASE awardees burst into applause again.)
Science will be a highlight of his next State of the Union address. (Hey, you read it here first.)
He understands that the PECASE award is not just for research but also for outreach and education, which is great.
As someone with two daughters, he’s especially happy to see so many female PECASE winners.
He feels so honored to be able to pose for a photo with us. (At this, everyone laughed.)
He made a reference to “young people, which most of you still qualify as” (causing more laughter), and said he’s expecting us to “produce” and win some Nobel prizes.
As the rows cleared out, the President shook hands with everyone in turn. A few people said Merry Christmas. I just said “thank you,” and he said “thank you” back. Then I quickly moved away, since I had a cold and was worried about giving it to him. (Also, my hand was sweating for some reason—maybe because I was wearing a suit, which was definitely one of the more unusual aspects of the day for me!)
Immediately after the photo, we were escorted out of the Eisenhower Building. (Apparently the PECASE awardees in some previous years got a tour of the White House, but we didn’t.)
Later in the afternoon, there was a reception at NSF headquarters for the 19 PECASE winners whose research was sponsored by NSF (the remaining 66 were sponsored by the National Institutes of Health, the Department of Energy, the Defense Department, NASA, or other agencies). After opening remarks by Subra Suresh, the new NSF director and previously Dean of Engineering at MIT, each of the awardees gave a 3-minute speech about his or her work. I really enjoyed listening to the other 18 talks (as for my own, I spoke too fast and probably lost people).
At the risk of sounding annoyingly earnest, I’d like to thank:
My NSF program officer (and all-around favorite government official), Dmitri Maslov.
Every reader of this blog who ever said anything positive (or at least non-negative) about it.
The Office of Science and Technology Policy, for putting together an awesome day (and inducing me to wear a tie even though no one was being married, buried, or bar-mitzvahed).
President Obama, for supporting science and education even in the face of determined opposition.
My fellow American taxpayers, for bankrolling the NSF. May all who receive grants strive to be worthy of them.