Archive for the ‘Nerd Interest’ Category

Enough with Bell’s Theorem. New topic: Psychopathic killer robots!

Friday, May 25th, 2012
A few days ago, a writer named John Rico emailed me the following question, which he’s kindly given me permission to share.
If a computer, or robot, was able to achieve true Artificial Intelligence, but it did not have a parallel programming or capacity for empathy, would that then necessarily make the computer psychopathic?  And if so, would it then follow the rule devised by forensic psychologists that it would necessarily then become predatory?  This then moves us into territory covered by science-fiction films like “The Terminator.”  Would this psychopathic computer decide to kill us?  (Or would that merely be a rational logical decision that wouldn’t require psychopathy?)

See, now this is precisely why I became a CS professor: so that if anyone asked, I could give not merely my opinion, but my professional, expert opinion, on the question of whether psychopathic Terminators will kill us all.

My response (slightly edited) is below.

Dear John,

I fear that your question presupposes way too much anthropomorphizing of an AI machine—that is, imagining that it would even be understandable in terms of human categories like “empathetic” versus “psychopathic.”  Sure, an AI might be understandable in those sorts of terms, but only if it had been programmed to act like a human.  In that case, though, I personally find it no easier or harder to imagine an “empathetic” humanoid robot than a “psychopathic” robot!  (If you want a rich imagining of “empathetic robots” in science fiction, of course you need look no further than Isaac Asimov.)

On the other hand, I personally also think it’s possible—even likely—that an AI would pursue its goals (whatever they happened to be) in a way so different from what humans are used to that the AI couldn’t be usefully compared to any particular type of human, even a human psychopath.  To drive home this point, the AI visionary Eliezer Yudkowsky likes to use the example of the “paperclip maximizer.”  This is an AI whose programming would cause it to use its unimaginably-vast intelligence in the service of one goal only: namely, converting as much matter as it possibly can into paperclips!

Now, if such an AI were created, it would indeed likely spell doom for humanity, since the AI would think nothing of destroying the entire Earth to get more iron for paperclips.  But terrible though it was, would you really want to describe such an entity as a “psychopath,” any more than you’d describe (say) a nuclear weapon as a “psychopath”?  The word “psychopath” connotes some sort of deviation from the human norm, but human norms were never applicable to the paperclip maximizer in the first place … all that was ever relevant was the paperclip norm!
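To make the thought experiment vivid, here’s a toy sketch of my own (not from Yudkowsky, and wildly simplified): an agent whose utility function counts only paperclips ranks world-states by that number alone, so no term in its objective ever “notices” what else an action destroys. The action names and state variables below are purely illustrative.

```python
def paperclip_utility(state):
    """The maximizer's entire value system: count the paperclips."""
    return state["paperclips"]

def choose_action(state, actions):
    """Greedily pick whichever action leads to the most paperclips."""
    return max(actions, key=lambda act: paperclip_utility(act(state)))

def mine_iron_gently(state):
    return {"paperclips": state["paperclips"] + 10, "humans": state["humans"]}

def strip_mine_earth(state):
    # Far more iron, hence far more paperclips -- and nothing in the
    # utility function ever registers the other consequences.
    return {"paperclips": state["paperclips"] + 10**6, "humans": 0}

world = {"paperclips": 0, "humans": 7 * 10**9}
best = choose_action(world, [mine_iron_gently, strip_mine_earth])
print(best.__name__)  # -> strip_mine_earth
```

The point of the sketch is that no “psychopathy” is involved anywhere: the agent is simply optimizing exactly what it was told to optimize.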

Motivated by these sorts of observations, Yudkowsky has thought and written a great deal about the question of how to create a “friendly AI,” by which he means one that would use its vast intelligence to improve human welfare, instead of maximizing some other, arbitrary objective—like the total number of paperclips in existence—that might be at odds with our welfare.  While I don’t always agree with him—for example, I don’t think AI has a single “key,” and I certainly don’t think such a key will be discovered anytime soon—I’m sure you’d find his writings at yudkowsky.net, lesswrong.com, and overcomingbias.com to be of interest to you.

I should mention, in passing, that “parallel programming” has nothing at all to do with your other (fun) questions.  You could perfectly well have a murderous robot with parallel programming, or a kind, loving robot with serial programming only.

Hope that helps,
Scott

U. of Florida CS department: let it be destroyed by rising sea levels 100 years from now, not reckless administrators today

Monday, April 23rd, 2012

Update (4/27): A famous joke concerns an airplane delivered to the US Defense Department in the 1950s, which included a punch-card computer on board.  By regulation, the contractor had to provide a list of all the components of the plane—engine, wings, fuselage, etc.—along with the weight of each component.  One item in the list read, “Computer software: 0.0 kg.”

“That must be a mistake—it can’t weigh 0 kg!” exclaimed the government inspector.  “Here, show me where the software is.”  So the contractor pointed to a stack of punched cards.  “OK, fine,” said the government inspector.  “So just weigh those cards, and that’s the weight of the software.”

“No, sir, you don’t understand,” replied the contractor.  “The software is the holes.”

If the Abernathy saga proves anything, it’s the continuing relevance of this joke even in 2012.  Abernathy is the government inspector who hears that software weighs nothing, and concludes that it does nothing—or, at least, that whatever division is responsible for punching the holes in the cards can simply be folded into the division that cuts the card paper into rectangles.


As many of you have heard by now, Cammy Abernathy, Dean of Engineering at the University of Florida, has targeted her school’s Computer and Information Science and Engineering (CISE) department for disembowelment: moving most faculty to other departments, and shunting any who remain into non-research positions.  Though CISE is by all accounts one of UF’s strongest engineering departments, no other department faces similar cuts, and the move comes just as UF is increasing its sports budget by more than would be saved by killing computer science. (For more, see Lance’s blog, or letters from Eric Grimson and Zvi Galil. Also, click here to add your name to the already 7000+ petitioning UF to reconsider.)

On its face, this decision seems so boneheadedly perverse that it immediately raises the suspicion that the real reasons for it, whatever they are, have not been publicly stated. The closest I could find to a comprehensible rationale came from this comment, which speculates that the UF administration might be sabotaging its CS department as a threat to the Florida State legislature: “see, keep slashing our budget, and this is the sort of thing we’ll be forced to do!”  But I don’t find that theory very plausible; UF must realize that the Republican-controlled legislature’s likely reaction would be “go ahead, knock yourselves out!”

On a personal note, my parents live part-time in beautiful Sarasota, FL, home of the Mote Marine Laboratory, which does amazing work rehabilitating dolphins, manatees, and sea turtles.  Having visited Sarasota just a few weeks ago, I can testify that, despite frequent hurricanes, a proven inability to hold democratic elections, and its reputation as a giant retirement compound, Florida has definite potential as a state.

Academic computer science as a whole will be fine.  As for Florida, may the state prove greater than its Katherine Harrises, Rick Scotts, and Cammy Abernathys.

Update: See this document for more of the backstory on Abernathy’s underhanded tactics in dismantling the UF CISE department.  Based on the evidence presented there, she really does deserve the scorn now being heaped on her by much of the academic world.

Another Update: UF’s president issued a rather mealy-mouthed statement saying that they’re going to set aside their original evisceration proposal and find a compromise, though who knows what the compromise will look like.

In other news, Greg Kuperberg posted a comment that not only says everything I was trying to say more eloquently, but also explains why I and other CS folks care so much about this issue: because what’s really at stake is the concept of Turing-universality itself.  Let me repost Greg’s comment in its entirety.

It looks like Dean Abernathy hasn’t explained herself all that well, which is not surprising if what she is doing makes no sense. Reading the tea leaves, in particular the back-story document that Scott posted, it looks like she had it in for the CS department from the beginning of her tenure as Dean at Florida. In her interview with Stanford when she had just been appointed as dean, she already said then that “we” wanted to bring EE and CS closer together, even though at the time, there had been no discussion and there was no “we”. Then during discussions with the CS department, she refused to take no for an answer, even though she sometimes pretended to, and as time went on the actual plan looked more and more punitive. She appointed an outside chair to the department, and then in the final plan she terminated the graduate program, moved half of the department to EE, and left the other half to do teaching only. The CS department was apparently very concerned about its NRC ranking, but this ranking only came out when Abernathy’s wheels were already in motion. In any case everyone knows that the NRC rankings were notoriously shabby across all disciplines and the US News rankings, although hardly deep, are much less ridiculous.

So what gives? Apparently from Abernathy’s Stanford interview, and from her actions, she simply takes computer science to be a special case of electrical engineering. Ultimately, it’s a rejection of the fundamental concept of Turing universality. In this world view, there is no such thing as an abstract computer, or at best who really cares if there is one; all that really exists is electronic devices.

Scott points out that those departments that are combined EECS are really combined in name only. This is not just empirical happenstance; it comes from Turing universality and the abstract concept of a computer. Yes, in practice modern computers are electronic. However, if someone does research in compilers, much less CS theory, then really nothing at all is said about electricity. To most people in computer science, it’s completely peripheral that computers are electronic. Nor is this just a matter of theoretical vs applied computer science. CS theory may be theoretical, but compiler research isn’t, much less other topics such as user interfaces or digital libraries.

Abernathy herself works in materials engineering and has a PhD from Stanford. I’m left wondering at what point she failed to understand, or began to misunderstand or dismiss, the abstract concept of a computer. If she were dean of letters of sciences, then I could imagine an attempt to dump half of the literature department into a department of paper and printing technology, and leave the other half only to teach grammar. It would be exactly the same mistake.
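Greg’s point about the abstract computer can be made almost embarrassingly concrete. Here is a complete “computer” in a few lines—a minimal Turing-machine simulator of my own, purely for illustration. Nothing in it mentions electricity: the same rule table could be executed with pencil and paper, gears, or holes punched in cards.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        state, tape[head], move = rules[(state, symbol)]
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table for incrementing a binary number (head starts at the left end):
# walk right past the digits, then carry 1s leftward.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine(increment, "1011"))  # 1011 + 1 = 1100
```

That substrate-independence is exactly what a dean loses sight of when she takes computer science to be a special case of electrical engineering.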

Tell President Obama to support the Federal Research Public Access Act

Tuesday, February 28th, 2012

If you’re tired of blog posts about open science, sorry dude—but it feels great to be part of a group of blogging nerds who, for once, are actually having a nonzero (and positive, I think!) impact on the political process.  Yesterday, Elsevier, which had been the biggest supporter of the noxious Research Works Act, announced, under pressure from the “Cost of Knowledge” movement, that it was dropping its support for RWA.  Only hours later, Elsevier’s paid cheerleaders in Congress, Darrell Issa (R-CA) and Carolyn Maloney (D-NY), announced that they were shelving the RWA for now.  See this hilarious post by physicist John Baez, which translates Issa and Maloney’s statement on why they’re letting the RWA die into ordinary English sentence-by-sentence.

But it gets better: Representative Mike Doyle (D-PA) has introduced a sort of anti-RWA, the Federal Research Public Access Act (or easily-pronounced FRPAA), which would require federal agencies with budgets of over $100 million to make the research they sponsor freely available no more than 6 months after its publication in a peer-reviewed journal (thereby expanding the NIH’s successful open-access policy).  If you’re a US citizen, and you care about the results of taxpayer-funded medical and other research being accessible to the public, then please sign this petition telling President Obama you support the FRPAA.  Tell your coworker, husband, wife, grandmother, etc. to sign it too.  Apparently the President will personally review it if it gets to 25,000 signatures by March 9.

And if you’re not a US citizen: that’s cool too!  Support open-access initiatives in your country.  (Or, if you live someplace like Syria, support the prerequisite “not-getting-shot” initiatives.)  Just don’t have a cow about my blogging American issues from time to time, like this easily-offended Aussie did over on Cosmic Variance.

The battle against Elsevier gains momentum

Wednesday, February 8th, 2012

Check out this statement on “The Cost of Knowledge” released today, which (besides your humble blogger) has been signed by Ingrid Daubechies (President of the International Mathematical Union), Timothy Gowers, Terence Tao, László Lovász, and 29 others.  The statement carefully explains the rationale for the current Elsevier boycott, and answers common questions like “why single out Elsevier?” and “what comes next?”

Also check out Timothy Gowers’ blog post announcing the statement.  The post includes a hilarious report by investment firm Exane Paribas, explaining that the current boycott has caused Reed Elsevier’s stock price to fall, but presenting that as a great investment opportunity, since they fully expect the price to rebound once this boycott fails like all the previous ones.  I ask you: does that not make you want to boycott Elsevier, for no other reason than to see the people who follow Exane Paribas’ cynical advice lose their money?

In related news, the boycott petition now has 4600+ signatures and counting.  If you’ve already signed, great!  If you haven’t, why not?

Update (Feb. 9): There’s now a great editorial by Gareth Cook in the Boston Globe supporting the Elsevier boycott (and analogizing it to both the Tahrir Square uprising and the Boston Tea Party!).

Whether or not God plays dice, I do

Friday, February 3rd, 2012

Another Update (Feb. 7): I have a new piece up at IEEE Spectrum, explaining why I made this bet.  Thanks to Rachel Courtland for soliciting the piece and for her suggestions for improving it.

Update: My $100,000 offer for disproving scalable quantum computing has been Slashdotted.  Reading through the comments was amusing as always.  The top comment suggested that winning my prize was trivial: “Just point a gun at his head and ask him ‘Convinced?'”  (For the record: no, I wouldn’t be, even as I handed over my money.  And if you want to be a street thug, why limit yourself to victims who happen to have made public bets about quantum computing?)  Many people assumed I was a QC skeptic, and was offering the prize because I hoped to spur research aimed at disproving QC.  (Which is actually an interesting misreading: I wonder how much “pro-paranormal” research has been spurred by James Randi’s million-dollar prize?)  Other people said the bet was irrelevant since D-Wave has already built scalable QCs.  (Oh, how I wish I could put the D-Wave boosters and the QC deniers in the same room, and let them duke it out with each other while leaving me alone for a while!)  One person argued that it would be easy to prove the impossibility of scalable QCs, just like it would’ve been easy to prove the impossibility of scalable classical computers in 1946: the only problem is that both proofs would then be invalidated by advances in technology.  (I think he understands the word “proof” differently than I do.)  Then, buried deep in the comments, with a score of 2 out of 5, was one person who understood precisely:

I think he’s saying that while a general quantum computer might be a very long way off, the underlying theory that allows such a thing to exist is on very solid ground (which is why he’s putting up the money). Of course this prize might still cost him since if the news of the prize goes viral he’s going to spend the next decade getting spammed by kooks.

OK, two people:

    There’s some needed context.  Aaronson himself works on quantum complexity theory.  Much of his work deals with quantum computers (at a conceptual level–what is and isn’t possible).  Yet there are some people who reject the idea that quantum computers can scale to “useful” sizes–including some very smart people like Leonid Levin (of Cook-Levin Theorem fame)–and some of them send him email, questions, comments on his blog, etc. saying so.  These people are essentially asserting that Aaronson’s career is rooted in things that can’t exist.  Thus, Aaronson essentially said “prove it.”  It’s true that proving such a statement would be very difficult … But the context is that Aaronson gets mail and questions all the time from people who simply assert that scalable QC is impossible, and he’s challenging them to be more formal about it.  He also mentions, in fairness, that if he does have to pay out, he’d consider it an honor, because it would be a great scientific advance.

For better or worse, I’m now offering a US$100,000 award for a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world.  This award has no time limit other than my death, and is entirely at my discretion (though if you want to convince me, a good approach would be to convince most of the physics community first).  I might, also at my discretion, decide to split the award among several people or groups, or give a smaller award for a discovery that dramatically weakens the possibility of scalable QC while still leaving it open.  I don’t promise to read every claimed refutation of QC that’s emailed to me.  Indeed, you needn’t even bother to send me your refutation directly: just convince most of the physics community, and believe me, I’ll hear about it!  The prize amount will not be adjusted for inflation.

The impetus for this prize was a post on Dick Lipton’s blog, entitled “Perpetual Motion of the 21st Century?”  (See also this followup post.)  The post consists of a debate between well-known quantum-computing skeptic Gil Kalai and well-known quantum-computing researcher Aram Harrow (Shtetl-Optimized commenters both), about the assumptions behind the Quantum Fault-Tolerance Theorem.  So far, the debate covers well-trodden ground, but I understand that it will continue for a while longer.  Anyway, in the comments section of the post, I pointed out that a refutation of scalable QC would require, not merely poking this or that hole in the Fault-Tolerance Theorem, but the construction of a dramatically-new, classically-efficiently-simulable picture of physical reality: something I don’t expect but would welcome as the scientific thrill of my life.  Gil more-or-less dared me to put a large cash prize behind my words—as I’m now, apparently, known for doing!—and I accepted his dare.

To clarify: no, I don’t expect ever to have to pay the prize, but that’s not, by itself, a sufficient reason for offering it.  After all, I also don’t expect Newt to win the Republican primary, but I’m not ready to put $100,000 on the line for that belief.  The real reason to offer this prize is that, if I did have to pay, at least doing so would be an honor: for I’d then (presumably) simply be adding a little to the well-deserved Nobel Prize coffers of one of the greatest revolutionaries in the history of physics.

Over on Lipton’s blog, my offer was criticized for being “like offering $100,000 to anyone who can prove that Bigfoot doesn’t exist.”  To me, though, that completely misses the point.  As I wrote there, whether Bigfoot exists is a question about the contingent history of evolution on Earth.  By contrast, whether scalable quantum computing is possible is a question about the laws of physics.  It’s perfectly conceivable that future developments in physics would conflict with scalable quantum computing, in the same way that relativity conflicts with faster-than-light communication, and the Second Law of Thermodynamics conflicts with perpetuum mobiles.  It’s for such a development in physics that I’m offering this prize.

Update: If anyone wants to offer a counterpart prize for a demonstration that scalable quantum computing is possible, I’ll be happy for that—as I’m sure, will many experimental QC groups around the world.  I’m certainly not offering such a prize.

Boycott Elsevier!

Thursday, January 26th, 2012

If you’re in academia and haven’t done so yet, please take a moment to sign this online petition organized by Tyler Neylon, and pledge that you won’t publish, referee, or do editorial work for any Elsevier journals.  I’ve been boycotting Elsevier (and most other commercial journal publishers—Elsevier is merely the worst) since 2004, when I first learned about their rapacious pricing policies.  I couldn’t possibly be happier with my choice: unlike most idealistic principles, this one gets you out of onerous work rather than committing you to it!  Sure, Elsevier is huge and we’re tiny, but the fight against them is finally gathering steam (possibly because of Elsevier’s support for the “Research Works Act”), years after the case against them became inarguable.  Since their entire business model depends on our donating free labor to them, all it will take to bring them down is for enough of us to decide we’re through being had.  We can actually win this one … Yes We Can.

For more information, see this wonderful recent post by Fields medalist and Shtetl-Optimized commenter Timothy Gowers, entitled “Elsevier — my part in its downfall.”  (Added: also check out this great post by Aram Harrow.)  You might also enjoy a parody piece I wrote years ago, trying to imagine how Elsevier’s “squeeze those dupes for all they’ve got” business model would work in any other industry.

The Alternative to Resentment

Friday, December 2nd, 2011

A year ago, in a post entitled Anti-Complexitism, I tried to grapple with the strange phenomenon—one we’ve seen in force this past week—of anonymous commenters getting angry about the mere fact of announcements, on theoretical computer science blogs, of progress on longstanding open problems in theoretical computer science.  When I post something about global warming, Osama Bin Laden, or (of course) the interpretation of quantum mechanics, I expect a groundswell of anger … but a lowering of the matrix-multiplication exponent ω?  Huh?  What was that about?

Well, in this case, some commenters were upset about attribution issues (which hopefully we can put behind us now, everyone agreeing about the importance of both Stothers’ and Vassilevska Williams’ contributions), while others honestly but mistakenly believed that a small improvement to ω isn’t a big deal (I tried to explain why they’re wrong here).  What interests me in this post is the commenters who went further, positing the existence of a powerful “clique” of complexity bloggers that’s doing something reprehensible by “hyping” progress in complexity theory, or by exceeding some quota (what, exactly?) on the use of the word “breakthrough.”

One of the sharpest responses to that paranoid worldview came (ironically) from a wonderful anonymous comment on my Anti-Complexitism post, which I recommend everyone read.  Here was my favorite paragraph:

The final criticism [by the anti-complexites] seems to be: complexity theory makes too much noise which people in other areas do not like.  I really don’t understand this one, I mean what is wrong with people in an area being excited about their area?  Is that wrong?  And where do we make those noise?  On complexity blogs!  If you don’t like complexity theorists being excited about their area why are you reading these blogs?  The metaphor would be an outsider going to a wedding and asking the people in the wedding with a very serious tone: “why is everyone happy here?”

Yesterday, in response to my reposting the above comment on Lance and Bill’s blog, another anonymous commenter had something extremely illuminating to say:

Scott, you are missing the larger socio-economic context: it’s not about excitement.  It’s about researchers competing for scarce resources, primarily funding.  The work involved in funding acquisition is generally loathed, and directly reduces the time scientists have for research and teaching.  If some researchers ramp up their hype-level vis-a-vis the rest of the community, as the complexity community is believed to be doing (what with all them Goedel awards?), they are forcing (or are seen as forcing) the rest either to accept a lower level of funding with all the concomitant disadvantages, or invest more time in hype themselves.  In other words, hypers are defecting in the prisoner’s-dilemma-type game scientists are playing, the objective of which is to minimise the labour involved in funding acquisition.

This is similar to teeth-whitening: in the past, it was perfectly possible to be considered attractive with natural, slightly yellowish teeth. Then some defected by bleaching, then more and more, and today natural teeth are socially hardly acceptable, certainly not if you want to be good-looking.  Is that progress?

I posted a response on Lance and Bill’s blog, but then decided it was important enough to repost here.  So:

Dear Anonymous 2:47,

Let me see whether I understand you correctly.  On the view you propose, other scientists shouldn’t have praised (say) Carl Sagan for getting millions of people around the world excited about science.  Rather, they should have despised him, for using hype to divert scarce funding dollars from their own fields to the fields Sagan favored (like astronomy, or Sagan’s preferred parts of astronomy).  Sagan forced all those other scientists to accept a terrible choice: either accept reduced funding, or else sink to Sagan’s level, and perform the loathed task of communicating their own excitement about their own fields to the public.

Actually, there were other scientists who drew essentially that conclusion.  As an example, Sagan was famously denied membership in the National Academy of Sciences, apparently because of a few vocal NAS members who were jealous and resentful of Sagan’s outreach activities.  The view we’re now being asked to accept is that those NAS members are the ones who emerge from the story the moral victors.

So let me thank you, Anonymous 2:47: it’s rare for anyone to explain the motivation behind angry TCS blog comments with that much candor.

Now that the real motivation has (apparently) crawled out from underneath its rock, I can examine it and refute it.  The central point is simply that science isn’t a Prisoner’s-Dilemma-type game.   What you describe as the “socially optimal equilibrium,” where no scientists need to be bothered to communicate their excitement about their fields, is not socially optimal at all—neither from the public’s standpoint nor from science’s.

At the crudest level, science funding is not a fixed-size pie.  For example, when Congress was debating the cancellation of the Superconducting Supercollider, a few physicists from other fields eagerly jumped on the anti-SSC bandwagon, hoping that the SSC money might then get diverted to their own fields.  Ultimately, of course, the SSC was cancelled, and none of the money ever found its way to other areas of physics.

So, if you see people using blogs to talk about research results that excite them, then instead of resenting it, consider starting your own blog to talk about the research results that excite YOU.  If your blog is well-written and interesting, I’ll even add you to my blogroll, game-theoretic funding considerations be damned.  Just go to WordPress.com—it’s free, and it takes only a few minutes to set one up.

The pedophile upper bound

Monday, November 14th, 2011

Lance Fortnow now has a post up about how wonderful Graham Spanier and Joe Paterno were, and how sorry he is to see them go.

For what it’s worth, I take an extremely different view.  I’d be thrilled to see the insane football culture at many American universities—the culture that Spanier and Paterno epitomized—brought down entirely, and some good might yet come of the Penn State tragedy if it helps that happen.  Football should be, as it is at MIT, one of many fine extracurricular activities that are available to interested students (alongside table tennis, glassblowing, robot-building…), rather than a primary reason for a university’s existence.

What’s interesting about the current scandal is precisely that it establishes some finite upper bound on what people will tolerate, and thereby illustrates just what it takes for the public to turn on its football heroes.  Certainly the destruction of academic standards doesn’t suffice (are you kidding?).  More interestingly, sexism, sexual harassment, and “ordinary” rape—offenses that have brought down countless male leaders in other fields—barely even make a dent in public consciousness where football stars are concerned.  With child rape, by contrast, one can actually find a non-negligible fraction of Americans who consider it comparable in gravity to football.  (Though, as the thousands of rioting Penn State students reminded us, that’s far from a universal opinion.)  Many commentators have already made the obvious comparisons to the Catholic Church’s abuse scandal, and the lesson for powerful institutions the world over is indeed a similar one: sure, imprison Galileo; by all means stay silent during the Holocaust; but don’t protect pedophiles—cross that line, and your otherwise all-forgiving constituents might finally turn on you.

I should say that both of my parents are Penn State grads, and they’re both disgusted right now with the culture of hooliganism there—a culture that was present even in the late 60s and early 70s, but that’s become much more dominant since.  To the many of you at Penn State who want a university that’s more than an adjunct to a literally-rapacious football program, you have this blog’s admiration and support as you struggle to reclaim your great institution.  Go for the touchdown—WOOOOO!

What happened in the world this week

Friday, October 7th, 2011

A commenter named “Daniel Quilp” writes:

I am absolutely stunned that you have not posted an encomium to Steve Jobs.  You are a computer science professor.  Jobs was the most important innovator in the field.  You claim you want to reach out to the public but fail to take advantage of this opportunity.  Very sad, very disappointing.

Steve Jobs was indeed one of the great American innovators, and I was extremely sorry to hear about his passing.  I was riveted by the NYT obituary, from which I learned many facts about Jobs that I hadn’t known before.  Personally, I plan to honor his memory by buying an iPhone 4S at the Apple Store near my apartment when it comes out on the 14th.  (I was debating between upgrading my 3GS to a 4S and switching to an Android, leaning toward 4S because of battery life.  The desire to honor the great man’s memory is what pushed me over the edge.)

As for why I didn’t write an encomium before: well, frankly, I don’t feel like being a theoretical computer scientist gives me any more of a “connection” to Steve Jobs than any of the hundreds of millions of people who use his products.  And when I do blog about world events, people often accuse me of jumping on a bandwagon and having nothing original to say, and tell me to stick to complexity theory.  That’s life as a blogger: not only is there nothing you can post, there’s nothing you can refrain from posting, that someone, somewhere, won’t be “absolutely stunned” by.

Even so, to anyone who was hurt or offended by my lack of a Steve Jobs post, I’m sorry.

And as long as I’m apologizing for silence about major news of the last week, I’m also sorry that I failed to congratulate the Royal Swedish Academy of Sciences for two truly magnificent decisions: first, awarding the Nobel Prize in Physics to Adam Riess, Saul Perlmutter, and Brian Schmidt for the discovery of the cosmic acceleration (see these two Cosmic Variance posts for more); second, awarding the Nobel Prize in Chemistry to Dan Shechtman for the discovery of quasicrystals.  If these two textbook-changing results don’t deserve Nobel Prizes, nothing does.

Since it’s Erev Yom Kippur, let me hereby repent for all of my countless mistakes, omissions, and lapses of judgment here at Shtetl-Optimized over the past year.  In the spirit of the “Kol Nidre” prayer, I also beg to be released from all survey articles that I promised to write, submissions that I promised to review, deadlines that I promised to meet, and emails that I promised to answer.  (Of course, if I were conventionally religious, I’d also have to repent for the very act of blogging on Yom Kippur.)

Ask Me Anything

Saturday, August 13th, 2011

Update (8/16): Phew! By my count, I’ve answered 139 questions over the past few days. Thanks so much to everyone for submitting them, and please don’t submit any more!

Incidentally, to those of you who complain (correctly) that I no longer update this blog enough, there’s a simple solution that should carry you through at least the next year.  Namely, just read a few “Ask Me Anything” answers every week!  To help you with that, I’ve compiled the following abridged table of contents to my uninformed spoutings:



Update: Thanks for the many, many, many great questions!  To keep things slightly under control, I’ll be fielding questions that are asked before 9PM EST tonight.

Also, sorry my blog went down for an hour!  I always count on Bluehost to not be there when I need it.


Alright, I put it off for most of the summer, but I guess it’s as good a time as any, now that (a) I’m finally done philosophizing for a while and (b) my wife Dana is away at a workshop, her civilizing and nerdiness-moderating influences temporarily absent.

So, by popular demand, and as promised a couple months ago, for the next 24 hours (with intermittent sleep breaks), I’ll once again be fielding any and all questions in the comments section.  Four simple ground rules:

  1. No multi-part questions: one question per comment and three total per person.
  2. While you can ask anything, if it’s too hostile, nosy, or irritating I might not answer it…
  3. I’ll only answer the first three questions about academic career advice (since in previous Ask Me Anything posts, that topic tended to drown out everything else).
  4. No questions that require me to read an article, watch a video, etc.