Archive for the ‘Self-Referential’ Category

Happy 40th Birthday Dana!

Friday, December 30th, 2022

The following is what I read at Dana’s 40th birthday party last night. Don’t worry, it’s being posted with her approval. –SA

I’d like to propose a toast to Dana, my wife and mother of my two kids.  My dad, a former speechwriter, would advise me to just crack a few jokes and then sit down … but my dad’s not here.

So instead I’ll tell you a bit about Dana.  She grew up in Tel Aviv, finishing her undergraduate CS degree at age 17—before she joined the army.  I met her when I was a new professor at MIT and she was a postdoc in Princeton, and we’d go to many of the same conferences. At one of those conferences, in Princeton, she finally figured out that my weird, creepy, awkward attempts to make conversation with her were, in actuality, me asking her out … at least in my mind!  So, after I’d returned to Boston, she then emailed me for days, just one email after the next, explaining everything that was wrong with me and all the reasons why we could never date.  Despite my general obliviousness in such matters, at some point I wrote back, “Dana, the absolute value of your feelings for me seems perfect. Now all I need to do is flip the sign!”

Anyway, the very next weekend, I took the Amtrak back to Princeton at her invitation. That weekend is when we started dating, and it’s also when I introduced her to my family, and when she and I planned out the logistics of getting married.

Dana and her family had been sure that she’d return to Israel after her postdoc. She made a huge sacrifice in staying here in the US for me. And that’s not even mentioning the sacrifice to her career that came with two very difficult pregnancies that produced our two very diffic … I mean, our two perfect and beautiful children.

Truth be told, I haven’t always been the best husband, or the most patient or the most grateful.  I’ve constantly gotten frustrated and upset, extremely so, about all the things in our life that aren’t going well.  But preparing the slideshow tonight, I had a little epiphany.  I had a few photos from the first two-thirds of Dana’s life, but of course, I mostly had the last third.  But what’s even happened in that last third?  Today she feels like she might be close to a breakthrough on the Unique Games Conjecture.  But 13 years ago, she felt exactly the same way.  She even looks the same!

So, what even happened?

Well OK, fine, there was my and Dana’s first trip to California, a month after we started dating.  Our first conference together.  Our trip to Vegas and the Grand Canyon.  Our first trip to Israel to meet her parents, who I think are finally now close to accepting me. Her parents’ trip to New Hope, Pennsylvania to meet my parents. Our wedding in Tel Aviv—the rabbi rushing through the entire ceremony in 7 minutes because he needed to get home to his kids. Our honeymoon safari in Kenya.  Lily’s birth. Our trip to Israel with baby Lily, where we introduced Lily to Dana’s grandmother Rivka, an Auschwitz survivor, just a few months before Rivka passed away. Taking Lily to run around Harvard Yard with our Boston friends, Lily losing her beloved doll Tuza there, then finding Tuza the next day after multiple Harvard staff had been engaged in the quest. There’s me and Dana eating acai bowls in Rio de Janeiro, getting a personal tour of the LHC in Switzerland, with kangaroos and koalas in Australia. There’s our house here in Austin.  Oh, and here are all our Austin friends! Our trip to Disney World with Lily, while Dana was pregnant with Daniel (she did ride one rollercoaster). Daniel’s birth, which to my relief, went well despite the rollercoaster. Our sabbatical year in Israel. The birth of our nephews.

I confess I teared up a little going through all this. Because it’s like: if you showed all these photos to some third party, they’d probably be like, wow, that looks like a pretty good life. Even if you showed the photos to my 17-year-old self, I’d be like, OK, no need to roll the dice again, I’ll take that life. I’m not talking about what’s happened with the world—climate change or COVID or the insurrection or anything like that. But the part with the beautiful Israeli complexity theorist wife, who’s also caring and unbreakable in her moral convictions? Yes.

So, thank you and a very happy birthday to the one who’s given me all this!

And lastly, because it seems almost obligatory: I did feed everything I said just now into GPT (I won’t specify which version), and asked it to write Dana a special GPT birthday poem. Here’s what it came up with:

From Tel Aviv to Princeton,
You’ve traveled near and far,
A brilliant computer scientist,
A shining, guiding star.

You’ve made so many sacrifices,
For family and for love,
But your light shines through the darkness,
And fits me like a glove.

We’ve shared so many moments,
Too many to recount,
But each one is a treasure,
Each memory paramount.

So happy birthday, Dana,
You deserve the very best,
I’m grateful for your presence,
And feel so truly blessed.


Addendum: Speaking of GPT, should it and other Large Language Models be connected to the Internet and your computer’s filesystem and empowered to take actions directly, with reinforcement learning pushing them to achieve the user’s goals?

On the negative side, some of my friends worry that this sort of thing might help an unaligned superintelligence to destroy the world.

But on the positive side, at Dana’s birthday party, I could’ve just told the computer, “please display these photos in a slideshow rotation while also rotating among these songs,” and not wasted part of the night messing around with media apps that befuddle and defeat me as a mere CS PhD.
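(To be fair, even without an empowered AI agent, a dozen-odd lines of scripting would’ve sufficed. Here’s a minimal sketch of the sort of thing I mean, in Python with pygame; the two folder names are hypothetical stand-ins for wherever the photos and songs actually live.)

```python
import itertools
from pathlib import Path

import pygame

# A minimal "rotate the photos while rotating the songs" sketch.
# Assumes pygame is installed; the folder names below are made up.
PHOTOS = sorted(Path("dana40_photos").glob("*.jpg"))
SONGS = sorted(Path("dana40_songs").glob("*.mp3"))
SECONDS_PER_PHOTO = 8

pygame.init()
screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
songs = itertools.cycle(SONGS)

for photo in itertools.cycle(PHOTOS):
    image = pygame.image.load(str(photo))
    image = pygame.transform.smoothscale(image, screen.get_size())
    screen.blit(image, (0, 0))
    pygame.display.flip()
    # Hold each photo for its time slot, advancing the playlist whenever
    # the current song ends, and quitting cleanly on any key press.
    for _ in range(SECONDS_PER_PHOTO * 10):
        if not pygame.mixer.music.get_busy():
            pygame.mixer.music.load(str(next(songs)))
            pygame.mixer.music.play()
        for event in pygame.event.get():
            if event.type in (pygame.QUIT, pygame.KEYDOWN):
                pygame.quit()
                raise SystemExit
        pygame.time.wait(100)
```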

I find it extremely hard to balance these considerations.

Anyway, happy birthday Dana!

What I’ve learned from having COVID

Sunday, September 4th, 2022
  1. The same thing Salman Rushdie learned: either you spend your entire life in hiding, or eventually it’ll come for you. Years might pass. You might emerge from hiding once, ten times, a hundred times, be fine, and conclude (emotionally if not intellectually) that the danger must now be over, that if it were going to come at all then it already would have, that maybe you’re even magically safe. But this is just the nature of a Poisson process: 0, 0, 0, followed by 1. (For the probabilistically inclined, there’s a little simulation of this after the list.)
  2. First comes the foreboding (in my case, on the flight back home from the wonderful CQIQC meeting in Toronto)—“could this be COVID?”—the urge to reassure yourself that it isn’t, the premature relief when the test is negative. Only then, up to a day later, comes the second vertical line on the plastic cartridge.
  3. I’m grateful for the vaccines, which have up to a 1% probability of having saved my life. My body was as ready for this virus as my brain would’ve been for someone pointing a gun at my head and demanding to know a proof of the Karp-Lipton Theorem. All the same, I wish I also could’ve taken a nasal vaccine, to neutralize the intruder at the gate. Through inaction, through delays, through safetyism that’s ironically caused millions of additional deaths, the regulatory bureaucracies of the US and other nations have a staggering amount to answer for.
  4. Likewise, Paxlovid should’ve been distributed like candy, so that everyone would have a supply and could start the instant they tested positive. By the time you’re able to book an online appointment and send a loved one to a pharmacy, a night has likely passed and the Paxlovid is less effective.
  5. By the usual standards of a cold, this is mild. But the headaches, the weakness, the tiredness … holy crap the tiredness. I now know what it’s like to be a male lion or a hundred-year-old man, to sleep for 20 hours per day and have that feel perfectly appropriate and normal. I can only hope I won’t be one of the long-haulers; if I were, this could be the end of my scientific career. Fortunately the probability seems small.
  6. You can quarantine in your bedroom, speak to your family only through the door, have meals passed to you, but your illness will still cast a penumbra on everyone around you. Your spouse will be stuck watching the kids alone. Other parents won’t let their kids play with your kids … and you can’t blame them; you’d do the same in their situation.
  7. It’s hard to generalize from a sample size of 1 (or 2 if you count my son Daniel, who recovered from a thankfully mild case half a year ago). Readers: what are your COVID stories?
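Since I invoked the math in lesson #1: here’s the promised toy simulation (mine, with made-up numbers, not an epidemiological model). In a Poisson process the waiting time is exponentially distributed, hence memoryless: surviving a thousand days unscathed neither shortens nor lengthens the expected wait that remains.

```python
import random

# Memorylessness of a Poisson process, illustrated by sampling.
# RATE is a hypothetical hazard: one expected "hit" per 1000 days.
random.seed(0)
RATE = 1 / 1000

waits = [random.expovariate(RATE) for _ in range(100_000)]

# Condition on having already survived 1000 days: the *remaining*
# wait is distributed just like the original one. 0, 0, 0, then 1.
remaining = [w - 1000 for w in waits if w > 1000]
print(f"mean wait from day 0:             {sum(waits) / len(waits):,.0f} days")
print(f"mean further wait after day 1000: {sum(remaining) / len(remaining):,.0f} days")
```

Both means come out around 1,000 days: the process doesn’t care how long you’ve already been lucky.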

A low-tech solution

Tuesday, July 19th, 2022

Thanks so much to everyone who offered help and support as this blog’s comment section endured the weirdest, most motivated and sophisticated troll attack in its 17-year history. For a week, a parade of self-assured commenters showed up to demand that I explain and defend my personal hygiene, private thoughts, sexual preferences, and behavior around female students (and, absurdly, to cajole me into taking my family on a specific Disney cruise ship). In many cases, the troll or trolls appropriated the names and email addresses of real academics, imitating them so convincingly that those academics’ closest colleagues told me they were confident it was really them. And when some trolls finally “outed” themselves, I had no way to know whether that was just another chapter in the trolling campaign. It was enough to precipitate an epistemic crisis, where one actively doubts the authenticity of just about every piece of text.

The irony isn’t lost on me that I’ve endured this just as I’m starting my year-long gig at OpenAI, to think, among other things, about the potential avenues for misuse of Large Language Models like GPT-3, and what theoretical computer science could contribute to mitigating them. To say this episode has given me a more vivid understanding of the risks would be an understatement.

But why didn’t I just block and ignore the trolls immediately? Why did I bother engaging?

At least a hundred people asked some variant of this question, and the answer is this. For most of my professional life, this blog has been my forum, where anyone in the world could show up to raise any issue they wanted, as if we were tunic-wearing philosophers in the Athenian agora. I prided myself on my refusal to take the coward’s way out and ignore anything—even, especially, severe personal criticism. I’d witnessed how Jon Stewart, let’s say, would night after night completely eviscerate George W. Bush, his policies and worldview and way of speaking and justifications and lies, and then Bush would just continue the next day, totally oblivious, never deigning to rebut any of it. And it became a core part of my identity that I’d never be like that. If anyone on earth had a narrative of me where I was an arrogant bigot, a clueless idiot, etc., I’d confront that narrative head-on and refute it—or if I couldn’t, I’d reinvent my whole life. What I’d never do is suffer anyone’s monstrous caricature of me to strut around the Internet unchallenged, as if conceding that only my academic prestige or tenure or power, rather than a reasoned rebuttal, could protect me from the harsh truths that the caricature revealed.

Over the years, of course, I carved out some exceptions: P=NP provers and quantum mechanics deniers enraged that I’d dismissed their world-changing insights. Raving antisemites. Their caricatures of me had no legs in any community I cared about. But if an attack carried the implied backing of the whole modern social-justice movement, of thousands of angry grad students on Twitter, of Slate and Salon and New York Times writers and Wikipedia editors and university DEI offices, then the coward’s way out was closed. The monstrous caricature then loomed directly over me; I could either parry its attacks or die.

With this stance, you might say, the astounding part is not that this blog’s “agora” model eventually broke down, but rather that it survived for so long! I started blogging in October 2005. It took until July 2022 for me to endure a full-scale “social/emotional denial of service attack” (not counting the comment-171 affair). Now that I have, though, it’s obvious even to me that the old way is no longer tenable.

So what’s the solution? Some of you liked the idea of requiring registration with real email addresses—but alas, when I tried to implement that, I found that WordPress’s registration system is a mess and I couldn’t see how to make it work. Others liked the idea of moving to Substack, though some actively hated it—and in any case, even if I moved, I’d still have to figure out a comment policy! Still others liked the idea of an army of volunteer moderators. At least ten people volunteered themselves.

On reflection, the following strikes me as most directly addressing the actual problem. I’m hereby establishing the Shtetl-Optimized Committee of Guardians, or SOCG (same acronym as the computational geometry conference 🙂 ). If you’re interested in joining, shoot me an email, or leave a comment on this post with your (real!) email address. I’ll accept members only if I know them in real life, personally or by reputation, or if they have an honorable history on this blog.

For now, the SOCG’s only job is this: whenever I get a comment that gives me a feeling of unease—because, e.g., it seems trollish or nasty or insincere, it asks a too-personal question, or it challenges me to rebut a hostile caricature of myself—I’ll email the comment to the SOCG and ask what to do. I commit to respecting the verdict of those SOCG members who respond, whenever a clear verdict exists. The verdict could be, e.g., “this seems fine,” “if you won’t be able to resist responding then don’t let this appear,” or “email the commenter first to confirm their identity.” And if I simply need reassurance that the commenter’s view of me is false, I’ll seek it from the SOCG before I seek it from the whole world.

Here’s what SOCG members can expect in return: I continue pouring my heart into this subscription-free, ad-free blog, and I credit you for making it possible—publicly if you’re comfortable with your name being listed, privately if not. I buy you a fancy lunch or dinner if we’re ever in the same town.

Eventually, we might move to a model where the SOCG members can log in to WordPress and directly moderate comments themselves. But let’s try it this way first and see if it works.

Choosing a new comment policy

Tuesday, July 12th, 2022

Update (July 13): I was honored to read this post by my friend Boaz Barak.

Update (July 14): By now, comments on this post allegedly from four CS professors — namely, Josh Alman, Aloni Cohen, Rana Hanocka, and Anna Farzindar — as well as from the graduate student “BA,” have been unmasked as the work of impersonator(s).

I’ve been the target of a motivated attack-troll (or multiple trolls, but I now believe just one) who knows about the CS community. This might be the single weirdest thing that’s happened to me in 17 years of blogging, surpassing even the legendary Ricoh printer episode of 2007. It obviously underscores the need for a new, stricter comment policy, which is what this whole post was about.


Yesterday and today, both my work and my enjoyment of the James Webb images were interrupted by an anonymous troll, who used the Shtetl-Optimized comment section to heap libelous abuse on me—derailing an anodyne quantum computing discussion to opine at length about how I’m a disgusting creep who surely, probably, maybe has lewd thoughts about his female students. Unwisely or not, I allowed it all to appear, and replied to all of it. I had a few reasons: I wanted to prove that I’m now strong enough to withstand bullying that might once have driven me to suicide. I wanted, frankly, many readers to come to my defense (thanks to those who did!). I at least wanted readers to see firsthand what I now regularly deal with: the emotional price of maintaining this blog. Most of all, I wanted my feminist, social-justice-supporting readers to either explicitly endorse or (hopefully) explicitly repudiate the unambiguous harassment that was now being gleefully committed in their name.

Then, though, the same commenter upped the ante further, by heaping misogynistic abuse on my wife Dana—while still, ludicrously and incongruously, cloaking themselves in the rhetoric of social justice. Yes: apparently the woke, feminist thing to do is now to rate female computer scientists on their looks.

Let me be blunt: I cannot continue to write Shtetl-Optimized while dealing with regular harassment of me and my family. At the same time, I’m also determined not to “surrender to the terrorists.” So, I’m weighing the following options:

  • Close comments except to commenters who provide a real identity—e.g., a full real name, a matching email address, a website.
  • Move to Substack, and then allow only commenters who’ve signed up.
  • Hire someone to pre-screen comments for me, and delete ones that are abusive or harassing (to me or others) before I even see them. (Any volunteers??)
  • Make the comment sections for readers only, eliminating any expectation that I’ll participate.

One thing that’s clear is that the status quo will not continue. I can’t “just delete” harassing or abusive comments, because the trolls have gotten too good at triggering me, and they will continue to weaponize my openness and my ethic of responding to all possible arguments against me.

So, regular readers: what do you prefer?

Linkz!

Saturday, July 9th, 2022

(1) Fellow CS theory blogger (and, 20 years ago, member of my PhD thesis committee) Luca Trevisan interviews me about Shtetl-Optimized, for the Bulletin of the European Association for Theoretical Computer Science. Questions include: what motivates me to blog, who my main inspirations are, my favorite posts, whether blogging has influenced my actual research, and my thoughts on the role of public intellectuals in the age of social-media outrage.

(2) Anurag Anshu, Nikolas Breuckmann, and Chinmay Nirkhe have apparently proved the NLTS (No Low-Energy Trivial States) Conjecture! This is considered a major step toward a proof of the famous Quantum PCP Conjecture, which—speaking of one of Luca Trevisan’s questions—was first publicly raised right here on Shtetl-Optimized back in 2006.

(3) The Microsoft team has finally released its promised paper about the detection of Majorana zero modes (“this time for real”), a major step along the way to creating topological qubits. See also this live YouTube peer review—is that a thing now?—by Vincent Mourik and Sergey Frolov, the latter having been instrumental in the retraction of Microsoft’s previous claim along these lines. I’ll leave further discussion to people who actually understand the experiments.

(4) I’m looking forward to the 2022 Conference on Computational Complexity less than two weeks from now, in my … safe? clean? beautiful? awe-inspiring? … birth-city of Philadelphia. There I’ll listen to a great lineup of talks, including one by my PhD student William Kretschmer on his joint work with me and DeVon Ingram on The Acrobatics of BQP, and co-receive the CCC Best Paper Award (wow! thanks!) for that work. I look forward to meeting some old and new Shtetl-Optimized readers there.

OpenAI!

Friday, June 17th, 2022

I have some exciting news (for me, anyway). Starting next week, I’ll be going on leave from UT Austin for one year, to work at OpenAI. They’re the creators of the astonishing GPT-3 and DALL-E2, which have not only endlessly entertained me and my kids, but recalibrated my understanding of what, for better and worse, the world is going to look like for the rest of our lives. Working with an amazing team at OpenAI, including Jan Leike, John Schulman, and Ilya Sutskever, my job will be to think about the theoretical foundations of AI safety and alignment. What, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want?

Yeah, I don’t know the answer either. That’s why I’ve got a whole year to try to figure it out! One thing I know for sure, though, is that I’m interested both in the short term, where new ideas are now quickly testable and where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern, and in the long term, where one might worry about what happens once AIs surpass human abilities across nearly every domain. (And all the points in between: we might be in for a long, wild ride.) When you start reading about AI safety, it’s striking how there are two separate communities—one mostly worried about machine learning perpetuating racial and gender biases, and the other mostly worried about superhuman AI turning the planet into goo—who not only don’t work together, but are at each other’s throats, with each accusing the other of totally missing the point. I persist, however, in the possibly-naïve belief that these are merely two extremes along a single continuum of AI worries. By figuring out how to align AI with human values today—constantly confronting our theoretical ideas with reality—we can develop knowledge that will give us a better shot at aligning it with human values tomorrow.

For family reasons, I’ll be doing this work mostly from home, in Texas, though traveling from time to time to OpenAI’s office in San Francisco. I’ll also spend 30% of my time continuing to run the Quantum Information Center at UT Austin and working with my students and postdocs. At the end of the year, I plan to go back to full-time teaching, writing, and thinking about quantum stuff, which remains my main intellectual love in life, even as AI—the field where I started, as a PhD student, before I switched to quantum computing—has been taking over the world in ways that none of us can ignore.

Maybe fittingly, this new direction in my career had its origins here on Shtetl-Optimized. Several commenters, including Max Ra and Matt Putz, asked me point-blank what it would take to induce me to work on AI alignment. Treating it as an amusing hypothetical, I replied that it wasn’t mostly about money for me, and that:

The central thing would be finding an actual potentially-answerable technical question around AI alignment, even just a small one, that piqued my interest and that I felt like I had an unusual angle on. In general, I have an absolutely terrible track record at working on topics because I abstractly feel like I “should” work on them. My entire scientific career has basically just been letting myself get nerd-sniped by one puzzle after the next.

Anyway, Jan Leike at OpenAI saw this exchange and wrote to ask whether I was serious in my interest. Oh shoot! Was I? After intensive conversations with Jan, others at OpenAI, and others in the broader AI safety world, I finally concluded that I was.

I’ve obviously got my work cut out for me, just to catch up to what’s already been done in the field. I’ve actually been in the Bay Area all week, meeting with numerous AI safety people (and, of course, complexity and quantum people), carrying a stack of technical papers on AI safety everywhere I go. I’ve been struck by how, when I talk to AI safety experts, they’re not only not dismissive about the potential relevance of complexity theory, they’re more gung-ho about it than I am! They want to talk about whether, say, IP=PSPACE, or MIP=NEXP, or the PCP theorem could provide key insights about how we could verify the behavior of a powerful AI. (Short answer: maybe, on some level! But, err, more work would need to be done.)

How did this complexitophilic state of affairs come about? That brings me to another wrinkle in the story. Traditionally, students follow in the footsteps of their professors. But in trying to bring complexity theory into AI safety, I’m actually following in the footsteps of my student: Paul Christiano, one of the greatest undergrads I worked with in my nine years at MIT, the student whose course project turned into the Aaronson-Christiano quantum money paper. After MIT, Paul did a PhD in quantum computing at Berkeley, with my own former adviser Umesh Vazirani, while also working part-time on AI safety. Paul then left quantum computing to work on AI safety full-time—indeed, along with others such as Dario Amodei, he helped start the safety group at OpenAI. Paul has since left to found his own AI safety organization, the Alignment Research Center (ARC), although he remains on good terms with the OpenAI folks. Paul is largely responsible for bringing complexity theory intuitions and analogies into AI safety—for example, through the “AI safety via debate” paper and the Iterated Amplification paper. I’m grateful for Paul’s guidance and encouragement—as well as that of the others now working in this intersection, like Geoffrey Irving and Elizabeth Barnes—as I start this new chapter.

So, what projects will I actually work on at OpenAI? Yeah, I’ve been spending the past week trying to figure that out. I still don’t know, but a few possibilities have emerged. First, I might work out a general theory of sample complexity and so forth for learning in dangerous environments—i.e., learning where making the wrong query might kill you. Second, I might work on explainability and interpretability for machine learning: given a deep network that produced a particular output, what do we even mean by an “explanation” for “why” it produced that output? What can we say about the computational complexity of finding that explanation? Third, I might work on the ability of weaker agents to verify the behavior of stronger ones. Of course, if P≠NP, then the gap between the difficulty of solving a problem and the difficulty of recognizing a solution can sometimes be enormous. And indeed, even in empirical machine learning, there’s typically a gap between the difficulty of generating objects (say, cat pictures) and the difficulty of discriminating between them and other objects, the latter being easier. But this gap typically isn’t exponential, as is conjectured for NP-complete problems: it’s much smaller than that. And counterintuitively, we can then turn around and use the generators to improve the discriminators. How can we understand this abstractly? Are there model scenarios in complexity theory where we can prove that something similar happens? How far can we amplify the generator/discriminator gap—for example, by using interactive protocols, or debates between competing AIs?
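To make the solving-versus-verifying gap concrete, here’s a toy illustration (to be clear: a sketch of my own for this post, not anything from the actual project list). For an NP problem like Subset-Sum, checking a claimed solution takes linear time, while the obvious solver takes exponential time in the worst case.

```python
from itertools import combinations

# Subset-Sum: is there a subset of nums summing to target?

def verify(nums, target, subset):
    """The easy direction: re-add the claimed subset and check
    membership (ignoring multiset subtleties). Linear time."""
    return sum(subset) == target and all(x in nums for x in subset)

def solve(nums, target):
    """The hard direction: try all ~2^n subsets. Exponential time."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
certificate = solve(nums, 9)          # slow search -> [4, 5]
print(verify(nums, 9, certificate))   # fast check -> True
```

The open question, of course, is how much of that asymmetry alignment can exploit when the “solver” is a superhuman AI and the “verifiers” are us.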

OpenAI, of course, has the word “open” right in its name, and a founding mission “to ensure that artificial general intelligence benefits all of humanity.” But it’s also a for-profit enterprise, with investors and paying customers and serious competitors. So throughout the year, don’t expect me to share any proprietary information—that’s not my interest anyway, even if I hadn’t signed an NDA. But do expect me to blog my general thoughts about AI safety as they develop, and to solicit feedback from readers.

In the past, I’ve often been skeptical about the prospects for superintelligent AI becoming self-aware and destroying the world anytime soon (see, for example, my 2008 post The Singularity Is Far). While I was aware since 2005 or so of the AI-risk community; and of its leader and prophet, Eliezer Yudkowsky; and of Eliezer’s exhortations for people to drop everything else they’re doing and work on AI risk, as the biggest issue facing humanity, I … kept the whole thing at arm’s length. Even supposing I agreed that this was a huge thing to worry about, I asked, what on earth do you want me to do about it today? We know so little about a future superintelligent AI and how it would behave that any actions we took today would likely be useless or counterproductive.

Over the past 15 years, though, my and Eliezer’s views underwent a dramatic and ironic reversal. If you read Eliezer’s “litany of doom” from two weeks ago, you’ll see that he’s now resigned and fatalistic: because his early warnings weren’t heeded, he argues, humanity is almost certainly doomed and an unaligned AI will soon destroy the world. He says that there are basically no promising directions in AI safety research: for any alignment strategy anyone points out, Eliezer can trivially refute it by explaining how (e.g.) the AI would be wise to the plan, and would pretend to go along with whatever we wanted from it while secretly plotting against us.

The weird part is, just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic. Part of my optimism is because people like Paul Christiano have laid foundations for a meaty mathematical theory: much like the Web (or quantum computing theory) in 1992, it’s still in a ridiculously primitive stage, but even my limited imagination now suffices to see how much more could be built there. An even greater part of my optimism is because we now live in a world with GPT-3, DALL-E2, and other systems that, while they clearly aren’t AGIs, are powerful enough that worrying about AGIs has come to seem more like prudence than like science fiction. And we can finally test our intuitions against the realities of these systems, which (outside of mathematics) is pretty much the only way human beings have ever succeeded at anything.

I didn’t predict that machine learning models this impressive would exist by 2022. Most of you probably didn’t predict it. For godsakes, Eliezer Yudkowsky didn’t predict it. But it’s happened. And to my mind, one of the defining virtues of science is that, when empirical reality gives you a clear shock, you update and adapt, rather than expending your intelligence to come up with clever reasons why it doesn’t matter or doesn’t count.

Anyway, so that’s the plan! If I can figure out a way to save the galaxy, I will, but I’ve set my goals slightly lower, at learning some new things and doing some interesting research and writing some papers about it and enjoying a break from teaching. Wish me a non-negligible success probability!


Update (June 18): To respond to a couple criticisms that I’ve seen elsewhere on social media…

Can the rationalists sneer at me for waiting to get involved with this subject until it had become sufficiently “respectable,” “mainstream,” and “high-status”? I suppose they can, if that’s their inclination. I suppose I should be grateful that so many of them chose to respond instead with messages of congratulations and encouragement. Yes, I plead guilty to keeping this subject at arm’s length until I could point to GPT-3 and DALL-E2 and the other dramatic advances of the past few years to justify the reality of the topic to anyone who might criticize me. It feels internally like I had principled reasons for this: I can think of almost no examples of research programs that succeeded over decades even in the teeth of opposition from the scientific mainstream. If so, then arguably the best time to get involved with a “fringe” scientific topic is when and only when you can foresee a path to it becoming the scientific mainstream. At any rate, that’s what I did with quantum computing, as a teenager in the mid-1990s. It’s what many scientists of the 1930s did with the prospect of nuclear chain reactions. And if I’d optimized for getting the right answer earlier, I might’ve had to weaken the filters and let in a bunch of dubious worries that would’ve paralyzed me. But I admit the possibility of self-serving bias here.

Should you worry that OpenAI is just hiring me to be able to say “look, we have Scott Aaronson working on the problem,” rather than actually caring about what its safety researchers come up with? I mean, I can’t prove that you shouldn’t worry about that. In the end, whatever work I do on the topic will have to speak for itself. For whatever it’s worth, though, I was impressed by the OpenAI folks’ detailed, open-ended engagement with these questions when I met them—sort of like how it might look if they actually believed what they said about wanting to get this right for the world. I wouldn’t have gotten involved otherwise.

An understandable failing?

Sunday, May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders-of-magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If so, then the stakes here could fairly be called existential ones—not because of the direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands—which means it both feels to its recipient like the entire earth, and yet is actually less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just as people believe somewhat differently now than they did when I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review of the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See IonQ’s response to the criticism, as well as this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

Happy 70th birthday Dad!

Saturday, February 12th, 2022

When, before covid, I used to travel the world giving quantum computing talks, every once in a while I’d meet an older person who asked whether I had any relation to a 1970s science writer by the name of Steve Aaronson. So, yeah, Steve Aaronson is my dad. He majored in English at Penn State, where he was lucky enough to study under the legendary Phil Klass, who wrote under the pen name William Tenn and who basically created the genre of science-fiction comedy, half a century before there was any such thing as Futurama. After graduating, my dad became a popular physics and cosmology writer, who interviewed greats like Steven Weinberg and John Archibald Wheeler and Arno Penzias (co-discoverer of the cosmic microwave background radiation). He published not only in science magazines but in Playboy and Penthouse, which (as he explained to my mom) paid better than the science magazines. When I was growing up, my dad had a Playboy on his office shelf, which I might take down if, for example, I wanted to show a friend a 2-page article, with an Aaronson byline, about the latest thinking on the preponderance of matter over antimatter in the visible universe.

Eventually, partly motivated by the need to make money to support … well, me, and then my brother, my dad left freelancing to become a corporate science writer at AT&T Bell Labs. There, my dad wrote speeches, delivered on the floor of Congress, about how breaking up AT&T’s monopoly would devastate Bell Labs, a place that stood with ancient Alexandria and Cambridge University among the human species’ most irreplaceable engines of scientific creativity. (Being a good writer, my dad didn’t put it in quite those words.) Eventually, of course, AT&T was broken up, and my dad’s dire warning about Bell Labs turned out to be 100% vindicated … although on the positive side, Americans got much cheaper long distance.

After a decade at Bell Labs, my dad was promoted to be a public relations executive at AT&T itself, where, when I was a teenager, he was centrally involved in the launch of the AT&T spinoff Lucent Technologies (motto: “Bell Labs Innovations”), and then later the Lucent spinoff Avaya—developments that AT&T’s original breakup had caused as downstream effects.

In the 1970s, somewhere between his magazine stage and his Bell Labs stage, my dad also worked for Eugene Garfield, the pioneer of bibliometrics for scientific papers and founder of the Institute for Scientific Information, or ISI. (Sergey Brin and Larry Page would later cite Garfield’s work, on the statistics of the scientific-citation graph, as one of the precedents for the PageRank algorithm at the core of Google.)
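(For the curious: the core of PageRank really does fit in a dozen lines. Here’s a minimal power-iteration sketch on a toy citation graph; the graph is invented for illustration, not anyone’s actual data.)

```python
# PageRank by power iteration on a made-up toy citation graph.
links = {           # paper -> the papers it cites (all hypothetical)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
papers = list(links)
damping = 0.85
rank = {p: 1 / len(papers) for p in papers}

for _ in range(50):  # iterate to (approximate) convergence
    new = {p: (1 - damping) / len(papers) for p in papers}
    for p, cited in links.items():
        for q in cited:
            new[q] += damping * rank[p] / len(cited)
    rank = new

for p in sorted(rank, key=rank.get, reverse=True):
    print(p, round(rank[p], 3))  # "C" comes out on top, as the most-cited
```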

My dad’s job at ISI was to supply Eugene Garfield with “raw material” for essays, which the latter would then write and publish in ISI’s journal Current Contents under the byline Eugene Garfield. Once, though, my dad supplied some “raw material” for a planned essay about “Style in Scientific Writing”—and, well, I’ll let Garfield tell the rest:

This topic of style in scientific writing was first proposed as something I should undertake myself, with some research and drafting help from Steve. I couldn’t, with a clear conscience, have put my name to the “draft” he submitted. And, though I don’t disagree with much of it, I didn’t want to modify or edit it in order to justify claiming it as my own. So here is Aaronson’s “draft,” as it was submitted for “review.” You can say I got a week’s vacation. After reading what he wrote it required little work to write this introduction.

Interested yet? You can read “Style in Scientific Writing” here. You can, if we’re being honest, tell that this piece was originally intended as “raw material”—but only because of the way it calls forth such a fierce armada of all of history’s awesomest quotations about what makes scientific writing good or bad, like Ben Franklin and William James and the whole gang, which would make it worth the read regardless. I love eating raw dough, I confess, and I love my dad’s essay. (My dad, ironically enough, likes everything he eats to be thoroughly cooked.)

When I read that essay, I hear my dad’s voice from my childhood. “Omit needless words.” There were countless revisions and pieces of advice on every single thing I wrote, but usually, “omit needless words” was the core of it. And as terrible as you all know me to be on that count, imagine how much worse it would’ve been if not for my dad! And I know that as soon as he reads this post, he’ll find needless words to omit.

But hopefully he won’t omit these:

Happy 70th birthday Pops, congrats on beating the cancer, and here’s to many more!

Welcome to scottaaronson.blog !

Thursday, October 21st, 2021

If you’ve visited Shtetl-Optimized lately — which, uh, I suppose you have — you may have noticed that you were redirected from www.scottaaronson.com/blog to scottaaronson.blog. That’s because Automattic, makers of WordPress.com, volunteered to move my blog there from Bluehost, free of charge. If all goes according to plan, you should notice faster loading times, less downtime, and hopefully nothing else different. Please let me know if you encounter any problems. And huge thanks to the WordPress.com Special Projects Team, especially Christopher Jones and Mark Drovdahl, for helping me out with this.

On Guilt

Thursday, June 10th, 2021

The other night Dana and I watched “The Internet’s Own Boy,” the 2014 documentary about the life and work of Aaron Swartz, which I’d somehow missed when it came out. Swartz, for anyone who doesn’t remember, was the child prodigy who helped create RSS and Reddit, who then became a campaigner for an open Internet, who was arrested for using a laptop in an MIT supply closet to download millions of journal articles and threatened with decades in prison, and who then committed suicide at age 26. I regret that I never knew Swartz, though he did once send me a fan email about Quantum Computing Since Democritus.

Say whatever you want about the tactical wisdom or the legality of Swartz’s actions; it seems inarguable to me that he was morally correct, that certain categories of information (e.g. legal opinions and taxpayer-funded scientific papers) need to be made freely available, and that sooner or later our civilization will catch up to Swartz and regard his position as completely obvious. The beautifully-made documentary filled me with rage and guilt not only that the world had failed Swartz, but that I personally had failed him.

At the time of Swartz’s arrest, prosecution, and suicide, I was an MIT CS professor who’d previously written in strong support of open access to scientific literature, and who had the platform of this blog. Had I understood what was going on with Swartz—had I taken the time to find out what was going on—I could have been in a good position to help organize a grassroots campaign to pressure the MIT administration to urge prosecutors to drop the case (like JSTOR had already done), which could plausibly have made a difference. As it was, I was preoccupied in those years with BosonSampling, getting married, etc.; I didn’t bother to learn whether anything was being done or could be done about the Aaron Swartz matter, and then before I knew it, Swartz had joined Alan Turing in computer science’s pantheon of lost geniuses.

But maybe there was something deeper to my inaction. If I’d strongly defended the substance of what Swartz had done, it would’ve raised the question: why wasn’t I doing the same? Why was I merely complaining about paywalled journals from the comfort of my professor’s office, rather than putting my own freedom on the line like Swartz was? It was as though I had to put some psychological distance between myself and the situation, in order to justify my life choices to myself.

Even though I see the error in that way of “thinking,” it keeps recurring, keeps causing me to make choices that I feel guilt or at least regret about later. In February 2020, there were a few smart people saying that a new viral pneumonia from Wuhan was about to upend life on earth, but the people around me certainly weren’t acting that way, and I wasn’t acting that way either … and so, “for the sake of internal consistency,” I didn’t spend much time thinking about it or investigating it. After all, if the fears of a global pandemic had a good chance of being true, I should be dropping everything else and panicking, shouldn’t I? But I wasn’t dropping everything else and panicking … so how could the fears be true?

Then I publicly repented, and resolved not to make such an error again. And now, 15 months later, I realize that I have made such an error again.

All throughout the pandemic, I’d ask my friends, privately, why the hypothesis that the virus had accidentally leaked from the Wuhan Institute of Virology wasn’t being taken far more seriously, given what seemed like a shockingly strong prima facie case. But I didn’t discuss the lab leak scenario on this blog, except once in passing. I could say I didn’t discuss it because I’m not a virologist and I had nothing new to contribute. But I worry that I also didn’t discuss it because it seemed incompatible with my self-conception as a cautious scientist who’s skeptical of lurid coverups and conspiracies—and because I’d already spent my “weirdness capital” on other issues, and didn’t relish the prospect of being sneered at on social media yet again. Instead I simply waited for discussion of the lab leak hypothesis to become “safe” and “respectable,” as today it finally has, thanks to writers who were more courageous than I was. I became, basically, another sheep in one of the conformist herds that we rightly despise when we read about them in history.

(For all that, it’s still plausible to me that the virus had a natural origin after all. What’s become clear is simply that, even if so, the failure to take the possibility of a lab escape more seriously back when the trail of evidence was fresher will stand as a major intellectual scandal of our time.)

Sometimes people are wracked with guilt, but over completely different things than the world wants them to be wracked with guilt over. This was one of the great lessons that I learned from reading Richard Rhodes’s The Making of the Atomic Bomb. Many of the Manhattan Project physicists felt lifelong guilt, not that they’d participated in building the bomb, but only that they hadn’t finished the bomb by 1943, when it could have ended the war in Europe and the Holocaust.

On a much smaller scale, I suppose some readers would still like me to feel guilt about comment 171, or some of the other stuff I wrote about nerds, dating, and feminism … or if not that, then maybe about my defense of a two-state solution for Israel and Palestine, or of standardized tests and accelerated math programs, or maybe my vehement condemnation of Trump and his failed insurrection. Or any of the dozens of other times when I stood up and said something I actually believed, or when I recounted my experiences as accurately as I could. The truth is, though, I don’t.

Looking back—which, now that I’m 40, I confess is an increasingly large fraction of my time—the pattern seems consistent. I feel guilty, not for having stood up for what I strongly believed in, but for having failed to do so. This suggests that, if I want fewer regrets, then I should click “Publish” on more potentially controversial posts! I don’t know how to force myself to do that, but maybe this post itself is a step.