Archive for the ‘Nerd Interest’ Category

Freeman Dyson and Boris Tsirelson

Saturday, February 29th, 2020

Today, as the world braces for the possibility of losing millions of lives to the new coronavirus—to the hunger for pangolin meat, of all things (combined with the evisceration of competent public health agencies like the CDC)—we also mourn the loss of two incredibly special lives, those of Freeman Dyson (age 96) and Boris Tsirelson (age 69).

Freeman Dyson was sufficiently legendary, both within and beyond the worlds of math and physics, that there’s very little I can add to what’s been said. It seemed like he was immortal, although I’d heard from mutual friends that his health was failing over the past year. When I spent a year as a postdoc at the Institute for Advanced Study, in 2004-5, I often sat across from Dyson in the common room, while he drank tea and read the news. That I never once struck up a conversation with him is a regret that I’ll now carry with me forever.

My only exchange with Dyson came when he gave a lecture at UC Berkeley, about how life might persist infinitely far into the future, even after the last stars had burnt out, by feeding off steadily diminishing negentropy flows in the nearly-thermal radiation. During the Q&A, I challenged Dyson that his proposal seemed to assume an analog model of computation. But, I asked, once we took on board the quantum-gravity insights of Jacob Bekenstein and others, suggesting that nature behaves like a (quantum) digital computer at the Planck scale, with at most \( \sim 10^{43} \) operations per second and \( \sim 10^{69} \) qubits per square meter and so forth, wasn’t this sort of proposal ruled out? “I’m not going to argue with you,” was Dyson’s response. Yes, he’d assumed an analog computational model; if computation was digital then that surely changed the picture.
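For the curious: the two estimates, as I understood them, come from the inverse Planck time and from the holographic bound of roughly one bit per four Planck areas,

\[ \frac{1}{t_P} = \sqrt{\frac{c^5}{\hbar G}} \approx 1.9 \times 10^{43} \ \mathrm{s}^{-1}, \qquad \frac{1}{4\ell_P^2} = \frac{c^3}{4\hbar G} \approx 10^{69} \ \mathrm{m}^{-2}. \]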

Sometimes—and not just with his climate skepticism, but also (e.g.) with his idea that general relativity and quantum mechanics didn’t need to be reconciled, that it was totally fine for the deepest layer of reality to be a patchwork of inconsistent theories—Dyson’s views struck me as not merely contrarian but as a high-level form of trolling. Even so, Dyson’s book Disturbing the Universe had had a major impact on me as a teenager, for the sparkling prose as much as for the ideas.

With Dyson’s passing, the scientific world has lost one of its last direct links to a heroic era, of Einstein and Oppenheimer and von Neumann and a young Richard Feynman, when theoretical physics stood at the helm of civilization like never before or since. Dyson, who apparently remained not only lucid but mathematically powerful (!) well into his last year, clearly remembered when the Golden Age of science fiction looked like simply sober forecasting; when the smartest young people, rather than denouncing each other on Twitter, dreamed of scouting the solar system in thermonuclear-explosion-powered spacecraft and seriously worked to make that happen.

Boris Tsirelson (homepage, Wikipedia), who emigrated from the Soviet Union and then worked at Tel Aviv University (where my wife Dana attended his math lectures), wasn’t nearly as well known as Dyson to the wider world, but was equally beloved within the quantum computing and information community. Tsirelson’s bound, which he proved in the 1980s, showed that even quantum mechanics can violate the Bell inequality only by so much and no more: it can let Alice and Bob win the CHSH game with probability at most \( \cos^2(\pi/8) \approx 0.854 \). This seminal result anticipated many of the questions that would only be asked decades later, with the rise of quantum information. Tsirelson’s investigations of quantum nonlocality also led him to pose the famous Tsirelson’s problem: loosely speaking, can all sets of quantum correlations that arise from an infinite amount of entanglement be arbitrarily well approximated using finite amounts of entanglement? The spectacular answer—no—was announced only one month ago, as a corollary of the MIP*=RE breakthrough. Tsirelson happily lived to see it, although at first I didn’t know what his reaction was (update: I’m told that he indeed learned of it in his final weeks, and was happy about it). Sadly, for some reason, I never met Tsirelson in person, although I did have lively email exchanges with him 10-15 years ago about his problem and other topics. This amusing interview with Tsirelson gives some sense of his personality (hat tip to Gil Kalai, who knew Tsirelson well).
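For readers newer to this area: in the CHSH game, Alice and Bob receive independent uniform random bits \(x\) and \(y\), and, without communicating, must output bits \(a\) and \(b\) with \( a \oplus b = x \wedge y \). Writing the usual correlator as \( S = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \), the winning probability is \( \frac{1}{2} + \frac{S}{8} \), and

\[ S \le 2 \ \text{(classical)}, \qquad S \le 2\sqrt{2} \ \text{(quantum, Tsirelson)}, \]

so classical players win with probability at most \( 3/4 \), while entangled players can reach, but never exceed, \( \frac{1}{2} + \frac{\sqrt{2}}{4} = \cos^2(\pi/8) \).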

Please share any memories of Dyson or Tsirelson in the comments section.

From shtetl to Forum

Saturday, January 18th, 2020

Update (Feb. 4): Immediately after departing Davos, I visited the University of Waterloo and the Perimeter Institute to give three talks, then the Simons Institute at UC Berkeley to give another talk; then I returned to Austin for a weekend with my family, all while fighting off my definitely-not-coronavirus cold. Right now I’m at Harvard to speak at the Black Hole Initiative as well as the Center of Mathematical Sciences and Applications, then my old haunt MIT to speak at CSAIL Hot Topics, then Princeton to give a CS theory seminar—all part of my Quantum Supremacy 2020 World Tour.

Here’s a YouTube video for my Berkeley talk, which was entitled “Random Circuit Sampling: Thoughts and Open Problems.”

All of this is simply to say: I sincerely apologize if I left anyone hanging for the past week, by failing to wrap up my Davos travelogue!

So, alright: having now attended Davos, do I have any insight about its role in shaping the future of the world, and whether that role is good or bad?

Umm. The case against Davos is almost too obvious to state: namely, it’s a vehicle for the world’s super-mega-elite to preen about their own virtue and thereby absolve themselves of their sins.  (Oddly enough, both liberals and conservatives have their own versions of this argument.)

But having attended, I now understand exactly the response that Klaus Schwab, the Forum’s founder and still maestro, would make.  He’d say: well, we didn’t make these people “elite.”  They were already the elite.  And given that an elite exists, would you rather have them at cocaine-filled stripper parties on yachts or whatever, or flocking to an annual meeting where the peer pressure is relentlessly about going green and being socially responsible and giving back to the community and so forth?

See, it’s like this: if you want to be accepted by the Davos crowd, you can’t do stuff like dismember journalists who criticize you.  (While many Saudi princes were at Davos, Mohammad bin Salman himself was conspicuously absent.) While that might sound like a grotesquely low bar, it’s one that many, many elites through human history failed to clear.  And we can go further: if you want an enthusiastic (rather than chilly) welcome at Davos, you can’t separate migrant kids from their families and put them in cages. Again, a low bar but sadly a nontrivial one.

I’m reminded of something Steven Pinker once wrote, about how the United Nations and other international organizations can seem laughably toothless, what with their strongly worded resolutions threatening further resolutions to come. Yet improbably, over the span of decades, the resolutions were actually effective at pushing female genital mutilation and the execution of gays and lesbians and chemical weapons and much more from the world’s panoply of horrors, not entirely out of existence, but into a much darker corner than they’d been.

The positive view of Davos would see it as part of precisely that same process. The negative view would see it as a whitewash: worse than nothing, for letting its participants pretend to stand against the world’s horrors while doing little. Which view is correct? Here, I fear that each of our judgments is going to be hopelessly colored by our more general views about the state of the world. To lay my cards on the table, my views are that

(1) often “fake it till you make it” is a perfectly reasonable strategy, and a good enough simulacrum of a stance or worldview eventually blends into the stance or worldview itself, and

(2) despite the headlines, the data show that the world really has been getting better along countless dimensions … except that it’s now being destroyed by climate change, general environmental degradation, and recrudescent know-nothing authoritarianism.

But the clearest lesson I learned is that, in the unlikely event that I’m ever invited back to Davos and able to attend, before stepping onto the plane I need to get business cards printed.

Daily Updates:
Saturday January 18 (introduction)
Sunday January 19 (Elton John and Greta Thunberg)
Monday January 20 (the $71,000-a-head ski resort conference for Equality)
Tuesday January 21 (Trump! Greta! QC panel!)
Wednesday January 22 (wherein I fail to introduce myself to Al Gore)
Thursday January 23 (wherein I attend the IBM QC panel and “drunkenly unload” at the Canada Reception)
Friday January 24 (second Al Gore session, and getting lost)

It would be great to know whether anyone’s actually reading the later updates, so I know whether to continue putting effort into them!

Saturday January 18

Today I’m headed to the 50th World Economic Forum in Davos, where on Tuesday I’ll participate in a panel discussion on “The Quantum Potential” with Jeremy O’Brien of the quantum computing startup PsiQuantum, and will also host an ask-me-anything session about quantum computational supremacy and Google’s claim to have achieved it.

I’m well aware that this will be unlike any other conference I’ve ever attended: STOC or FOCS it ain’t. As one example, also speaking on Tuesday—although not conflicting with my QC sessions—will be a real-estate swindler and reality-TV star who’s somehow (alas) the current President of the United States. Yes, even while his impeachment trial in the Senate gets underway. Also speaking on Tuesday, a mere hour and a half after him, will be TIME’s Person of the Year, 17-year-old climate activist Greta Thunberg.

In short, this Davos is shaping up to be an epic showdown between two diametrically opposed visions for the future of life on Earth. And your humble blogger will be right there in the middle of it, to … uhh … explain how quantum computers can sample probability distributions that are classically intractable unless the polynomial hierarchy collapses to the third level. I feel appropriately sheepish.

Since the experience will be so unusual for me, I’m planning to “live-blog Davos”: I’ll be updating this post, all week, with any strange new things that I see or learn. As a sign of my devotion to you, my loyal readers, I’ll even clothespin my nose and attend Trump’s speech so I can write about it.

And Greta: on the off chance that you happen to read Shtetl-Optimized, let me treat you to a vegan lunch or dinner! I’d like to try to persuade you of just how essential nuclear power will be to a carbon-free future. Oh, and if it’s not too much trouble, I’d also like a selfie with you for this blog. (Alas, a friend pointed out to me that it would probably be easier to meet Trump: unlike Greta, he won’t be swarmed with thousands of fans!)

Anyway, check back here throughout the week for updates. And if you’re in Davos and would like to meet, please shoot me an email. And please use the comment section to give me your advice, suggestions, well-wishes, requests, or important messages for me to fail to deliver to the “Davoisie” who run the world.

Sunday January 19

So I’ve arrived in Klosters, a village in the Swiss Alps close to Davos where I’ll be staying. (All the hotels in Davos itself were booked by the time I checked.)

I’d braced myself for the challenge of navigating three different trains through the Alps not knowing German. In reality, it was like a hundred times easier than public transportation at home. Every train arrived at the exact right second at the exact platform that was listed, bearing the exact right number, and there were clear visible signs strategically placed at exactly the places where anyone could get confused. I’d entered Bizarro Opposite World. I’m surely one of the more absentminded people on earth, as well as one of the more neurotic about being judged by bystanders if I ever admit to being lost, and it was nothing.

Snow! Once a regular part of my life, now the first I’d seen in several years. Partly because I now live in Texas, but also because even when we take the kids back to Pennsylvania for ChanuChrismaNewYears, it no longer snows like it did when I was a kid. If you show my 2-year-old, Daniel, a picture of snow-covered wilderness, he calls it a “beach.” Daniel’s soon-to-be 7-year-old sister still remembers snow from Boston, but the memory is rapidly fading. I wonder: for how many of the children of the 21st century will snow just be a thing from old books and movies, like typewriters or rotary phones?

The World Economic Forum starts tomorrow afternoon. In the meantime, though, I thought I’d give an update not on the WEF itself, but on the inflight movie that I watched on my way here.

I watched Rocketman, the recent biopic/hagiography about Elton John, though as I watched, I kept finding myself comparing Elton John to Greta Thunberg.

On the surface, these two might not seem to have a great deal of similarity.

But I gathered that they had this in common: while still teenagers, they saw a chance and they seized it. And doing so involved taking inner turmoil and then successfully externalizing it to the whole planet. Making hundreds of millions of people feel the same emotions that they had felt. If I’m being painfully honest (how often am I not?), that’s something I’ve always wanted to achieve and haven’t.

Of course, when some of the most intense and distinctive emotions you’ve ever felt revolved around the discovery of quantum query complexity lower bounds … yeah, it might be tough to find more people than could fill a room to relive those emotional journeys with you. But a child’s joy at discovering numbers like Ackermann(100) (to say nothing of BB(100)), which are so incomprehensibly bigger than \( 9^{9^{9^{9^9}}} \) that I didn’t need to think twice about how many 9’s I put there? Or the exasperation at those who, yeah, totally get that quantum computers aren’t known to give exponential speedups for NP-complete problems, that’s a really important clarification coming from the theory side, but still, let’s continue to base our entire business or talk or article around the presupposition that quantum computers do give exponential speedups for NP-complete problems? Or even just the type of crush that comes with a ceaseless monologue about what an objectifying, misogynist pig you must be to experience it? Maybe I could someday make people vicariously experience and understand those emotions—if I could only find the right words.
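If you’ve never met the Ackermann function, here’s a minimal Python sketch (my illustration, not anything from the original post) that makes the growth claim easy to believe; the one-argument version above is Ackermann(n) = ackermann(n, n):

    def ackermann(m, n):
        # Ackermann-Peter function: total and computable, but it
        # eventually outgrows every primitive recursive function.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61; in general ackermann(3, n) = 2**(n+3) - 3
    # ackermann(4, 2) = 2**65536 - 3 already has 19,729 digits (and naive
    # recursion blows the stack long before); ackermann(100, 100) is beyond
    # any physical computer.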

My point is, this is precisely what Greta did for the burgeoning emotion of existential terror about the Anthropocene—another emotion that’s characterized my life since childhood. Not that I ever figured out anything to do about it, with the exception of Gore/Nader vote-swapping. By the standards of existential terrors, I consider this terror to be extraordinarily well-grounded. If Steven Weinberg is scared, who among us has the right to be calm?

The obvious objection to Greta—why should anyone care what a histrionic teenager thinks about a complicated scientific field that thousands of people get PhDs in?—calls for a substantive answer. So here’s mine. Like many concerned citizens, I try to absorb some of the research on ocean warming or the collapse of ice sheets and the melting permafrost leading to even more warming or the collapse of ecosystems due to changes in rainfall or bushfires or climate migrations or whatever. And whenever I do, I’m reminded of Richard Feynman’s remark, during the investigation of the Challenger disaster, that maybe it wasn’t all that interesting for the commission to spend its time reconstructing the exact details of which system caused which other system to malfunction at which millisecond, after the Space Shuttle had already started exploding. The thing was hosed at that point.

Still, even after the 80s and 90s, there remained deep open questions about the eventual shape of the climate crisis, and foremost among them was: how do you get people to stop talking about this crisis in the language of intellectual hypotheticals and meaningless virtue-signalling gestures and “those crazy scientists, who knows what they’ll say tomorrow”? How does one get people to revert to a more ancient language, the one that was used to win WWII for example, which speaks of courage and duty and heroism and defiance in the jaws of death?

Greta’s origin story—the one where the autistic girl spends months so depressed over climate inaction that she can’t eat or leave her room, until finally, no longer able to bear the psychic burden, she ditches school and carries a handmade protest sign to the front of the Swedish parliament—is not merely a prerequisite to a real contribution. It is Greta’s real contribution (so far anyway), and by that I don’t mean to diminish it. The idea was “trivial,” yes, but only in the sense that the wheel, Arabic numerals, or “personal computers will be important” were trivial ideas. Greta modeled for the rest of the world how they, too, would probably feel about climate change were they able to sync up their lizard brains with their higher brains … and crucially, a substantial segment of the world was already primed to agree with her. But it needed to see one successful example of a sync between the science and the emotions appropriate to the science, as a crystal needs a seed.

The thesis of Rocketman is that Elton John’s great achievement was not only to invent a new character, but actually to become that character, since only by successfully fusing the two could he touch the emotions of the masses. In a similar way, the great accomplishment of Greta Thunberg’s short life has been to make herself into the human race’s first Greta Thunberg.

Monday January 20

Happy 7th birthday to my daughter Lily!  (No, I didn’t miss her birthday party.  We did it on the 18th, right before I flew out.)

I think my goals for Davos have been downgraded from delivering a message of peace and nerd liberation to the world’s powerful, or even getting a selfie with Greta, to simply taking in a week in an environment that’s so alien to me.

Everything in Davos is based on a tiered system of badges, which determine which buildings you can get into to participate in the sessions.  I have a white badge, the highest tier, which would’ve set me back around $71,000 had WEF not thankfully waived its fees for academics.  I should mention that I’m also extremely underdressed compared to most of the people here, and that I spent much of my time today looking for free food.  It turns out that there’s pretty copious and excellent free food, although the sponsors sometimes ask you to leave your business card before you take any.  I don’t have a business card.

The above, for me, represents the true spirit of Davos: a conference at a Swiss ski resort that costs $71,000 to attend, held on behalf of the ideal of human equality.

But maybe I shouldn’t scoff.  I learned today about a war between Greece and Turkey that was averted only because the heads of the two countries talked it over at Davos, so that’s cool.  At the opening ceremony today, besides a beautiful orchestral rendition of “Ode to Joy,” there were a bunch of speeches about how Davos pioneered the entire concept of corporate social responsibility.  I suppose the critics might say instead that Davos pioneered the concept of corporate whitewashing—as with the wall-sized posters that I saw this afternoon, wherein a financial services corporation showcased a diverse cast of people, each pictured above their preferred pronouns (he/him, she/her, they/them).  Amazing how pronouns make everything woke and social-justicey!  I imagine that the truth is somewhere between these visions.  Just like the easiest way for NASA to fake a moon landing was actually to send humans to the moon, sometimes the easiest way to virtue-signal is actually to become more virtuous.

Tonight I went to a reception specifically for the academics at Davos.  There, for the first time since my arrival, I saw people who I knew (Shafi Goldwasser, Neha Narula…), and met someone who I’d known by reputation (Brian Schmidt, who shared the Nobel Prize in Physics for the discovery of dark energy).  But even the people who I didn’t know were clearly “my people,” with familiar nerdy mannerisms and interests, and in some cases even a thorough knowledge of SlateStarCodex references.  Imagine visiting a foreign country where no one spoke your language, then suddenly stumbling on the first ones who did.  I found it a hundred times easier than at the main conference to strike up conversations.

Oh yeah, quantum computing.  This afternoon I hosted three roundtable discussions about quantum computing, which were fun and stress-free — I spent much more of my mental energy today figuring out the shuttle buses.  If you’re a regular reader of this blog or my popular articles, or a watcher of my talks on YouTube, etc., then congratulations: you’ve gotten the same explanations of quantum computing for free that others may have paid $71,000 apiece to hear!  Tomorrow are my two “real” quantum computing sessions, as well as the speeches by both the Donald and the Greta (the latter being the much hotter ticket).  So it’s a big day, which I’ll tell you about after it’s happened. Stay tuned!

Tuesday January 21

PsiQuantum’s Jeremy O’Brien and I did the Davos quantum computing panel this morning (moderated by Jennifer Schenker). You can watch our 45-minute panel here. For regular readers of this blog, the territory will be familiar, but I dunno, I hope someone enjoys it anyway!

I’m now in the Congress Hall, in a seat near the front, waiting for Trump to arrive. I will listen to the President of the United States and not attract the Secret Service’s attention by loudly booing, but I have no intention to stand or applaud either.

Alas, getting a seat at Greta’s talk is looking like it will be difficult or impossible.

I was struck by the long runup to Trump’s address: the President of Switzerland gave a searing speech about the existential threats of climate change and ecosystem destruction, and “the politicians in many nations who appeal to fear and bigotry”—never mentioning Trump by name but making clear that she despised the entire ideology of the man people had come to hear. I thought it was a nice touch. Then some technicians spent 15 minutes adjusting Trump’s podium, then nothing happened for 20 minutes as we all waited for a tardy Trump, then some traditional Swiss singers did a performance on stage (!), and finally Klaus Schwab, director of the WEF, gave Trump a brief and coldly cordial introduction, joking about the weather in Davos.

And … now Trump is finally speaking. Once he starts, I suddenly realize that I have no idea what new insight I expected from this. He’s giving his standard stump speech, America has regained its footing after the disaster of the previous administration, winning like it’s never won before, unemployment is the lowest in recorded history, blah blah blah. I estimate that less than half of the audience applauded Trump’s entrance; the rest sat in stony silence. Meanwhile, some people were passing out flyers to the audience documenting all the egregious errors in Trump’s economic statistics.

Given the small and childish nature of the remarks (“we’re the best! ain’t no one gonna push us around!”), it feels somehow right to be looking down at my phone, blogging, rather than giving my undivided attention to the President of the United States speaking 75 feet in front of me.

Ok, I admit I just looked up, when Trump mentioned America’s commitment to developing new technologies like “5G and quantum computing” (he slowly drew out the word “quantum”).

His whole delivery is strangely lethargic, as if he didn’t sleep well last night (I didn’t either).

Trump announced that the US would be joining the WEF’s “1 trillion trees” environmental initiative, garnering the only applause in his speech. But he then immediately pivoted to a denunciation of the “doomsayers and pessimists and socialists who want to control our lives and take away our liberty” (he presumably meant people worried about climate change).

Now, I kid you not, Trump is expanding on his “optimism” theme by going on and on about the architectural achievements of Renaissance Florence.

You can watch Trump’s speech for yourself here.

While I wasn’t able to get in to see Greta Thunberg in person, you can watch her (along with others) here. I learned that her name is pronounced “toon-berg.”

Having now listened to Greta’s remarks, I confess that I disagree with the content of what she says.  She explicitly advocates a sort of purity-based carbon absolutism—demanding that companies and governments immediately implement, not merely net zero emissions (i.e. offsetting their emissions by paying to plant trees and so forth), but zero emissions period.  Since she can’t possibly mean literally zero, I’ll interpret her to mean close to zero.  Even so, it seems to me that the resulting economic upheavals would provoke a massive backlash against whoever tried to enforce such a policy.  Greta also dismisses the idea of technological solutions to climate change, saying that we don’t have time to invent such solutions.  But of course, some of the solutions already exist—a prime example being nuclear power.  And if we no longer have time to nuclearize the world, then to a great extent, that’s the fault of the antinuclear activists—an unbelievable moral and strategic failure that may have doomed our civilization, and for which there’s never been a reckoning.

Despite all my disagreements, if Greta’s strident, uncompromising rhetoric helps push the world toward cutting emissions, then she’ll have to be counted as one of the greatest people who ever lived. Of course, another possibility is that the world’s leaders will applaud her and celebrate her moral courage, while not taking anything beyond token actions.

Wednesday January 22

Alas, I’ve come down with a nasty cold (is there any other kind?).  So I’m paring back my participation in the rest of Davos to the stuff that really interests me.  The good news is that my quantum computing sessions are already finished!

This morning, as I sat in the lobby of the Congress Centre checking my email and blowing my nose, I noticed some guy playing a cello nearby.  Dozens were gathered around him — so many that I could barely see the guy, only hear the music.  After he was finished, I worked up the courage to ask someone what the fuss was about.  Turns out that the guy was Yo-Yo Ma.

The Prince Regent of Liechtenstein was explaining to one of my quantum computing colleagues that Liechtenstein does not have much in the way of quantum.

Speaking of princes, I’m now at a cybersecurity session with Shafi Goldwasser and others, at which the attendance might be slightly depressed because it’s up against Prince Charles. That’s right: Davos is the conference where the heir apparent to the British throne speaks in a parallel session.

I’ve realized these past few days that I’m not very good at schmoozing with powerful people.  On the other hand, it’s possible that my being bad at it is a sort of mental defense mechanism.  The issue is that, the more I became a powerful “thought leader” who unironically used phrases like “Fourth Industrial Revolution” or “disruptive innovation,” the more I used business cards and LinkedIn to expand my network of contacts or checked my social media metrics … well, the less I’d be able to do the research that led to stuff like being invited here in the first place.  I imagine that many Davos regulars started out as nerds like me, and that today, coming to Davos to talk about “disruptive innovation” is a fun kind of semi-retirement.  If so, though, I’m not ready to retire just yet!  I still want to do things that are new enough that they don’t need to be described using multiple synonyms for newness.

Apparently one of the hottest tickets at Davos is a post-Forum Shabbat dinner, which used to be frequented by Shimon Peres, Elie Wiesel, etc.  Alas, not having known about it, I already planned my travel in a way that won’t let me attend it.  I feel a little like the guy in this Onion article.

I had signed up for a session entitled What’s At Stake: The Arctic, featuring Al Gore. As I waited for them to start letting people in, I suddenly realized that Al Gore was standing right next to me. However, he was engrossed in conversation with a young woman, and even though I assumed she was just some random fan like I was, I didn’t work up the courage to interrupt them. Only once the panel had started, with the woman on it two seats from Gore, did I realize that she was Sanna Marin, the new Prime Minister of Finland (and at 34, the world’s second-youngest head of government).

You can watch the panel here. Briefly, the Arctic has lost about half of its ice cover, not merely since preindustrial times but since a few decades ago. And this is not only a problem for polar bears. It’s increasing the earth’s absorption of sunlight and hence significantly accelerating global warming, and it’s also screwing up weather patterns all across the northern hemisphere. Of course, the Siberian permafrost is also thawing and releasing greenhouse gases that are even worse than CO2, further accelerating the wonderful feedback loop of doom.

I thought that Gore gave a masterful performance. He was in total command of the facts—discoursing clearly and at length on the relative roles of CO2, SO2, and methane in the permafrost as well as the economics of oil extraction, less in the manner of a thundering (or ‘thunberging’?) prophet than in the manner of an academic savoring all the non-obvious twists as he explains something to a colleague—and his every response to the other panelists was completely on point.

In 2000, there was indeed a bifurcation of the universe, and we ended up in a freakishly horrible branch. Instead of something close to the best, most fact-driven US president one could conjure in one’s mind, we got something close to the worst, and then, after an 8-year interregnum just to lull us into complacency, we got something even worse than the worst.

The other panelists were good too. Gail Whiteman (the scientist) had the annoying tic of starting sentence after sentence with “the science says…,” but then did a good job of summarizing what the science does say about the melting of the Arctic and the permafrost.

Alas, rather than trying to talk to Gore, immediately after the session ended, I headed back to my hotel to go to sleep. Why? Partly because of my cold. But partly also because of an incident immediately before the panel. I was sitting in the front row, next to an empty seat, when a woman who wanted to occupy that seat hissed at me that I was “manspreading.”

If, on these narrow seats packed so tightly together that they were basically a bench, my left leg had strayed an inch over the line, she could’ve addressed the situation differently: for example, “oh hello, may I sit here?” (At which point I would’ve immediately squeezed in.) Amazingly, the woman didn’t seem to care that a different woman, the one to my right, kept her pocketbook and other items on the seat next to her throughout the panel, preventing anyone else from using the seat in what was otherwise a packed house. (Is that “womanspreading”?)

Anyway, the effect of her comment was to transform the way I related to the panel. I looked around at the audience and thought: “these activists, who came to hear a panel on climate change, are fighting for a better world. And in their minds, one of the main ways that the world will be better is that it won’t contain sexist, entitled ‘manspreaders’ like me.”

In case any SneerClubbers are reading, I should clarify that I recognize an element of the irrational in these thoughts. I’m simply reporting, truthfully, that they’re what bubbled up outside the arena of conscious control. But furthermore, I feel like the fact that my brain works this way might give me some insight into the psychology of Trump support that few Democrats share—so much that I wonder if I could provide useful service as a Democratic political consultant!

I understand the mindset that howls: “better that every tree burn to the ground, every fish get trawled from the ocean, every coastal city get flooded out of existence, than that these sanctimonious hypocrites ‘on the right side of history,’ singing of their own universal compassion even as they build a utopia with no place for me in it, should get to enjoy even a second of smug self-satisfaction.” I hasten to add that I’ve learned how to override that mindset with a broader, better mindset: I can jump into the abyss, but I can also climb back out, and I can even look down at the abyss from above and report what’s there. It’s as if I’d captured some virulent strain of Ebola in a microbiology lab of the soul. And if nearly half of American voters (more in crucial swing states) have gotten infected with that Ebola strain, then maybe my lab work could have some broader interest.

I thought about Scott Minerd, the investor on the panel, who became a punching bag for the other panelists (except for Gore, a politician in a good sense, who went out of his way to find points of agreement). In his clumsy way, Minerd was making the same point that climate activists themselves correctly make: namely, that the oil companies need to be incentivized (for example, through a carbon tax) to leave reserves in the ground, that we can’t just trust them to do the noble thing and write off their own assets. But for some reason, Minerd presented himself as a greedy fat-cat, raining on the dreams of the hippies all around him for a carbon-free future, so then that’s how the other panelists duly treated him (except, again, for Gore).

But I looked at the audience, which was cheering attacks on Minerd, and the Ebola in my internal microbiology lab said: “the way these activists see Scott Minerd is not far from how they see Scott Aaronson. You’ll never be good enough for them. The people in this room might or might not succeed at saving the world, but in any case they don’t want your help.”

After all, what was the pinnacle of my contribution to saving the world? It was surely when I was 19, and created a website to defend the practice of NaderTrading (i.e., Ralph Nader supporters in swing states voting for Al Gore, while Gore supporters in safe states pledged to vote Nader on their behalf). Alas, we failed. We did help arrange a few thousand swaps, including a few hundred swaps in Florida, but it was 537 too few. We did too little, too late.

So what would I have talked to Gore about, anyway? Would I have reminded him of the central tragedy of his life, which was also a central tragedy of recent American history, just in order to babble, or brag, about a NaderTrading website that I made half a lifetime ago? Would I have made up a post-hoc rationalization for why I work on quantum computing, like that I hope it will lead to the discovery of new carbon-capture methods? Immediately after Gore’s eloquent brief for the survival of the Arctic and all life on earth, would I have asked him for an autograph or a selfie? No, better to just reflect on his words. At a crucial pivot point in history, Gore failed by a mere 537 votes, and I also failed to prevent the failure. But amazingly, Gore never gave up—he just kept on fighting for what he knew civilization needed to do—and yesterday I sat a few feet away while he explained why the rest of us shouldn’t give up either. And he’s right about this—if not in the sense of the outlook being especially hopeful or encouraging right now, then surely in the sense of which attitude is the useful one to adopt. And my attitude, which you might call “Many-Worlds-inflected despair,” might be epistemically sound but it definitely wasn’t useful. What further clarifications did I need?

Thursday January 23

I attended a panel discussion on quantum computing hosted by IBM. The participants were Thomas Friedman (the New York Times columnist), Arvind Krishna (a senior Vice President at IBM), Raoul Klingner (director of a European research organization), and Alison Snyder (the managing editor of the news site Axios). There were about 100 people in the audience, more than at all of my Davos quantum computing sessions combined. I sat right in front, although I don’t think anyone on the panel recognized me.

Ginni Rometty, the CEO of IBM, gave an introduction. She said that quantum will change the world by speeding up supply-chain and other optimization problems. I assume she was talking about the Grover speedup? She also said that IBM is committed to delivering value for its customers, rather than “things you can do in two seconds that are not commercially valid” (I assume she meant Google’s supremacy experiment). She asked for a show of hands of who knows absolutely nothing about the science behind quantum computing. She then quipped, “well, that’s all of you!” She may have missed two hands that hadn’t gone up (both belonging to the same person).

I accepted an invitation to this session firstly for the free lunch (which turned out to be delicious), and secondly because I was truly, genuinely curious to hear what Thomas Friedman, many of whose columns I’ve liked, had to teach me about quantum computing. The answer turns out to be this: in his travels around the world over the past 6 years, Friedman has witnessed firsthand how the old dichotomy between right-wing parties and left-wing parties is breaking down everywhere (I assume he means, as both sides get taken over by populist movements?). And this is just like how a qubit breaks down the binary dichotomy between 0’s and 1’s! Also, the way a quantum computer can be in multiple states at once, is like how the US now has to be in multiple states at once in its relationship with China.

Friedman opened his remarks by joking about how he never took a single physics course, and had no idea why he was on a quantum computing panel at all. He quickly added, though, that he toured IBM’s QC labs, where he found IBM’s leaders to be wonderful explainers of what it all means.

I’ll note that Friedman, the politics and Middle East affairs writer — not the two panelists serving the role of quantum experts — was the only one who mentioned, even in passing, the idea that the advantage of QCs depends on something called “constructive interference.”

Krishna, the IBM Vice President, explained why IBM rejects the entire concept of “quantum supremacy”: because it’s an irrelevant curiosity, and creating value for customers in the marketplace (for example by solving their supply-chain optimization problems) is the only test that matters. No one on the panel expressed a contrary view.

Later, Krishna explained why quantum computers will never replace classical computers: because if you stored your bank balance on a quantum computer, one day you’d have $1, the next day $1000, the day after that $1 again, and so forth! He explained how, whereas current supercomputers use as much energy as it takes to power all of Davos to train machine learning models, quantum computers would use less than the energy needed to power a single house. New algorithms do need to be designed to run neural networks quantumly, but fortunately that’s all being done as we speak.

I got the feeling that the businesspeople who came to this session felt like they got a lot more out of it than the businesspeople who came to my and Jeremy O’Brien’s session felt like they got out of ours. After all, this session got across some big real-world takeaways—e.g., that if you don’t quantum, your business will be left in the dust, stuck with a single value at a time rather than exploring all values in parallel, and IBM can help you rather than your competitors win the quantum race. It didn’t muddy the message with all the incomprehensible technicalities about how QCs only give exponential speedups for problems with special structure.

Later Update:

Tonight I went to a Davos reception hosted by the government of Canada 🇨🇦. I’m not sure why exactly they invited me, although I have of course enjoyed a couple years of life “up north” (well, in Waterloo, so actually further south than a decent chunk of the US … you see that I do have a tiny speck of Canadian in me?).

I didn’t recognize a single person at the reception. So I just ate the food, drank beer, and answered emails. But then a few people did introduce themselves (two who recognized me, one who didn’t). As they gathered around, they started asking me questions about quantum computing: is it true that QCs could crack the classically impossible Traveling Salesman Problem? That they try all possible answers in parallel? Are they going to go commercial in 2-5 years, or have they already?

It might have been the beer, but for some reason I decided to launch an all-out assault of truth bombs, one after the next, with what they might have considered a somewhat emotional delivery.

OK fine, it wasn’t the beer. That’s just who I am.

And then, improbably, I was a sort of localized “life of the party” — although possibly for the amusement / novelty value of my rant more than for the manifest truth of my assertions. One person afterward told me that it was by far the most useful conversation he’d had at Davos.

And I replied: I’m flattered by your surely inflated praise, but in truth I should also thank you. You caught me at a moment when I’d been thinking to myself that, if only I could make one or two people’s eyes light up with comprehension about the fallacy of a QC simply trying all possible answers in parallel and then magically picking the best one, or about the central role of amplitudes and interference, or about the “merely” quadratic nature of the Grover speedup, or about the specialized nature of the most dramatic known applications for QCs, or about the gap between where the experimentalists are now and what’s needed for error correction and hence true scalability, or about the fact that “quantum supremacy” is obviously not a sufficient condition for a QC to be useful, but it’s equally obviously a necessary condition, or about the fact that doing something “practical” with a QC is of very little interest unless the task in question is actually harder for classical computers, which is a question of great subtlety … I say, if I could make only two or four eyes light up with comprehension of these things, then on that basis alone I could declare that the whole trip to Davos was worth it.
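To make the interference point concrete, here’s a toy simulation (my after-the-fact illustration with made-up parameters, not anything I showed at the reception). Measuring the uniform superposition over N candidate answers, which is all the naive “try everything in parallel” picture gets you, finds the marked one with probability only 1/N; Grover’s iteration uses interference to pump that probability up to near 1 after about \( (\pi/4)\sqrt{N} \) steps, which is exactly the “merely” quadratic speedup:

    import numpy as np

    N = 1024        # number of candidate answers (10 qubits)
    marked = 123    # the one "good" answer the oracle recognizes

    amps = np.full(N, 1 / np.sqrt(N))    # uniform superposition
    print(amps[marked] ** 2)             # ~0.001 = 1/N: "parallelism" alone

    steps = int(np.pi / 4 * np.sqrt(N))  # ~25 iterations, vs ~N/2 classical guesses
    for _ in range(steps):
        amps[marked] *= -1               # oracle: flip the marked amplitude
        amps = 2 * amps.mean() - amps    # diffusion: invert about the mean
    print(amps[marked] ** 2)             # ~0.999: constructive interference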

And then one of the people hugged me … and that was the coolest thing that happened to me today.

Friday January 24

I attended a second session with Al Gore, about the problem of the world filling up with plastic. I learned that the world’s plastic waste is set to double over the next 15-20 years, and that a superb solution—indeed, it seems like a crime that it hasn’t been implemented already—would be to set up garbage booms at the mouths of the few major rivers from which something like 80% of the ocean’s plastic waste originates.

Anyway, still didn’t introduce myself.

I wrote before about how surprisingly clear and logical the trains to Davos were, even with multiple changes. Unfortunately, God’s mercy on me didn’t last. All week, I kept getting lost in warren-like buildings with dozens of “secret passageways” (often literally behind unmarked doors) and few signs—not even exit signs. In one case I missed a tram that was the only way out from somewhere, because I arrived at the wrong side of the tram—and getting to the right side required entering a building and navigating another unmarked labyrinth, by which point the tram had already left. In another case, I wandered through a Davos hotel for almost an hour trying to find an exit, ricocheting like a pinball off person after person giving me conflicting directions, until I literally started ranting to a crowd: “holy f-ck, is this place some psychological torture labyrinth designed by Franz Kafka? Am I the only one? Is it clear to all of you? Please, WHERE IS THE F-CKING EXIT???” Finally some local took pity and walked me through the maze. As I mentioned earlier, logistical issues like these made me about 5,000 times more anxious on this trip than the prospect of giving quantum computing talks to the world’s captains of industry. I don’t recall having had a nightmare about lecturing even once—but I’ve had never-ending nightmares about failing to show up to give a lecture because I’m wandering endlessly through an airport or a research center or whatever, always the only one who’s lost.

An alternative argument for why women leave STEM: Guest post by Karen Morenz

Thursday, January 16th, 2020

Scott’s preface: Imagine that every time you turned your blog over to a certain topic, you got denounced on Twitter and Reddit as a privileged douchebro, entitled STEMlord, counterrevolutionary bourgeoisie, etc. etc. The sane response would simply be to quit blogging about that topic. But there’s also an insane (or masochistic?) response: the response that says, “but if everyone like me stopped talking, we’d cede the field by default to the loudest, angriest voices on all sides—thereby giving those voices exactly what they wanted. To hell with that!”

A few weeks ago, while I was being attacked for sharing Steven Pinker’s guest post about NIPS vs. NeurIPS, I received a beautiful message of support from a PhD student in physical chemistry and quantum computing named Karen Morenz. Besides her strong words of encouragement, Karen wanted to share with me an essay she had written on Medium about why too many women leave STEM.

Karen’s essay, I found, marshaled data, logic, and her own experience in support of an insight that strikes me as true and important and underappreciated—one that dovetails with what I’ve heard from many other women in STEM fields, including my wife Dana. So I asked Karen for permission to reprint her essay on this blog, and she graciously agreed.

Briefly: anyone with a brain and a soul wants there to be many more women in STEM. Karen outlines a realistic way to achieve this shared goal. Crucially, Karen’s way is not about shaming male STEM nerds for their deep-seated misogyny, their arrogant mansplaining, or their gross, creepy, predatory sexual desires. Yes, you can go the shaming route (God knows it’s being tried). If you do, you’ll probably snare many guys who really do deserve to be shamed as creeps or misogynists, along with many more who don’t. Yet for all your efforts, Karen predicts, you’ll no more solve the original problem of too few women in STEM, than arresting the kulaks solved the problem of lifting the masses out of poverty.

For you still won’t have made a dent in the real issue: namely that, the way we’ve set things up, pursuing an academic STEM career demands fanatical devotion, to the exclusion of nearly everything else in life, between the ages of roughly 18 and 35. And as long as that’s true, Karen says, the majority of talented women are going to look at academic STEM, in light of all the other great options available to them, and say “no thanks.” Solving this problem might look like more money for maternity leave and childcare. It might also look like re-imagining the academic career trajectory itself, to make it easier to rejoin it after five or ten years away. Way back in 2006, I tried to make this point in a blog post called Nerdify the world, and the women will follow. I’m grateful to Karen for making it more cogently than I did.

Without further ado, here’s Karen’s essay. –SA

Is it really just sexism? An alternative argument for why women leave STEM

by Karen Morenz

Everyone knows that you’re not supposed to start your argument with ‘everyone knows,’ but in this case, I think we ought to make an exception:

Everyone knows that STEM (Science, Technology, Engineering and Mathematics) has a problem retaining women (see, for example, Jean, Payne, and Thompson 2015). We pour money into attracting girls and women to STEM fields. We pour money into recruiting women, training women, and addressing sexism, both overt and subconscious. In 2011, the United States spent nearly $3 billion in tax dollars on STEM education, of which roughly one third was spent supporting and encouraging underrepresented groups to enter STEM (including women). And yet, women are still leaving at alarming rates.

Alarming? Isn’t that a little, I don’t know, alarmist? Well, let’s look at some stats.

A recent report by the National Science Foundation (2011) found that women received 20.3% of the bachelor’s degrees and 18.6% of the PhD degrees in physics in 2008. In chemistry, women earned 49.95% of the bachelor’s degrees but only 36.1% of the doctoral degrees. By comparison, in biology women received 59.8% of the bachelor’s degrees and 50.6% of the doctoral degrees. A recent article in Chemical and Engineering News showed a chart based on a survey of life sciences workers by Liftstream and MassBio demonstrating how women are vastly underrepresented in science leadership despite earning degrees at similar rates, which I’ve copied below. The story is the same in academia, as you can see on the second chart — from comparable or even larger numbers of women at the student level, we move towards a significantly larger proportion of men at the more and more advanced stages of an academic career.

Although 74% of women in STEM report “loving their work,” over half (56%, in fact) leave over the course of their career — largely at the “mid-level” point, when the loss of their talent is most costly, as they have just completed training and begun to contribute maximally to the workforce.

A study by Dr. Flaherty found that women who obtain a faculty position in astronomy spent on average 1 year less than their male counterparts between completing their PhD and obtaining their position — but he concluded that this is because women leave the field at a rate 3 to 4 times greater than men, and in particular, if they do not obtain a faculty position quickly, will simply move to another career. So, women and men are hired at about the same rate during the early years of their post docs, but women stop applying to academic positions and drop out of the field as time goes on, pulling down the average time to hiring for women.

There are many more studies to this effect. At this point, the assertion that women leave STEM at an alarming rate after obtaining PhDs is nothing short of an established fact. In fact, it’s actually a problem across all academic disciplines, as you can see in this matching chart showing the same phenomenon in humanities, social sciences, and education. The phenomenon has been affectionately dubbed the “leaky pipeline.”

But hang on a second, maybe there just aren’t enough women qualified for the top levels of STEM? Maybe it’ll all get better in a few years if we just wait around doing nothing?

Nope, sorry. This study says that 41% of highly qualified STEM people are female. And also, it’s clear from the previous charts and stats that a significantly larger number of women are getting PhDs than going on to be professors, in comparison to their male counterparts. Dr. Laurie Glimcher, when she started her professorship at Harvard University in the early 1980s, remembers seeing very few women in leadership positions. “I thought, ‘Oh, this is really going to change dramatically,’ ” she says. But 30 years later, “it’s not where I expected it to be.” Her experiences are similar to those of other leading female faculty.

So what gives? Why are all the STEM women leaving?

It is widely believed that sexism is the leading problem. A quick google search of “sexism in STEM” will turn up a veritable cornucopia of articles to that effect. And indeed, around 60% of women report experiencing some form of sexism in the last year (Robnett 2016). So, that’s clearly not good.

And yet, if you ask leading women researchers like Nobel Laureate in Physics 2018, Professor Donna Strickland, or Canada Research Chair in Advanced Functional Materials (Chemistry), Professor Eugenia Kumacheva, they say that sexism was not a barrier in their careers. Moreover, extensive research has shown that sexism has overall decreased since Professors Strickland and Kumacheva (for example) were starting their careers. Even more interestingly, Dr. Rachael Robnett showed that more mathematical fields such as Physics have a greater problem with sexism than less mathematical fields, such as Chemistry, a finding which rings true with the subjective experience of many women I know in Chemistry and Physics. However, as we saw above, women leave the field of Chemistry in greater proportions following their BSc than they leave Physics. On top of that, although 22% of women report experiencing sexual harassment at work, the proportion is the same among STEM and non-STEM careers, and yet women leave STEM careers at a much higher rate than non-STEM careers.

So, it seems that sexism cannot fully explain why women with STEM PhDs are leaving STEM. At the point when women have earned a PhD, for the most part they have already survived the worst of the sexism. They’ve already proven themselves to be generally thick-skinned and, as anyone with a PhD can attest, very stubborn in the face of overwhelming difficulties. Sexism is frustrating, and it can limit advancement, but it doesn’t fully explain why we have so many women obtaining PhDs in STEM, and then leaving. In fact, at least in the U of T chemistry department, faculty hires are directly proportional to the applicant pool — although the exact numbers of applicants are not made public, from public information we can see that approximately one in four interview invitees are women, and approximately one in four hires are women. Our hiring committees have received bias training, and it seems that it has been largely successful. That’s not to say that we’re done, but it’s time to start looking elsewhere to explain why there are so few women sticking around.

So why don’t more women apply?

Well, one truly brilliant researcher had the groundbreaking idea of asking women why they left the field. When you ask women why they left, the number one reason they cite is balancing work/life responsibilities — which as far as I can tell is a euphemism for family concerns.

The research is in on this. Women who stay in academia expect to marry later, and delay or completely forego having children, and if they do have children, plan to have fewer than their non-STEM counterparts (Sassler et al. 2016; Owens 2012). Men in STEM have no such difference compared to their non-STEM counterparts; they marry and have children at about the same ages and rates as their non-STEM counterparts (Sassler et al. 2016). Women leave STEM in droves in their early to mid thirties (Funk and Parker 2018) — the time when women’s fertility begins to decrease, and risks of childbirth complications begin to skyrocket for both mother and child. Men don’t see an effect on their fertility until their mid forties. Of the 56% of women who leave STEM, 50% wind up self-employed or using their training in a not-for-profit or government, 30% leave for a non-STEM, more ‘family friendly’ career, and 20% leave to be stay-at-home moms (Ashcraft and Blithe 2002). Meanwhile, institutions with better childcare and maternity leave policies have twice(!) the number of female faculty in STEM (Troeger 2018). In analogy to the affectionately named “leaky pipeline,” the challenge of balancing motherhood and career has been dubbed the “maternal wall.”

To understand the so-called maternal wall better, let’s take a quick look at the sketch of a typical academic career.

For the sake of this exercise, let’s all pretend to be me. I’m a talented 25-year-old PhD candidate studying Physical Chemistry — I use laser spectroscopy to try to understand atypical energy transfer processes in innovative materials that I hope will one day be used to make vastly more efficient solar panels. I got my BSc in Chemistry and Mathematics at the age of 22, and have published 4 scientific papers in two different fields already (Astrophysics and Environmental Chemistry). I’ve got a big scholarship, and a lot of people supporting me to give me the best shot at an academic career — a career I dearly want. But, I also want a family — maybe two or three kids. Here’s what I can expect if I pursue an academic career:

With any luck, 2–3 years from now I’ll graduate with a PhD, at the age of 27. Academics are expected to travel a lot, and to move a lot, especially in their 20s and early 30s — all of the key childbearing years. I’m planning to go on exchange next year, and then the year after that I’ll need to work hard to wrap up research, write a thesis, and travel to several conferences to showcase my work. After I finish my PhD, I’ll need to undertake one or two postdoctoral fellowships, lasting one or two years each, probably in completely different places. During that time, I’ll start to apply for professorships. In order to do this, I’ll travel around to conferences to advertise my work and to meet important leaders in my field, and then, if I am invited for interviews, I’ll travel around to different universities for two or three days at a time to undertake these interviews. This usually occurs in a person’s early 30s — our helpful astronomy guy, Dr. Flaherty, found the average time to hiring was 5 years, so let’s say I’m 32 at this point. If offered a position, I’ll spend the next year or two renovating and building a lab, buying equipment, recruiting talented graduate students, and designing and teaching courses. People work really, really hard during this time and have essentially no leisure time. Now I’m 34. Usually within 5 years I’ll need to apply for tenure. This means that by the time I’m 36, I’ll need to be making significant contributions in my field, and then in the final year before applying for tenure, I will once more need to travel to many conferences to promote my work, in order to secure tenure — if I fail to do so, my position at the university will probably be terminated. Although many universities offer a “tenure extension” in cases where an assistant professor has had a child, this does not solve all of the problems. Taking a year off during that critical 5 or 6 year period often means that the research “goes bad” — students flounder, projects that were promising get “scooped” by competitors at other institutions, and sometimes, in biology and chemistry especially, experiments literally go bad. You wind up needing to rebuild much more than just a year’s worth of effort.

At no point during this time do I appear stable enough, career-wise, to take even six months off to be pregnant and care for a newborn. Hypothetical future-me is travelling around, or even moving, conducting and promoting my own independent research and training students. As you’re likely aware, very pregnant people and newborns don’t travel well. And academia has a very individualistic and meritocratic culture. Starting at the graduate level, huge emphasis is placed on independent research and independent contributions, rather than on team efforts. This feature of academia is both a blessing and a curse. The individualistic culture means that people have the independence and the freedom to pursue whatever research interests them — in fact this is the main draw for me personally. But it also means that there is often no one to fall back on when you need extra support, and because of biological constraints, this winds up impacting women more than men.

At this point, I need to make sure that you’re aware of some basics of female reproductive biology. According to Wikipedia, the unquestionable source of all reliable knowledge, at age 25, my risk of conceiving a baby with chromosomal abnormalities (including Down syndrome) is about 1 in 1400. By 35, that risk more than quadruples to 1 in 340. At 30, I have a 75% chance of a successful birth in one year, but by 35 it has dropped to 66%, and by 40 it’s down to 44%. Meanwhile, 87 to 94% of women report at least one health problem immediately after birth, 1.5% of mothers have a severe health problem, and 31% have long-term persistent health problems as a result of pregnancy (defined as lasting more than six months after delivery). Furthermore, mothers over the age of 35 are at higher risk for pregnancy complications like preterm delivery, hypertension, superimposed preeclampsia, and severe preeclampsia (Cavazos-Rehg et al. 2016). Because of factors like these, pregnancies in women over 35 are known as “geriatric pregnancies,” due to the drastically increased risk of complications. This tight timeline for births is often called the “biological clock” — if women want a family, they basically need to start before 35. Now, that’s not to say it’s impossible to have a child later on, and in fact some studies show that it has positive impacts on the child’s mental health. But it is riskier.

So, women with a PhD in STEM know that they have the capability to make interesting contributions to STEM, and to make plenty of money doing it. They usually marry someone who also has, or expects to have, a high salary. But this isn’t the only consideration. Such highly educated women are usually aware of the biological clock and the risks associated with pregnancy, and are confident in their understanding of statistical risks.

The Irish say, “The common challenge facing young women is achieving a satisfactory work-life balance, especially when children are small. From a career perspective, this period of parenthood (which after all is relatively short compared to an entire working life) tends to coincide exactly with the critical point at which an individual’s career may or may not take off. […] All the evidence shows that it is at this point that women either drop out of the workforce altogether, switch to part-time working or move to more family-friendly jobs, which may be less demanding and which do not always utilise their full skillset.”

And in the Netherlands, “The research project in Tilburg also showed that women academics have more often no children or fewer children than women outside academia.” Meanwhile, in Italy: “On a personal level, the data show that for a significant number of women there is a trade-off between family and work: a large share of female economists in Italy do not live with a partner and do not have children.”

Most jobs available to women with STEM PhDs offer greater stability and a larger salary earlier in the career. Moreover, most non-academic careers place less emphasis on independent research: employees usually work within the scope of a larger team, so if a person has to take some time off, there are others who can help cover their workload. By and large, women leave for a career where they will be stable, well funded, and well supported, even if it doesn’t fulfill their passion for STEM — or they leave to be stay-at-home moms or self-employed.

I would presume that if we made academia a more feasible place for a woman with a family to work, we could keep almost all of the 20% of leavers who leave to stay at home, almost all of the 50% who leave for self-employment or nonprofit or government work, and all of the 30% who leave for more family-friendly careers (after all, if academia were made as family-friendly as other careers, there would be no incentive to leave). Of course, there is nothing wrong with being a stay-at-home parent — it’s an admirable choice and contributes greatly to our society. One estimate valued the equivalent salary benefit of stay-at-home parenthood at about $160,000/year. Moreover, children with a stay-at-home parent show long-term benefits such as better school performance — something that most academic women would want for their children. But a lot of people only choose it out of necessity — about half of stay-at-home moms would prefer to be working (Ciciolla, Curlee, & Luthar 2017). When your salary is barely more than the cost of daycare, a lot of people wind up giving up and staying home with their kids rather than paying for daycare. In a heterosexual couple it will usually be the woman who winds up staying home, since she is the one who needs to do things like breastfeed anyway. And so we lose these women from the workforce.

And yet, somehow, during this informal research adventure of mine, most scholars and policy makers seem to be advising that we try to encourage young girls to be interested in STEM, and that we address sexism in the workplace, with the implication that this will fix the high attrition rate among women in STEM. But from what I’ve found, the stats don’t back up sexism as the main reason women leave. There is sexism, and that is a problem, and women do leave STEM because of it — but it’s a problem that we’re already dealing with pretty successfully, and it’s not why the majority of women who have already obtained STEM PhDs opt to leave the field. The whole family planning thing is huge and, for some reason, almost totally swept under the rug — mostly because we’re too shy to talk about it, I think.

In fact, I think that the plethora of articles suggesting that the problem is sexism actually contributes to our unwillingness to talk about the family planning problem, because it reinforces the perception that men in power will not hire a woman for fear that she’ll get pregnant and take time off. Why would anyone talk about how they want to have a family when they keep hearing that even the mere suggestion of such a thing will limit their chances of being hired? I personally know women who have avoided bringing up the topic with colleagues or supervisors for fear of professional repercussions. So we spend all this time and energy talking about how sexism is really bad, and very little time trying to address the family planning challenge, because, I guess, as the stats show, if women are serious enough about science then they just give up on the family (except for the really, really exceptional ones who can handle the stresses of both simultaneously).

To be very clear, I’m not saying that sexism is not a problem. What I am saying is that, thanks to the sustained efforts of a large number of people over a long period of time, we’ve reduced the sexism problem to the point where, at least at the graduate level, it is no longer the largest barrier to women’s advancement in STEM. Hurray! That does not mean that we should stop paying attention to the issue of sexism, but it does mean that it’s time to start paying more attention to other issues, like how to properly support women who want to raise a family while also maintaining a career in STEM.

So what can we do to better support STEM women who want families?

A couple of solutions have been tentatively tested. From a study mentioned above, it’s clear that providing free and conveniently located childcare, alongside extended and paid maternity leave, makes a colossal difference to women’s choices of whether or not to stay in STEM. Another popular and successful strategy was implemented by a leading woman in STEM, Laurie Glimcher, a former Harvard Professor of Immunology and now CEO of the Dana-Farber Cancer Institute. While working at NIH, Dr. Glimcher designed a program to provide primary caregivers (usually women) with an assistant or lab technician to help manage their laboratories while they cared for children. Now, at Dana-Farber, she has created a similar program to pay for a technician or postdoctoral researcher for assistant professors. In the academic setting, Dr. Glimcher’s strategies are key to alleviating the challenges associated with the individualistic culture of academia without compromising women’s research and leadership potential.

For me personally, I’m in the ideal situation for an academic woman. I graduated from my BSc with high honours in four years, and with many awards. I’ve already had success in research and have published several peer-reviewed papers. I’ve faced some mild sexism from peers and a couple of TAs, but nothing that’s seriously held me back. My supervisors have all been extremely supportive and feminist, and all of the people that I work with on a daily basis are equally wonderful. Despite all of this support, I’m looking at the timelines of an academic career, and the time constraints of female reproduction, and honestly, I don’t see how I can feasibly expect to stay in academia and have the family life I want. And since I’m in the privileged position of being surrounded by supportive and feminist colleagues, I can say it: I’m considering leaving academia, if something doesn’t change, because even though I love it, I don’t see how it can fit into my family plans.

But wait! All of these interventions are really expensive. Money doesn’t just grow on trees, you know!

It doesn’t in general, but in this case it kind of does — well, actually, we already grew it. We spend billions of dollars training women in STEM. By not making full use of their skills, if we look only at the American economy, we are wasting about $1.5 billion per year in economic benefits that these women would have produced if they had stayed in STEM. So here’s a business proposal: let’s spend half of that on better family support and scientific assistants for primary caregivers, and keep the other half as profit. Heck, let’s spend 99% — $1.485 billion (in the States alone) on better support. That should put a dent in the support bill, and I’d sure pick up the leftover $15 million if I saw it lying around. Wouldn’t you?

By demonstrating that we will support women in STEM who choose to have a family, we will encourage more women with PhDs to apply for the academic positions that they are eminently qualified for. Our institutions will benefit from the wider applicant pool, and our whole society will benefit from having the skills of these highly trained and intelligent women put to use innovating new solutions to our modern-day challenges.

NIPS vs. NeurIPS: guest post by Steven Pinker

Monday, December 23rd, 2019

Scott’s Update (Dec. 26): Comments on this post are now closed, since I felt that whatever progress could be made, had been, and I wanted to move on to more interesting topics. Thanks so much to everyone who came here to hash things out in good faith—which, as far as I’m concerned, included the majority of the participants on both sides.

If you want to see the position paper that led to the name change movement, see What’s In A Name? The Need to Nip NIPS, by Daniela Witten, Elana Fertig, Anima Anandkumar, and Jeff Dean. I apologize for not linking to this paper in the original post.

To recap what I said many times in this post and the comments: I myself am totally fine with the name NeurIPS. I think several of the arguments for changing the name were good arguments—and I thank some of the commenters on this post for elucidating those arguments without shaming anybody or calling them names. In any case the decision is done, and it belongs to the ML community, not to me and not to Steven Pinker.

The one part that I’m against is the bullying of anyone who disagrees by smearing them as a misogynist. And then, recursively, the smearing as a misogynist of anyone who objected to that bullying, and so on and so on. Most supporters of the name change did not engage in such bullying, but one leader of the movement very conspicuously did, and continues to do it even now (to, I’m told, the consternation even of many of her allies).

Since this post went up, something extremely interesting happened: Steven Pinker and I started getting emails from researchers in the NeurIPS community that said, in various words: “thank you for openly airing perspectives that we could not air, without jeopardizing our careers.” We were told that even women in ML, and even those who agreed with the activists on most points, could no longer voice opposition without risking their hiring or tenure. This put into a slightly different light, I thought, the constant claims of some movement leaders about their own marginalization and powerlessness.

Since I was 7 or 8 years old, the moral lodestar of my life has been my yearning (too often left unfulfilled) to stand up to the world’s bullies. Bullies come in all shapes and sizes: some are gangsters or men who sexually exploit vulnerable women; one, alas, is even the President of the United States. But bullying knows no bounds of ideology or gender. Some bullies resort to whisper networks, or Twitter shaming campaigns, or their power in academic hierarchies, to shut down dissenting voices. With the latter kinds of bully—well, to whatever extent this blog is now in a position to make some difference, I’d feel morally complicit if it didn’t.

As I wrote in the comments: may the 2020s be an era of intellectual freedom, compassion, and understanding for all people regardless of background. –SA

Scott’s prologue:

Happy Christmas and Merry Chanukah!

As a followup to last Thursday’s post about the term “quantum supremacy,” today all of us here at Shtetl-Optimized are humbled to host a guest post by Steven Pinker: the Johnstone Professor of Psychology at Harvard University, and author of The Language Instinct, How the Mind Works, The Blank Slate, Enlightenment Now (which I reviewed here), and other books.

The former NIPS—Neural Information Processing Systems—has been the premier conference for machine learning for 30 years. As many readers might know, last year NIPS changed its name to NeurIPS: ironically, giving greater emphasis to an aspect that I’m told has been de-emphasized at that conference over time. The reason, apparently, was that some male attendees had made puns involving the acronym “NIPS” and nipples.

I confess that the name change took me by surprise, simply because it had never occurred to me to make the NIPS/nipples connection—not when I gave a plenary at NIPS in 2012, and not when my collaborators and I coauthored a NIPS paper. It’s not that I’m averse to puerile humor. It’s just that neither I, nor anyone else I knew, had apparently ever felt the need for a shorthand for “nipples.” Of course, once I did learn about this controversy, it became hard to hear “NIPS” without thinking about it.

Back when this happened, Steven Pinker tweeted about NIPS being “forced to change its acronym … because some thought it was sexist. ?????,” apparently as part of a longer thread about “the new Victorians.” In response, a computer science professor sent Pinker an extremely stern email, saying that Pinker’s tweeting about this had “caused harm to our community” and “just [made] the world a bleaker place for everyone.” After linking to a National Academies report on bias in STEM, the email ended: “I hope you will choose to inform yourself on the discussion to which you have just contributed and that you will offer a well-considered follow up.” I won’t risk betraying confidences by quoting further. Of course, the author is warmly welcomed to share anything they wish in the comments here (or I can add it to the main post).

Steve’s guest post today consists of his response to this email. (He told me that, after sending it, he received no further responses.)

I don’t have any dog in the NIPS/NeurIPS debate, being at most on the “margin” (har!) of machine learning. And in any case the debate ended a year ago: the name is now NeurIPS and it’s not changing back. Reopening the issue would seem to invite a strong risk of social-media denunciation for no possible gain.

So why am I doing this? Mostly because I thought it was in the interest of humanity to know that, even when Steven Pinker is answering someone’s email, with no expectation that his reply will be made public, he writes the same way he does in his books: with clarity, humor, and an amusing quote from his mom.

But also because—again, without taking a position on the NIPS vs. NeurIPS issue itself—there’s a tactic displayed by Pinker’s detractors that fundamentally grates on me. This is where you pretend to an open mind, but it turns out that you’re open only to the possibility that your opponent might not have read enough reports and studies to “do better”—i.e., that they sinned out of ignorance rather than out of malice. You don’t open your mind even a crack to the possibility that the opponent might have a point.

Without further ado, here’s Steven Pinker’s email:

I appreciate your frank comments. At the same time, I do not agree with them. Please allow me to explain.

If this were a matter of sexual harassment or other hostile behavior toward women, I would of course support strong measures to combat it. Any member of the Symposium who uttered demeaning comments toward or about women certainly deserves censure.

But that is not what is at issue here. It’s an utterly irrelevant matter: the three-decades-old acronym for the Neural Information Processing Symposium, the pleasingly pronounceable NIPS. To state what should be obvious: nip is not a sexual word. As Chair of the Usage Panel of the American Heritage Dictionary, I can support this claim.

(And as my mother wrote to me: “I don’t get it. I thought Nips was a brand of caramel candy.”)  [Indeed, I enjoyed those candies as a kid. –SA] Even if people with an adolescent mindset think of nipples when hearing the sound “nips,” the society should not endorse the idea that the concept of nipples is sexist. Men have nipples too, and women’s nipples evolved as organs of nursing, not sexual gratification. Indeed, many feminists have argued that it’s sexist to conceptualize women’s bodies from the point of view of male sexuality.

If some people make insulting puns that demean women, the society should condemn them for the insults, not concede to their puerility by endorsing their appropriation of an innocent sound. (The Linguistics Society of America and Boston Debate League do not change their names to disavow jejune clichés about cunning linguists and master debaters.) To act as if anything with the remotest connection to sexuality must be censored to protect delicate female sensibilities is insulting to women and reminiscent of prissy Victorian taboos against uncovered piano legs or the phrase “with the naked eye.”

Any harm to the community of computer scientists has been done not by me but by the pressure group and the Symposium’s surrender. As a public figure who hears from a broad range of people outside the academic bubble, I can tell you that this episode has not played well. It’s seen as the latest sign that academia has lost its mind—that it has traded reasoned argument, conceptual rigor, proportionality, and common sense for prudish censoriousness, snowflake sensibility, and virtue signaling. I often hear from intelligent non-leftists, “Why should I be impressed by the scientific consensus on climate change? Everyone knows that academics just fall into line with the politically correct position.” To secure the credibility of the academy, we have to make reasoned distinctions, and stop turning our enterprise into a laughingstock.

To repeat: none of this deprecates the important effort to stamp out harassment and misogyny in science, which I’m well aware of and thoroughly support, but which has nothing to do with the acronym NIPS.

You are welcome to share this note with interested parties.

Best,
Steve

Two updates

Monday, December 2nd, 2019
  1. Two weeks ago, I blogged about the claim of Nathan Keller and Ohad Klein to have proven the Aaronson-Ambainis Conjecture. Alas, Keller and Klein tell me that they’ve now withdrawn their preprint (though it may take another day for that to show up on the arXiv), because of what looks for now like a fatal flaw in Lemma 5.3, discovered by Paata Ivanishvili. (My own embarrassment over having missed this flaw is slightly mitigated by most of the experts in discrete Fourier analysis having missed it as well!) Keller and Klein are now working to fix the flaw, and I wholeheartedly wish them success.
  2. In unrelated news, I was saddened to read that Virgil Griffith—cryptocurrency researcher, former Integrated Information Theory researcher, and onetime contributor to Shtetl-Optimized—was arrested at LAX for having traveled to North Korea to teach the DPRK about cryptocurrency, against the admonitions of the US State Department. I didn’t know Virgil well, but I did meet him in person at least once, and I liked his essays for this blog about how, after spending years studying IIT under Giulio Tononi himself, he became disillusioned with many aspects of it and evolved to a position not far from mine (though not identical either).
    Personally, I despise the North Korean regime for the obvious reasons—I regard it as not merely evil, but cartoonishly so—and I’m mystified by Virgil’s apparently sincere belief that he could bring peace between the North and South by traveling to North Korea to give a lecture about blockchain. Yet, however world-historically naïve he may have been, his intentions appear to have been good. More pointedly—and here I’m asking not in a legal sense but in a human one—if giving aid and comfort to the DPRK is treasonous, then isn’t the current occupant of the Oval Office a million times guiltier of that particular treason (to say nothing of others)? It’s like, what does “treason” even mean anymore? In any case, I hope some plea deal or other arrangement can be worked out that won’t end Virgil’s productive career.

Book Review: ‘The AI Does Not Hate You’ by Tom Chivers

Sunday, October 6th, 2019

A couple weeks ago I read The AI Does Not Hate You: Superintelligence, Rationality, and the Race to Save the World, the first-ever book-length examination of the modern rationalist community, by British journalist Tom Chivers. I was planning to review it here, before it got preempted by the news of quantum supremacy (and subsequent news of classical non-supremacy). Now I can get back to rationalists.

Briefly, I think the book is a triumph. It’s based around in-person conversations with many of the notable figures in and around the rationalist community, in its Bay Area epicenter and beyond (although apparently Eliezer Yudkowsky only agreed to answer technical questions by Skype), together of course with the voluminous material available online. There’s a good deal about the 1990s origins of the community that I hadn’t previously known.

The title is taken from Eliezer’s aphorism, “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” In other words: as soon as anyone succeeds in building a superhuman AI, if we don’t take extreme care that the AI’s values are “aligned” with human ones, the AI might be expected to obliterate humans almost instantly as a byproduct of pursuing whatever it does value, more-or-less as we humans did with woolly mammoths, moas, and now gorillas, rhinos, and thousands of other species.

Much of the book relates Chivers’s personal quest to figure out how seriously he should take this scenario. Are the rationalists just an unusually nerdy doomsday cult? Is there some non-negligible chance that they’re actually right about the AI thing? If so, how much more time do we have—and is there even anything meaningful that can be done today? Do the dramatic advances in machine learning over the past decade change the outlook? Should Chivers be worried about his own two children? How does this risk compare to the more “prosaic” civilizational risks, like climate change or nuclear war? I suspect that Chivers’s exploration will be most interesting to readers who, like me, regard the answers to none of these questions as obvious.

While it sounds extremely basic, what makes The AI Does Not Hate You so valuable to my mind is that, as far as I know, it’s nearly the only examination of the rationalists ever written by an outsider that tries to assess the ideas on a scale from true to false, rather than from quirky to offensive. Chivers’s own training in academic philosophy seems to have been crucial here. He’s not put off by people who act weirdly around him, even needlessly cold or aloof, nor by utilitarian thought experiments involving death or torture or weighing the value of human lives. He just cares, relentlessly, about the ideas—and about remaining a basically grounded and decent person while engaging them. Most strikingly, Chivers clearly feels a need—anachronistic though it seems in 2019—actually to understand complicated arguments, and to be able to repeat them back correctly, before he attacks them.

Indeed, far from failing to understand the rationalists, it occurs to me that the central criticism of Chivers’s book is likely to be just the opposite: he understands the rationalists so well, extends them so much sympathy, and ends up endorsing so many aspects of their worldview, that he must simply be a closet rationalist himself, and therefore can’t write about them with any pretense of journalistic or anthropological detachment. For my part, I’d say: it’s true that The AI Does Not Hate You is what you get if you treat rationalists as extremely smart (if unusual) people from whom you might learn something of consequence, rather than as monkeys in a zoo. On the other hand, Chivers does perform the journalist’s task of constantly challenging the rationalists he meets, often with points that (if upheld) would be fatal to their worldview. One of the rationalists’ best features—and this precisely matches my own experience—is that, far from clamming up or storming off when faced with such challenges (“lo! the visitor is not one of us!”), the rationalists positively relish them.

It occurred to me the other day that we’ll never know how the rationalists’ ideas would’ve developed, had they continued to do so in a cultural background like that of the late 20th century. As Chivers points out, the rationalists today are effectively caught in the crossfire of a much larger cultural war—between, to their right, the recrudescent know-nothing authoritarians, and to their left, what one could variously describe as woke culture, call-out culture, or sneer culture. On its face, it might seem laughable to conflate the rationalists with today’s resurgent fascists: many rationalists are driven by their utilitarianism to advocate open borders and massive aid to the Third World; the rationalist community is about as welcoming of alternative genders and sexualities as it’s humanly possible to be; and leading rationalists like Scott Alexander and Eliezer Yudkowsky strongly condemned Trump for the obvious reasons.

Chivers, however, explains how the problem started. On rationalist Internet forums, many misogynists and white nationalists and so forth encountered nerds willing to debate their ideas politely, rather than immediately banning them as more mainstream venues would. As a result, many of those forces of darkness (and they probably don’t mind being called that) predictably congregated on the rationalist forums, and their stench predictably wore off on the rationalists themselves. Furthermore, this isn’t an easy-to-fix problem, because debating ideas on their merits, extending charity to ideological opponents, etc. is sort of the rationalists’ entire shtick, whereas denouncing and no-platforming anyone who can be connected to an ideological enemy (in the modern parlance, “punching Nazis”) is the entire shtick of those condemning the rationalists.

Compounding the problem is that, as anyone who’s ever hung out with STEM nerds might’ve guessed, the rationalist community tends to skew WASP, Asian, or Jewish, non-impoverished, and male. Worse yet, while many rationalists live their lives in progressive enclaves and strongly support progressive values, they’ll also undergo extreme anguish if they feel forced to subordinate truth to those values.

Chivers writes that all of these issues “blew up in spectacular style at the end of 2014,” right here on this blog. Oh, what the hell, I’ll just quote him:

Scott Aaronson is, I think it’s fair to say, a member of the Rationalist community. He’s a prominent theoretical computer scientist at the University of Texas at Austin, and writes a very interesting, maths-heavy blog called Shtetl-Optimised.

People in the comments under his blog were discussing feminism and sexual harassment. And Aaronson, in a comment in which he described himself as a fan of Andrea Dworkin, described having been terrified of speaking to women as a teenager and young man. This fear was, he said, partly that of being thought of as a sexual abuser or creep if any woman ever became aware that he sexually desired them, a fear that he picked up from sexual-harassment-prevention workshops at his university and from reading feminist literature. This fear became so overwhelming, he said in the comment that came to be known as Comment #171, that he had ‘constant suicidal thoughts’ and at one point ‘actually begged a psychiatrist to prescribe drugs that would chemically castrate me (I had researched which ones), because a life of mathematical asceticism was the only future that I could imagine for myself.’ So when he read feminist articles talking about the ‘male privilege’ of nerds like him, he didn’t recognise the description, and so felt himself able to declare himself ‘only’ 97 per cent on board with the programme of feminism.

It struck me as a thoughtful and rather sweet remark, in the midst of a long and courteous discussion with a female commenter. But it got picked up, weirdly, by some feminist bloggers, including one who described it as ‘a yalp of entitlement combined with an aggressive unwillingness to accept that women are human beings just like men’ and that Aaronson was complaining that ‘having to explain my suffering to women when they should already be there, mopping my brow and offering me beers and blow jobs, is so tiresome.’

Scott Alexander (not Scott Aaronson) then wrote a furious 10,000-word defence of his friend… (p. 214-215)

And then Chivers goes on to explain Scott Alexander’s central thesis, in Untitled, that privilege is not a one-dimensional axis, so that (to take one example) society can make many women in STEM miserable while also making shy male nerds miserable in different ways.

For nerds, perhaps an alternative title for Chivers’s book could be “The Normal People Do Not Hate You (Not All of Them, Anyway).” It’s as though Chivers is demonstrating, through understated example, that taking delight in nerds’ suffering, wanting them to be miserable and alone, mocking their weird ideas, is not simply the default, well-adjusted human reaction, with any other reaction being ‘creepy’ and ‘problematic.’ Some might even go so far as to apply the latter adjectives to the sneerers’ attitude, the one that dresses up schoolyard bullying in a social-justice wig.

Reading Chivers’s book prompted me to reflect on my own relationship to the rationalist community. For years, I interacted often with the community—I’ve known Robin Hanson since ~2004 and Eliezer Yudkowsky since ~2006, and our blogs bounced off each other—but I never considered myself a member.  I never ranked paperclip-maximizing AIs among humanity’s more urgent threats—indeed, I saw them as a distraction from an all-too-likely climate catastrophe that will leave its survivors lucky to have stone tools, let alone AIs. I was also repelled by what I saw as the rationalists’ cultier aspects.  I even once toyed with the idea of changing the name of this blog to “More Wrong” or “Wallowing in Bias,” as a play on the rationalists’ LessWrong and OvercomingBias.

But I’ve drawn much closer to the community over the last few years, because of a combination of factors:

  1. The comment-171 affair. This was not the sort of thing that could provide any new information about the likelihood of a dangerous AI being built, but was (to put it mildly) the sort of thing that can tell you who your friends are. I learned that empathy works a lot like intelligence, in that those who boast of it most loudly are often the ones who lack it.
  2. The astounding progress in deep learning and reinforcement learning and GANs, which caused me (like everyone else, perhaps) to update in the direction of human-level AI in our lifetimes being an actual live possibility.
  3. The rise of Scott Alexander. To the charge that the rationalists are a cult, there’s now the reply that Scott, with his constant equivocations and doubts, his deep dives into data, his clarity and self-deprecating humor, is perhaps the least culty cult leader in human history. Likewise, to the charge that the rationalists are basement-dwelling kibitzers who accomplish nothing of note in the real world, there’s now the reply that Scott has attracted a huge mainstream following (Steven Pinker, Paul Graham, presidential candidate Andrew Yang…), purely by offering up what’s self-evidently some of the best writing of our time.
  4. Research. The AI-risk folks started publishing some research papers that I found interesting—some with relatively approachable problems that I could see myself trying to think about if quantum computing ever got boring. This shift seems to have happened at roughly the same time that my former student, Paul Christiano, “defected” from quantum computing to AI-risk research.

Anyway, if you’ve spent years steeped in the rationalist blogosphere, read Eliezer’s “Sequences,” and so on, The AI Does Not Hate You will probably have little that’s new, although it might still be interesting to revisit ideas and episodes that you know through a newcomer’s eyes. To anyone else … well, reading the book would be a lot faster than spending all those years reading blogs! I’ve heard of some rationalists now giving out copies of the book to their relatives, by way of explaining how they’ve chosen to spend their lives.

I still don’t know whether there’s a risk worth worrying about that a misaligned AI will threaten human civilization in my lifetime, or my children’s lifetimes, or even 500 years—or whether everyone will look back and laugh at how silly some people once were to think that (except, silly in which way?). But I do feel fairly confident that The AI Does Not Hate You will make a positive difference—possibly for the world, but at any rate for a little well-meaning community of sneered-at nerds obsessed with the future and with following ideas wherever they lead.

Blurry but clear enough

Friday, September 20th, 2019

My vision is blurry right now, because yesterday I had a procedure called corneal cross-linking, intended to prevent further deterioration of my eyes as I get older. But I can see clearly enough to tap out a post with random thoughts about the world.

I’m happy that the Netanyahu era might finally be ending in Israel, after which Netanyahu will hopefully face some long-delayed justice for his eye-popping corruption. If only there were a realistic prospect of Trump facing similar justice. I wish Benny Gantz success in putting together a coalition.

I’m happy that my two least favorite candidates, Bill de Blasio and Kirsten Gillibrand, have now both dropped out of the Democratic primary. Biden, Booker, Warren, Yang—I could enthusiastically support pretty much any of them, if they looked like they had a good chance to defeat Twitler. Let’s hope.

Most importantly, I wish to register my full-throated support for the climate strikes taking place today all over the world, including here in Austin. My daughter Lily, age 6, is old enough to understand the basics of what’s happening and to worry about her future. I urge the climate strikers to keep their eyes on things that will actually make a difference (building new nuclear plants, carbon taxes, geoengineering) and ignore what won’t (banning plastic straws).

As for Greta Thunberg: she is, or is trying to be, the real-life version of the Comet King from Unsong. You can make fun of her, ask what standing or expertise she has as some random 16-year-old to lead a worldwide movement. But I suspect that this is always what it looks like when someone takes something that’s known to (almost) all, and then makes it common knowledge. If civilization makes it to the 22nd century at all, then in whatever form it still exists, I can easily imagine that it will have more statues of Greta than of MLK or Gandhi.

On a completely unrelated and much less important note, John Horgan has a post about “pluralism in math” that includes some comments by me.

Oh, and on the quantum supremacy front—I foresee some big news very soon. You know which blog to watch for more.

Fake it till you make it (to the moon)

Friday, July 19th, 2019

While I wait to board a flight at my favorite location on earth—Philadelphia International Airport—I figured I might as well blog something to mark the 50th anniversary of Apollo 11. (Thanks also to Joshua Zelinsky for a Facebook post that inspired this.)

I wasn’t alive for Apollo, but I’ve been alive for 3/4 of the time after it, even though it now seems like ancient history—specifically, like a Roman aqueduct being gawked at by a medieval peasant, like an achievement by some vanished, more cohesive civilization that we can’t even replicate today, let alone surpass.

Which brings me to a depressing mystery: why do so many people now deny that humans walked on the moon at all? Like, why that specifically? While they’re at it, why don’t they also deny that WWII happened, or that the Beatles existed?

Surprisingly, skepticism of the reality of Apollo seems to have gone all the way back to the landings themselves. One of my favorite stories growing up was of my mom, as a teenager, working as a waitress at an Israeli restaurant in Philadelphia on the night of the Apollo 11 landing. My mom asked for a few minutes off to listen to news of the landing on the radio. The owners wouldn’t grant it—explaining that it was all Hollywood anyway, just some actors in spacesuits on a sound stage, and obviously my mom wasn’t so naïve as to think anyone was actually walking on the moon?

Alas, as we get further and further from the event, with no serious prospect of ever replicating it past the stage of announcing an optimistic timetable (nor, to be honest, any scientific reason to replicate it), as the people involved die off, and as our civilization becomes ever more awash in social-media-fueled paranoid conspiracies, I fear that moon-landing denialism will become more common.

Because here’s the thing: Apollo could happen, but only because of a wildly improbable, once-in-history confluence of social and geopolitical factors. It was economically insane, taking 100,000 people and 4% of the US federal budget for some photo-ops, a flag-planting, some data and returned moon rocks that had genuine scientific value but could’ve been provided much more cheaply by robots. It was dismantled immediately afterwards like a used movie set, rather than leading to any greater successes. Indeed, manned spaceflight severely regressed afterwards, surely mocking the expectations of every last science fiction fan and techno-utopian who was alive at that time.

One could summarize the situation by saying that, in certain respects, the Apollo program really was “faked.” It’s just that the way they “faked” it, involved actually landing people on the moon!

On two blog posts of Jerry Coyne

Saturday, July 13th, 2019

A few months ago, I got to know Jerry Coyne, the recently-retired biologist at the University of Chicago who writes the blog “Why Evolution Is True.” The interaction started when Jerry put up a bemused post about my thoughts on predictability and free will, and I pointed out that if he wanted to engage me on those topics, there was more to go on than an 8-minute YouTube video. I told Coyne that it would be a shame to get off on the wrong foot with him, since perusal of his blog made it obvious that whatever he and I disputed, it was dwarfed by our areas of agreement. He and I exchanged more emails and had lunch in Chicago.

By way of explaining how he hadn’t read “The Ghost in the Quantum Turing Machine,” Coyne emphasized the difference in my and his turnaround times: while these days I update my blog only a couple times per month, Coyne often updates multiple times per day. Indeed the sheer volume of material he posts, on subjects from biology to culture wars to Chicago hot dogs, would take months to absorb.

Today, though, I want to comment on just two posts of Jerry’s.

The first post, from back in May, concerns David Gelernter, the computer science professor at Yale who was infamously injured in a 1993 attack by the Unabomber, and who’s now mainly known as a right-wing commentator. I don’t know Gelernter, though I did once attend a small interdisciplinary workshop in the south of France that Gelernter also attended, wherein I gave a talk about quantum computing and computational complexity in which Gelernter showed no interest. Anyway, Gelernter, in an essay in May for the Claremont Review of Books, argued that recent work has definitively disproved Darwinism as a mechanism for generating new species, and until something better comes along, Intelligent Design is the best available alternative.

Curiously, I think that Gelernter’s argument falls flat not for detailed reasons of biology, but mostly just because it indulges in bad math and computer science—in fact, in precisely the sorts of arguments that I was trying to answer in my segment on Morgan Freeman’s Through the Wormhole (see also Section 3.2 of Why Philosophers Should Care About Computational Complexity). Gelernter says that

  1. a random change to an amino acid sequence will pretty much always make it worse,
  2. the probability of finding a useful new such sequence by picking one at random is at most ~1 in 10^77, and
  3. there have only been maybe ~10^40 organisms in earth’s history.

Since 10^77 >> 10^40, Darwinism is thereby refuted—not in principle, but as an explanation for life on earth. QED.

The most glaring hole in the above argument, it seems to me, is that it simply ignores intermediate possible numbers of mutations. How hard would it be to change, not 1 or 100, but 5 amino acids in a given protein to get a usefully different one—as might happen, for example, with local optimization methods like simulated annealing run at nonzero temperature? And how many chances were there for that kind of mutation in the earth’s history?
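
To see why intermediate numbers of mutations matter, here is a minimal sketch of local search through sequence space: simulated annealing with a toy, made-up fitness function. (Everything in it, from the target sequence to the temperature schedule, is a hypothetical illustration I cooked up, not anything from Gelernter’s essay or from real biochemistry.)

    import math, random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

    def fitness(seq, target="MKVLAWQHGD"):
        # Toy stand-in for "how useful is this protein?": count positions
        # matching an arbitrary target. Real fitness landscapes are messier.
        return sum(a == b for a, b in zip(seq, target))

    def anneal(seq, steps=10000, t0=2.0):
        # Mutate one residue at a time; always accept improvements, and
        # accept harmful mutations with probability shrinking as we "cool."
        best = fitness(seq)
        for step in range(steps):
            temp = t0 * (1 - step / steps) + 1e-9
            i = random.randrange(len(seq))
            cand = seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]
            f = fitness(cand)
            if f >= best or random.random() < math.exp((f - best) / temp):
                seq, best = cand, f
        return seq, best

    random.seed(0)
    start = "".join(random.choice(AMINO_ACIDS) for _ in range(10))
    print(anneal(start))  # reaches a high-fitness sequence in ~10^4 steps,
                          # not the ~20^10 ~ 10^13 tries of blind guessing

The only point of the sketch is that an uphill-biased walk finds good sequences after a number of steps wildly smaller than the size of the space it searches, which is exactly the possibility that step 2 of the argument quietly excludes.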

Gelernter can’t personally see how a path could cut through the exponentially large solution space in a polynomial amount of time, so he asserts that it’s impossible. Many of the would-be P≠NP provers who email me every week do the same. But this particular kind of “argument from incredulity” has an abysmal track record: it would’ve applied equally well, for example, to problems like maximum matching that turned out to have efficient algorithms. This is why, in CS, we demand better evidence of hardness—like completeness results or black-box lower bounds—neither of which seem however to apply to the case at hand. Surely Gelernter understands all this, but had he not, he could’ve learned it from my lecture at the workshop in France!

Alas, online debate, as it’s wont to do, focused less on Gelernter’s actual arguments and the problems with them, than on the tiresome questions of “standing” and “status.” In particular: does Gelernter’s authority, as a noted computer science professor, somehow lend new weight to Intelligent Design? Or conversely: does the very fact that a computer scientist endorsed ID prove that computer science itself isn’t a real science at all, and that its practitioners should never be taken seriously in any statements about the real world?

It’s hard to say which of these two questions makes me want to bury my face deeper into my hands. Serge Lang, the famous mathematician and textbook author, spent much of his later life fervently denying the connection between HIV and AIDS. Lynn Margulis, the discoverer of the origin of mitochondria (and Carl Sagan’s first wife), died a 9/11 truther. What broader lesson should we draw from any of this? And anyway, what percentage of computer scientists actually do doubt evolution, and how does it compare to the percentage in other academic fields and other professions? Isn’t the question of how divorced we computer scientists are from the real world an … ahem … empirical matter, one hard to answer on the basis of armchair certainties and anecdotes?

Speaking of empiricism, if you check Gelernter’s publication list on DBLP and his Google Scholar page, you’ll find that he did influential work in programming languages, parallel computing, and other areas from 1981 through 1997, and then in the past 22 years published a grand total of … two papers in computer science. One with four coauthors, the other a review/perspective piece about his earlier work. So it seems fair to say that, some time after receiving tenure in a CS department, Gelernter pivoted (to put it mildly) away from CS and toward conservative punditry. His recent offerings, in case you’re curious, include the book America-Lite: How Imperial Academia Dismantled Our Culture (and Ushered In the Obamacrats).

Some will claim that this case underscores what’s wrong with the tenure system itself, while others will reply that it’s precisely what tenure was designed for, even if in this instance you happen to disagree with what Gelernter uses his tenured freedom to say. The point I wanted to make is different, though. It’s that the question “what kind of a field is computer science, anyway, that a guy can do high-level CS research on Monday, and then on Tuesday reject Darwinism and unironically use the word ‘Obamacrat’?”—well, even if I accepted the immense weight this question places on one atypical example (which I don’t), and even if I dismissed the power of compartmentalization (which I again don’t), the question still wouldn’t arise in Gelernter’s case, since getting from “Monday” to “Tuesday” seems to have taken him 15+ years.

Anyway, the second post of Coyne’s that I wanted to talk about is from just yesterday, and is about Jeffrey Epstein—the financier, science philanthropist, and confessed sex offender, whose appalling crimes you’ll have read all about this week if you weren’t on a long sea voyage without Internet or something.

For the benefit of my many fair-minded friends on Twitter, I should clarify that I’ve never met Jeffrey Epstein, let alone accepted any private flights to his sex island or whatever. I doubt he has any clue who I am either—even if he did once claim to be “intrigued” by quantum information.

I do know a few of the scientists who Epstein once hung out with, including Seth Lloyd and Steven Pinker. Pinker, in particular, is now facing vociferous attacks on Twitter, similar in magnitude perhaps to what I faced in the comment-171 affair, for having been photographed next to Epstein at a 2014 luncheon that was hosted by Lawrence Krauss (a physicist who later faced sexual harassment allegations of his own). By the evidentiary standards of social media, this photo suffices to convict Pinker as basically a child molester himself, and is also a devastating refutation of any data that Pinker might have adduced in his books about the Enlightenment’s contributions to human flourishing.

From my standpoint, what’s surprising is not that Pinker is up against this, but that it took this long to happen, given that Pinker’s pro-Enlightenment, anti-blank-slate views have had the effect of painting a giant red target on his back. Despite the near-inevitability, though, you can’t blame Pinker for wanting to defend himself, as I did when it was my turn for the struggle session.

Thus, in response to an emailed inquiry by Jerry Coyne, Pinker shared some detailed reflections about Epstein; Pinker then gave Coyne permission to post those reflections on his blog (though they were originally meant for Coyne only). Like everything Pinker writes, they’re worth reading in full. Here’s the opening paragraph:

The annoying irony is that I could never stand the guy [Epstein], never took research funding from him, and always tried to keep my distance. Friends and colleagues described him to me as a quantitative genius and a scientific sophisticate, and they invited me to salons and coffee klatches at which he held court. But I found him to be a kibitzer and a dilettante — he would abruptly change the subject ADD style, dismiss an observation with an adolescent wisecrack, and privilege his own intuitions over systematic data.

Pinker goes on to discuss his record of celebrating, and extensively documenting, the forces of modernity that led to dramatic reductions in violence against women and that have the power to continue doing so. On Twitter, Pinker had already written: “Needless to say I condemn Epstein’s crimes in the strongest terms.”

I probably should’ve predicted that Pinker would then be attacked again—this time, for having prefaced his condemnation with the phrase “needless to say.” The argument, as best I can follow, runs like this: given all the isms of which woke Twitter has already convicted Pinker—scientism, neoliberalism, biological determinism, etc.—how could Pinker’s being against Epstein’s crimes (which we recently learned probably include the rape, and not only statutorily, of a 15-year-old) possibly be assumed as a given?

For the record, just as Epstein’s friends and enablers weren’t confined to one party or ideology, so the public condemnation of Epstein strikes me as a matter that is (or should be) beyond ideology, with all reasonable dispute now confined to the space between “very bad” and “extremely bad,” between “lock away for years” and “lock away for life.”

While I didn’t need Pinker to tell me that, one reason I personally appreciated his comments is that they helped to answer a question that had bugged me, and that none of the mountains of other condemnations of Epstein had given me a clear sense about. Namely: supposing, hypothetically, that I’d met Epstein around 2002 or so—without, of course, knowing about his crimes—would I have been as taken with him as many other academics seem to have been? (Would you have been? How sure are you?)

Over the last decade, I’ve had the opportunity to meet some titans and semi-titans of finance and business, to discuss quantum computing and other nerdy topics. For a few (by no means all) of these titans, my overriding impression was precisely their unwillingness to concentrate on any one point for more than about 20 seconds—as though they wanted the crust of a deep intellectual exchange without the meat filling. My experience with them fit Pinker’s description of Epstein to a T (though I hasten to add that, as far as I know, none of these others ran teenage sex rings).

Anyway, given all the anger at Pinker for having intersected with Epstein, it’s ironic that I could easily imagine Pinker’s comments rattling Epstein the most of anyone’s, if Epstein hears of them from his prison cell. It’s like: Epstein must have developed a skin like a rhinoceros’s by this point about being called a child abuser, a creep, and a thousand similar (and similarly deserved) epithets. But “a kibitzer and a dilettante” who merely lured famous intellectuals into his living room, with wads of cash not entirely unlike the ones used to lure teenage girls to his massage table? Ouch!

OK, but what about Alan Dershowitz—the man who apparently used to be Epstein’s close friend, who still is Pinker’s friend, and who played a crucial role in securing Epstein’s 2008 plea bargain, the one now condemned as a travesty of justice? I’m not sure how I feel about Dershowitz.  It’s like: I understand that our system requires attorneys willing to mount a vociferous defense even for clients who they privately know or believe to be guilty—and even to get those clients off on technicalities or bargaining whenever they can.  I’m also incredibly grateful that I chose CS rather than law school, because I don’t think I could last an hour advocating causes that I knew to be unjust. Just like my fellow CS professor, the intelligent design advocate David Gelernter, I have the privilege and the burden of speaking only for myself.

Quanta of Solace

Thursday, June 20th, 2019

In Quanta magazine, Kevin Hartnett has a recent article entitled A New Law to Describe Quantum Computing’s Rise? The article discusses “Neven’s Law”—a conjecture, by Hartmut Neven (head of Google’s quantum computing effort), that the number of integrated qubits is now increasing exponentially with time, so that the difficulty of simulating a state-of-the-art QC on a fixed classical computer is increasing doubly exponentially with time. (Jonathan Dowling tells me that he expressed the same thought years ago.)
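
To spell out the arithmetic behind the conjecture: if the qubit count q(t) grows exponentially with time, then the roughly 2^q(t) cost of classical state-vector simulation grows doubly exponentially. Here is a toy illustration (the starting count and growth rate are numbers I made up for the example, not Neven’s):

    # Toy model of "Neven's Law": if qubit counts grow exponentially in
    # time, the ~2^q cost of classical state-vector simulation grows
    # doubly exponentially.
    q0, c = 5, 1.5  # hypothetical: 5 qubits at t=0, growing 1.5x per year
    for t in range(6):
        q = q0 * c**t
        log10_cost = q * 0.30103  # log10(2^q) = q * log10(2)
        print(f"year {t}: ~{q:4.1f} qubits, simulation cost ~10^{log10_cost:.0f}")

Whether actual hardware follows any such curve is, of course, a separate question, on which I’m about to hedge.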

Near the end, the Quanta piece quotes some UT Austin professor whose surname starts with a bunch of A’s as follows:

“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work. They’re the ones who need to articulate where and why the progress will stop.”

The quote is perfectly accurate, but in context, it might give the impression that I’m endorsing Neven’s Law. In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50. I say only that, regardless of what anyone believes is the ultimate rate of progress in QC, what’s already happened today puts the ball firmly in the skeptics’ court.

Also in Quanta, Anil Ananthaswamy has a new article out on How to Turn a Quantum Computer Into the Ultimate Randomness Generator. This piece covers two schemes for using a quantum computer to generate “certified random bits”—that is, bits you can prove are random to a faraway skeptic: one due to me, the other due to Brakerski et al. The article cites my paper with Lijie Chen, which shows that under suitable computational assumptions, the outputs in my protocol are hard to spoof using a classical computer. The randomness aspect will be addressed in a paper that I’m currently writing; for now, see these slides.

As long as I’m linking to interesting recent Quanta articles, Erica Klarreich has A 53-Year-Old Network Coloring Conjecture is Disproved. Briefly, Hedetniemi’s Conjecture stated that, given any two finite, undirected graphs G and H, the chromatic number of the tensor product G⊗H is just the minimum of the chromatic numbers of G and H themselves. This reasonable-sounding conjecture has now been falsified by Yaroslav Shitov. For more, see also this post by Gil Kalai—who appears here not in his capacity as a quantum computing skeptic.
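
If you’d like to poke at tiny cases of the conjecture yourself, here’s a naive brute-force sketch using networkx. (The example graphs are arbitrary; Shitov’s actual counterexample is astronomically too large for this kind of check.)

    from itertools import product
    import networkx as nx

    def chromatic_number(G):
        # Exact chromatic number by exhaustive search -- tiny graphs only.
        nodes = list(G.nodes)
        for k in range(1, len(nodes) + 1):
            for colors in product(range(k), repeat=len(nodes)):
                c = dict(zip(nodes, colors))
                if all(c[u] != c[v] for u, v in G.edges):
                    return k

    G, H = nx.complete_graph(3), nx.complete_graph(4)
    T = nx.tensor_product(G, H)  # (g,h)~(g',h') iff g~g' in G and h~h' in H
    print(chromatic_number(G), chromatic_number(H), chromatic_number(T))
    # prints "3 4 3": here chi(G x H) = min(chi(G), chi(H)), as Hedetniemi
    # predicted; the "<=" direction is easy, and Shitov refuted ">=".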

In interesting math news beyond Quanta magazine, the Berkeley alumni magazine has a piece about the crucial, neglected topic of mathematicians’ love for Hagoromo-brand chalk (hat tip: Peter Woit). I can personally vouch for this. When I moved to UT Austin three years ago, most offices in CS had whiteboards, but I deliberately chose one with a blackboard. I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything. With the Hagoromo brand, though, you pretty much get all the benefits of chalk with none of the downsides, so it just strictly dominates whiteboards.

Jan Kulveit asked me to advertise the European Summer Program on Rationality (ESPR), which will take place this August 13-23, and which is aimed at students ages 16-19. I’ve lectured both at ESPR and at a similar summer program that ESPR was modeled after (called SPARC)—and while I was never there as a student, it looked to me like a phenomenal experience. So if you’re a 16-to-19-year-old who reads this blog, please consider applying!

I’m now at the end of my annual family trip to Tel Aviv, returning to the Eastern US tonight, and then on to STOC’2019 at the ACM Federated Computing Research Conference in Phoenix (which I can blog about if anyone wants me to). It was a good trip, although marred by my two-year-old son Daniel falling onto sun-heated metal and suffering a second-degree burn on his leg, and then by the doctor botching the treatment. Fortunately Daniel’s now healing nicely. For future reference, whenever bandaging a burn wound, be sure to apply lots of Vaseline to prevent the bandage from drying out, and also to change the bandage daily. Accept no fancy-sounding substitute.