More Updates!
November 26th, 2023

Yet Another Update (Dec. 5): For those who still haven’t had enough of me, check me out on Curt Jaimungal’s Theories of Everything Podcast, talking about … err, computational complexity, the halting problem, the time hierarchy theorem, free will, Newcomb’s Paradox, the no-cloning theorem, interpretations of quantum mechanics, Wolfram, Penrose, AI, superdeterminism, consciousness, integrated information theory, and whatever the hell else Curt asks me about. I strongly recommend watching the video at 2x speed to smooth over my verbal infelicities.
In answer to a criticism I’ve received: I agree that it would’ve been better for me, in this podcast, to describe Wolfram’s “computational irreducibility” as simply “the phenomenon where you can’t predict a computation faster than by running it,” rather than also describing it as a “discrete analog of chaos / sensitive dependence on initial conditions.” (The two generally co-occur in the systems Wolfram talks about, but are not identical.)
On the other hand: no, I do not recognize that Wolfram deserves credit for giving a new name (“computational irreducibility”) to a thing that was already well-understood in the relevant fields. This is particularly true given that
(1) the earlier understanding of the halting problem and the time hierarchy theorem was rigorous, giving us clear criteria for proving when computations can be sped up and when they can’t be (a precise statement appears below), and
(2) Wolfram replaced it with handwaving (“well, I can’t see how this process could be predicted faster than by running it, so let’s assume that it can’t be”).
In other words, the earlier understanding was not only decades before Wolfram, it was superior.
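To make (1) concrete, here is the standard statement (a textbook form of the deterministic time hierarchy theorem; the exact log factor varies by machine model): if f and g are time-constructible functions with f(n)·log f(n) = o(g(n)), then

DTIME(f(n)) ⊊ DTIME(g(n)).

So, for example, there are problems solvable in time n² that provably cannot be solved in time n: an unconditional no-speedup claim of precisely the kind that Wolfram’s version never delivers.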
It would be as if I announced my new “Principle of Spacetime Being Like A Floppy Trampoline That’s Bent By Gravity,” and then demanded credit because even though Einstein anticipated some aspects of my principle with his complicated and confusing equations, my version was easier for the layperson to intuitively understand.
I’ll reopen the comments on this post, but only for comments on my Theories of Everything podcast.
Another Update (Dec. 1): Quanta Magazine now has a 20-minute explainer video on Boolean circuits, Turing machines, and the P versus NP problem, featuring yours truly. If you already know these topics, you’re unlikely to learn anything new, but if you don’t know them, I found this to be a beautifully produced introduction with top-notch visuals. Better yet—and unusually for this sort of production—everything I saw looked entirely accurate, except that (1) the video never explains the difference between Turing machines and circuits (i.e., between uniform and non-uniform computation), and (2) the video also never clarifies where the rough identities “polynomial = efficient” and “exponential = inefficient” hold or fail to hold.
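For those who want the first missing distinction spelled out (my gloss, not something the video says): a Turing machine is a single finite program that must handle inputs of every length, which is what “uniform” means, whereas a circuit family supplies a separate circuit C_n for each input length n, which is what “non-uniform” means. Hence

P ⊆ P/poly,

since a polynomial-time Turing machine can be unrolled into polynomial-size circuits, but the converse fails badly: P/poly contains every unary language, including undecidable ones, since the right answer for each input length can simply be hardwired into C_n.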
For the many friends who’ve asked me to comment on the OpenAI drama: while there are many things I can’t say in public, I can say I feel relieved and happy that OpenAI still exists. This is simply because, when I think of what a world-leading AI effort could look like, many of the plausible alternatives strike me as much worse than OpenAI, a company full of thoughtful, earnest people who are at least asking the right questions about the ethics of their creations, and who—the real proof that they’re my kind of people—are racked with self-doubts (as the world has now spectacularly witnessed). Maybe I’ll write more about the ethics of self-doubt in a future post.
For now, the narrative that I see endlessly repeated in the press is that last week’s events represented a resounding victory for the “capitalists” and “businesspeople” and “accelerationists” over the “effective altruists” and “safetyists” and “AI doomers,” or even that the latter are now utterly discredited, raw egg dripping from their faces. I see two overwhelming problems with that narrative. The first problem is that the old board never actually said that it was firing Sam Altman for reasons of AI safety—e.g., that he was moving too quickly to release models that might endanger humanity. If the board had said anything like that, and if it had laid out a case, I feel sure the whole subsequent conversation would’ve looked different—at the very least, the conversation among OpenAI’s employees, which proved decisive to the outcome. The second problem with the capitalists vs. doomers narrative is that Sam Altman and Greg Brockman and the new board members are also big believers in AI safety, and conceivably even “doomers” by the standards of most of the world. Yes, there are differences between their views and those of Ilya Sutskever and Adam D’Angelo and Helen Toner and Tasha McCauley (as, for that matter, there are differences within each group), but you have to drill deeper to articulate those differences.
In short, it seems to me that we never actually got a clean test of the question that most AI safetyists are obsessed with: namely, whether or not OpenAI (or any other similarly constituted organization) has, or could be expected to have, a working “off switch”—whether, for example, it could actually close itself down, competition and profits be damned, if enough of its leaders or employees became convinced that the fate of humanity depended on its doing so. I don’t know the answer to that question, but what I do know is that you don’t know either! If there’s to be a decisive test, then it remains for the future. In the meantime, I find it far from obvious what will be the long-term effect of last week’s upheavals on AI safety or the development of AI more generally. For godsakes, I couldn’t even predict what was going to happen from hour to hour, let alone the aftershocks years from now.
Since I wrote a month ago about my quantum computing colleague Aharon Brodutch, whose niece, nephews, and sister-in-law were kidnapped by Hamas, I should share my joy and relief that the Brodutch family was released today as part of the hostage deal. While it played approximately zero role in the release, I feel honored to have been able to host a Shtetl-Optimized guest post by Aharon’s brother Avihai. Meanwhile, over 180 hostages remain in Gaza. Like much of the world, I fervently hope for a ceasefire—so long as it includes the release of all hostages and the end of Hamas’s ability to repeat the Oct. 7 pogrom.
Greta Thunberg is now chanting to “crush Zionism” — i.e., taking time away from saving civilization to ensure that half the world’s remaining Jews will be either dead or stateless in the civilization she saves. Those of us who once admired Greta, and experience her new turn as a stab to the gut, might be tempted to drive SUVs, fly business class, and fire up wood-burning stoves just to spite her and everyone on earth who thinks as she does.
The impulse should be resisted. A much better response would be to redouble our efforts to solve the climate crisis via nuclear power, carbon capture and sequestration, geoengineering, cap-and-trade, and other effective methods that violate Greta’s scruples and for which she and her friends will receive and deserve no credit.
(On Facebook, a friend replied that an even better response would be to “refuse to let people that we don’t like influence our actions, and instead pursue the best course of action as if they didn’t exist at all.” My reply was simply that I need a response that I can actually implement!)