On Montgomery County public magnet schools: a guest post by Daniel Gottesman

March 14th, 2026


Scott’s foreword: I’ve known fellow quantum computing theorist Daniel Gottesman, now at the University of Maryland, for a quarter-century at this point. Daniel has been a friend, colleague, coauthor, and one of the people from whom I’ve learned the most in my career. Today he writes about a topic close to my heart, and one to which I’ve regularly lent this blog over the decades: namely, the struggle to protect enrichment and acceleration in the United States (in this case, the public magnet programs in Montgomery County, Maryland) from the constant attempts to weaken or dismantle them. Thanks so much to Daniel for doing this, and please help out if you can!


Without further ado, Daniel Gottesman:

Scott has kindly let me write this guest post because I’d like to ask the readers of Shtetl-Optimized for help.  I live in Montgomery County, Maryland, and the county is getting ready to replace our current handful of great magnet programs with a plethora of mediocre ones.

Montgomery County has a generally quite good school system, but its gifted education programs are really inadequate at the elementary and middle school level.  Montgomery County Public Schools (MCPS) offers nothing at all for gifted children until 4th grade.  Starting in 4th grade, magnet programs are available, but there are not enough spaces for everyone who meets the minimum qualifications.  A few years ago, the elementary and middle school magnets were switched to a lottery system, meaning the highest-achieving students, who most need special programming, might or might not get in, based purely on the luck of the draw.

The remaining bright spot has been the high school magnets.  Montgomery County has two well-known and high-performing magnets, a STEM magnet at Montgomery Blair high school and an International Baccalaureate (IB) program at Richard Montgomery.  The Richard Montgomery IB program draws students from the whole county and the Blair Magnet draws from 2/3 of the county (with the remaining 1/3 eligible to go to another successful but less well-known magnet at Poolesville).  And these programs have so far resisted the lottery: They pick the best students from the application pool.

So with inadequate magnets in the lower grades and stellar magnets in high school, you can guess which one is up for a change.

MCPS now wants to reconfigure the high school magnet programs by splitting the county up into 6 regions.  Students will only be allowed to apply to programs in their home region.  Each region will have its own STEM magnet and its own IB program, as well as programs in the arts, medicine, and leadership.  And actually there are multiple program strands in each of these subjects, sometimes in different schools.  The whole plan is big and complicated, with close to 100 different programs around the county, more than half of them new.

The stated purpose of this plan is to expand access to these programs by admitting more students and reducing travel times to the programs.  And who could object to that?  There are definitely places in the county that are far from the current magnets, and there are certainly more students who could benefit from high-quality magnets than there is currently space for.

The problem is that making high-quality magnets has not been a priority in the design process.  The last time MCPS tried adding regional magnets was about 7 years ago, when they added 3 regional IB programs while keeping Richard Montgomery available to students all over the county.  It was a failure: Test scores at the regional IB programs are far below those at Richard Montgomery (in 2024, only 24% of students at the worst-performing regional IB earned a passing grade in even one subject, compared to 99% at Richard Montgomery) and all 3 are underenrolled.  Now MCPS has decided they can solve this problem by barring students from Richard Montgomery, in an attempt to force them into the regional IBs.  In addition, they want to repeat the same mistakes with the STEM and other magnets.  The best programs in the county will shrink and be accessible to only a small fraction of students, leaving everyone else with new programs of likely highly variable quality.

And if that were not enough, they want to do this revamp on a ridiculously short timeline.  The new programs are supposed to start in the 2027-28 school year, and between now and then, they need to recruit and train teachers for these nearly 100 programs, create all the curricula for the first year of the programs (they are only planning to do one year at a time), and much, much more.  The probability of a train wreck in the early years of the new system seems high.

Equity is certainly a concern driving this change.  And let me be clear: I am totally in favor of improving equity in the school system.  But I agree with Scott on this point: strong magnet programs in the public schools are pro-equity, and weakening magnet programs is anti-equity.  Magnet programs are pro-equity even if the magnets are disproportionately populated by more affluent students, which is admittedly the case in MCPS: Affluent students will always have access to enrichment outside school (and, for the most affluent, to private schools), whereas the public magnet programs are the only source of enrichment for students without those resources.

If MCPS really wants to address the difference in achievement between richer and poorer students, the way to do that is to create gifted programming starting from kindergarten.  If you wait until high school, it is unreasonable to expect even brilliant students to catch up to their also highly-capable peers who have been doing math and science camps and extracurriculars and contests and whatnot since they were little.  Some can manage it, but it is certainly not easy.  Unfortunately, MCPS’s notion of equity seems more focused on optimizing the demographic breakdown of magnet programs, which is most easily achieved by techniques which don’t improve — and usually degrade — the quality of the education provided.

So how can you help?  The Board of Education (BOE) is supposed to vote on this plan on Mar. 26.  Those of us opposed to it are hoping to sway enough members to vote to tell MCPS to investigate alternatives.  For instance, I have proposed a model with only 3 regions, which could also substantially improve access while preserving the strong existing magnets.

If you live in Montgomery County, write to BOE members telling them you oppose this change.  You can also sign a petition — there are many, but my favorite is here.

If you are an alumnus of one of the MCPS magnets, write to the BOE telling them how your education there was valuable to you and how a smaller program would not have served you as well.

If you are unconnected to Montgomery County, you can still spread the word.  If the BOE gets enough press inquiries asking about the many things that don’t add up in the MCPS proposal, perhaps they will recognize that this is a bad idea.

If you are really really interested in this topic and want to learn more: Last fall, I put together a long analysis of some of the flaws in MCPS’s plan and their claims, and of the alternative 3-region model.  You can find it here.

Remarks at UT on the Pentagon/Anthropic situation

March 10th, 2026

Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.


Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I did a two-year leave at OpenAI, in their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.

The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”

I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.

That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.

The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.

Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property and use it for whatever we want, despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.

To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by deterring other American AI companies from working on defense going forward.

That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.

But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?

The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.

Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.

Back in 2023-2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam, that he would say all the good and prosocial and responsible things even while he did whichever thing maximized revenue. I’ll leave it to you whether last week’s events are consistent with that view.

OK, fifth and final point. I remember 15-20 years ago, talking to Eliezer Yudkowsky and others terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build because it could turn against us, or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!

And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?

The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!


Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.

The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.

Zvi makes an extremely pertinent comparison:

[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.

Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.

Another key quote in Zvi’s piece, from Gregory Allen:

And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.

And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.

Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:

You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.

Two immediate responses:

  1. Even in WWII, in one of the most desperate circumstances in human history, the US government didn’t force a single scientist at gunpoint to build nuclear weapons for them. The scientists did so voluntarily, based on their own considered moral judgment at the time (even if some later came to regret their involvement).
  2. Even if I considered it “inevitable” that relatively thoughtful and principled people, like Dario Amodei, would lose control over the future to gleeful barbarians like Pete Hegseth, it still wouldn’t mean I couldn’t complain when it happened. This is still a free country, isn’t it?

The "JVG algorithm" is crap

March 7th, 2026

Sorry to interrupt your regular programming about the AI apocalypse, etc., and return to the traditional beat of this blog’s very earliest years … but I’ve now gotten multiple messages asking me to comment on something called the "JVG (Jesse–Victor–Gharabaghi) algorithm" (yes, the authors named it after themselves). This is presented as a massive improvement over Shor’s factoring algorithm, one that could (according to popular articles) allow RSA-2048 to be broken using only 5,000 physical qubits.

On inspection, the paper’s big new idea is that, in the key step of Shor’s algorithm where you compute x^r mod N in a superposition over all r’s, you instead precompute the x^r mod N’s on a classical computer and then load them all into the quantum state.

Alright kids, why does this not work? Shall we call on someone in the back of the class—like, any undergrad quantum computing class in the world? Yes class, that’s right! There are exponentially many r’s. Computing them all takes exponential time, and loading them into the quantum computer also takes exponential time. We’re out of the n^2-time frying pan but into the 2^n-time fire. This can only look like it wins on tiny numbers; on large numbers it’s hopeless.
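
To put rough numbers on the class’s answer, here’s a back-of-the-envelope sketch in Python. (The figure of 10^18 modular exponentiations per second is my invented, absurdly generous assumption about classical hardware; the conclusion doesn’t depend on it.)

    import math

    n = 2048                 # bits in an RSA-2048 modulus N
    log2_num_r = 2 * n       # Shor superposes over Q ~ N^2 ~ 2^(2n) values of r

    # Grant the classical precomputation an assumed 10^18 modular
    # exponentiations per second. Log base 10 of the seconds required:
    log10_seconds = log2_num_r * math.log10(2) - 18

    print(f"precomputing all x^r mod N: ~10^{log10_seconds:.0f} seconds")
    print("age of the universe:        ~10^17 seconds")

That’s about 10^1215 seconds for the precomputation alone, and loading 2^(2n) values into a quantum state is no cheaper, so there’s no loophole on the quantum side either.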

If you want to see people explaining the same point more politely and at greater length, try this from Hacker News or this from Postquantum.com.

Even for those who know nothing about quantum algorithms, is there anything that could’ve raised suspicion here?

  1. The paper didn’t appear on the arXiv, but someplace called “Preprints.org.” Come to think of it, I should add this to my famous Ten Signs a Claimed Mathematical Breakthrough is Wrong! It’s not that there isn’t tons of crap on the arXiv as well, but so far I’ve seen pretty much only crap on preprint repositories other than arXiv, ECCC, and IACR.
  2. Judging from a Google search, the claim seems to have gotten endlessly amplified on clickbait link-farming news sites, but ignored by reputable science news outlets—yes, even the usual quantum hypesters weren’t touching this one!

Often, when something is this bad, the merciful answer is to let it die in obscurity. In this case, I feel like there was a sufficient level of intellectual hooliganism, just total lack of concern for what’s true, that those involved deserve to have this Shtetl-Optimized post as a tiny bit of egg on their faces forever.

Moar Updatez

March 5th, 2026

To start on a somber note: those of us at UT Austin are in mourning this week for Savitha Shan, an undergrad double major here in economics and information systems, who was murdered over the weekend by an Islamist terrorist who started randomly shooting people on Sixth Street, apparently angry about the war in Iran. Two other innocents were also killed.

As it happens, these murders took place just a few hours after the end of my daughter’s bat mitzvah, and within walking distance of the venue. The bat mitzvah itself was an incredibly joyful and successful event that had consumed most of my time lately, and which I might or might not say more about—the nastier the online trolls get, the more I need to think about my family’s privacy.


Of all the many quantum computing podcasts/interviews I’ve done recently, I’m probably happiest with this one, with Yuval Boger of QuEra. It covers all the main points about where the hardware currently is, the threat to public-key cryptography, my decades-long battle against quantum applications hype, etc. etc., and there’s even an AI-created transcript that eliminates my verbal infelicities!


A month ago, I blogged about “The Time I Didn’t Meet Jeffrey Epstein” (basically, because my mom warned me not to). Now the story has been written up in Science magazine, under the clickbaity headline “Meet Three Scientists Who Said No to Epstein.” (Besides yours truly, the other two scientists are friend-of-the-blog Sean Carroll, whose not-meeting-Epstein story I’d already heard directly from him, and David Agus, whose story I hadn’t heard.)

To be clear: as I explained in my post, I never actually said “no” to Epstein. Instead, based on my mom’s advice, I simply failed to follow up with his emissary, to the point where no meeting ever happened.

Anyway, ever since Science ran this story and it started making the rounds on social media, my mom has been getting congratulatory messages from friends of hers who saw it!


I’ve been a huge fan of the philosopher-novelist Rebecca Newberger Goldstein ever since I read her celebrated debut work, The Mind-Body Problem, back in 2005. Getting to know Rebecca and her husband, Steven Pinker, was a highlight of my last years at MIT. So I’m thrilled that Rebecca will be visiting UT Austin next week to give a talk on Spinoza, related to her latest book The Mattering Instinct (which I’m reading right now), and hosted by me and my colleague Galen Strawson in UT’s philosophy department. More info is in the poster below. If you’re in Austin, I hope to see you there!


The 88-year-old Donald Knuth has published a 5-page document about how Claude was able to solve a tricky graph theory problem that arose while he was working on the latest volume of The Art of Computer Programming—a series that Knuth is still writing after half a century. As you’d expect from Knuth, the document is almost entirely about the graph theory problem itself and Claude’s solution to it, eschewing broader questions about the nature of machine intelligence and how LLMs are changing life on Earth. To anyone who’s been following AI-for-math lately, the fact that Claude now can help with this sort of problem won’t come as a great shock. The virality is presumably because Knuth is such a legend that to watch him interact productively with an LLM is sort of like watching Leibniz, Babbage, or Turing do the same.


John Baez is a brilliant mathematical physicist and writer, who was blogging about science before the concept of “blogging” even existed, and from whom I’ve learned an enormous amount. But regarding John’s quest for the past 15 years — namely, to use category theory to help solve the climate crisis (!) — I always felt like the Cookie Monster would, with equal intellectual justification, say that the key to arresting climate change was for him to eat more Oreos. Then I read this Quanta article on the details of Baez’s project, and … uh … I confess it failed to change my view. Maybe someday I’ll understand why it’s better to say using category theory what I would’ve said in a 100x simpler way without category theory, but I fear that day is not today.

Anthropic: Stay strong!

February 27th, 2026

I don’t have time to write a full post right now, but hopefully this is self-explanatory.

Regardless of their broader views on the AI industry, the eventual risks from AI, or American politics, right now every person of conscience needs to stand behind Anthropic, as they stand up for their right to [checks notes] not be effectively nationalized by the Trump administration and forced to build murderbots and to help surveil American citizens. No, I wouldn’t have believed this either in a science-fiction movie, but it’s now just the straightforward reality of our world, years ahead of schedule. In particular, I call on all other AI companies, in the strongest possible terms, to do the right thing and stand behind Anthropic, in this make-or-break moment for the AI industry and the entire world.

Updatez!

February 20th, 2026
  1. The STOC’2026 accepted papers list is out. It seems to me that there’s an emperor’s bounty of amazing stuff this year. I felt especially gratified to see the paper on the determination of BusyBeaver(5) on the list, reflecting a broad view of what theory of computing is about.
  2. There’s a phenomenal profile of Henry Yuen in Quanta magazine. Henry is now one of the world leaders of quantum complexity theory, involved in breakthroughs like MIP*=RE and now pioneering the complexity theory of quantum states and unitary transformations (the main focus of the profile). I’m proud that Henry tells Quanta that he learned about the field in 2007 or 2008 from a blog called … what was it again? … Shtetl-Optimized? I’m also proud that I got to help mentor Henry when he was a PhD student of my wife Dana Moshkovitz at MIT. Before I read this Quanta profile, I didn’t even know the backstory about Henry’s parents surviving and fleeing the Cambodian genocide, or about Henry growing up working in his parents’ restaurant. Henry never brought any of that up!
  3. See Lance’s blog for an obituary of Joe Halpern, a pioneer of the branch of theoretical computer science that deals with reasoning about knowledge (e.g., the muddy children puzzle), who sadly passed away last week. I knew Prof. Halpern a bit when I was an undergrad at Cornell. He was a huge presence in the Cornell CS department who’ll be sorely missed.
  4. UT Austin has announced the formation of a School of Computing, which will bring together the CS department (where I work) with statistics, data science, and several other departments. Many of UT’s peer institutions have recently done the same. Naturally, I’m excited for what this says about the expanded role of computing at UT going forward. We’ll be looking to hire even more new faculty than we were before!
  5. When I glanced at the Chronicle of Higher Education to see what was new, I learned that researchers at OpenAI had proposed a technical solution, called “watermarking,” that might help tackle the crisis of students relying on AI to write all their papers … but that OpenAI had declined to deploy that solution. The piece strongly advocates a legislative mandate in favor of watermarking LLM outputs, and addresses some of the main counterarguments to that position.
  6. For those who can’t get enough podcasts of me, here are the ones I’ve done recently. Quantum: Science vs. Mythology on the Peggy Smedley Show. AI Alignment, Complexity Theory, and the Computability of Physics, on Alexander Chin’s Philosophy Podcast. And last but not least, What Is Quantum Computing? on the Robinson Erhardt Podcast.
  7. Also, here’s an article that quotes me, entitled "Bitcoin needs a quantum upgrade. So why isn’t it happening?" And here’s a piece that interviews me in Investor’s Business Daily, entitled "Is quantum computing the next big tech shift?" (I have no say over these titles.)

On reducing the cost of breaking RSA-2048 to 100,000 physical qubits

February 15th, 2026

So, a group based in Sydney, Australia, has put out a preprint with a new estimate of the resource requirements for Shor’s algorithm, claiming that if you use LDPC codes rather than the surface code, you should be able to break RSA-2048 with fewer than 100,000 physical qubits, an order-of-magnitude improvement over the previous estimate by friend-of-the-blog Craig Gidney. I’ve now gotten sufficiently many inquiries about it that it’s passed the threshold of blog-necessity.

A few quick remarks, and then we can discuss more in the comments section:

  • Yes, this is serious work. The claim seems entirely plausible to me, although it would be an understatement to say that I haven’t verified the details. The main worry I’d have is simply that LDPC codes are harder to engineer than the surface code (especially for superconducting qubits, less so for trapped-ion), because you need wildly nonlocal measurements of the error syndromes. Experts (including Gidney himself, if he likes!) should feel free to opine in the comments.
  • I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno. I, for one, had already “baked in” the assumption that further improvements were surely possible by using better error-correcting codes. But it’s good to figure it out explicitly.
  • On my Facebook, I mused that it might be time for the QC community to start having a conversation about whether work like this should still be openly published—a concern that my friends on the engineering side of QC have expressed to me. I got strong pushback from cryptographer and longtime friend Nadia Heninger, who told me that the crypto community has already had this conversation for decades, and has come down strongly on the side of open publication, albeit with "responsible disclosure" waiting periods, which are often 90 days. While the stakes would surely be unusually high with a full break of RSA-2048, Nadia didn’t see that the basic principles were any different. Nadia’s arguments updated me in the direction of saying that groups with further improvements to the resource requirements for Shor’s algorithm should probably just go ahead and disclose what they’ve learned, and the crypto community will have their backs, having learned over the decades that this is the right thing to do. Certainly, any advantage that such disclosure would give to hackers (who could take the new Shor circuits and simply submit them to the increasingly powerful QCs that will gradually come online via cloud services) needs to be balanced against the loud, clear, open warning it would give the world to migrate faster to quantum-resistant encryption.
  • I’m told that these days, the biggest practical game is breaking elliptic curve cryptography, not breaking Diffie-Hellman or RSA. Somewhat ironically, elliptic curve crypto is likely to fall to quantum computers a bit before RSA and Diffie-Hellman do, because ECC’s "better security" (against classical attacks, that is) led people to use 256-bit keys rather than 2,048-bit keys, and Shor’s algorithm mostly just cares about the key size (see the toy qubit-count comparison after this list).
  • In the acknowledgments of the paper, I’m thanked for "thoughtful feedback on the title." Indeed, their original title was about "breaking RSA-2048" with 100,000 physical qubits. When they sent me a draft, I pointed out to them that they needed to change it, since journalists would predictably misinterpret it to mean that they’d already done it, rather than simply saying that it could be done.
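
As a postscript to the elliptic-curve bullet above, here’s a toy Python comparison of logical-qubit counts. The scalings are my assumed, textbook-style ballparks (roughly 2n logical qubits to factor an n-bit modulus, roughly 9n for discrete log on an n-bit curve), not figures from the Sydney paper:

    # Assumed ballpark scalings for Shor-type attacks (not from the paper):
    def rsa_logical_qubits(n_bits):
        return 2 * n_bits + 3         # Beauregard-style factoring circuit

    def ecc_logical_qubits(n_bits):
        return 9 * n_bits + 10        # Roetteler-et-al.-style, log terms dropped

    print("RSA-2048 factoring:      ", rsa_logical_qubits(2048))  # ~4099
    print("256-bit ECC discrete log:", ecc_logical_qubits(256))   # ~2314

On these assumed numbers, the 256-bit curve is already the smaller target at the logical level, which is the sense in which ECC’s shorter keys make it likely to fall first.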

“My Optimistic Vision for 2050”

February 12th, 2026

The following are prepared remarks that I delivered by Zoom to a student group at my old stomping-grounds of MIT, and which I thought might interest others (even though much of it will be familiar to Shtetl-Optimized regulars). The students asked me to share my “optimistic vision” for the year 2050, so I did my best to oblige. A freewheeling discussion then followed, as a different freewheeling discussion can now follow in the comments section.


I was asked to share my optimistic vision for the future. The trouble is, optimistic visions for the future are not really my shtick!

It’s not that I’m a miserable, depressed person—I only sometimes am! It’s just that, on a local level, I try to solve the problems in front of me, which have often been problems in computational complexity or quantum computing theory.

And then, on a global level, I worry about the terrifying problems of the world, such as climate change, nuclear war, and of course the resurgence of populist, authoritarian strongmen who’ve turned their backs on the Enlightenment and appeal to the basest instincts of humanity. I won’t name any names.

So then my optimistic vision is simply that we survive all this—"we" meaning the human race, but also meaning communities that I personally care about, like Americans, academics, scientists, and my extended family. We survive all of it so that we can reach the next crisis, the one that we don’t even know about yet.


But I get the sense that you wanted more optimism than that! Since I’ve spent 27 years working in quantum computing, the easiest thing for me to do would be to spin an optimistic story about how QC is going to make our lives so much better in 2050, by, I dunno, solving machine learning and optimization problems much faster, curing cancer, fixing global warming, whatever.

The good news is that there has been spectacular progress over the past couple years toward actually building a scalable QC. We now have two-qubit gates with 99.9% accuracy, close to the threshold where quantum error-correction becomes a net win. We can now do condensed-matter physics simulations that give us numbers that we don’t know how to get classically. I think it’s fair to say that all the key ideas and hardware building blocks for a fault-tolerant quantum computer are now in place, and what remains is “merely” the staggeringly hard engineering problem, which might take a few years, or a decade or more, but should eventually be solved.
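
(To give a rough sense of why that threshold matters, using the standard surface-code heuristic with ballpark constants that I’m assuming rather than measuring: the logical error rate scales like p_L ≈ 0.1·(p/p_th)^((d+1)/2), where p is the physical error rate, p_th ≈ 1% is the threshold, and d is the code distance. At p = 10^-3, that is, 99.9% gate accuracy, each increase of d by 2 buys another factor of ~10, so a distance-25 code would push logical errors down to roughly 10^-14.)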

The trouble for the optimistic vision is that the applications, where quantum algorithms outperform classical ones, have stubbornly remained pretty specialized. In fact, the two biggest ones remain the two that we knew about in the 1990s:

  1. simulation of quantum physics and chemistry themselves, and
  2. breaking existing public-key encryption.

Quantum simulation could help with designing better batteries, or solar cells, or high-temperature superconductors, or other materials, but the road from improved understanding to practical value is long and uncertain. Meanwhile, breaking public-key cryptography could help various spy agencies and hackers and criminal syndicates, but it doesn’t obviously help the world.

The quantum speedups that we know outside those two categories—for example, for optimization and machine learning—tend to be either modest or specialized or speculative.

Honestly, the application of QC that excites me the most, by far, is just disproving all the people who said QC was impossible!

So much for QC then.


And so we come to the elephant in the room—the elephant in pretty much every room nowadays—which is AI. AI has now reached a place that exceeds the imaginations of many of the science-fiction writers of generations past—excelling not only at writing code and solving math competition problems but at depth of emotional understanding. Many of my friends are terrified of where this is leading us—and not in some remote future but in 5 or 10 or 20 years. I think they’re probably correct to be terrified. There’s an enormous range of possible outcomes on the table, including ones where the new superintelligences that we bring into being treat humans basically as humans treated the dodo bird, or the earlier hominids that used to share the earth with us.

But, within this range of outcomes, I think there are also some extremely good ones. Look, for millennia, people have prayed to God or gods for help, life, health, longevity, freedom, justice—and for millennia, God has famously been pretty slow to answer their prayers. A superintelligence that was aligned with human values would be nothing less than a God who did answer, who did deliver all those things, because we had created it to do so. Or for religious people, perhaps such an AI would be the means by which the old God was finally able to deliver all those things into the temporal world. These are the stakes here.

To switch metaphors, people sometimes describe the positive AI-enabled future as “luxury space communism.” AI would take care of all of our material needs, leaving us to seek value in our lives through family, friendships, competition, hobbies, humor, art, entertainment, or exploration. The super-AI would give us the freedom to pursue all those things, but would not give us the freedom to harm each other, to curtail each others’ freedoms, or to build a bad AI capable of overthrowing it. The super-AI would be a singleton, a monotheistic God or its emissary on earth.

Many people say that something would still be missing from this future. After all, we humans would no longer really be needed for anything—for building or advancing or defending civilization. To put a personal fine point on it, my students and colleagues and I wouldn’t be needed anymore to discover new scientific truths or to write about them. That would all be the AI’s job.

I agree that something would be lost here. But on the other hand, what fraction of us are needed right now for these things? Most humans already derive the meaning in their lives from family and community and enjoying art and music and food and things like that. So maybe the remaining fraction of us should just get over ourselves! On the whole, while this might not be the best future imaginable, I would accept it in a heartbeat given the realistic alternatives on offer. Thanks for listening.

Nate Soares visiting UT Austin tomorrow!

February 9th, 2026

This is just a quick announcement that I’ll be hosting Nate Soares—who coauthored the self-explanatorily titled If Anyone Builds It, Everyone Dies with Eliezer Yudkowsky—tomorrow (Tuesday) at 5PM at UT Austin, for a brief talk followed by what I’m sure will be an extremely lively Q&A about his book. Anyone in the Austin area is welcome to join us.

Luca Trevisan Award for Expository Work

February 6th, 2026

Friend-of-the-blog Salil Vadhan has asked me to share the following.


The Trevisan Award for Expository Work is a new SIGACT award created in memory of Luca Trevisan (1971-2024), with a nomination deadline of April 10, 2026.

The award is intended to promote and recognize high-impact work expositing ideas and results from the Theory of Computation. The exposition can have various target audiences, e.g. people in this field, people in adjacent or remote academic fields, as well as the general public. The form of exposition can vary, and can include books, surveys, lectures, course materials, video, audio (e.g. podcasts), blogs and other media products. The award may be given to a single piece of work or a series produced over time. The award may be given to an individual, or a small group who together produced this expository work.

The awardee will receive USD$2000 (to be divided among the awardees if multiple), as well as travel support if needed to attend STOC, where the award will be presented. STOC’2026 is June 22-26 in Salt Lake City, Utah.

The endowment for this prize was initiated by a gift from Avi Wigderson, drawing on his Turing Award, and has been subsequently augmented by other individuals.

For more details see here.