“My Optimistic Vision for 2050”

The following are prepared remarks that I delivered by Zoom to a student group at my old stomping grounds of MIT, and which I thought might interest others (even though much of it will be familiar to Shtetl-Optimized regulars). The students asked me to share my “optimistic vision” for the year 2050, so I did my best to oblige. A freewheeling discussion then followed, as a different freewheeling discussion can now follow in the comments section.


I was asked to share my optimistic vision for the future. The trouble is, optimistic visions for the future are not really my shtick!

It’s not that I’m a miserable, depressed person—I only sometimes am! It’s just that, on a local level, I try to solve the problems in front of me, which have often been problems in computational complexity or quantum computing theory.

And then, on a global level, I worry about the terrifying problems of the world, such as climate change, nuclear war, and of course the resurgence of populist, authoritarian strongmen who’ve turned their backs on the Enlightenment and appeal to the basest instincts of humanity. I won’t name any names.

So then my optimistic vision is simply that we survive all this—“we” meaning the human race, but also meaning communities that I personally care about, like Americans, academics, scientists, and my extended family. We survive all of it so that we can reach the next crisis, the one we don’t even know about yet.


But I get the sense that you wanted more optimism than that! Since I’ve spent 27 years working in quantum computing, the easiest thing for me to do would be to spin an optimistic story about how QC is going to make our lives so much better in 2050, by, I dunno, solving machine learning and optimization problems much faster, curing cancer, fixing global warming, whatever.

The good news is that there has been spectacular progress over the past couple of years toward actually building a scalable QC. We now have two-qubit gates with 99.9% accuracy, close to the threshold where quantum error correction becomes a net win. We can now do condensed-matter physics simulations that give us numbers that we don’t know how to get classically. I think it’s fair to say that all the key ideas and hardware building blocks for a fault-tolerant quantum computer are now in place, and what remains is “merely” the staggeringly hard engineering problem, which might take a few years, or a decade or more, but should eventually be solved.

The trouble for the optimistic vision is that the applications, where quantum algorithms outperform classical ones, have stubbornly remained pretty specialized. In fact, the two biggest ones remain the two that we knew about in the 1990s:

  1. simulation of quantum physics and chemistry themselves, and
  2. breaking existing public-key encryption.

Quantum simulation could help with designing better batteries, or solar cells, or high-temperature superconductors, or other materials, but the road from improved understanding to practical value is long and uncertain. Meanwhile, breaking public-key cryptography could help various spy agencies and hackers and criminal syndicates, but it doesn’t obviously help the world.

The quantum speedups that we know of outside those two categories—for example, for optimization and machine learning—tend to be either modest or specialized or speculative.

Honestly, the application of QC that excites me the most, by far, is just disproving all the people who said QC was impossible!

So much for QC then.


And so we come to the elephant in the room—the elephant in pretty much every room nowadays—which is AI. AI has now reached a place that exceeds the imaginations of many of the science-fiction writers of generations past—excelling not only at writing code and solving math competition problems, but also in its depth of emotional understanding. Many of my friends are terrified of where this is leading us—and not in some remote future but in 5 or 10 or 20 years. I think they’re probably correct to be terrified. There’s an enormous range of possible outcomes on the table, including ones where the new superintelligences that we bring into being treat humans basically as humans treated the dodo bird, or the earlier hominids that used to share the earth with us.

But, within this range of outcomes, I think there are also some extremely good ones. Look, for millennia, people have prayed to God or gods for help, life, health, longevity, freedom, justice—and for millennia, God has famously been pretty slow to answer their prayers. A superintelligence that was aligned with human values would be nothing less than a God who did answer, who did deliver all those things, because we had created it to do so. Or for religious people, perhaps such an AI would be the means by which the old God was finally able to deliver all those things into the temporal world. These are the stakes here.

To switch metaphors, people sometimes describe the positive AI-enabled future as “luxury space communism.” AI would take care of all of our material needs, leaving us to seek value in our lives through family, friendships, competition, hobbies, humor, art, entertainment, or exploration. The super-AI would give us the freedom to pursue all those things, but would not give us the freedom to harm each other, to curtail each other’s freedoms, or to build a bad AI capable of overthrowing it. The super-AI would be a singleton, a monotheistic God or its emissary on earth.

Many people say that something would still be missing from this future. After all, we humans would no longer really be needed for anything—for building or advancing or defending civilization. To put a personal fine point on it, my students and colleagues and I wouldn’t be needed anymore to discover new scientific truths or to write about them. That would all be the AI’s job.

I agree that something would be lost here. But on the other hand, what fraction of us are needed right now for these things? Most humans already derive meaning in their lives from family and community and enjoying art and music and food and things like that. So maybe the remaining fraction of us should just get over ourselves! On the whole, while this might not be the best future imaginable, I would accept it in a heartbeat given the realistic alternatives on offer. Thanks for listening.

