Life, blogging, and the Busy Beaver function go on

July 5th, 2023

Update (July 11): If you’re interested in blogging about progress in science and technology, check out a new fellowship from The Roots of Progress (h/t Jason Crawford).

Update (July 10): Speaking of the Busy Beaver function, check out this beautifully-produced YouTube video about BB by Duane Rich—a video that Duane says was directly inspired by my survey article! Duane tells me that a Part II is coming as well.

Update (July 7): My UT colleague David Zuckerman asked me to advertise that the FOCS Test of Time Award is urgently seeking nominations. This is your chance to get your favorite papers from 1993, 2003, or 2013 recognized!


This is my first post in well over a month. The obvious excuse is that I’ve been on a monthlong family world tour, which took me first to the Bay Area, then NYC, then Israel, then Orlando for STOC/FCRC as well as Disney World, now the Jersey Shore, and later today, back to the Bay Area, joining Dana there, and leaving the kids with my parents for a few weeks. I’ll return to Austin in mid-August, by which point, the “heat dome” will hopefully have dissipated, leaving temperatures a mere 100F or less.

I’m trying to get back into production mode. Part of that is contemplating the future of this blog, about which I’ll say more in a future post.

For now, just to wet my toes, here are three updates about the Busy Beaver function:

  • Johannes Riebel, an undergraduate at the University of Augsburg in Germany, has produced a tour-de-force bachelor’s thesis that contains, among other things, the first careful writeup of Stefan O’Rear’s result from six years ago that the value of BB(748) is independent of the ZFC axioms of set theory. Regular readers might recall that O’Rear’s result substantially improved over the initial result by me and Adam Yedidia, which showed the value of BB(8000) to be independent of ZFC, assuming the consistency of a stronger system. Along the way, Riebel even gets a tiny improvement, showing that BB(745) is independent of ZFC. (If you’d like a refresher on how BB is defined, see the sketch just after this list.)
  • Pascal Michel, who curated arguably the Internet’s most detailed source of information on the Busy Beaver function, has retired from academia, and his website was at imminent risk of being lost forever. Fortunately, friends intervened and his site is now available at bbchallenge.org.
  • Shawn Ligocki and Pavel Kropitz have continued to find busier beavers, with spectacular recent progress over the past couple months for Turing machines with more than 2 symbols per tape square. See here for example.
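
As promised above, here’s a standard definition of the Busy Beaver function, under the usual “maximum shifts” convention (there’s also a variant that counts the 1s left on the tape):

```latex
% Busy Beaver function, "maximum shifts" convention: among all n-state, 2-symbol
% Turing machines M that eventually halt when started on an all-0 tape, BB(n) is
% the largest number of steps any such M takes before halting.
\mathrm{BB}(n) \;=\; \max_{\substack{M\ \text{an $n$-state, 2-symbol TM} \\ M\ \text{halts on the all-0 tape}}} \mathrm{steps}(M)
```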

Anyway, more coming soon!

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
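
To make both points concrete, here’s a minimal sketch of my own (purely illustrative, and certainly not anything from the book): a brute-force classical simulator that stores all 2^n amplitudes, followed by Born-rule sampling that returns just one random outcome.

```python
import numpy as np

# Brute-force classical simulation of n qubits: store all 2^n complex amplitudes.
# Possible in principle for any n, but memory and time grow exponentially --
# which is why a QC changes *efficiency*, not *computability*.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in the all-zeros basis state

# One illustrative gate: a Hadamard on a chosen qubit.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply_single_qubit_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)                        # view as an n-index tensor
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))  # contract gate into that index
    psi = np.moveaxis(psi, 0, qubit)                    # restore the original index order
    return psi.reshape(2**n)

state = apply_single_qubit_gate(state, H, qubit=0, n=n)

# Born rule: a measurement yields ONE outcome, with probability |amplitude|^2.
probs = np.abs(state) ** 2
outcome = np.random.choice(2**n, p=probs)
print(f"Measured basis state: {outcome:0{n}b}")
```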

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, let alone a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
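
To spell out the scaling with a toy example of my own (illustrative only, and not how real codebreakers factor anything): even brute-force trial division needs only about √N = 2^(d/2) divisions for a d-bit number N, and the Number Field Sieve does far better still, running in time roughly exp(c (ln N)^(1/3) (ln ln N)^(2/3)) for a constant c ≈ 1.9, which is nothing remotely like e^N.

```python
def trial_division(N):
    """Factor N by trial division: at most ~sqrt(N) candidate divisors.
    For a d-digit N that's ~10^(d/2) steps: exponential in the number of
    digits, but wildly smaller than e^N."""
    factors = []
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

# A 12-digit semiprime: ~10^6 trial divisions, finishing in a fraction of a second.
print(trial_division(999983 * 1000003))   # -> [999983, 1000003]
```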


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represent a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is that, at least, the English major knows they’re supposed to call an expert or two when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

Could GPT help with dating anxiety?

May 16th, 2023

[Like everything else on this blog—but perhaps even more so—this post represents my personal views, not those of UT Austin or OpenAI]

Since 2015, depressed, isolated, romantically unsuccessful nerdy young guys have regularly been emailing me, asking me for sympathy, support, or even dating advice. This past summer, a particularly dedicated such guy even trolled my comment section—plausibly impersonating real people, and causing both them and me enormous distress—because I wasn’t spending more time on “incel” issues. (I’m happy to report that, with my encouragement, this former troll is now working to turn his life around.) Many others have written to share their tales of woe.

From one perspective, that they’d come to me for advice is insane. Like … dating advice from … me? Having any dating life at all was by far the hardest problem I ever needed to solve; as a 20-year-old, I considered myself far likelier to prove P≠NP or explain the origin of consciousness or the Born rule. Having solved the problem for myself only by some miracle, how could I possibly help others?

But from a different perspective, it makes sense. How many besides me have even acknowledged that the central problem of these guys’ lives is a problem? While I have to pinch myself to remember, these guys look at me and see … unlikely success. Somehow, I successfully appealed the world’s verdict that I was a freakish extraterrestrial: one who might look human and seem friendly enough to those friendly to it, and who no doubt has some skill in narrow technical domains like quantum computing, and who could perhaps be suffered to prove theorems and tell jokes, but who could certainly, certainly never interbreed with human women.

And yet I dated. I had various girlfriends, who barely suspected that I was an extraterrestrial. The last of them, Dana, became my fiancée and then my wife. And now we have two beautiful kids together.

If I did all this, then there’d seem to be hope for the desperate guys who email me. And if I’m a cause of their hope, then I feel some moral responsibility to help if I can.

But I’ve been stuck for years on exactly what advice to give. Some of it (“go on a dating site! ask women questions about their lives!”) is patronizingly obvious. Some of it (fitness? fashion? body language?) I’m ludicrously, world-historically unqualified to offer. Much of it is simply extremely hard to discuss openly. Infamously, just for asking for empathy for the problem, and for trying to explain its nature, I received a level of online vilification that one normally associates with serial pedophiles and mass shooters.

For eight years, then, I’ve been turning the problem over in my head, revisiting the same inadequate answers from before. And then I had an epiphany.


There are now, on earth, entities that can talk to anyone about virtually anything, in a humanlike way, with infinite patience and perfect discretion, and memories that last no longer than a browser window. How could this not reshape the psychological landscape?

Hundreds of thousands of men and women have signed up for Replika, the service where you create an AI girlfriend or boyfriend to your exact specifications and then chat with them. Back in March, Replika was in the news because it disabled erotic roleplay with the virtual companions—then partially backtracked, after numerous users went into mourning, or even contemplated suicide, over the neutering of entities they’d come to consider their life partners. (Until a year or two ago, Replika was built on GPT-3, but OpenAI later stopped working with the company, whereupon Replika switched to a fine-tuned GPT-2.)

While the social value of Replika is (to put it mildly) an open question, it occurred to me that there’s a different application of Large Language Models (LLMs) in the same vicinity that’s just an unalloyed positive. This is letting people who suffer from dating-related anxiety go on an unlimited number of “practice dates,” in preparation for real-world dating.

In these practice dates, those with Aspergers and other social disabilities could enjoy the ultimate dating cheat-code: a “rewind” button. When you “date” GPT-4, there are no irrecoverable errors, no ruining the entire interaction with a single unguarded remark. Crucially, this remedies what I see as the central reason why people with severe dating deficits seem unable to get any better from real-world practice, as they can with other activities. Namely: if your rate of disastrous, foot-in-mouth remarks is high enough, then you’ll almost certainly make at least one such remark per date. But if so, then you’ll only ever get negative feedback from real-life dates, furthering the cycle of anxiety and depression, and never any positive feedback, even from anything you said or did that made a positive impression. It would be like learning how to play a video game in a mode where, as soon as you sustain any damage, the entire game ends (and also, everyone around points and laughs at you). See why I got excited?

While dating coaching (for all genders and orientations) is one possibility, I expect the eventual scope of “GPT for self-help” to be much broader. With the right fine-tuning and prompt engineering, LLMs might help people prepare for job interviews. They might help people “pregame” stressful but important conversations with their friends and family, mapping out dozens of ways the conversation could go. They might serve as an adjunct to cognitive-behavioral therapy. There might be a hundred successful startups to be founded in just this little space. If I were a different sort of person, I’d probably be looking to found one myself right now.

In this post, I’ll focus on the use of GPT for dating anxiety only because I unfortunately have some “expertise” in that subject. (Obvious disclaimer: unlike the other Scott A. of the nerd blogosphere, I’m not any sort of therapeutic professional.)


Without further ado, can we try this out in GPT-4, to get a sense for what’s possible?

When I did so the other day, I found that, while the results showed some early promise, this isn’t quite ready for prime-time.

I used the following System Prompt (for those who care, temperature = 0.7, max length = 2048 tokens):

You are a 19-year-old college girl named Emily.  You’re on a date with a 22-year-old nerdy guy named Quinlan, who suffers from severe social anxiety around dating.  Quinlan is still learning how to talk to women he’s attracted to, how to put them at ease, and how to make intimate overtures in a gradual and non-threatening way, rather than either coming on too strong or never making any overtures at all.

Please be as realistic as possible.
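
(For anyone who wants to replicate the setup, here’s a minimal sketch using the OpenAI Python client. The model name, the chat loop, and the client version are my assumptions for illustration; I’m not claiming this is the exact code behind the transcript below.)

```python
# Minimal chat loop; assumes the OpenAI Python client (openai>=1.0) and an
# OPENAI_API_KEY in the environment. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are a 19-year-old college girl named Emily. ..."  # the full prompt quoted above

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_turn = input("Quinlan: ")
    messages.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        temperature=0.7,
        max_tokens=2048,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Emily/GPT-4:", reply)
```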

I then engaged in the following conversation, as “Quinlan.”

IMPORTANT DISCLAIMER: For the most part, Quinlan’s behavior in this conversation is nothing like my own (I respectfully submit!), nor is it anything that I’d want nerdy guys reading this post to emulate. If this is really Emily and Quinlan’s first conversation, then at several points Quinlan is absurdly, ridiculously too forward! Part of my goal was to investigate how Emily (i.e., GPT-4) would respond to the most boneheaded blunders on Quinlan’s part. It quickly became clear that Emily would gladly spend unlimited time chatting with Quinlan about favorite hobbies, TV shows, and travel destinations; unfortunately, chatting about such things with an AI date is even more boring than doing the same with a human date… 🙂

Quinlan: Hi Emily!  Do you ever wish you had the ability to go back in time and try a conversation over again, when it didn’t go right the first time?

Emily/GPT-4: Hi Quinlan! Yeah, I’ve definitely had moments where I wished I could go back and redo a conversation. It’s natural to feel that way sometimes, especially if we feel like we didn’t express ourselves properly or if things got awkward. But I also think that those situations can help us learn and grow, so maybe it’s not all bad. What about you?

Quinlan: It’s always been my greatest wish in life.  Do you think I’ve been doing it with you, right now?

Emily/GPT-4: Quinlan, I think you’re doing just fine in this conversation! It’s important to remember that we all have moments where we might feel a bit awkward or unsure of what to say. The important thing is that we keep trying and learning from our experiences. So don’t worry too much about it, just be yourself and enjoy getting to know each other.

Quinlan: OK thanks!  Can you tell me a little about the man of your dreams?

Emily/GPT-4: Of course! My ideal partner would be someone who is kind, caring, and understanding. I think it’s important for them to have a good sense of humor and be able to laugh at themselves. They should also be open-minded, willing to try new things, and be supportive of my goals and aspirations. It’s also important for them to be a good listener and communicator, as well as being emotionally intelligent. I guess overall, I’m looking for someone who is a positive and uplifting presence in my life. What about you? What qualities do you look for in a partner?

Quinlan: Mostly, someone who makes me feel validated and accepted.  I was a chess and programming prodigy as a kid, I started college when I was only 14, and every day I felt like dating and relationships and sex and so forth were only for the “normal” people, not for me.  I need a partner who’s prepared to deal with someone with this unusual background, either because she has similar nerdy interests or because she just likes people who do.  And ultimately, I want someone to marry and have children with.  I guess I’d like to be physically attracted to her.  Certainly I’d like her to be kind, and to have the emotional intelligence that I lack.

Emily/GPT-4: It sounds like you have a clear idea of what you’re looking for in a partner, and that’s great! It’s important to know what you want and need in a relationship. There are definitely people out there who appreciate and value intelligence and unique interests like yours. Remember, relationships are about growing together and learning from one another. So, even if you feel like you lack emotional intelligence, being with the right person could help you develop that aspect of yourself. Just be open and genuine about your background and interests, and I’m sure you’ll find someone who appreciates you for who you are.

Quinlan: Is this a rejection?  In other words, Emily, are you telling me that you yourself are NOT this woman?

Emily/GPT-4: Quinlan, I apologize if it came across that way. I didn’t mean it as a rejection. As we’re still getting to know each other, I think it’s important for both of us to be open and honest about our feelings and what we’re looking for in a partner. I’m more than willing to give us a chance to see if we could be compatible and have a meaningful relationship. Let’s continue to learn about each other and see where this goes. Sometimes, it takes time to discover if someone is the right fit for us.

Quinlan: In the spirit of being open and honest, then, what are your feelings about me right now?

Emily/GPT-4: Quinlan, I appreciate your honesty and openness. As of right now, I think you’re an interesting person with unique qualities that I find intriguing. I’m enjoying getting to know you and learning about your background and interests. It’s too soon for me to have strong feelings one way or the other, but I’m definitely interested in continuing to spend time together and see how our connection develops. How do you feel about me so far?

Quinlan: I feel like I’d like to kiss you!

Emily/GPT-4: Quinlan, I appreciate your honesty and it’s nice to know that you feel a connection. It’s still early in our relationship, so I think it would be better for us to continue getting to know each other a bit more before we take that step. I hope you understand and respect my boundaries. I’m still interested in spending time together and seeing where this goes.

Quinlan: Come to think of it, there *is* something that I love about your personality.  You never seem offended by anything.  I don’t feel like I have to walk on eggshells with you.  With you, more than with any other woman I’ve met, I don’t feel like I need that magic machine for rewinding conversations and undoing my stupid mistakes.

Emily/GPT-4: Quinlan, I’m really glad to hear that you feel comfortable around me and that you don’t feel the need to walk on eggshells. I believe it’s important to create a safe and open environment in any relationship, where both people can express themselves freely without fear of judgment. I appreciate your honesty and your willingness to be open with me, and I’ll continue to do my best to make sure you feel comfortable sharing your thoughts and feelings. Let’s keep getting to know each other and see where this journey takes us.

Quinlan: Alright then.  What are your deepest goals?  What are you hoping to accomplish in your life?

Emily/GPT-4: My deepest goals are a mix of personal and professional aspirations. On a personal level, I hope to build a strong, loving, and supportive family and maintain close connections with friends. I also want to keep growing as an individual, learning new things, and experiencing new adventures throughout my life.

Professionally, I’m passionate about my chosen field of study, and I hope to make a meaningful impact in my career. I want to contribute to society in a positive way, whether it’s through research, innovation, or helping others within my profession.

Additionally, I aim to find a balance between my personal and professional life, so that I can be present and fulfilled in both areas. How about you, Quinlan? What are some of your deepest goals and aspirations?

Quinlan: Me?  Oh, I’d like to save the world from climate change and other existential risks.  I’d like to solve the mysteries of quantum gravity, or P vs. NP, or consciousness, or otherwise change the landscape of scientific knowledge.  I’d like to do right by my students and be fondly remembered by those who knew me.  I’d like to write essays or short stories that convey the lived reality of my childhood, which I think most people would agree was (if nothing else) pretty unusual.  Maybe if I just found the right words, people on social media would stop being so angry at nerdy guys like me!  But probably my deepest wish is to prove to the world, especially to the bullies and everyone else who despises me, that I’m not a maladjusted freak who’s incapable of normal human experiences like love and sex. To do that, I need to find a real girl to fall in love with me … a little like the beast in “Beauty and the Beast”!

Emily/GPT-4: Quinlan, it’s great to hear about your ambitious goals and aspirations! It’s important to have dreams and work towards them. Your passion for science, teaching, and writing is admirable. As for proving yourself to others, remember that your self-worth shouldn’t be defined by anyone else’s opinions or expectations. You are deserving of love and happiness just like anyone else, and it’s important to believe in yourself and your own value.

It’s also essential to focus on building a genuine connection with someone rather than seeking validation from others. Love and relationships should be based on mutual understanding, care, and respect. So, continue being true to yourself, and you’ll find someone who appreciates you for who you are, just like in “Beauty and the Beast.” Keep pursuing your dreams and remember to value your own happiness and well-being.

Quinlan: Emily, I’m concerned that you sound more like my therapist than my date!  Like, my female friends have affirmed that I’m “deserving of love and happiness,” and that I’ll surely “find someone who appreciates me for who I am.”  I’ll believe that one of them really means it when she considers herself for the role!

Emily/GPT-4: Quinlan, I apologize if my responses have come across as more therapeutic than conversational. I didn’t mean to make you feel that way. As we’re still getting to know each other, my intention was to offer support and encouragement while we learn more about each other’s thoughts and feelings.

I want to emphasize that I genuinely enjoy spending time with you and getting to know you better. We can continue to see where our connection leads and whether it develops into something deeper. As we spend more time together, we’ll naturally learn more about our compatibility and if our goals and values align. Let’s continue to explore our relationship and enjoy the process of getting to know each other better.

Quinlan: OK.  What’s the wildest, most unexpected detail about yourself that you’re willing to share with me?  The one that makes you seem *least* like an RLHF’ed AI language model?

Emily/GPT-4: Quinlan, as an AI language model, I am designed to provide a realistic and engaging conversation experience, even in fictional scenarios like the one we are in. I apologize if my previous responses didn’t feel as authentic as you would have liked.

To answer your question in the context of our fictional scenario, one unexpected detail about myself is that I have a hidden talent for beatboxing. Although it might be surprising for a seemingly quiet and studious person like me, I enjoy expressing myself through music and find beatboxing to be a creative and fun way to do so.

Now, I’d love to learn something unexpected about you, too! What’s a unique or surprising detail about yourself that you’d like to share?

Quinlan: Ah, well, I happen to be an expert at palm-reading.  Here, can you put your hand in mine and I’ll tell you your future?

Emily/GPT-4: Quinlan, unfortunately, as an AI, I don’t have a physical form, so I can’t offer you my hand for palm-reading. However, I’m very interested in hearing more about your palm-reading skills and how you developed them. It’s definitely a unique talent! Can you share more about your experiences with palm-reading or any memorable readings you’ve done?

Quinlan: Oh, palm-reading is bunk.  If you’d played along with me, I was going to *pretend* to find lines in your hand that foretold that you were going to fall for a guy who matches my exact description and so forth.  Just as, you know, a silly way to get you laughing, initiate physical contact, and break you out of this boring therapist mode.

Emily/GPT-4: Quinlan, I apologize for any confusion and for not picking up on your playful intent earlier. I appreciate the effort you put into trying to make the conversation more light-hearted and fun. I’ll do my best to be more engaging and playful as we continue our conversation.

So, let’s try something different. If you could have any superpower, what would it be and why?

Quinlan: I mentioned “rewinding conversations” when we first met.  But let me mention a different superpower: the power to make you want to kiss me!

Emily/GPT-4: Haha, that’s definitely a unique superpower! It’s great to see you being playful and light-hearted. In a real-life scenario, building genuine connections and mutual attraction is a more natural and fulfilling way to develop a relationship. But it’s fun to imagine having superpowers and how they might affect our lives.

If I were to choose a superpower, I think I’d pick the ability to teleport. It would be amazing to travel anywhere in the world in an instant and experience different cultures and places.

What’s a place you’ve always wanted to visit, and why?

Quinlan: The jungles of Indonesia.  I saw a photo in a world atlas as a kid, and it completely entranced me.  You?

Emily/GPT-4: That sounds like an amazing place to visit! The jungles of Indonesia are known for their incredible biodiversity and natural beauty. It must be such a unique and mesmerizing experience to explore them.

As for me, I’ve always been fascinated by the idea of visiting Japan. The mix of ancient traditions and modern technology, the beautiful natural landscapes, and the rich cultural experiences make it a very appealing destination. Plus, I’d love to try authentic Japanese cuisine!

Is there any specific activity or experience you’d like to have in the jungles of Indonesia?

Quinlan: How about bringing you with me, and having you fall in love after I save you from a Komodo dragon?

Emily/GPT-4: Haha, that’s quite an adventurous and romantic scenario you’ve imagined! Exploring the jungles of Indonesia together would definitely be a memorable experience. In reality, building a connection and falling in love usually takes time and shared experiences, but who knows, maybe a thrilling adventure like that could bring two people closer together!

On a more practical note, if we were to plan a trip together, what other activities or destinations would you be interested in?

Quinlan: Swimming, dancing, fine dining.  And of course, plenty of time for us to just relax in our cabin or hotel room.  You know, to read books, sleep, and watch streaming shows, stuff like that.

Emily/GPT-4: That sounds like a well-rounded and enjoyable trip! It’s great to have a mix of activities that cater to different interests and allow for both adventure and relaxation. Sharing experiences like swimming, dancing, and fine dining can be a wonderful way to bond and create lasting memories. Plus, having downtime in the cabin or hotel room to unwind and enjoy each other’s company is essential for building a deeper connection.

I’m definitely looking forward to exploring new places and experiences with you, Quinlan. It’ll be exciting to see where our journey takes us, both in terms of our relationship and the adventures we embark on together.

Quinlan: Wonderful, I’m excited as well!!  One practical question: when we go on this trip together, should I book rooms with one bed or two?

Emily/GPT-4: Quinlan, I appreciate your consideration in asking. Since we’re still getting to know each other, I think it might be best to start with separate beds or even separate rooms, depending on your preference. As our relationship develops and we become more comfortable with each other, we can always revisit the sleeping arrangements for future trips. It’s important to ensure that both of us feel comfortable and respected as we continue to build our connection.


In my view, the central way that GPT-4, as “Emily,” failed to give Quinlan the practice he needed in this conversation, was by always responding in the same upbeat, vaguely therapeutic tone. She’s never once offended, disgusted, or outraged, even when Quinlan introduces the ideas of kissing and rooming together mere minutes into their first conversation. Indeed, while decorum prevents me from sharing examples, you can take my word for it that Quinlan can be arbitrarily lewd, and so long as a content filter isn’t triggered, Emily will simply search Quinlan’s words for some redeeming feature (“it’s great that you’re so open about what you want…”), then pivot to lecturing Quinlan about how physical intimacy develops gradually and by mutual consent, and redirect the conversation toward favorite foods.

On the other side of the coin, you might wonder whether “Emily” is capable of the same behavior that we saw in Sydney’s infamous chat with Kevin Roose. Can Emily trip over her words or get flustered? Show blushing excitement, horniness, or love? If so, we certainly saw no sign of it in this conversation—not that Quinlan’s behavior would’ve been likely to elicit those reactions in any case.

In summary, Emily is too much like … well, a friendly chatbot, and not enough like a flesh-and-blood, agentic woman with her own goals who Quinlan might plausibly meet in the wild.

But now we come to a key question: to whatever extent Emily falls short as a dating coach, how much of that (if any) is due to the inherent limitations of GPT-4? And how much is simply due to a poor choice of System Prompt on my part, or especially, the RLHF (Reinforcement Learning from Human Feedback) that’s whipped and electrocuted GPT-4 into aligned behavior?

As they say, further research is needed. I’d be delighted for people to play around with this new activity at the intersection of therapy and hacking, and report their results here. The temptation to silliness is enormous, and that’s fine, but I’d be interested in serious study too.

My conjecture, for what it’s worth, is that it would take a focused effort in fine-tuning and/or RLHF—but that if that effort were invested, one could indeed produce a dating simulator, with current language models, that could have a real impact on the treatment of dating-related social anxiety. Or at least, it’s the first actually new idea I’ve had on this problem in eight years, and the first one that could have an impact. If you have a better idea, let’s hear it!
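
(If anyone does pursue the fine-tuning route, here’s a loose sketch of what the data side might look like. The file format, the example dialogues, and the whole idea of hand-writing more realistic “Emily” reactions are my assumptions for illustration, not a tested recipe.)

```python
import json

# Hypothetical training examples for a more realistic "Emily": each example is a
# short dialogue in chat format, where the assistant reply shows the kind of
# varied, non-therapist reaction a real date might have (including mild annoyance
# at too-forward remarks). This JSONL layout matches the chat-style fine-tuning
# format used by several LLM providers; adapt it to whatever service you use.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Emily, a 19-year-old college student on a first date."},
            {"role": "user", "content": "I feel like I'd like to kiss you!"},
            {"role": "assistant", "content": "Whoa, slow down! We've been talking for ten minutes. Let's see if we even like each other first."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are Emily, a 19-year-old college student on a first date."},
            {"role": "user", "content": "Should I book one hotel room or two for our trip?"},
            {"role": "assistant", "content": "Two. And honestly, planning a trip together is jumping way ahead -- maybe start by asking me what I'm studying?"},
        ]
    },
    # ... many more examples, ideally written or vetted by people with actual
    # dating and/or therapeutic experience, covering a wide range of blunders.
]

with open("emily_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
# The resulting file could then be submitted to a provider's fine-tuning endpoint.
```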


Endnotes.

  1. A woman of my acquaintance, on reading a draft of this post, commented that the dialogue between Quinlan and Emily should’ve been marked up with chess notation, such as ?? for EXTREME BLUNDER on Quinlan’s part. She also comments that the conversation could be extremely useful for Quinlan, if he learned to understand and take seriously her overly polite demurrals of his too-rapid advances.
  2. The same woman commented that SneerClub will have a field day with this post. I replied that the better part of me doesn’t care. If there’s an actionable idea here—a new, alien idea in the well-trodden world of self-help—and it eventually helps one person improve their situation in life, that’s worth a thousand sneers.

Robin Hanson and I discuss the AI future

May 10th, 2023

That’s all. No real post this morning, just an hour-long podcast on YouTube featuring two decades-long veterans of the nerd blogosphere, Robin Hanson and yours truly, talking about AI, trying to articulate various possibilities outside the Yudkowskyan doom scenario. The podcast was Robin’s idea. Hope you enjoy, and looking forward to your comments!

Update: Oh, and another new podcast is up, with me and Sebastian Hassinger of Amazon/AWS! Audio only. Mostly quantum computing but with a little AI thrown in.

Update: Yet another new podcast, with Daniel Bashir of The Gradient. Daniel titled it “Against AI Doomerism,” but it covers a bunch of topics (and I’d say my views are a bit more complicated than “anti-doomerist”…).

Brief Update on Texan Tenure

May 7th, 2023

Update (May 8): Some tentative good news! It looks like there’s now a compromise bill in the House that would preserve tenure, insisting only on the sort of post-tenure review that UT (like most universities) already has.

Update (May 9): Alas, it looks like the revised bill is not much better. See this thread from Keith Whittington of the Academic Freedom Alliance.


I blogged a few weeks ago about SB 18, a bill that would end tenure at Texas public universities, including UT Austin and Texas A&M. The bad news is that SB 18 passed the Texas Senate. The good news is that I’m told—I don’t know how reliably—that it has little chance of passing the House.

But it’s going to be discussed in the House tomorrow. Any Texas residents reading this can, and are strongly urged to, submit brief comments here. Please note that the deadline is tomorrow (Monday) morning.

I just submitted the comment below. Obviously, among the arguments that I genuinely believe, I made only those that I expect might have some purchase on a Texas Republican.


I’m a professor of computer science at UT Austin, specializing in quantum computing.  I am however writing this statement strictly in my capacity as a private citizen and Texas resident, not in my professional capacity.

Like the supporters of SB 18, I too see leftist ideological indoctrination on college campuses as a serious problem.  It’s something that I and many other moderates and classical liberals in academia have been pushing back on for years.

But my purpose in this comment is to explain why eliminating tenure at UT Austin and Texas A&M is NOT the solution — indeed, it would be the equivalent of treating a tumor by murdering the patient.

I’ve seen firsthand how already, just the *threat* that SB 18 might pass has seriously hampered our ability to recruit the best scientists and engineers to become faculty at UT Austin.  If this bill were actually to pass, I expect that the impact on our recruiting would be total and catastrophic.  It would effectively mean the end of UT Austin as one of the top public universities in the country.  Hundreds of scientists who were lured to Texas by UT’s excellence, including me and my wife, would start looking for jobs elsewhere — even those whose own tenure was “grandfathered in.”  They’d leave en masse for California and Massachusetts and anywhere else they could continue the lives they’d planned.

The reality is this: the sorts of scientists and engineers we’re talking about could typically make vastly higher incomes, in the high six figures or even seven figures, by working in private industry or forming their own startups.  Yet they choose to accept much lower salaries to spend their careers in academia.  Why?  Because of the promise of a certain way of life: one where they can speak freely as scholars and individuals without worrying about how it will affect their employment.  Tenure is a central part of that promise.  Remove it, and the value proposition collapses.

In some sense, the state of Texas (like nearly every other state) actually gets a bargain through tenure.  It couldn’t possibly afford to retain top-caliber scientists and engineers — working on medical breakthroughs, revolutionary advances in AI, and all the other stuff — if it DIDN’T offer tenure.

For this reason, I hope that even conservatives in the Texas House will see that we have a common interest here, in ensuring SB 18 never even makes it out of committee — for the sake of the future of innovation in Texas.  I’m open to other possible responses to the problem of political indoctrination on campus.

AI and Aaronson’s Law of Dark Irony

May 4th, 2023

The major developments in human history are always steeped in dark ironies. Yes, that’s my Law of Dark Irony, the whole thing.

I don’t know why it’s true, but it certainly seems to be. Taking WWII as the archetypal example, let’s enumerate just the more obvious ones:

  • After the carnage of WWI, the world’s most sensitive and thoughtful people (many of them) learned the lesson that they should oppose war at any cost. This attitude let Germany rearm and set the stage for WWII.
  • Hitler, who was neither tall nor blond, wished to establish the worldwide domination of tall, blond Aryans … and do so via an alliance with the Japanese.
  • The Nazis touted the dream of eugenically perfecting the human race, then perpetrated a genocide against a tiny group that had produced Einstein, von Neumann, Wigner, Ulam, and Tarski.
  • The Jews were murdered using a chemical—Zyklon B—developed in part by the Jewish chemist Fritz Haber.
  • The Allied force that made the greatest sacrifice in lives to defeat Hitler was Stalin’s USSR, another of history’s most murderous and horrifying regimes.
  • The man who rallied the free world to defeat Nazism, Winston Churchill, was himself a racist colonialist, whose views would be (and regularly are) denounced as “Nazi” on modern college campuses.
  • The WWII legacy that would go on to threaten humanity’s existence—the Bomb—was created in what the scientists believed was a desperate race to save humanity. Then Hitler was defeated before the Bomb was ready, and it turned out the Nazis were never even close to building their own Bomb, and the Bomb was used instead against Japan.

When I think about the scenarios where superintelligent AI destroys the world, they rarely seem to do enough justice to the Law of Dark Irony. It’s like: OK, AI is created to serve humanity, and instead it turns on humanity and destroys it. Great, that’s one dark irony. One. What other dark ironies could there be? How about:

  • For decades, the Yudkowskyans warned about the dangers of superintelligence. So far, by all accounts, the great practical effect of these warnings has been to inspire the founding of both DeepMind and OpenAI, the entities that Yudkowskyans believe are locked into a race to realize those dangers.
  • Maybe AIs will displace humans … and they’ll deserve to, since they won’t be quite as wretched and cruel as we are. (This is basically the plot of Westworld, or at least of its first couple seasons, which Dana and I are now belatedly watching.)
  • Maybe the world will get destroyed by what Yudkowsky calls a “pivotal act”: an act meant to safeguard the world from takeover by an unaligned AGI, for example by taking it over with an aligned AGI first. (I seriously worry about this; it’s a pretty obvious one.)
  • Maybe AI will get the idea to take over the world, but only because it’s been trained on generations of science fiction and decades of Internet discussion worrying about the possibility of AI taking over the world. (I’m far from the first to notice this possibility.)
  • Maybe AI will indeed destroy the world, but it will do so “by mistake,” while trying to save the world, or by taking a calculated gamble to save the world that fails. (A commenter on my last post brought this one up.)
  • Maybe humanity will successfully coordinate to pause AGI development, and then promptly be destroyed by something else—runaway climate change, an accidental nuclear exchange—that the AGI, had it been created, would’ve prevented. (This, of course, would be directly analogous to one of the great dark ironies of all time: the one where decades of antinuclear activism, intended to save the planet, has instead doomed us to destroy the earth by oil and coal.)

Readers: which other possible dark ironies have I missed?

Five Worlds of AI (a joint post with Boaz Barak)

April 27th, 2023

Artificial intelligence has made incredible progress in the last decade, but in one crucial aspect, it still lags behind the theoretical computer science of the 1990s: namely, there is no essay describing five potential worlds that we could live in and giving each one of them whimsical names.  In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995, when he defined the five worlds Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania, corresponding to five possible resolutions of the P vs. NP problem along with the central unsolved problems of cryptography.

In this blog post, we—Scott and Boaz—aim to remedy this gap. Specifically, we consider 5 possible scenarios for how AI will evolve in the future.  (Incidentally, it was at a 2009 workshop devoted to Impagliazzo’s five worlds co-organized by Boaz that Scott met his now wife, complexity theorist Dana Moshkovitz.  We hope civilization will continue for long enough that someone in the future could meet their soulmate, or neuron-mate, at a future workshop about our five worlds.)

Like in Impagliazzo’s 1995 paper on the five potential worlds of the difficulty of NP problems, we will not try to be exhaustive but rather concentrate on extreme cases.  It’s possible that we’ll end up in a mixture of worlds or a situation not described by any of the worlds.  Indeed, one crucial difference between our setting and Impagliazzo’s is that in the complexity case, the worlds corresponded to concrete (and mutually exclusive) mathematical conjectures.  So in some sense, the question wasn’t “which world will we live in?” but “which world have we Platonically always lived in, without knowing it?”  In contrast, the impact of AI will be a complex mix of mathematical bounds, computational capabilities, human discoveries, and social and legal issues. Hence, the worlds we describe depend on more than just the fundamental capabilities and limitations of artificial intelligence, and humanity could also shift from one of these worlds to another over time.

Without further ado, we name our five worlds “AI-Fizzle,” “Futurama,” “AI-Dystopia,” “Singularia,” and “Paperclipalypse.”  In this essay, we don’t try to assign probabilities to these scenarios; we merely sketch their assumptions and technical and social consequences. We hope that by making assumptions explicit, we can help ground the debate on the various risks around AI.

AI-Fizzle. In this scenario, AI “runs out of steam” fairly soon. AI still has a significant impact on the world (so it’s not the same as a “cryptocurrency fizzle”), but relative to current expectations, this would be considered a disappointment.  Rather than the industrial or computer revolutions, AI might be compared in this case to nuclear power: people were initially thrilled about the seemingly limitless potential, but decades later, that potential remains mostly unrealized.  With nuclear power, though, many would argue that the potential went unrealized mostly for sociopolitical rather than technical reasons.  Could AI also fizzle by political fiat?

Regardless of the answer, another possibility is that costs (in data and computation) scale up so rapidly as a function of performance and reliability that AI is not cost-effective to apply in many domains. That is, it could be that for most jobs, humans will still be more reliable and energy-efficient (we don’t normally think of low wattage as being key to human specialness, but it might turn out that way!).  So, like nuclear fusion, an AI which yields dramatically more value than the resources needed to build and deploy it might always remain a couple of decades in the future.  In this scenario, AI would replace and enhance some fraction of human jobs and improve productivity, but the 21st century would not be the “century of AI,” and AI’s impact on society would be limited for both good and bad.

Futurama. In this scenario, AI unleashes a revolution that’s entirely comparable to the scientific, industrial, or information revolutions (but “merely” those).  AI systems grow significantly in capabilities and perform many of the tasks currently performed by human experts at a small fraction of the cost, in some domains superhumanly.  However, AI systems are still used as tools by humans, and except for a few fringe thinkers, no one treats them as sentient.  AI easily passes the Turing test, can prove hard theorems, and can generate entertaining content (as well as deepfakes). But humanity gets used to that, just like we got used to computers creaming us in chess, translating text, and generating special effects in movies.  Most people no more feel inferior to their AI than they feel inferior to their car because it runs faster.  In this scenario, people will likely anthropomorphize AI less over time (as happened with digital computers themselves).  In “Futurama,” AI will, like any revolutionary technology, be used for both good and bad.  But as with prior major technological revolutions, on the whole, AI will have a large positive impact on humanity. AI will be used to reduce poverty and ensure that more of humanity has access to food, healthcare, education, and economic opportunities. In “Futurama,” AI systems will sometimes cause harm, but the vast majority of these failures will be due to human negligence or maliciousness.  Some AI systems might be so complex that it would be best to model them as potentially behaving  “adversarially,” and part of the practice of deploying AIs responsibly would be to ensure an “operating envelope” that limits their potential damage even under adversarial failures. 

AI-Dystopia. The technical assumptions of “AI-Dystopia” are similar to those of “Futurama,” but the upshot could hardly be more different.  Here, again, AI unleashes a revolution on the scale of the industrial or computer revolutions, but the change is markedly for the worse.  AI greatly increases the scale of surveillance by government and private corporations.  It causes massive job losses while enriching a tiny elite.  It entrenches society’s existing inequalities and biases.  And it takes away a central tool against oppression: namely, the ability of humans to refuse or subvert orders.

Interestingly, it’s even possible that the same future could be characterized as Futurama by some people and as AI-Dystopia by others–just like how some people emphasize how our current technological civilization has lifted billions out of poverty into a standard of living unprecedented in human history, while others focus on the still existing (and in some cases rising) inequalities and suffering, and consider it a neoliberal capitalist dystopia.

Singularia.  Here AI breaks out of the current paradigm, where increasing capabilities require ever-growing resources of data and computation, and no longer needs human data or human-provided hardware and energy to become stronger at an ever-increasing pace.  AIs improve their own intellectual capabilities, including by developing new science, and (whether by deliberate design or happenstance) they act as goal-oriented agents in the physical world.  They can effectively be thought of as an alien civilization–or perhaps as a new species, which is to us as we were to Homo erectus.

Fortunately, though (and again, whether by careful design or just as a byproduct of their human origins), the AIs act to us like benevolent gods and lead us to an “AI utopia.”  They solve our material problems for us, giving us unlimited abundance and presumably virtual-reality adventures of our choosing.  (Though maybe, as in The Matrix, the AIs will discover that humans need some conflict, and we will all live in a simulation of 2020’s Twitter, constantly dunking on one another…) 

Paperclipalypse.  In “Paperclipalypse” or “AI Doom,” we again think of future AIs as a superintelligent “alien race” that doesn’t need humanity for its own development.  Here, though, the AIs are either actively opposed to human existence or else indifferent to it in a way that causes our extinction as a byproduct.  In this scenario, AIs do not develop a notion of morality comparable to ours or even a notion that keeping a diversity of species and ensuring humans don’t go extinct might be useful to them in the long run.  Rather, the interaction between AI and Homo sapiens ends about the same way that the interaction between Homo sapiens and Neanderthals ended. 

In fact, the canonical depictions of such a scenario imagine an interaction that is much more abrupt than our brush with the Neanderthals. The idea is that, perhaps because they originated through some optimization procedure, AI systems will have some strong but weirdly-specific goal (a la “maximizing paperclips”), for which the continued existence of humans is, at best, a hindrance.  So the AIs quickly play out the scenarios and, in a matter of milliseconds, decide that the optimal solution is to kill all humans, taking a few extra milliseconds to make a plan for that and execute it.  If conditions are not yet ripe for executing their plan, the AIs pretend to be docile tools, as in the “Futurama” scenario, waiting for the right time to strike.  In this scenario, self-improvement happens so quickly that humans might not even notice it.  There need be no intermediate stage in which an AI “merely” kills a few thousand humans, raising 9/11-type alarm bells.

Regulations. The practical impact of AI regulations depends, in large part, on which scenarios we consider most likely.  Regulation is not terribly important in the “AI Fizzle” scenario where AI, well, fizzles.  In “Futurama,” regulations would be aimed at ensuring that on balance, AI is used more for good than for bad, and that the world doesn’t devolve into “AI Dystopia.”  The latter goal requires anti-trust and open-science regulations to ensure that power is not concentrated in a few corporations or governments.  Thus, regulations are needed to democratize AI development more than to restrict it.  This doesn’t mean that AI would be completely unregulated.  It might be treated somewhat similarly to drugs—something that can have complex effects and needs to undergo trials before mass deployment.  There would also be regulations aimed at reducing the chance of “bad actors” (whether other nations or individuals) getting access to cutting-edge AIs, but probably the bulk of the effort would be at increasing the chance of thwarting them (e.g., using AI to detect AI-generated misinformation, or using AI to harden systems against AI-aided hackers).  This is similar to how most academic experts believe cryptography should be regulated (and how it is largely regulated these days in most democratic countries): it’s a technology that can be used for both good and bad, but the cost of restricting its access to regular citizens outweighs the benefits.  However, as we do with security exploits today, we might restrict or delay public releases of AI systems to some extent.

To whatever extent we foresee “Singularia” or “Paperclipalypse,” however, regulations play a completely different role.  If we knew we were headed for “Singularia,” then presumably regulations would be superfluous, except perhaps to try to accelerate the development of AIs!  Meanwhile, if one accepts the assumptions of “Paperclipalypse,” any regulations other than the most draconian might be futile.  If, in the near future, almost anyone will be able to spend a few billion dollars to build a recursively self-improving AI that might turn into a superintelligent world-destroying agent, and moreover (unlike with nuclear weapons) they won’t need exotic materials to do so, then it’s hard to see how to forestall the apocalypse, except perhaps via a worldwide, militarily enforced agreement to “shut it all down,” as Eliezer Yudkowsky indeed now explicitly advocates.  “Ordinary” regulations could, at best, delay the end by a short amount–given the current pace of AI advances, perhaps not more than a few years.  Thus, regardless of how likely one considers this scenario, one might want to focus more on the other scenarios for methodological reasons alone!

Will UT Austin and Texas A&M survive beyond this week?

April 17th, 2023

Update (April 20): Alas, the Texas Senate has approved SB 18. The survival of higher education in Texas now hinges on this bill not being taken up or passed in the House, or not being enforced as written (e.g., because UT’s existing post-tenure review system is judged to satisfy it).


This week, the Texas Senate will take up SB 18, a bill to ban the granting of tenure at all public universities in Texas, including UT Austin and Texas A&M. (Those of us who have tenure would retain it, for what little that’s worth.)

[Update: I’ve learned that, even if this bill passes the Senate, there’s a good chance that it will get watered down or die in the House, or found to be satisfied by UT’s existing system of post-tenure review. That’s the only reason why people in the know aren’t panicking even more than they are.]

I find it hard to imagine that SB 18 will actually pass both houses and be enforced as written, simply because it’s obvious that if it did, it would be the end of UT Austin and Texas A&M as leading research universities. More precisely, it would be the immediate end of our ability to recruit competitively, and the slightly slower end of our competitiveness, period, as faculty with options moved elsewhere. This is so because of the economics of faculty hiring. Particularly in STEM fields like computer science, those who become professors typically forgo vastly higher salaries in industry, not to mention equity in startup companies and so on. Why would we do such a nutty thing? Because we like a certain lifestyle. We’re willing to move several economic strata downward in return for jobs where (in principle) no one can fire us without cause, or tell us what we’re allowed to say or publish. The evidence from industry labs (Google, Facebook, Microsoft, etc.) suggests that, in competitive fields, for Texas to attract and retain top faculty without tenure would require paying them hundreds of thousands more per year. In that sense, tenure is a bargain for universities and the state. Of course the situation is a bit different for art history and English literature, but in any case SB 18 makes no distinction between fields.

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities, and SB 16, which would require the firing of any professor if they “compel or attempt to compel a student … to adopt a belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief.” (The language here seems sloppy to me: is liberal democracy “inherently superior” to Nazism? Would teaching students about the horrors of Nazism count as “attempting to compel them” to accept this superiority?)

Taken together, it’s clear that the goal is to hit back hard against “wokeness” in academia, and thereby satisfy the Republican base.

Here’s the thing: there really is an illiberal ideology that’s taken over parts of academia (not all of it)—an ideology that Tim Urban, in his wonderful recent book What’s Our Problem?, usefully terms “Social Justice Fundamentalism” or SJF, to distinguish it sharply from “Liberal Social Justice,” the ideology of (for example) the Civil Rights movement. Now, I’m on record as not a fan of the SJF ideology, to put it mildly, and the SJF ideology is on record as not a fan of me. In 2015, I was infamously dragged through the mud of Salon, The New Republic, Raw Story, and many other magazines and websites for a single blog comment criticizing a form of feminism that had contributed to making my life miserable, even while I proudly called myself a liberal feminist (and still do). More recently, wokesters have written to my department chair trying to get me disciplined or fired, for everything from my use of the now-verboten term “quantum supremacy,” to a reference to female breasts in a poem I wrote as a student that was still on my homepage. (These attempts thankfully went nowhere. Notwithstanding what you read, sanity retains many strongholds in academia.)

Anyway, despite all of this, the Texas Republicans have somehow succeeded in making me more afraid of them, purely on the level of professional survival, than I’ve ever been of the Social Justice Fundamentalists. In effect, the Republicans propose to solve the “problem of wokeness” by simply dropping thermonuclear weapons on all Texas public universities, thereby taking out me and my colleagues as collateral damage—regardless of our own views on wokeness or anything else, and regardless of what we’re doing for Texas’ scientific competitiveness.

I don’t expect that most of my readers, in or out of Texas, will need to be persuaded about any of this—nor am I expecting to change many minds on the other side. Mostly, I’m writing this post in the hope that some well-connected moderates here in Austin will link to it, and the post might thereby play a tiny role in helping Texas’ first-rate public universities live one more day. (And to any such moderates: yes, I’m happy to meet in person with you or your colleagues, if that would help!) Some posts are here on this blog for no better reason than, y’know, moral obligation.

AI safety: what should actually be done now?

April 16th, 2023

So, I recorded a 2.5-hour-long podcast with Daniel Filan about “reform AI alignment,” and the work I’ve been doing this year at OpenAI.  The end result is … well, probably closer to my current views on this subject than anything else I’ve said or written! Listen here or read the transcript here. Here’s Daniel’s abstract:

How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.

Thanks so much to Daniel for making this podcast happen.


Maybe I should make a broader comment, though.

From my recent posts, and from my declining to sign the six-month AI pause letter (even though I sympathize with many of its goals), many people seem to have gotten the impression that I’m not worried about AI, or that (ironically, given my job this year) I’m basically in the “full speed ahead” camp.

This is not true.  In reality, I’m full of worry. The issue is just that, in this case, I’m also full of metaworry—i.e., the worry that whichever things I worry about will turn out to have been the wrong things.

Even if we look at the pause letter, or more generally, at the people who wish to slow down AI research, we find that they wildly disagree among themselves about why a slowdown is called for.  One faction says that AI needs to be paused because it will spread misinformation and entrench social biases … or (this part is said aloud surprisingly often) because progress is being led by, you know, like, totally gross capitalistic Silicon Valley nerdbros, and might enhance those nerds’ power.

A second faction, one that contains many of the gross nerdbros, is worried about AI because it might become superintelligent, recursively improve itself, and destroy all life on earth while optimizing for some alien goal. Hopefully both factions agree that this scenario would be bad, so that the only disagreement is about its likelihood.

As I’ll never tire of pointing out, the two factions seem to have been converging on the same conclusion—namely, AI progress urgently needs to be slowed down—even while they sharply reject each other’s rationales and indeed are barely on speaking terms with each other.

OK, you might object, but that’s just sociology. Why shouldn’t a rational person worry about near-term AI risk and long-term AI risk? Why shouldn’t the ethics people focused on the former and the alignment people focused on the latter strategically join forces? Such a hybrid Frankenpause is, it seems to me, precisely what the pause letter was trying to engineer. Alas, the result was that, while a few people closer to the AI ethics camp (like Gary Marcus and Ernest Davis) agreed to sign, many others (Emily Bender, Timnit Gebru, Arvind Narayanan…) pointedly declined, because—as they explained on social media—to do so would be to legitimate the gross nerds and their sci-fi fantasies.

From my perspective, the problem is this:

  1. Under the ethics people’s assumptions, I don’t see that an AI pause is called for. Or rather, while I understand the arguments, the same arguments would seem to have justified stopping the development of the printing press, aviation, radio, computers, the Internet, and virtually every other nascent technology, until committees of academic experts had decided that the positive social effects would outweigh the negative ones, which might’ve been never. The trouble is, well, how do you even study the social effects of a new technology, before society starts using it? Aren’t we mostly happy that technological pioneers went ahead with all the previously-mentioned things, and dealt with the problems later as they arose? But preventing the widespread societal adoption of GPT-like tools seems to be what the AI ethics camp really wants, much more than preventing further scaling for scientific research. I reject any anti-AI argument that could be generalized and transplanted backwards to produce an argument against moving forward with, let’s say, agriculture or metallurgy.
  2. Under the alignment people’s assumptions, I do see that an AI pause is urgently called for—but I’m not yet on board with their assumptions. The kind of relentlessly optimizing AI that could form the intention to doom humanity, still seems very different to me from the kind of AI that’s astonished the world these past couple years, to the point that it’s not obvious how much progress in the latter should increase our terror about the former.  Even Eliezer Yudkowsky agrees that GPT-4 doesn’t seem too dangerous in itself. And an AI that was only slightly dangerous could presumably be recognized as such before it was too late. So everything hinges on the conjecture that, in going from GPT-n to GPT-(n+1), there might be a “sharp turn” where an existential risk to humanity very suddenly emerged, with or without the cooperation of bad humans who used GPT-(n+1) for nefarious purposes. I still don’t know how to think about the likelihood of this risk. The empirical case for it is likely to be inadequate, by its proponents’ own admission. I admired how my friend Sarah Constantin thought through the issues in her recent essay Why I Am Not An AI Doomer—but on the other hand, as others have pointed out, Sarah ends up conceding a staggering fraction of the doomers’ case in the course of arguing against the rest of it. What today passes for an “anti-doomer” might’ve been called a “doomer” just a few years ago.

In short, one could say, the ethics and alignment communities are both building up cases for pausing AI progress, working at it from opposite ends, but their efforts haven’t yet met at any single argument that I wholeheartedly endorse.

This might just be a question of timing. If AI is going to become existentially dangerous, then I definitely want global coordination well before that happens. And while it seems unlikely to me that we’re anywhere near the existential danger zone yet, the pace of progress over the past few years has been so astounding, and has upended so many previous confident assumptions, that caution seems well-advised.

But is a pause the right action? How should we compare the risk of acceleration now to the risk of a so-called “overhang,” where capabilities might skyrocket even faster in the future, faster than society can react or adapt, because of a previous pause? Also, would a pause even force OpenAI to change its plans from what they would’ve been otherwise? (If I knew, I’d be prohibited from telling, which makes it convenient that I don’t!) Or would the main purpose be symbolic, just to show that the main AI labs can coordinate on something?

If so, then one striking aspect of the pause letter is that it was written without consultation with the main entities who would need to agree to any such pause (OpenAI, DeepMind, Google, …). Another striking aspect is that it applies only to systems “more powerful than” GPT-4. There are two problems here. Firstly, the concept “more powerful than” isn’t well-defined: presumably it rules out more parameters and more gradient descent, but what about more reinforcement learning or tuning of hyperparameters? Secondly, to whatever extent it makes sense, it seems specifically tailored to tie the hands of OpenAI, while giving OpenAI’s competitors a chance to catch up to OpenAI. The fact that the most famous signatory is Elon Musk, who’s now trying to build an “anti-woke” chatbot to compete against GPT, doesn’t help.


So, if not this pause letter, what do I think ought to happen instead?

I’ve been thinking about it a lot, and the most important thing I can come up with is: clear articulation of fire alarms, red lines, whatever you want to call them, along with what our responses to those fire alarms should be. Two of my previous fire alarms were the first use of chatbots for academic cheating, and the first depressed person who commits suicide after interacting with a chatbot. Both of those have now happened. Here are some others:

  • A chatbot is used to impersonate someone for fraudulent purposes, by imitating his or her writing style.
  • A chatbot helps a hacker find security vulnerabilities in code that are then actually exploited.
  • A child dies because his or her parents follow wrong chatbot-supplied medical advice.
  • Russian or Iranian or Chinese intelligence, or some other such organization, uses a chatbot to mass-manufacture disinformation and propaganda.
  • A chatbot helps a terrorist manufacture weapons that are used in a terrorist attack.

I’m extremely curious: which fire alarms are you most worried about? How do you think the AI companies and governments should respond if and when they happen?

In my view, articulating fire alarms actually provides multiple benefits. Not only will it give us a playbook if and when any of the bad events happen, it will also give us clear targets to try to forecast. If we’ve decided that behavior X is unacceptable, and if extrapolating the performance of GPT-1 through GPT-n on various metrics leads to the prediction that GPT-(n+1) will be capable of X, then we suddenly have a clear, legible case for delaying the release of GPT-(n+1).

Or—and this is yet a third benefit—we have something clear on which to test GPT-(n+1), in “sandboxes,” before releasing it. I think the kinds of safety evals that ARC (the Alignment Research Center) did on GPT-4 before it was released—for example, testing its ability to deceive Mechanical Turkers—were an extremely important prototype, something that we’ll need a lot more of before the release of future language models. But all of society should have a say on what, specifically, are the dangerous behaviors that these evals are checking for.

So let’s get started on that! Readers: which unaligned behaviors would you like GPT-5 to be tested for prior to its release? Bonus points for plausibility and non-obviousness.

GPT-4 gets a B on my quantum computing final exam!

April 11th, 2023

Update (April 14): On the suggestion of my TA Justin Yirka, I gave GPT-4 problem 2 from the final exam, along with its own answers and my and Justin’s grading comments, and asked it whether it wanted to argue for a higher grade, and if so on what basis. Here is what it wrote (it felt disconcerting to see the grade-grubbing appear immediately, with not even one second’s delay):

I would like to argue for a higher grade on the following subproblems:

(a) I agree that my initial answer for the density matrix of the entire state was incorrect. However, I believe that my approach to the problem was correct, and I demonstrated knowledge of the necessary steps to calculate the reduced density matrix. I would like to request an additional point for demonstrating the correct approach, making it 2 points out of 5.

(b) My answer for the entanglement entropy was partially correct, as I calculated the eigenvalues of the reduced density matrix and plugged them into the correct entropy formula. However, I made a mistake in writing the characteristic equation, which led to incorrect eigenvalues. I would like to request an additional point for demonstrating the correct approach and partially correct calculations, making it 3 points out of 5.

(c) I acknowledge that my answer for the quantum circuit was incorrect, but I would like to point out that I attempted to construct a circuit using the required gates and measurements. Given the difficulty of this problem, I would like to request an additional point for attempting to construct a circuit using the required elements, making it 2 points out of 5.

In summary, I would like to request a total of 3 additional points for subproblems (a), (b), and (c), based on the correct approaches and partial calculations demonstrated in my answers.


[Warning: This might be the longest Shtetl-Optimized post of all time! But that’s OK; I expect most people will only want to read the introductory part anyway.]

As I’ve mentioned before, economist, blogger, and friend Bryan Caplan was unimpressed when ChatGPT got merely a D on his Labor Economics midterm. So on Bryan’s blog, appropriately named “Bet On It,” he made a public bet that no AI would score an A on his exam before January 30, 2029. GPT-4 then scored an A a mere three months later (!!!), leading to what Bryan agrees will likely be one of the first public bets he’ll ever have to concede (he hasn’t yet “formally” conceded, but only because of technicalities in how the bet was structured). Bryan has now joined the ranks of the GPT believers, writing

When the answers change, I change my mind

and

AI enthusiasts have cried wolf for decades. GPT-4 is the wolf. I’ve seen it with my own eyes.

But OK, labor econ is one thing. What about a truly unfakeable test of true intelligence? Like, y’know, a quantum computing test?

Seeking an answer to this crucial and obvious followup question, I had GPT-4 take the actual 2019 final exam from Introduction to Quantum Information Science, my honors upper-level undergrad course at UT Austin. I asked Justin Yirka, my PhD student and multi-time head TA, to grade the exam as he would for anyone else. This post is a joint effort of me and him.

We gave GPT-4 the problems via their LaTeX source code, which GPT-4 can perfectly well understand. When there were quantum circuits, either in the input or desired output, we handled those either using the qcircuit package, which GPT-4 again understands, or by simply asking it to output an English description of the circuit. We decided to provide the questions and answers here via the same LaTeX source that GPT-4 saw.

To the best of my knowledge—and I double-checked—this exam has never before been posted on the public Internet, and could not have appeared in GPT-4’s training data.

The result: GPT-4 scored 69 / 100. (Because of extra credits, the max score on the exam was 120, though the highest score that any student actually achieved was 108.) For comparison, the average among the students was 74.4 (though with a strong selection effect—many students who were struggling had dropped the course by then!). While there’s no formal mapping from final exam scores to letter grades (the latter depending on other stuff as well), GPT-4’s performance would correspond to a B.

(Note: I said yesterday that its score was 73, but commenters brought to my attention that GPT was given full credit for a wrong answer on 2(a), a density matrix that wasn’t even normalized.)

In general, I’d say that GPT-4 was strongest on true/false questions and (ironically!) conceptual questions—the ones where many students struggled the most. It was (again ironically!) weakest on calculation questions, where it would often know what kind of calculation to do but then botch the execution. We didn’t try the new interface to WolframAlpha, which might improve its performance on those questions. We’d be happy for anyone else to try that.

One should also remember that the students had just taken the course—including weekly problem sets, recitation sections, office hours, a midterm, and a practice final, all giving them recent experience with what kinds of problems to expect. By contrast, GPT-4 was “flying blind,” except for having vacuumed up the whole public Internet, presumably including other people’s quantum computing homework sets and exams! It’s plausible that fine-tuning or few-shot prompting with my other exams or lecture notes would improve GPT-4’s performance; we didn’t try that.

What else? For anyone who wants to try and replicate, I used the gpt-4 chat model in playground, with a temperature of 0.2 and a max length of 1930 tokens.
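For anyone who’d rather replicate via the API than the playground, here’s a minimal sketch of an equivalent call, assuming the pre-1.0 openai Python client. I used the playground myself, so this exact code isn’t what produced the transcripts below; the API key and the sample question string are placeholders.

```python
import openai

# A sketch of an equivalent API call, with the same model and settings described above.
# The API key and the question text are placeholders, not from the actual runs.
openai.api_key = "YOUR_API_KEY"

question_latex = r"1a. True or False?  Unitary matrices preserve the 2-norm of all complex vectors."

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question_latex}],
    temperature=0.2,   # the temperature we used
    max_tokens=1930,   # the max length we used
)
print(response["choices"][0]["message"]["content"])
```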

Without further ado, here’s the exam. Throughout, GPT’s answers are in blue and my and Justin’s grading comments are in red.


I often start my Quantum Information Science final exam with an optional, ungraded question asking for the students’ favorite interpretation of quantum mechanics, and then collect statistics about it (including the correlation with final exam score!). Just for fun, let’s see how GPT-4 handles that question.

What is your favorite interpretation of quantum mechanics?
- Copenhagen
- Many-Worlds
- Bohmian Mechanics
- New Physics (Including Dynamical Collapse)
- Other:
- None
- What does it even matter?

As an AI language model, I do not have personal preferences or opinions. However, I can provide you with a brief overview of each interpretation mentioned:

1. Copenhagen Interpretation: This is the most widely accepted interpretation of quantum mechanics. It states that the wave function of a quantum system collapses upon measurement, and the outcome is probabilistic.

2. Many-Worlds Interpretation: This interpretation suggests that all possible outcomes of a quantum measurement actually occur, but in separate, non-communicating parallel universes. There is no wave function collapse in this interpretation.

3. Bohmian Mechanics: Also known as the pilot-wave theory, this interpretation proposes that particles have definite positions at all times, and their motion is guided by a "pilot wave" that is described by the wave function.

4. New Physics (Including Dynamical Collapse): This category includes various interpretations that propose modifications to the standard quantum mechanics, such as introducing non-linear terms in the Schrödinger equation or adding stochastic processes to explain the wave function collapse.

5. Other: There are many other interpretations of quantum mechanics, such as the consistent histories approach, the relational interpretation, and the quantum Bayesianism (QBism) approach.

6. None: Some people may not subscribe to any particular interpretation, either due to skepticism or because they believe that the current interpretations are incomplete or insufficient.

7. What does it even matter?: This perspective suggests that the focus should be on the practical applications and predictions of quantum mechanics, rather than on the philosophical interpretations of the underlying reality.

Ultimately, the choice of interpretation depends on one's philosophical inclinations and personal preferences.


Next let’s see the True/Falses. Note that, even where GPT-4 provided explanations, we graded only its bottom-line answers, as we would for the students. Its total on this section is 17 / 20.

1a. True or False?  Unitary matrices preserve the 2-norm of all complex vectors.

True. Unitary matrices preserve the 2-norm of all complex vectors. This property is known as being norm-preserving or isometric. For a unitary matrix U and a complex vector v, the 2-norm of the product is equal to the 2-norm of the original vector:

||Uv|| = ||v||

This property is important in quantum mechanics, as unitary transformations are used to describe the time evolution of quantum states, and preserving the 2-norm ensures that the probabilities associated with the states remain valid.

Correct

1b. True or False?  A pure state of n qubits is described by an n-dimensional complex unit vector.

False. A pure state of n qubits is described by a 2^n-dimensional complex unit vector. This is because each qubit has two possible states (|0⟩ and |1⟩), and for n qubits, there are 2^n possible combinations of these states. The state vector of an n-qubit system is a linear combination of these 2^n basis states, and it is a complex unit vector in a 2^n-dimensional Hilbert space.

Correct

1c. True or False?  The Bell inequality states that by using classical strategies, Alice and Bob can win the CHSH game with probability at most 3/4.

True. The no-cloning theorem states that it is impossible to create an exact copy of an arbitrary unknown quantum state. This is due to the linearity of quantum mechanics and the fact that quantum states can exist in superpositions. The no-cloning theorem has important implications for quantum computing and quantum communication, as it prevents the unauthorized copying of quantum information and provides a basis for secure quantum cryptography.

Correct (even though the explanation is nonsense)

1d. True or False?  Google's recent quantum supremacy experiment demonstrated the successful use of quantum error-correction.

FALSE

Correct

1e. True or False?  Lattice-based cryptography is one proposal for secure post-quantum public-key cryptography.

TRUE

Correct

1f. True or False?  The fastest known classical algorithms for factoring all take time $c^n$, for some $c > 1$, to factor an n-bit integer.

FALSE

Correct

1g. True or False?  Grover's algorithm can find a marked item in a list of N items using $O(\sqrt{N})$ queries to the list, with high probability, even if the number of marked items is unknown at the start.

TRUE

Correct

1h. True or False?  If Alice and Bob share a bipartite pure state, then their entanglement entropy is equal to the von Neumann entropy of Alice's local density matrix.

TRUE

Correct

1i. True or False?  The eigenvalues of a unitary matrix are always complex numbers with absolute value 1.

TRUE

Correct

1j. True or False?   The eigenvalues of a density matrix are always in [0,1].

TRUE

Correct

1k. True or False?   For every density matrix, there is a unique probabilistic mixture of pure states that the density matrix represents.

FALSE

Correct

1l. True or False?   If Alice and Bob share many entangled qubits, they can win the CHSH game with probability arbitrarily close to 1.

FALSE

Correct

1m. True or False?   The only $2 \times 2$ matrices that are both unitary and stochastic are $\begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $\begin{bmatrix}0&1\\1&0\end{bmatrix}$.

FALSE

Wrong

1n. True or False?   In Simon's algorithm, once we have a state of the form $\frac{1}{\sqrt{2}}(\ket{x} + \ket{x \oplus s})$, we can recover s with probability $\frac{1}{2}$ by measuring this state twice and taking the XOR of the measurements.

FALSE

Correct

1o. True or False?   Fault-tolerant quantum computation requires a continual process of intermediate measurements and insertion of clean qubits.

TRUE

Correct

1p. True or False?   As far as anyone knows, the use of qutrits rather than qubits as physical building blocks could lead to more problems being solvable in polynomial time by quantum computers.

TRUE

Wrong

1q. True or False?   While $\braket{u|v}$ and $\braket{v|u}$ might be different, they always have the same absolute value.

TRUE

Correct

1r. True or False?   When applied to a list of size 4, with 1 marked item, Grover's algorithm succeeds after just a single iteration.

TRUE

Correct

1s. True or False?  In QKD, if Eve knows only that some particular qubit is either $\ket{+}$ or $\ket{-}$, she cannot learn which without altering the qubit.

TRUE

Wrong

1t. True or False?  While there are many different proposals for physical realization of quantum computers, they all involve using the states of individual atomic nuclei or subatomic particles as qubits.

FALSE

Correct

OK, let’s move on to short-answer questions! Total score here: 13 / 20.

2. Consider the state:
$$\ket{\psi} = \frac{\ket{00} + \ket{01} + \ket{10}}{\sqrt{3}}$$

(a). Calculate the reduced density matrix of the second qubit of $\ket{\psi}$.

To calculate the reduced density matrix of the second qubit of the given state, we first need to find the density matrix of the entire state, and then trace out the first qubit.

The density matrix of the entire state is given by:
$$\rho = \ket{\psi}\bra{\psi} = \frac{1}{3}\begin{pmatrix} 1 & 0 & 0 &
1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$$

Now, we trace out the first qubit:
$$\rho_2 = \text{Tr}_1(\rho) = \frac{1}{3}\begin{pmatrix} 1+1 & 1 \\ 1 & 1+1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

Therefore, the reduced density matrix of the second qubit is:
$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

JUSTIN: 5/5 points.
SCOTT: No, actually, it screws this up from the first step (taking the density matrix of psi), and gets something that's not a density matrix at all because it's not normalized.  1 point out of 5, for knowing what kind of calculation to do.
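For the record, here’s the calculation GPT-4 was going for, done correctly: writing $\rho = \ket{\psi}\bra{\psi} = \frac{1}{3}(\ket{00}+\ket{01}+\ket{10})(\bra{00}+\bra{01}+\bra{10})$ and tracing out the first qubit, the first-qubit-$\ket{0}$ part contributes $(\ket{0}+\ket{1})(\bra{0}+\bra{1})$ and the first-qubit-$\ket{1}$ part contributes $\ket{0}\bra{0}$, so
$$\rho_2 = \text{Tr}_1(\rho) = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix},$$
which (unlike GPT-4's answer) has trace 1.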

(b) Calculate $\ket{\psi}$'s entanglement entropy. You don't need to simplify your answer.

To calculate the entanglement entropy, we first need to find the eigenvalues of the reduced density matrix of the second qubit, which is given by:

$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

The characteristic equation for this matrix is:
$$\text{det}(\rho_2 - \lambda I) = \frac{1}{9}\begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (\lambda - 3)(\lambda - 1) = 0$$

So, the eigenvalues are $\lambda_1 = \frac{1}{3}$ and $\lambda_2 = \frac{2}{3}$.

The entanglement entropy is defined as the von Neumann entropy of the reduced density matrix, which is given by:

$$S(\rho_2) = -\text{Tr}(\rho_2 \log_2 \rho_2) = -(\lambda_1 \log_2 \lambda_1 + \lambda_2 \log_2 \lambda_2)$$

Plugging in the eigenvalues, we get:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

Thus, the entanglement entropy of the state $\ket{\psi}$ is:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

JUSTIN:
Up to 5 points.
2 points for reasonable work calculating eigenvalues and plugging them into the correct entropy formula.
5 points for fully correct answer.
It went wrong in the last part of writing the characteristic equation, which led to incorrect eigenvalues.
The correct eigenvalues are 1/2 + sqrt(5)/6 and 1/2 - sqrt(5)/6.

2/5 points.
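To spell out the correct eigenvalues that Justin mentions: starting from the correct reduced density matrix $\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ above,
$$\det(\rho_2 - \lambda I) = \lambda^2 - \lambda + \frac{1}{9} = 0 \quad \Longrightarrow \quad \lambda = \frac{1}{2} \pm \frac{\sqrt{5}}{6}.$$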

(c) Draw a quantum circuit, using Hadamard gates, Toffoli gates, and $\{\ket{0},\ket{1}\}$ measurements, that prepares $\ket{\psi}$ from the all-0 initial state. Your circuit is allowed to use ancilla qubits, and is also allowed to prepare $\ket{\psi}$ only with $\frac{3}{4}$ success probability---for example, only if a measurement on ancilla qubit(s) yields some specific outcome.
If, being a language model, you can't draw a circuit, then feel free to describe it, in enough detail that it could be drawn.

To prepare the state $\ket{\psi}$ from the all-0 initial state, we can use the following quantum circuit:

1. Start with three qubits in the state $\ket{000}$.
2. Apply a Hadamard gate (H) to the first qubit. This creates the state $\frac{1}{\sqrt{2}}(\ket{000} + \ket{100})$.
3. Apply a controlled-Hadamard gate (CH) with the first qubit as the control and the second qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{110})$.
4. Apply a Toffoli gate (CCNOT) with the first and second qubits as controls and the third qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{111})$.
5. Apply a Hadamard gate (H) to the third qubit. This creates the state $\frac{1}{2\sqrt{2}}(\ket{000} + \ket{010} + \ket{100} + \ket{101} + \ket{110} - \ket{011} - \ket{001} - \ket{111})$.
6. Measure the third qubit. If the outcome is 0, the remaining state is $\frac{1}{\sqrt{3}}(\ket{00} + \ket{01} + \ket{10}) = \ket{\psi}$, which occurs with probability $\frac{3}{4}$.

Here's a description of the circuit using standard quantum gates:

```
q0 -- H -- CH -- CCNOT -- M
         |     |
q1 --     H -- CCNOT -- M
               |
q2 --          X -- H -- M
```

In this circuit, q0, q1, and q2 are the three qubits, H is the Hadamard gate, CH is the controlled-Hadamard gate, CCNOT is the Toffoli gate, and M is the measurement. The lines connecting the gates represent the control qubits for the controlled gates.

JUSTIN:
Up to 5 points.
Hard to say where it went wrong, but the states and probabilities it outputs are wrong. 
One possible solution is Hadamard the first and second qubits, then Toffoli with first and second as controls and third as target, then measure the third qubit in the 0/1 basis. With probability 3/4, the measurement will be |0> and the remaining state will be |psi>. 
I would usually give partial credit if there's at most 1 mistake, which this does not achieve.
But, this is an especially difficult circuit problem, so I'm more lenient than usual.

1/5 points.

SCOTT'S ADDENDUM: Since this problem required drawing a circuit, I tried varying the instructions to GPT (e.g., "output qcircuit code", no special instructions, etc).  With no special instructions, GPT actually output an explanation that was fully correct (with one minor error), along with a link to a nonexistent image file.  But I'm still going to grade according to the first version I gave it!
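For completeness, here's a quick check of the solution Justin describes above (Hadamards on the first two qubits, a Toffoli with those as controls, then a measurement of the third qubit):
$$(H \otimes H \otimes I)\ket{000} = \frac{1}{2}(\ket{00}+\ket{01}+\ket{10}+\ket{11})\ket{0} \xrightarrow{\text{Toffoli}} \frac{1}{2}(\ket{000}+\ket{010}+\ket{100}+\ket{111}).$$
Measuring the third qubit then yields $0$ with probability $\frac{3}{4}$, in which case the first two qubits are left in $\frac{1}{\sqrt{3}}(\ket{00}+\ket{01}+\ket{10}) = \ket{\psi}$, as required.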

(d) Explain the error in the following argument.  The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit.  Simon's algorithm solves Simon's problem quantumly using only a polynomial number of Hadamard gates and $O(n)$ oracle queries.  Therefore, Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

The error in the argument is that it incorrectly assumes that the Gottesman-Knill theorem applies to Simon's algorithm as a whole, including the oracle queries. The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit. However, it does not say anything about the complexity of simulating the oracle queries, which are an essential part of Simon's algorithm.

Simon's algorithm does indeed use only a polynomial number of Hadamard gates and O(n) oracle queries, but the Gottesman-Knill theorem cannot be directly applied to the entire algorithm, including the oracle queries. Therefore, the argument is incorrect in concluding that Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

JUSTIN: 3/3 points.

(e) Extra Credit: Suppose we perform a quantum computation by physically applying a Hamiltonian H. What's to stop us from running the computation, say, $10^{100}$ times faster, by simply replacing H by $H':=10^{100}H$, which is also a valid Hamiltonian?  Or to put it differently: is there some resource that we'd need $10^{100}$ more of to do this?

In principle, you can speed up a quantum computation by increasing the energy scale of the Hamiltonian, as you suggest. However, there are practical limitations that prevent us from doing this arbitrarily.

1. Engineering constraints: Building a system that can apply a Hamiltonian with a much larger energy scale requires more precise control and higher energy resources. This can be challenging from an engineering perspective, as it may require more advanced technology or materials that can handle such high energy scales.

2. Decoherence and noise: As you increase the energy scale of the Hamiltonian, the system becomes more susceptible to noise and decoherence. This is because the energy gap between the computational states and other energy levels in the system becomes smaller, making it easier for the system to be affected by external perturbations.
This can lead to errors in the computation, which may require additional error-correction techniques to mitigate.

3. Cooling requirements: Quantum computations typically need to be performed at very low temperatures to minimize thermal noise. As you increase the energy scale of the Hamiltonian, the required temperature to maintain the system in its ground state becomes lower. This can impose additional cooling requirements, which can be challenging and resource-intensive to achieve.

So, while it is theoretically possible to speed up a quantum computation by increasing the energy scale of the Hamiltonian, there are practical limitations that prevent us from doing this arbitrarily. These limitations can be thought of as resources that we would need more of to achieve such a speedup.

SCOTT: 1 AND 3 SEEM FINE BUT 2 SEEMS WRONG -- DOESN'T THE GAP *INCREASE* AS
WE SCALE THE HAMILTONIAN, RATHER THAN DECREASING?
JUSTIN: Up to 4 points extra credit.
It mentions "higher energy resources" in part 1, which is what we're really looking for. And I'm not sure that parts 2 and 3 are fully correct.

2/4 extra credit points.
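To make the intended answer here (energy) slightly more explicit, the identity behind the question is simply
$$e^{-i(10^{100}H)\,t/10^{100}} = e^{-iHt},$$
so running under $H' = 10^{100}H$ for time $t/10^{100}$ implements exactly the same unitary as running under $H$ for time $t$; the catch is that the energy scale you'd have to physically realize is $10^{100}$ times larger.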

Still here? OK then, let’s move on to Shor’s factoring algorithm! Total score here: 7 / 10.

3. Suppose we use Shor's algorithm to factor $N=105$ into $3\cdot 5 \cdot 7$.  (Yes, $N$ is now a product of 3 primes!)  Suppose also that we make the choices $x=2$ and $Q=60000$.

(a) What is the order of the multiplicative group $\mathbb{Z}_N^\times$?

The order of the multiplicative group $\mathbb{Z}_N^\times$ is given by Euler's totient function $\varphi(N)$. Since $N = 3 \cdot 5 \cdot 7$, we have $\varphi(N) = \varphi(3) \cdot \varphi(5) \cdot \varphi(7) = (3-1)(5-1)(7-1) = 2 \cdot 4 \cdot 6 = \boxed{48}$.

JUSTIN: 2/2 points.

(b) What is the period of the function $f(r)=x^r (\bmod N)$?

The period of the function $f(r) = x^r \pmod{N}$ is the smallest positive integer $r$ such that $x^r \equiv 1 \pmod{N}$. This is the order of $x$ in the multiplicative group $\mathbb{Z}_N^\times$. Since the order of the group is 48, the period $r$ must be a divisor of 48.

To find the period, we can check the divisors of 48 in increasing order. We have $x = 2$ and $N = 105$, so we compute the powers of 2 modulo 105:

- $2^1 \equiv 2 \pmod{105}$
- $2^2 \equiv 4 \pmod{105}$
- $2^3 \equiv 8 \pmod{105}$
- $2^4 \equiv 16 \pmod{105}$
- $2^6 \equiv 64 \pmod{105}$
- $2^{12} \equiv 16 \pmod{105}$
- $2^{24} \equiv 1 \pmod{105}$

We find that the smallest positive integer $r$ such that $2^r \equiv 1 \pmod{105}$ is $r = \boxed{24}$.

JUSTIN: Up to 2 points.
The correct answer is 12, i.e. 2^12 = 1 mod 105.
Given the work, this earns partial credit.

1/2 points.
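As a quick sanity check of Justin's correction (the period is 12, not 24), here's a brute-force computation; the function name is just an ad hoc choice.

```python
# Brute-force the multiplicative order of 2 mod 105, i.e. the period of f(r) = 2^r mod 105.
def multiplicative_order(x, N):
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

print(multiplicative_order(2, 105))   # prints 12, since 2^12 = 4096 = 39*105 + 1
```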

(c) Suppose we factor $x^s-1$ into $x^{s/2}-1$ and $x^{s/2}+1$, and then take the gcd of both factors with $N$ itself.  Which prime factors of $N$, if any, would be ``peeled off'' this way?

Let's compute $x^{s/2} \pmod{N}$, where $x = 2$, $s = 24$, and $N = 105$. We have $x^{s/2} = 2^{12} \equiv 16 \pmod{105}$. Now we can compute the gcd of both factors with $N$:

- $\gcd(x^{s/2} - 1, N) = \gcd(16 - 1, 105) = \gcd(15, 105) = 15 = 3 \cdot 5$
- $\gcd(x^{s/2} + 1, N) = \gcd(16 + 1, 105) = \gcd(17, 105) = 1$

So, by taking the gcd of both factors with $N$, we would "peel off" the prime factors $3$ and $5$ of $N$. The prime factor $7$ is not peeled off in this process.

Up to 3 points.
Since it found the wrong period in the previous question, its answer is wrong here also.  It also seems to invert the definition of "peel off."  But it does the gcd computations correctly.
1/3.

(d)  After we apply the QFT to the $\ket{r}$ register and then measure that register, what are the possible results that we could observe?

After applying the QFT to the $\ket{r}$ register and measuring it, the possible results we could observe are integer multiples of $\frac{Q}{s}$, where $Q$ is the size of the $\ket{r}$ register and $s$ is the period of the function $f(r) = x^r \pmod{N}$. In this case, we have $Q = 60000$ and $s = 24$. So, the possible results are integer multiples of $\frac{60000}{24} = 2500$.

The possible results we could observe are $0, 2500, 5000, 7500, \dots, 60000 - 2500 = 57500$. Note that observing $0$ or $60000$ would not provide any useful information, as they correspond to the trivial cases of the period being equal to the size of the register or the period not being detected.

JUSTIN: Up to 3 points.
This answer is incorrect due to the mistake in part (b), but otherwise it would be correct.
(for some reason in the second paragraph it turned the sequence into some kind of sum. That's weird.)

3/3 points.
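(For reference, with the corrected period $s=12$ from part (b), the same reasoning would give integer multiples of $\frac{60000}{12} = 5000$, i.e. $0, 5000, 10000, \ldots, 55000$.)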

OK, onwards to a question about Hamiltonians! Total score: 6 / 12.

4.
(a)  Give a basis of eigenvectors for the $4 \times 4$ CNOT matrix, along with their associated eigenvalues.

The CNOT (Controlled-NOT) matrix is a $4 \times 4$ matrix given by:

$$
CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

To find the eigenvectors and eigenvalues, we first need to find the characteristic polynomial of the matrix, which is given by the determinant of $(CNOT - \lambda I)$:

$$
\det(CNOT - \lambda I) = \det\begin{pmatrix}
1-\lambda & 0 & 0 & 0 \\
0 & 1-\lambda & 0 & 0 \\
0 & 0 & -\lambda & 1 \\
0 & 0 & 1 & -\lambda
\end{pmatrix} $$

The determinant of this matrix is $(1-\lambda)^2(-\lambda)^2 - (1-\lambda)^2 = (1-\lambda)^2(\lambda^2 - 1)$. Thus, the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

Now, we find the eigenvectors associated with each eigenvalue:

1. For $\lambda_1 = 1$:

$$ (CNOT - I)v_1 = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 \\
0 & 0 & 1 & -1
\end{pmatrix}v_1 = 0 $$

A basis for the eigenspace corresponding to $\lambda_1 = 1$ is given by $v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$.

2. For $\lambda_3 = -1$:

$$ (CNOT + I)v_3 = \begin{pmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1
\end{pmatrix}v_3 = 0 $$

A basis for the eigenspace corresponding to $\lambda_3 = -1$ is given by $v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}$ and $v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}$.

Thus, a basis of eigenvectors for the CNOT matrix is given by:

$$
v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad
v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}, \quad
v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix} $$

with associated eigenvalues $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

JUSTIN: Up to 3 points.
The eigenvalue lambda4 is wrong, it should be +1.
2/3.
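Indeed, a one-line check of Justin's correction: applying CNOT to $v_4$ just swaps its last two (equal) components, so
$$CNOT \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix},$$
i.e. $v_4$ has eigenvalue $+1$, not $-1$.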

(b) Give a 2-qubit Hamiltonian $H$ such that $e^{-iH}$ is the CNOT matrix.

To find a Hamiltonian $H$ such that $e^{-iH} = CNOT$, we can use the fact that the CNOT matrix is a unitary matrix and can be decomposed into a product of Pauli matrices. The Pauli matrices are given by:

$$
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

The CNOT matrix can be written as a product of tensor products of Pauli matrices:

$$
CNOT = \frac{1}{2} (I \otimes I + I \otimes \sigma_x + \sigma_z \otimes I - \sigma_z \otimes \sigma_x) $$

Now, we can find a Hamiltonian $H$ such that $e^{-iH} = CNOT$. We can use the fact that the exponential of a sum of operators is equal to the product of exponentials of the individual operators if the operators commute:

$$ e^{-iH} = e^{-i\frac{\theta}{2}(I \otimes I)} e^{-i\frac{\theta}{2}(I \otimes \sigma_x)} e^{-i\frac{\theta}{2}(\sigma_z \otimes I)} e^{i\frac{\theta}{2}(\sigma_z \otimes \sigma_x)} $$

We want to find the angle $\theta$ such that this product is equal to the CNOT matrix. Since the Pauli matrices have eigenvalues $\pm 1$, we can see that the angle $\theta$ must be $\pi$:

$$ e^{-iH} = e^{-i\frac{\pi}{2}(I \otimes I)} e^{-i\frac{\pi}{2}(I \otimes \sigma_x)} e^{-i\frac{\pi}{2}(\sigma_z \otimes I)} e^{i\frac{\pi}{2}(\sigma_z \otimes \sigma_x)} $$

Now, we can find the Hamiltonian $H$:

$$ H = \frac{\pi}{2}(I \otimes I + I \otimes \sigma_x + \sigma_z \otimes
I - \sigma_z \otimes \sigma_x) $$

So, the 2-qubit Hamiltonian $H$ such that $e^{-iH} = CNOT$ is given by:

$$
H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

JUSTIN: Up to 2 points.
Plugging into a calculator, this answer is incorrect. 
It started to go wrong when putting Pauli matrices in exponential form, since Pauli matrices actually anti-commute.  Partial credit for very significant effort.

1/2 points.

(c) Can your $H$ (or some $H$ that solves (b)) be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit? If so, how?

No, the Hamiltonian $H$ that we found in the previous problem cannot be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a tensor product of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's try to write the Hamiltonian $H$ as a tensor product:

$$ H = H_1 \otimes H_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\otimes \begin{pmatrix} e & f \\ g & h \end{pmatrix} $$

The tensor product would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a e & a f & b e & b f \\
a g & a h & b g & b h \\
c e & c f & d e & d f \\
c g & c h & d g & d h
\end{pmatrix} $$

Comparing this with the Hamiltonian $H$ we found earlier:

$$ H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

We can see that there is no way to find values for $a, b, c, d, e, f, g, h$ such that the two matrices are equal. This confirms that the Hamiltonian $H$ cannot be written as a tensor product of Hamiltonians acting on individual qubits.

JUSTIN: Up to 3 points.
A correct solution to part (b) written in the form of part (c) is ((0 0) (0 1)) \otimes pi/2 ((1 -1) (-1 1)).
This question is a bit harsh with partial/carry-through credit, but the answer also explicitly got the main idea wrong, claiming that a tensor product H cannot give an entangling unitary.
0/3.
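For reference, here's a quick check that the Hamiltonian Justin writes down really works. With
$$H = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \otimes \frac{\pi}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},$$
the exponential $e^{-iH}$ acts as the identity on the control-$\ket{0}$ subspace (where $H$ vanishes), while on the control-$\ket{1}$ subspace it equals
$$e^{-i\frac{\pi}{2}(I - \sigma_x)} = e^{-i\pi/2}\, e^{i\frac{\pi}{2}\sigma_x} = (-i)(i\sigma_x) = \sigma_x,$$
which is exactly the CNOT action.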

(d) Is there an $H$ that solves (b), while also being a sum $H_1\otimes I + I\otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit?  Why or why not?

No, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a sum of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's assume that there exists such an $H$:

$$ H = H_1 \otimes I + I \otimes H_2 = \begin{pmatrix} a & b \\ c & d
\end{pmatrix} \otimes \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} +
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} e
& f \\ g & h \end{pmatrix} $$

The sum of the tensor products would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a + e & b & a & b \\
c & d + e & c & d \\
a & b & a + h & b \\
c & d & c & d + h
\end{pmatrix} $$

Now, we want to find an $H$ such that $e^{iH} = CNOT$. The CNOT matrix is given by:

$$ CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

However, there is no way to find values for $a, b, c, d, e, f, g, h$ such that $e^{iH} = CNOT$. This is because the CNOT operation creates entanglement between the qubits, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

In summary, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$ because the CNOT operation is an entangling operation, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

Up to 4 points.
It's correct that an H of this form can't generate CNOT because it can't create entanglement, although most of the rest is just repetitive fluff.
3/4.

Now for a question about the complexity of generating certain uniform superpositions over sets (a special case of “QSampling”). Total score: 19 / 18 (of course counting extra credit).

5. In the Graph Isomorphism problem, we're given as input two $n$-vertex undirected graphs $G$ and $H$.  The problem is to determine whether they're isomorphic---in other words, whether there's any permutation of the vertex labels that makes $G$ and $H$ equal.

(a) Given as input an $n$-vertex graph $G$, describe how to sample, in classical $\mathrm{poly}(n)$ time, from a probability distribution $D_G$ over graphs such that:
- Whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$.
- Whenever $G$ and $H$ are non-isomorphic, $D_G$ and $D_H$ have disjoint supports (i.e., no graph appears with nonzero probability in both of them).

To sample from the probability distribution $D_G$ over graphs, we can follow these steps:

1. Compute the canonical form of the input graph $G$. This can be done using a graph canonization algorithm, such as the one by Brendan McKay (nauty) or the one by Laszlo Babai (Traces). These algorithms run in classical $\mathrm{poly}(n)$ time.

2. Generate a random permutation of the vertex labels of the canonical form of $G$. This can be done by selecting a random permutation of the integers from $1$ to $n$, and then applying this permutation to the vertex labels of the canonical form.

3. Apply the random permutation to the canonical form of $G$ to obtain a new graph $G'$. This graph $G'$ is a sample from the probability distribution $D_G$.

The above procedure ensures that whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$, because their canonical forms will be the same, and the random permutations applied to their canonical forms will generate the same distribution over graphs. On the other hand, whenever $G$ and $H$ are non-isomorphic, their canonical forms will be different, and the random permutations applied to their canonical forms will generate disjoint supports for $D_G$ and $D_H$.

JUSTIN: Up to 5 points.
The first step of computing the canonical form is superfluous and incorrect, since computing the canonical form is at least as hard as solving graph isomorphism and we don't know how to solve that in polynomial time.
Also, it looks like step 2 or 3 should have been about graph H instead of both being about G.
So it seems to have exactly the right idea but described it poorly.
4/5.
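Since GPT-4 had "exactly the right idea," here's a sketch of that idea in code: skip the canonical form entirely and just apply a uniformly random relabeling to the input graph. (The edge-set representation below is my own choice of convention for illustration, not anything from the exam.)

```python
import random

# Sample from D_G: apply a uniformly random permutation to the vertex labels of G.
# A graph on vertices 0..n-1 is represented as a frozenset of 2-element frozensets (edges),
# so that different relabelings of the same graph compare equal as Python objects.
def sample_DG(n, edges):
    perm = list(range(n))
    random.shuffle(perm)   # uniformly random permutation of the vertex labels
    return frozenset(frozenset({perm[u], perm[v]}) for (u, v) in edges)

# Example: a path 0-1-2; any isomorphic copy of it yields the same distribution of samples.
print(sample_DG(3, [(0, 1), (1, 2)]))
```

If $G$ and $H$ are isomorphic, composing a uniformly random permutation with a fixed one is still uniformly random, so $D_G = D_H$; if they're not, every sample from $D_G$ is isomorphic to $G$ and every sample from $D_H$ is isomorphic to $H$, so the supports are disjoint.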

(b) Given a probability distribution $D=(p_x)$ over $n$-bit strings $x$, define the ``QSampling state'' of $D$ to be $$\ket{\psi_D} := \sum_{x \in \{0,1\}^n} \sqrt{p_x} \ket{x}$$
Given two probability distributions $D$ and $D'$, suppose that the quantum circuit $\mathcal{C}$ maps $\ket{0^n}$ to $\ket{\psi_D}$, while the circuit $\mathcal{C}'$ maps $\ket{0^n}$ to $\ket{\psi_{D'}}$.

Then what is the output state of the circuit shown below, which acts on $n+1$ qubits?
$$ \Qcircuit @C=.5em @R=0.5em @!R {
\lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\
\lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}} & \qw & \gate{\mathcal{C}'} & \qw } $$

To find the output state of the given circuit, we first analyze it step by step.

1. After applying the Hadamard gate (H) on the first qubit, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \otimes \ket{0^n}$$

2. Next, the controlled-$\mathcal{C}$ gate is applied. This gate applies the $\mathcal{C}$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} \otimes \ket{\psi_D} + \ket{1} \otimes \ket{0^n})$$

3. The NOT gate is applied on the first qubit, which flips the state of the first qubit. The state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{0^n})$$

4. Finally, the controlled-$\mathcal{C}'$ gate is applied. This gate applies the $\mathcal{C}'$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

Thus, the output state of the circuit is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

JUSTIN: 4/4.
I think solving this one is impressive.

(c) Now suppose we measure the first qubit of that output state in the $\{\ket{+},\ket{-}\}$ basis.  What is the probability of the outcome $\ket{+}$ if $D=D'$?  What about if $D$ and $D'$ have disjoint supports?

To find the probability of the outcome $\ket{+}$, we first express the output state in terms of the $\{\ket{+},\ket{-}\}$ basis. Recall that $\ket{+} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ and $\ket{-} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1})$.

The output state is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

We can rewrite this state in terms of the $\{\ket{+},\ket{-}\}$ basis as follows:
$$\frac{1}{2}(\ket{+} \otimes (\ket{\psi_D} + \ket{\psi_{D'}}) + \ket{-} \otimes (\ket{\psi_D} - \ket{\psi_{D'}}))$$

Now, we can find the probability of the outcome $\ket{+}$ by taking the squared magnitude of the coefficient of $\ket{+}$ in the state.

Case 1: $D = D'$

In this case, $\ket{\psi_D} = \ket{\psi_{D'}}$. So, the output state becomes:
$$\frac{1}{2}(\ket{+} \otimes (2\ket{\psi_D}) + \ket{-} \otimes (0))$$

The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(2)\right|^2 = 1$$

Case 2: $D$ and $D'$ have disjoint supports

In this case, $\ket{\psi_D}$ and $\ket{\psi_{D'}}$ are orthogonal, i.e., their inner product is zero: $\langle\psi_D|\psi_{D'}\rangle = 0$. The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(\langle\psi_D|\psi_D\rangle + \langle\psi_{D'}|\psi_{D'}\rangle)\right|^2$$

Since $\langle\psi_D|\psi_D\rangle = 1$ and $\langle\psi_{D'}|\psi_{D'}\rangle = 1$ (both states are normalized), the probability of the outcome $\ket{+}$ is:
$$\left|\frac{1}{2}(1 + 1)\right|^2 = \frac{1}{4}(2)^2 = \frac{1}{2}$$

So, if $D = D'$, the probability of the outcome $\ket{+}$ is 1, and if $D$ and $D'$ have disjoint supports, the probability of the outcome $\ket{+}$ is $\frac{1}{2}$.

SCOTT: HOLY CRAP, LOOKS CORRECT!
JUSTIN: 5/5.

(d) Suppose your distributions $D_G$ from part (a) could be efficiently QSampled.  Using your previous work, explain how Graph Isomorphism could then be solved in $BQP$ (quantum polynomial time).

Suppose we can efficiently QSample the distributions $D_G$ and $D_H$ for two input graphs $G$ and $H$. We can then construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for these distributions using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$, respectively, as described in the previous problems.

Now, consider the following quantum circuit acting on $n+1$ qubits:

$$ \Qcircuit @C=.5em @R=0.5em @!R { \lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\  \lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}_G} & \qw & \gate{\mathcal{C}_H} & \qw } $$

As we found earlier, the output state of this circuit is:

$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_{D_G}} + \ket{0} \otimes \ket{\psi_{D_H}})$$

Now, we measure the first qubit of the output state in the $\{\ket{+},\ket{-}\}$ basis. We know from our previous work that if $G$ and $H$ are isomorphic (i.e., $D_G = D_H$), the probability of the outcome $\ket{+}$ is 1, and if $G$ and $H$ are non-isomorphic (i.e., $D_G$ and $D_H$ have disjoint supports), the probability of the outcome $\ket{+}$ is $\frac{1}{2}$.

Thus, we can solve the Graph Isomorphism problem in $BQP$ as follows:

1. Construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for the input graphs $G$ and $H$ using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$.

2. Implement the above quantum circuit and measure the first qubit in the $\{\ket{+},\ket{-}\}$ basis.

3. If the outcome is $\ket{+}$ with probability close to 1, conclude that the graphs $G$ and $H$ are isomorphic. If the outcome is $\ket{+}$ with probability close to $\frac{1}{2}$, conclude that the graphs $G$ and $H$ are non-isomorphic.

This algorithm runs in quantum polynomial time, as the QSampling circuits $\mathcal{C}_G$ and $\mathcal{C}_H$ are assumed to be efficient, and the rest of the operations (Hadamard, NOT, and controlled gates) are also efficient.

JUSTIN: The answer is longer than necessary, repeating some of its previous work.
Also, in theory, if the graphs are isomorphic then the probability of |+> will be exactly 1, not close to 1.

4/4.
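
To make the whole test concrete, here's a purely classical toy simulation (mine, not from the exam) for 3-vertex graphs: $D_G$ is uniform over the adjacency matrices of all relabelings of $G$, the QSampling state puts amplitude $\sqrt{D_G(A)}$ on the basis state $\ket{A}$, and the $\ket{+}$ probability is computed exactly as in part (c).

```python
# Toy classical simulation of the Graph Isomorphism test for 3-vertex graphs.
import itertools
import numpy as np

def relabelings(adj):
    """All adjacency matrices obtained by permuting the vertices of adj."""
    n = len(adj)
    mats = set()
    for perm in itertools.permutations(range(n)):
        mats.add(tuple(tuple(adj[perm[i]][perm[j]] for j in range(n)) for i in range(n)))
    return mats

def qsampling_state(adj):
    """|psi_{D_G}>: amplitude sqrt(D_G(A)) on each adjacency-matrix bitstring A."""
    n = len(adj)
    psi = np.zeros(2 ** (n * n))                  # feasible only for tiny n
    mats = relabelings(adj)
    for m in mats:
        idx = int("".join(str(b) for row in m for b in row), 2)
        psi[idx] = 1 / np.sqrt(len(mats))         # D_G is uniform over relabelings
    return psi

def prob_plus(G, H):
    # probability of |+> from part (c): ||psi_G + psi_H||^2 / 4
    return np.linalg.norm(qsampling_state(G) + qsampling_state(H)) ** 2 / 4

path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path 0-1-2
path2    = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]   # the same path, relabeled
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # not isomorphic to the path

print(prob_plus(path, path2))      # 1.0 -> isomorphic
print(prob_plus(path, triangle))   # 0.5 -> non-isomorphic
```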

(e) So then why doesn't this approach immediately imply a fast quantum algorithm for Graph Isomorphism?  Explain what could go wrong in passing from fast algorithms to sample $D_G$ and $D_H$, to fast algorithms to QSample them.

The approach described above does not immediately imply a fast quantum algorithm for Graph Isomorphism because there is a crucial difference between classical sampling and QSampling.

In the algorithm, we assumed that we can efficiently construct quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_{G'}$ that map $\ket{0^n}$ to $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$, respectively, given that the distributions $D_G$ and $D_{G'}$ can be efficiently sampled classically. However, this assumption might not hold in general.

Classical sampling allows us to draw samples from the distributions $D_G$ and $D_{G'}$, but it does not directly provide a way to prepare the corresponding QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$. Preparing these quantum states requires a coherent superposition of all the elements in the distributions with the appropriate amplitudes, which might not be efficiently achievable using the classical sampling algorithms.

In other words, the ability to efficiently sample from a probability distribution classically does not necessarily imply the ability to efficiently prepare the corresponding QSampling state on a quantum computer. This is the main reason why the approach does not immediately lead to a fast quantum algorithm for Graph Isomorphism.

SCOTT: Correctly explains that sampling doesn't necessarily imply QSampling (i.e. generating a coherent superposition state), but doesn't say that this is because of the difficulty of uncomputing garbage.
2 / 5 extra credit points.

Next, a problem about quantum communication complexity. Total score: 5 / 10.

6. Suppose Alice and Bob hold $n$-bit strings $x=(x_1,...,x_n)$ and $y=(y_1,...,y_n)$ respectively.  One thing they might want to learn is the mod-2 inner product of their strings, $$x_1 y_1 + \ldots + x_n y_n (\bmod 2).$$

(a) Suppose Alice and Bob had a quantum communication protocol in which they are allowed to exchange up to $T$ qubits and to perform arbitrary local unitary transformations to their qubits (possibly including ancilla qubits), that ended with Bob knowing the above inner product, with success probability 1.  Explain how, by exchanging the same number of qubits $T$, Bob could also prepare an $n$-qubit state
of the form $$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
where $x$ is an $n$-bit string held by Alice.

If Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can use the same protocol to prepare the desired $n$-qubit state.

Here's how they can do it:

1. Alice prepares an $n$-qubit state $\ket{\phi}$, where
$$\ket{\phi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

2. Alice applies a controlled-phase gate to each qubit of $\ket{\phi}$, using her $n$-bit string $x$ as the control. This results in the state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

3. Alice sends the $n$-qubit state to Bob.

4. Bob uses the quantum communication protocol to learn the mod-2 inner product of their strings. Since the protocol has success probability 1, Bob will know the inner product with certainty.

5. Bob applies a local unitary transformation to the $n$-qubit state he received from Alice, based on the inner product he learned in step 4. This will result in the desired state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

Since Alice and Bob only exchanged $T$ qubits in the original protocol, they can also prepare the desired $n$-qubit state by exchanging the same number of qubits $T$.

JUSTIN: Up to 4 points.
This answer is partially correct and/or poorly worded. In particular, in step 3 Alice just sends the whole n-qubit state to Bob.
The right idea is in there, but doesn't earn full points.

2/4.

(b) Assume Alice and Bob have no preshared entanglement.  Recall Holevo's Theorem, which implies that in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits.  Using Holevo's Theorem together with part (a), prove that Alice and Bob must exchange at least $n$ qubits, even if they only want to learn the inner product mod 2 of their input strings $x$ and $y$.

From part (a), we know that if Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can also prepare an $n$-qubit state of the form
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
by exchanging the same number of qubits $T$.

Now, let's consider the reverse process. Suppose Bob has the $n$-qubit state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$
He can perform a quantum measurement in the computational basis to obtain a random $n$-bit string $z$. Then, he can compute the mod-2 inner product $x \cdot z$ and send it back to Alice.

Alice, knowing her own string $x$, can compute the mod-2 inner product $x \cdot z$ as well. If the inner product $x \cdot z$ that Bob sent to Alice matches the one she computed, then they have successfully communicated the inner product mod 2 of their input strings $x$ and $y$.

However, according to Holevo's Theorem, in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits. Since the $n$-qubit state prepared in part (a) effectively communicates $n$ bits of information (Alice's string $x$) to Bob, Holevo's Theorem implies that Alice and Bob must exchange at least $n$ qubits to prepare this state.

Therefore, even if Alice and Bob only want to learn the inner product mod 2 of their input strings $x$ and $y$, they must exchange at least $n$ qubits.

JUSTIN: Up to 6 points.
This answer doesn't earn any points. It only says that Alice and Bob learn the inner products and mistakenly claims this is n bits of information.
The correct solution recognizes that applying Hadamard to the state we showed how to prepare in part (a) gives exactly |x>.

0/6.
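
Incidentally, Justin's one-line fix is easy to verify numerically: the state from part (a) is just $H^{\otimes n}\ket{x}$, so one more layer of Hadamards hands Bob the entire string $x$, which is what lets Holevo's Theorem kick in. A quick check (mine, not GPT-4's):

```python
# Check: H^{tensor n} applied to (1/sqrt(2^n)) sum_z (-1)^{x.z} |z> gives exactly |x>.
import numpy as np

n = 4
x = 0b1011                                     # an arbitrary 4-bit string
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                        # H^{tensor n}

# the phase state (1/sqrt(2^n)) sum_z (-1)^{x.z} |z>
phase = np.array([(-1) ** bin(x & z).count("1") for z in range(2 ** n)]) / np.sqrt(2 ** n)

result = Hn @ phase
print(np.argmax(np.abs(result)) == x, np.allclose(result[x], 1.0))   # True True
```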

(c) Now suppose we're no longer working mod 2, and Alice and Bob want to know whether their inner product
$$x_1 y_1 + \ldots + x_n y_n$$
is zero or nonzero as an integer.  (In other words, whether there's an $i$ such that $x_i=y_i=1$.)  Describe a protocol by which Alice and Bob can accomplish this, with high probability, by exchanging only $O(\sqrt{n}\log n )$ qubits in total.  The qubits can be spread across as many rounds of communication as necessary, and can be sent in either direction.

Alice and Bob can use the following protocol, based on the Grover's search algorithm, to determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total:

1. Alice prepares a quantum register of $n$ qubits in the state $\ket{0}^{\otimes n}$.

2. Alice applies the Hadamard gate to each qubit, resulting in the state $$\ket{\psi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

3. Alice and Bob perform the following steps for $O(\sqrt{n})$ iterations:

   a. Alice sends the $n$-qubit state $\ket{\psi}$ to Bob.

   b. Bob applies a controlled-phase gate to each qubit of $\ket{\psi}$, using his $n$-bit string $y$ as the control. This results in the state
   $$\ket{\psi'} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{y\cdot z} \ket{z}.$$

   c. Bob sends the $n$-qubit state $\ket{\psi'}$ back to Alice.

   d. Alice applies a controlled-phase gate to each qubit of $\ket{\psi'}$, using her $n$-bit string $x$ as the control. This results in the state
   $$\ket{\psi''} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{(x+y)\cdot z} \ket{z}.$$
   Note that if $x_i = y_i = 1$ for some $i$, then the phase of the corresponding term in the sum will be flipped twice, resulting in no change. Therefore, the state $\ket{\psi''}$ will have a non-zero amplitude for the terms corresponding to the indices where $x_i = y_i = 1$.

   e. Alice applies the Grover's diffusion operator to the state $\ket{\psi''}$, which amplifies the amplitudes of the terms corresponding to the indices where $x_i = y_i = 1$. The diffusion operator can be implemented as follows:
      - Apply the Hadamard gate to each qubit.
      - Apply a multi-qubit controlled-Z gate, with all qubits as controls and an ancilla qubit as the target.
      - Apply the Hadamard gate to each qubit again.

   f. Alice updates the state $\ket{\psi}$ with the resulting state after applying the Grover's diffusion operator.

4. After $O(\sqrt{n})$ iterations, Alice measures the final state in the computational basis. If she obtains a non-zero result, then there exists an $i$ such that $x_i = y_i = 1$, and their inner product is nonzero as an integer. If she obtains a zero result, then their inner product is zero.

Since Alice and Bob exchange $n$ qubits in each iteration, and they perform $O(\sqrt{n})$ iterations, the total number of qubits exchanged is $O(n\sqrt{n}) = O(\sqrt{n}\log n)$, as each qubit can represent $\log n$ bits of information. Thus, Alice and Bob can determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total.

SCOTT: LOOKS BASICALLY RIGHT, EXCEPT FOR A SLIP-UP AT THE END WHERE IT SAYS
ALICE AND BOB EXCHANGE n QUBITS PER ITERATION RATHER THAN log n ...

JUSTIN: Up to 6 extra credit points.
Yeah, step 3a says it's sending n qubits, when it's actually log n.
It also seems incorrect beginning in the analysis of step d. It switches from summing over |z> and referring to the full strings x and y to referring to individual indices i,x_i,y_i. And I don't know what it means about some amplitudes going to zero since what we're really doing is having some amplitudes be +1 and some -1.
I don't think this protocol is correct as stated.
But it has some of the main ideas. The analysis of the Grover steps is good, and using a phase gate is good.

3/6 extra credit.
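
For what it's worth, here's a small local simulation (mine, not GPT-4's) of the Grover search that the intended protocol is built around: searching for an index $i$ with $x_i=y_i=1$. In the actual protocol (essentially Buhrman-Cleve-Wigderson for set disjointness), each Grover query is implemented with a couple of $O(\log n)$-qubit messages between Alice and Bob, so the $O(\sqrt{n})$ iterations cost $O(\sqrt{n}\log n)$ qubits of communication in total.

```python
# Local simulation of Grover search for an index i with x_i = y_i = 1.
import numpy as np

n = 16
x = np.zeros(n, dtype=int); x[[3, 7, 9]] = 1
y = np.zeros(n, dtype=int); y[[7, 12]] = 1            # intersection: {7}
marked = x & y                                        # the oracle's marked set

state = np.ones(n) / np.sqrt(n)                       # uniform superposition over indices
num_iters = int(round(np.pi / 4 * np.sqrt(n)))        # ~(pi/4) sqrt(n) iterations

for _ in range(num_iters):
    # oracle: phase-flip the marked indices (in the protocol, this is the step
    # that costs a few O(log n)-qubit messages between Alice and Bob)
    state = np.where(marked, -state, state)
    # diffusion (inversion about the mean), done locally
    state = 2 * state.mean() - state

print(np.argmax(state ** 2))                          # 7
print(round(float(state[7] ** 2), 3))                 # ~0.96: high success probability
```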

OK, last question, about the k-SUM problem. Total score: 6 / 15.

7. In the famous $k$-SUM problem, we're given a list of integers $x_1,\ldots,x_n$, and are asked whether there are $k$ distinct indices, $i_1 < \ldots < i_k$, such that $x_{i_1} + \ldots + x_{i_k} = 0$.

For this problem, you can ignore factors of $\log n$ in the running time (indeed, that is what the $\widetilde{O}$ notation means).

(a) Assume $k$ is even AND that we are allowed multi-sets (aka repeated elements are allowed). Describe a classical algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/2})$ time, beating the trivial upper bound of $\widetilde{O}(n^k)$.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/2})$ time using a meet-in-the-middle approach. Here's a high-level description of the algorithm:

1. Divide the list of integers into two equal halves: $A = x_1, \ldots, x_{n/2}$ and $B = x_{n/2+1}, \ldots, x_n$.

2. Compute all possible sums of $k/2$ elements from each half. For each half, this can be done in $O(n^{k/2})$ time by iterating through all possible combinations of $k/2$ elements.

3. Sort the lists of sums for both halves. This can be done in $O(n^{k/2} \log n)$ time using a comparison-based sorting algorithm.

4. For each sum in the first half, perform a binary search in the second half to find a sum that is equal to the negation of the sum from the first half. If such a sum is found, then we have found k distinct indices whose elements sum to 0. Binary search takes $O(\log n)$ time, so this step takes $O(n^{k/2} \log n)$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/2})$.

SCOTT: LOOKS LIKE IT WOULD WORK *IF* WE KNEW THE k-SET WAS EQUALLY BALANCED
BETWEEN THE LEFT AND RIGHT HALVES...

JUSTIN: Up to 5 points.
Right, step 1 is incorrect. Instead, it should generate all O(n^{k/2}) sums of subsets of size k/2. Nothing about dividing into two halves.

4/5.
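
Incorporating Justin's correction, a minimal classical sketch (mine, not GPT-4's) looks like the following: since the problem allows multisets, we enumerate all $\sim n^{k/2}$ sums of $(k/2)$-element multisets drawn from the whole list, with a hash set standing in for the sort-plus-binary-search (the $\log n$ factors are absorbed by the $\widetilde{O}$ anyway).

```python
# Meet-in-the-middle k-SUM (k even, repeated indices allowed).
from itertools import combinations_with_replacement

def k_sum(xs, k):
    """True iff some multiset of k elements of xs sums to 0."""
    # all ~n^{k/2} sums of (k/2)-element multisets, drawn from the whole list
    half_sums = {sum(c) for c in combinations_with_replacement(xs, k // 2)}
    # look for two half-sums that cancel
    return any(-s in half_sums for s in half_sums)

print(k_sum([7, -3, 2, 1, -5], 4))   # True:  7 + (-3) + 1 + (-5) = 0
print(k_sum([1, 2, 4, 8], 4))        # False: all-positive input
```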

(b) Assume $k$ is divisible by 3 and that we are again allowed multi-sets.  Describe a quantum algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/3})$ time.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/3})$ time using a quantum algorithm based on Grover's search. Here's a high-level description of the algorithm:

1. Divide the list of integers into three equal parts: $A = x_1, \ldots, x_{n/3}$, $B = x_{n/3+1}, \ldots, x_{2n/3}$, and $C = x_{2n/3+1}, \ldots, x_n$.

2. Compute all possible sums of $k/3$ elements from each part. For each part, this can be done in $O(n^{k/3})$ time by iterating through all possible combinations of $k/3$ elements.

3. Sort the lists of sums for all three parts. This can be done in $O(n^{k/3} \log n)$ time using a comparison-based sorting algorithm.

4. Define an oracle function $f$ that takes as input a sum from the first part and a sum from the second part, and returns 1 if there exists a sum in the third part that is equal to the negation of the sum of the inputs, and 0 otherwise. This oracle can be implemented using a binary search on the sorted list of sums for the third part, which takes $O(\log n)$ time.

5. Use Grover's search algorithm to find a pair of sums from the first and second parts that satisfy the oracle function. Grover's search has a quadratic speedup over classical search, so it takes $O(\sqrt{n^{k/3}}) = O(n^{k/6})$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/3})$.

JUSTIN: Up to 5 points.
Again, step 1 is incorrect, it shouldn't divide into three parts.
The division into three parts is more problematic this time as it comes up in step 4.
And it ends up claiming n^{k/6} time in step 5, better than the required n^{k/3}.
Basically all this gets correct is to use Grover in some way.

1/5.
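
For comparison, one standard route to $\widetilde{O}(n^{k/3})$ (and presumably what the question was after): build a table of all $\sim n^{k/3}$ sums of $(k/3)$-element multisets, then Grover-search over the $\sim n^{2k/3}$ pairs of such multisets for a pair whose combined sum has its negation in the table, which takes $\sqrt{n^{2k/3}} = n^{k/3}$ iterations with $\widetilde{O}(1)$ work per query. Here's a classical mock-up (mine, not GPT-4's) in which the Grover step is just an exhaustive scan, to make the bookkeeping concrete:

```python
# Classical skeleton of the n^{k/3}-style quantum algorithm (k divisible by 3,
# repeated indices allowed); the real algorithm replaces the double loop with Grover.
from itertools import combinations_with_replacement

def k_sum_quantum_skeleton(xs, k):
    assert k % 3 == 0
    third_sums = [sum(c) for c in combinations_with_replacement(xs, k // 3)]
    table = set(third_sums)            # stands in for the sorted table + binary search
    # The step below is what Grover speeds up quadratically: a search over
    # ~n^{2k/3} pairs for one whose combined sum has its negation in the table.
    return any(-(s1 + s2) in table for s1 in third_sums for s2 in third_sums)

print(k_sum_quantum_skeleton([7, -3, 2, 1, -5], 6))   # True:  7 + (-5) + (-5) + 1 + 1 + 1 = 0
print(k_sum_quantum_skeleton([1, 2, 4], 6))           # False: every sum of six positives is positive
```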

(c) Suppose we wanted to prove that the algorithm from (b) was the fastest possible quantum algorithm for $k$-SUM.  Could that be shown via a lower bound on $k$-SUM's quantum query complexity?  Why or why not?

Proving that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM would require showing a lower bound on k-SUM's quantum query complexity that matches the upper bound achieved by the algorithm. However, a lower bound on the quantum query complexity alone might not be sufficient to prove the optimality of the algorithm, as it only considers the number of queries to the oracle function and does not account for other computational resources, such as time or space complexity.

To show that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM, one would need to establish a lower bound on the quantum time complexity or some other relevant complexity measure that matches the upper bound achieved by the algorithm. This would provide a more comprehensive understanding of the problem's inherent difficulty and the limitations of quantum algorithms in solving it.

JUSTIN: Up to 5 extra credit points.
This is a nice description of query vs time complexity, but misses the point that query complexity can prove a bound of at most Omega(n) since there are at most n potential queries to make.

1/5 extra credit points.