Summer 2022 Quantum Supremacy Updates

Update: We’re now finalizing the lecture notes—basically, a textbook—for the brand-new Quantum Information Science II course that I taught this past spring! The notes will be freely shared on this blog. But the bibliographies for the various lectures need to be merged, and we don’t know how. Would any TeXpert like to help us, in exchange for a generous acknowledgment? A reader named Pablo Cingolani has now graciously taken care of this. Thanks so much, Pablo!


I returned last week from the NSF Workshop on Quantum Advantage and Next Steps at the University of Chicago. Thanks so much to Chicago CS professor (and my former summer student) Bill Fefferman and the other organizers for making this workshop a reality. Given how vividly I remember the situation 12 years ago, when the idea of sampling-based quantum supremacy was the weird obsession of me and a few others, it was particularly special to attend a workshop on the topic that, though delayed by covid, drew well over a hundred participants, some in person and some on Zoom, all excited by the dramatic experimental progress of the past few years.

Of course there’s a lot still to do. Many of the talks put an exclamation point on something I’ve been saying for the past couple of years: that there’s an urgent need for better quantum supremacy experiments, which will require both theoretical and engineering advances. The experiments by Google and USTC and now Xanadu represent a big step forward for the field, but since they started being done, the classical spoofing attacks have also steadily improved, to the point that whether “quantum computational supremacy” still exists depends on exactly how you define it.

Briefly: if you measure by total operations, energy use, or CO2 footprint, then probably yes, quantum supremacy remains. But if you measure by number of seconds, then it doesn’t remain, not if you’re willing to shell out for enough cores on AWS or your favorite supercomputer. And even the quantum supremacy that does remain might eventually fall to, e.g., further improvements of the algorithm due to Gao et al. For more details, see, e.g., the now-published work of Pan, Chen, and Zhang, or this good popular summary by Adrian Cho for Science.

If the experimentalists care enough, they could easily regain the quantum lead, at least for a couple more years, by (say) repeating random circuit sampling with 72 qubits rather than 53-60, and hopefully circuit depth of 30-40 rather than just 20-25. But the point I made in my talk was that, as long as we remain in the paradigm of sampling experiments that take ~2^n time to verify classically and also ~2^n time to spoof classically (where n is the number of qubits), all quantum speedups that we can achieve will be fragile and contingent, however interesting and impressive. As soon as we go way beyond what classical computers can keep up with, we’ve also gone way beyond where classical computers can check what was done.

I argued that the only solution to this problem is to design new quantum supremacy experiments: ones where some secret has been inserted in the quantum circuit that lets a classical computer efficiently verify the results. The fundamental problem is that we need three properties—

  1. implementability on near-term quantum computers,
  2. efficient classical verifiability, and
  3. confidence that quantum computers have a theoretical asymptotic advantage,

and right now we only know how to get any two out of the three. We can get 1 and 2 with QAOA and various other heuristic quantum algorithms, 1 and 3 with BosonSampling and Random Circuit Sampling, or 2 and 3 with Shor’s algorithm or Yamakawa-Zhandry or recent interactive protocols. To get all three, there are three obvious approaches:

  1. Start with the heuristic algorithms and find a real advantage from them,
  2. Start with BosonSampling or Random Circuit Sampling or the like and figure out how to make them efficiently verifiable classically, or
  3. Start with protocols like Kahanamoku-Meyer et al.’s and figure out how to run them on near-term devices.

At the Chicago workshop, I’d say that the most popular approach among the speakers was 3, although my own focus was more on 2.

Anyway, until a “best-of-all-worlds” quantum supremacy experiment is discovered, there’s plenty to do in the meantime: for example, better understand the classical hardness of spoofing Random Circuit Sampling with a constant or logarithmic number of gate layers. Understand the best that classical algorithms can do to spoof the Linear Cross-Entropy Benchmark for BosonSampling, and build on Grier et al. to understand the power of BosonSampling with a linear number of modes.
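
For concreteness, the linear cross-entropy score in the qubit setting is the usual quantity: given samples \( x_1, \ldots, x_k \) drawn from an \( n \)-qubit device running a circuit \( C \),

$$ \mathrm{XEB} \;=\; \frac{2^n}{k} \sum_{i=1}^{k} p_C(x_i) \;-\; 1, $$

where \( p_C(x) \) is the probability that an ideal simulation of \( C \) outputs \( x \). The BosonSampling variant is analogous, with photon-detection outcomes in place of bitstrings.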

I’ll be flying back with my family to Austin today after seven weeks at the Jersey shore, but I’ll try to field any questions about the state of quantum supremacy in general or this workshop in particular in the comments!

32 Responses to “Summer 2022 Quantum Supremacy Updates”

  1. Ordinary computers can beat Google’s quantum computer after all | Combinatorics and more Says:

    […] There is a recent post on Shtetl Optimized with Scott Aaronson’s take on the supremacy situation. Overall our assessments are not very far apart. I don’t […]

  2. Rand Says:

    It was a great workshop (thanks, Bill!) and it was an excellent talk! In retrospect, it would have been nice to have a panel with all the quantum supremacy groups (and maybe the spoofers as well) moderated by you or Bill. A comparison of the different experiments and how robust they are would have been really interesting.

    I wouldn’t call “all quantum speedups that we can achieve… fragile and contingent” though. I can’t verify that the billionth digit of pi is whatever my computer says it is, but that doesn’t mean it isn’t doing something I can’t do. Verifiability is important, but I wouldn’t bake it into the definition of quantum supremacy.

  3. Mark S. Says:

    This might be dumb and easily dismissed, but following Mahadev-Vazirani-Vidick and Kahanamoku-Meyer et al., would there be any way to leverage all the leftover ASIC bitcoin or other cryptocurrency mining rigs and use them to invert SHA hashes to find preimages of some known and sampled image?

    For example, we could prepare a uniform superposition over, say, n qubits in a first register, *calculate the SHA of the first register into a second register*, measure and record the second register in the computational basis, and Hadamard and measure the first register. We could learn *one bit* of information about all the n-bit preimages that collide onto the measured second register. We then set our mining rigs to work to invert and find all these n-bit preimages of the measured second register to prove that our first register was in such a superposition.
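
    Sketching the state evolution: after the uniform superposition and the hash computation, the joint state is the left-hand side below; measuring the second register and obtaining y collapses the first register to an equal superposition over the preimages of y (here f is the hash, mapping n bits to m bits):

    $$ \frac{1}{\sqrt{2^n}} \sum_{x\in\{0,1\}^n} |x\rangle\,|f(x)\rangle \;\xrightarrow{\ \text{measure 2nd register, obtain } y\ }\; \frac{1}{\sqrt{|f^{-1}(y)|}} \sum_{x:\,f(x)=y} |x\rangle. $$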

    Well, we wouldn’t know *a-priori* how many collisions there would be, and it is far from clear to me whether SHA256, or even a much simpler or specially optimized SHA, is anywhere near NISQ-implementable. But we do have a lot of cheap and spent mining rigs on the market that are good at finding such preimages, and a lot of dedicated folks in the crypto world who are nervous about and/or interested in quantum computing and perhaps willing to put in resources.

  4. Scott Says:

    Mark S. #3: Neat idea, but I don’t see how to make it work. The problem is that measuring a superposition over preimages in the Hadamard basis will give you useful information only if the hash function satisfies the Simon promise or something of that kind. But it’s been an open problem for 30 years to find an explicit, hard hash function that satisfies the Simon promise.

    The recent protocols get around this problem by cleverly relying on an interactive protocol between the QC and the verifier, combined with a trapdoor hash function. The Kahanamoku-Meyer et al. paper even shows that f(x) = x^2 mod N would work for the hash function. Alas, even this would probably require a number of logical qubits pushing 1000, and maybe 100,000 gates, and it’s an open problem whether we’ll be able to get that in the NISQ era (i.e., without error-correction).
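
    (Roughly why x^2 mod N works, leaving aside the paper’s exact domain restrictions: for N = pq a product of two distinct odd primes, every quadratic residue y coprime to N has exactly four square roots,

    $$ f^{-1}(y) \;=\; \{\,a,\ N-a,\ b,\ N-b\,\} \quad \text{for } f(x) = x^2 \bmod N, $$

    so restricting the domain to 0 < x < N/2 keeps exactly one root from each ± pair and makes f 2-to-1. Knowing the factorization of N lets you compute both roots efficiently, while recovering a colliding pair from y without the factorization is as hard as factoring N; that’s the trapdoor.)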

  5. Mark S. Says:

    Scott #4: Thanks! Well, if we had n qubits in a superposition over any two basis states x_1 and x_2, and we Hadamard’ed them and measured, I thought our output string d would be orthogonal to the bitwise XOR of those two states x_1 and x_2, regardless of whether there’s any Simon promise? We could use a trapdoor, but even without one, if we could invert the hash to find those two preimages x_1 and x_2, we could classically take their XOR and determine whether our measured string d is orthogonal to it.
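
    (Spelling out that standard calculation: applying the Hadamards to the two-preimage superposition gives

    $$ H^{\otimes n}\,\frac{|x_1\rangle + |x_2\rangle}{\sqrt{2}} \;=\; \frac{1}{\sqrt{2^{n+1}}} \sum_{d\in\{0,1\}^n} \left[ (-1)^{d\cdot x_1} + (-1)^{d\cdot x_2} \right] |d\rangle, $$

    so the amplitude on d vanishes unless \( d\cdot x_1 = d\cdot x_2 \pmod 2 \), i.e. unless \( d\cdot(x_1\oplus x_2) = 0 \pmod 2 \).)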

    At any rate it’s also neat that, if I’m not mistaken, Kahanamoku-Meyer et al.’s function is not necessarily post-quantum! My understanding is that with a Kahanamoku-Meyer et al. witness you either prove you’ve calculated x^2 mod N in superposition with a quantum computer, or you’ve inverted x^2 mod N, likely with a quantum computer!

    But, thousands of logical qubits is too unrealistic for now. 🙁

  6. Sam Scholten Says:

    Here’s how I’d solve the bibliography problem, although I’m not sure how you’ve organised your lectures.

    Use the bibexport tool (https://tex.stackexchange.com/a/41823) to generate a .bib file that contains all (& only) the entries you actually cited (do this for each lecture).

    Then use bibtool (https://tex.stackexchange.com/a/20040) to merge the per-lecture .bib files together.

    Finally you can create a bibliography out of the merged .bib file, without having to individually cite all entries, with \nocite{*}.

  7. Scott Says:

    Mark S. #5: Yes, I thought that was the cleverest idea in the Kahanamoku-Meyer et al. paper!

    And yes, you’ll always observe something orthogonal to the XOR of the two preimages. The question is, how do you exploit that? If the function satisfies the Simon promise, you can exploit it because the orthogonal subspace is always the same. If it’s an interactive protocol involving a trapdoor hash function, you can exploit it because the verifier secretly knows both preimages. Can you invent some other scenario where you could exploit it?

  8. Rand Says:

    Specifics on the bibliography issue?

    Bibtex or Biblatex will happily accept multiple .bib files: https://tex.stackexchange.com/questions/84099/bibliographies-from-multiple-bib-files

  9. Mark S. Says:

    Scott #7: Thanks for letting me espouse my hacky thoughts! If we had some 2-to-1 cryptographic function f(x) that doesn’t satisfy the Simon promise and doesn’t necessarily have a trapdoor, but is merely hard to invert, I guess you could combine “proof-of-quantumness” with “proof-of-work”.

    For example, a quantum computer (Peggy) can measure the second register y=f(x) in the computational basis, measure the first register in the Hadamard basis to obtain a string d, and broadcast both. Her broadcast commits her to the image y and to a string d that is orthogonal to the XOR of the two preimages x_1 and x_2.

    Miners (Vicky) could then set their rigs to work to invert y=f(x) by finding both x_1 and x_2, and Bob’s-your-uncle, as they broadcast a proof-of-work when f(x_1)=f(x_2)=y, while also verifying the proof-of-quantumness when d.(x_1 XOR x_2)=0. Peggy the quantum computer had possession of only one bit of information that Vicky the miners could only get by finding both preimages.
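
    A minimal sketch of Vicky’s classical check, assuming the miners have already found both preimages (the truncated-SHA toy hash and all the names here are just illustrative, not anyone’s actual protocol):

        import hashlib

        def toy_hash(x: int, n_bits: int) -> int:
            # Illustrative stand-in for a "much simpler SHA": SHA-256 truncated to n_bits.
            digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
            return int.from_bytes(digest, "big") % (1 << n_bits)

        def verify(y: int, d: int, x1: int, x2: int, n_bits: int) -> bool:
            # Proof-of-work part: x1 and x2 are distinct preimages of the committed image y.
            collision = x1 != x2 and toy_hash(x1, n_bits) == y and toy_hash(x2, n_bits) == y
            # Proof-of-quantumness part: the committed d is orthogonal to x1 XOR x2 (mod 2).
            orthogonal = bin(d & (x1 ^ x2)).count("1") % 2 == 0
            return collision and orthogonal

    (Of course, essentially all of the cost is in the miners finding x_1 and x_2 in the first place; the check itself is trivial.)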

    (You could build a blockchain by salting f(x) with the string d of the previous block… If Peggy is consistent in giving d that are orthogonal to the XOR of the preimages found by Vicky, then after a height h of the chain you can reject the null hypothesis that Peggy is classical, with confidence 1-2^-h…)

    But such a proof-of-quantumness may be a flight of fancy in the NISQ era, and it meets hardly any of your desiderata, since we trade “efficient classical verifiability” for “verifiable by mining rigs spitting out tons of CO2”. It’s also not clear how it would scale. I also think the hash function f(x) might need to have the adaptive hardcore-bit property, and I’m not sure how generically that’s satisfied by something like a 2-to-1 version of SHA256.

  10. Scott Says:

    Mark S. #9: I see now—that’s a neat idea that I somehow hadn’t considered! One might argue: as long as we’re spending exponential time to verify a quantum computation anyway, why not do it your way, and thereby inch closer to Simon’s and Shor’s algorithms?

    On the other hand:

    1) It’s not trivial to articulate what the advantage here would be, compared to the existing random circuit sampling experiments.

    2) The cost seems greater, because you still need to compute the hash function (rather than “making full use of the qubits you have”).

    3) Ideally, you’d want a hash function that was exactly 2-to-1, or nearly so. But do you know such a function much simpler than x^2 mod N, which also gives you the trapdoor property for free?

  11. Inar Says:

    There is a nice tool to clean a bibtex bibliography https://flamingtempura.github.io/bibtex-tidy/
    Also, in particle physics we use the INSPIRE database. It has its own bibliography generator: https://inspirehep.net/bibliography-generator . The downside is that the bib keys have to be either in INSPIRE format or arXiv numbers.

    Anyways, I would be happy to help.

  12. Ted Says:

    Scott, when you say that “BosonSampling gives you properties #1 and #3”, are you referring to Fock-state or Gaussian BosonSampling, and with or without photon losses?

    I know that this is a fast-moving and unsettled story, but my understanding is that lossless Fock-state BosonSampling (probably) gives you property #3, but not #1, while lossy/noisy Gaussian BosonSampling gives you property #1, but maybe not #3. And even lossless Gaussian BosonSampling might not give you #3 (depending on details like the nature of the photon detectors and the number of modes relative to photons), although I don’t think that lossless Gaussian BosonSampling would give you #1 anyway.

  13. Mark S. Says:

    Scott #10: Yeah it’s far from obvious what clear and convincing advantage a proof-of-work based proof-of-quantumness would have over existing random-circuit protocols and/or the recent, similar, trapdoor protocols.

    Well, we are calculating, essentially, a handful of bits of information in existing linear cross-entropy tests and in a proof-of-work based proof-of-quantumness. However, with the former you’d have to repeat the calculation to verify the quantumness, while with the latter, although there’s an initial heavy burn of CO2 from the mining rigs, *once found*, the certificate is a simple 5-tuple (f, y, d, x_1, x_2), which can be trivially verified. I think it’s been said that there’s too much opportunity cost to dedicate Summit or other supercomputers to reverifying claims of non-classicalness with the sampling-based experiments, but even a very patient human could check f(x_1)=f(x_2)=y and d.(x_1 XOR x_2)=0. I don’t think that’s a *big* point in favor of a PoW-PoQ scheme, but it’s a difference…

    Also, the security of the x^2 mod N scheme has to compete with the best classical algorithms for factoring N – which I think means that n = log N needs to be more than 1,000 bits (qubits) to convincingly argue that the trapdoor wasn’t classically inverted. However, we *know* from studying the most recent bitcoin blocks that it takes about 10 minutes to find a preimage of SHA256 that hashes onto a string beginning with ~75 zeroes. That is, there might be a ~10x reduction in the number of logical qubits of the input and output registers, compared to x^2 mod N. Of course, the bitcoin miners are motivated to earn bitcoin, while the motivation to invert a hash to prove the quantumness of a quantum computer is far from clear, so that might be a silly comparison. (Peggy the quantum computer could offer a smart contract of $b bitcoin for the first miner Vicky to find preimages x_1 and x_2 of her image y…)

    On the other hand, the x^2 mod N hash *has got to be* pretty efficient to implement reversibly; I guess the number of CCNOTs grows linearly or thereabouts with n. While SHA256 has been optimized for ASICs, I suspect that the number of ancillae and the depth of a reversibly-implemented SHA256 circuit, although constant, may be prohibitive for the NISQ era, so the advantage above might be washed away.

    Lastly, I’d guess that the number of collisions onto SHA256 is Poisson-distributed. I think that means that if our preimage is m qubits and we only care about the last m-1 qubits of SHA256, then only about 1-1/e = 63% of the time would such a hash fail to be 1-to-1, and we’d have no way to know *a priori* whether it’s 2-to-1. The miners would often chase a nonexistent white whale if there are no collisions. A gap over classical might still exist, but more statistics would need to be generated. Clearly, as you say, x^2 mod N by design doesn’t have this issue (and has the trapdoor to boot!)

    Some advantages and clear disadvantages to a PoW-PoQ approach, but still fun to think about, nonetheless!

  14. Shawn G. Says:

    I found out about this workshop too late! Will the talks be posted online?

  15. Raul Garcia-Patron Says:

    Very interesting summary of the “three properties” needed for a quantum advantage demo. Concerning approach 2, “Start with BosonSampling or Random Circuit Sampling or the like and figure out how to make them efficiently verifiable classically,” the paper by Daniel Stilck-Franca and myself, https://arxiv.org/abs/2011.12173 , has an interesting twist on this question. If a QC company implementing RQC can provide witnesses certifying that a classical simulation is far in total variation (TV) distance from the RQC, one can transform that into a strategy to simulate the device using mirror descent.

  16. Frank A Says:

    Hi Scott,

    It seems that the three properties you mentioned are all satisfied in this recent paper from USC: https://arxiv.org/abs/2207.07647

    It’s not a supremacy experiment in the traditional sense, but I wonder what you think about whether this passes your bar.

  17. Scott Says:

    Frank A #16: Nope, I don’t count it, because

    1) it involves an oracle and
    2) even there, the speedup is n vs 1 rather than exponential vs polynomial.

    If you’re OK with those drawbacks, though (which you shouldn’t be), we’ve known the “solution” for 30 years! 🙂

  18. Scott Says:

    Raul Garcia-Patron #15: Nice, I’ll put your paper on my stack to read more carefully!

    To be clear, I wasn’t suggesting some clever new verification strategy for RCS, which almost certainly doesn’t exist. I was suggesting modifying RCS to use non-random circuits with secrets inserted into them (which, however, might be hard to distinguish from random circuits using a classical computer).

  19. Frank A Says:

    Scott #17: The setup is one of a game where the player has to guess a hidden string using just one guess. There is a black box that is either quantum or classical, and when the player uses the quantum box she wins.

    As far as I can tell the cost of implementing the quantum oracle is accounted for, and in the noiseless case the speedup is exponential since the player in the game they set up only gets one classical oracle query instead of n, before the hidden bitstring changes. The experimental speedup is only polynomial, though.

    I’m not sure the critique that we’ve known the “solution” for 30 years is fair; if you mean that we’ve known the Bernstein-Vazirani algorithm for a long time, well then that would disqualify somebody implementing Shor’s algorithm with a speedup as well 🙂

  20. Scott Says:

    Frank A #19: Yes, I understood all that. Since John Preskill coined the term “quantum supremacy” a decade ago, the term has consistently meant beating a classical computer at a well-defined task that’s fully specified by the mapping you want between classical input data and classical output data. No oracles allowed, let alone oracles that change from one query to the next.

  21. Frank A Says:

    Scott #20: Fair enough, no oracles are allowed as part of the definition of quantum supremacy. I even mentioned that this work isn’t traditional supremacy in my first comment; OK, I agree that it’s not supremacy at all.

    But there is a bigger question of what constitutes a quantum advantage under the three properties you mentioned, which I think are spot on in a more general sense! There isn’t a lot of experimental work out there that satisfies these criteria under the broader heading of quantum advantage. In fact, as far as I know, the USC paper is the only one so far that does, with the possible exception of the recent “Quantum advantage in learning from experiments” (https://doi.org/10.1126/science.abn7293); is that a fair assessment?

  22. Mike Says:

    Excited for the lecture notes. I seem to recall you mentioning some time ago that some of your students were putting together a quantum complexity book. Is this what you were referring to?

  23. asdf Says:

    “But did you try 1-2-3-4-5?”

    “Turns out the encryption key in that script [for updating Hyundai car firmware] is the first AES 128bit CBC example key listed in the NIST document SP800-38A.”

  24. Mateus Araújo Says:

    I’d like to second Rand #2’s comment about not needing to require efficient classical verifiability. There are so many interesting problems where this is not possible! For example, calculating the ground-state energy of a many-body Hamiltonian. There’s no hope of classically verifying that, but if we had a quantum computer that could do it, would anyone dispute that we had achieved quantum supremacy? We can even have a verification of sorts by experimentally preparing the analogous physical system. But that’s not at all an efficient classical verification.

    I think a more reasonable standard of skepticism is the one we apply to classical computers. We run benchmarks, test for errors, and if these tests are passed we trust their answers; we don’t believe they’ll maliciously make mistakes only when running calculations we can’t verify.

  25. Nick Drozd Says:

    Three years ago you said:

    The world, it seems, is going to be denied its clean “moon landing” moment, wherein the Extended Church-Turing Thesis gets experimentally obliterated within the space of a press conference. This is going to be more like the Wright Brothers’ flight—about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Will and Orville finally agreed to do public demonstration flights. (This time around, though, it thankfully won’t take that long to clear everything up!)

    From that perspective, how are things looking now?

  26. fred Says:

    Nick #25

    It’s true of most engineering things that have many (often unforeseen) practical applications and can be improved continuously.
    The current world can’t function without classical computers, yet we can’t point back at a single moment in time when we could have said “that’s it, computers are now finally here and everything’s changing!” Same with the electric grid, cars, planes, the internet, etc. Being computers, QCs also fall into this category, even if they’re quite specialized as computers.

    That’s not true of less practical engineering feats: big programs that started with a clear goal, like the atomic bomb, the moon race, the LHC, or nuclear fusion. Those are really specific realizations of many underlying technologies all coming together.

  27. Mystery Says:

    Scott,

    I wanted to share some big news (to me, anyway) with you and your blog readers. I asked out a coworker at my company on a date (I work mostly remotely, but we still have some time in the office) and she said yes (!!!!!!!!!) Not only that—but she told me she actually likes me and was hoping I’d ask her out (!!!!!!!!!!!!!!!!!) And she’s really smart and pretty (!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!)

    The mental breakdown and crisis I’ve had for the last couple of months, and the drama I caused you, were a wake-up call that I needed to really put in the work to change my life. I took some of your advice, and also other advice I’ve gotten from people in my life—and it turns out that she doesn’t think I’m a creep at all—she actually likes me!!!!!!

    We had some conversations before but I was always super nervous about coming off as a creep or a harasser. I think so much depends on your state of mind. It’s easy to get sucked into this mental hole where you feel like everyone is hating you and judging you and thinks you’re creepy/ugly/etc., when really nothing could be further from the truth!

    I deeply regret all the pain I caused you. At least you can have the satisfaction of knowing you helped one more lost soul.

    Mystery

  28. Shashvat Shukla Says:

    Hi Scott, why is it the case that without efficient verifiability the speedups are “fragile and contingent” as you say?

  29. fakename69 Says:

    Mystery #27

    Doesn’t the implied correlation between real-world (dating) success and mental sanity, maturity, and morality strike anyone else as odd? What could it possibly imply about everyone who has never asked anyone else out? That they aren’t mentally sane, don’t have any form of maturity or morality, or some mixture of all three? I move to strike the supposed correlation, and say: let’s wait until we’ve met everyone in the world who has reached an unseemly age without asking anyone out, and then make a judgment about what could be going on in terms of that person’s maturity and morality. Instead of real-world validation being an indictment of maturity and morals generally, let maturity and morals themselves be the measure, on an individual basis.

    Say, Mystery: what if, after all that advice, you had never gotten the confidence to ask her out, or the success? Would something then be wrong with you, in terms of maturity or morality? The obvious answer is that self-esteem in those areas isn’t related to those things. It’s a societal illusion.

    What if I were to tell you that I’m 28, have never gotten a degree (I dropped out before my AA), have never really had a job (I’ve made about $30k part-time over ten years), and have been living at home with my 61- and 71-year-old parents? That I have never dated or had sex? What if I told you that I once asked someone out for coffee, and she promptly ghosted me and had a kid with a buff version of me a few weeks later?

    I’m not sure what Scott talked to you about, and I’m glad you’re better, but … what about me, and everyone I know who on paper is doing even worse? What then? What about us?

    The answer I know to be true, from what I’ve seen with myself and other people: we are all fine and all beautiful, everyone, independent of happenstance real-world success. Validation is only as meaningful as people choose to make it.

  30. fred Says:

    I have a question.

    Quantum computers use basic unitary gates; for example, the CNOT gate takes two qubits as input, one acting as a control, i.e., the other qubit’s state is flipped or not depending on the control qubit.
    My understanding is that, for spin, such a gate could be realized by applying a magnetic field to modify the spin.
    In a QC, each qubit is typically in a superposition state (of 0 and 1, or spin up and spin down), and as a result, do all the gates also need to be in a superposition state?

    So would the CNOT gate above have to be some sort of gadget producing a magnetic field, able to be at once in a superposition of both applying and not applying the field, based on the superposition state of the control qubit?
    And the same goes for every single gate in the QC?

    If that’s true, isn’t the typical claim that a QC with N qubits needs to maintain 2^N states at once failing to take all the gates (themselves little quantum systems controlling lasers or magnetic fields) into account?

  31. fred Says:

    What I mean is that all the gates also have to be kept in a state of quantum coherence, not just the qubits.
    Of course, if a gate is somewhat “passive”, like a beam splitter and recombiner doing some AND function, it’s not that relevant; but if qubits act as controls to a gate, e.g. to implement a CNOT, then those gates have to be kept in superposition as well (in coherence with the rest of the QC), and that’s way harder, right?

  32. Lazar Ilic Says:

    Interesting post. Reminds me of a recent Reddit.com/r/math post on a group called The TeXromancers, who are starting a helpful project of LaTeXifying books:

    https://www.reddit.com/r/math/comments/wrbov3/the_texromancers_a_tex_group/
