## Archive for the ‘Contests’ Category

### Win a Scott Aaronson Speculation Grant!

Thursday, January 20th, 2022

Exciting news, everyone! Jaan Tallinn, whom many of you might recognize as a co-creator of Skype, tech enthusiast, and philanthropist, graciously invited me, along with a bunch of other nerds, to join the new Speculation Grants program of the Survival and Flourishing Fund (SFF). In plain language, that means that Jaan is giving me $200,000 to distribute to charitable organizations in any way I see fit—though ideally, my choices will have something to do with the survival and flourishing of our planet and civilization. (If all goes well, this blog post will actually lead to a lot more than just $200,000 in donations, because it will inspire applications to SFF that can then be funded by other “Speculators” or by SFF’s usual process.)

Thinking about how to handle the responsibility of this amazing and unexpected gift, I decided that I couldn’t possibly improve on what Scott Alexander did with his personal grants program on Astral Codex Ten. Thus: I hereby invite the readers of Shtetl-Optimized to pitch registered charities (which might or might not be their own)—especially charities that are relatively small, unknown, and unappreciated, yet that would resonate strongly with someone who thinks the way I do. Feel free to renominate (i.e., bring back to my attention) charities that were mentioned when I asked a similar question after winning $250,000 from the ACM Prize in Computing.

If you’re interested, there’s a two-step process this time. Step 1 is to make your pitch to me, either by a comment on this post or by email, depending on whether you’d prefer the pitch to be public or private. Let’s set a deadline for this step of Thursday, January 27, 2022 (i.e., one week from now). Your pitch can be extremely short, like one paragraph, although I might ask you followup questions. After January 27, I’ll take one of two actions in response. I’ll either

(a) commit a specified portion of my $200,000 to your charity, provided the charity formally applies to SFF, provided it isn’t excluded for some unexpected reason (5 sexual harassment lawsuits against its founders or whatever), and provided one of my fellow “Speculators” doesn’t fund it before I do … or else I’ll

(b) not commit, in which case your charity can still apply for funding from SFF! One of the other Speculators might fund it, or it might be funded by the “ordinary” SFF process.

Step 2, which cannot be skipped, is then to have your charity submit a formal application to SFF. The application form isn’t too bad. But if the charity isn’t your own, it would help enormously if you at least knew someone at the charity, so you could tell them to apply to SFF. Again, Step 2 can be taken regardless of the outcome of Step 1.

The one big rule is that anything you suggest has to be a registered, tax-exempt charity in either the US or the UK. I won’t be distributing funds myself, but only advising SFF how to do so, and this is SFF’s rule, not mine. So alas, no political advocacy groups and no individuals. Donating to groups outside the US and UK is apparently possible but difficult.

While I’m not putting any restrictions on the scope, let me list a few examples of areas of interest to me.

• Advanced math and science education at the precollege level: gifted programs, summer camps, online resources, or anything, really, that aims to ensure that the next Ramanujan or von Neumann isn’t lost to the world.
• Conservation of endangered species.
• Undervalued approaches to dealing with the climate catastrophe (including new approaches to nuclear energy, geoengineering, and carbon capture and storage … or even, e.g., studies of the effects of rising CO2 on cognition and how to mitigate them).
• Undervalued approaches to preventing or mitigating future pandemics—basically, anything dirt-cheap that we wish had been done before covid.
• Almost anything that Scott Alexander might have funded if he’d had more money.
• Anything that would enrage the SneerClubbers or those who attack me on Twitter, by doing stuff that even they would have to acknowledge makes the world better, but that does so via people, organizations, and means that they despise.

Two examples of areas that I don’t plan to focus on are:

• AI-risk and other “strongly rationalist-flavored” organizations (these are already well-covered by others at SFF, so I don’t expect to have any comparative advantage in evaluating them), and
• quantum computing research (this is already funded by a zillion government agencies, companies, and venture capitalists).

Anyway, thanks so much to Jaan and to SFF for giving me this incredible opportunity, and I look forward to seeing what y’all come up with!

Note: Any other philanthropists who read this blog, and who’d like to add to the amount, are more than welcome to do so!

### A not-quite-exponential dilemma

Sunday, April 5th, 2009

A commenter on my last post writes:

Scott, it’s your blog – but can’t we switch back to some QC or TCS topics?

I confess: after three years of staid, dignified posts about quantum computing and theoretical computer science, I somehow did the unthinkable, and let this once-respected blog become less about elucidating research than about procrastinating from it.  Many readers, or so they tell me, rely on Shtetl-Optimized to keep abreast of the latest theoretical insights.  And rather than ask those readers whether they also rely on deep-fried Snickers bars for the Vitamin E in the peanuts, I have a moral obligation to serve them.

Fortunately for the theory-starved among you, a certain je ne sais quoi in the air last week has caused me to refocus my attention on research.  The mysterious force affected not only me, but seemingly my entire floor of the Stata Center—giving rise to a carnival-like crescendo of increasingly-frantic theorizing that ended just as inexplicably as it began, around 6:59PM Thursday night.

So today, I’m proud to post something vaguely related to science once again.  On the suggestion of Wim van Dam, I hereby announce another contest, with no prize and possibly no winner.  Your task is simple:

Come up with a catchy name for growth rates of the form 2^(n^α), where 0 < α < 1.

(For example, the running time of the fastest known classical factoring algorithm has this form, as does that of the fastest known algorithm for graph isomorphism.)

The word “subexponential” is often used, but should not be, since we already use it for growth rates smaller than 2^(n^α) for every α > 0.
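To see why this class of growth rates deserves its own name, here’s a quick numerical sketch (the exponent α = 1/2 and the sample sizes are my own illustrative choices, picked as perfect squares so the middle function is an exact integer): functions of the form 2^(n^α) eventually overtake every polynomial, yet stay vanishingly small next to 2^n.

```python
import math

# Growth-rate comparison (illustrative parameters):
#   n^3          -- polynomial
#   2^(n^(1/2))  -- the "in-between" rate the contest asks to name
#   2^n          -- full exponential
def poly(n):
    return n ** 3

def stretched(n):
    return 2 ** math.isqrt(n)  # exact, since n is a perfect square here

def full_exp(n):
    return 2 ** n

for n in [16, 64, 256, 1024]:
    print(f"n={n}: n^3={poly(n)}, 2^sqrt(n)={stretched(n)}, "
          f"2^n has {full_exp(n).bit_length()} bits")
```

At n = 1024, 2^√n = 2^32 already exceeds n^3, while 2^n is a 1025-bit number; that intermediate status is exactly what wants a catchy name.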

This just in: Friend-of-the-blog Greg Kuperberg, who’s always more fun than a cinder block combined with a reprimand, informs me that 2^(n^α) growth rates already have a name: stretched exponentials.  But

1. I’ve never heard that term in my life,
2. I don’t like it: it sounds like something bigger than exponential, not smaller, and
3. Having called 2^√n “subexponential” in his otherwise-great paper on a quantum algorithm for the Dihedral Hidden Subgroup Problem, for Greg to now lecture others on this issue seems like … stretching it.

So my and Wim’s challenge to the readerariat stands.

### Time for another contest

Tuesday, March 17th, 2009

Come up with the best mnemonic device for remembering which is injective and which is surjective.

### Lev R.’s question answered at last; fate of humanity revealed

Tuesday, April 1st, 2008

Almost two years ago, a reader named Lev R. won my Best Anthropicism Contest with the following gem:

why aren’t physicists too interested in computational complexity? because if they were, they’d be computer scientists.

As the champion, Lev won the right to ask any question and have me answer it on this blog. Here was Lev’s question:

I like your “Earth Day, Doomsday, and Chicken Little” post, but you dodged the big question. Will the world end (humans go extinct) anytime soon? Or do you think that despite our best efforts, we’ll somehow end up not destroying ourselves?

In general, I despise being asked to make predictions, even about infinitely less weighty topics — especially when there’s a chance of my being wrong, and people looking back Nelson-Muntz-like and saying “ha ha, Scott was wrong!” That’s one of only several reasons why I could never be a physicist.

An answerable question would be one that asked me to clear up a misconception, or render a moral judgment, or discuss the consequences of a given assumption. (Unanswerable: “When will we see useful quantum computers?” Answerable: “Didn’t that company in Vancouver already build one?”) Questions about relationships between complexity classes or other unsolved math problems are also fine. But as for the universe, how am I supposed to know what it’ll decide to do, among all the things it could do within reasonable bounds of physics and logic? How am I even supposed to have a prior? As a CS theorist, I’m trained to think not about what’s likely to happen, but about the very worst that could happen — within stated assumptions, of course. Among the practical consequences of this attitude, I never gamble and I never play the stock market (and not only because, while there are many things I want, almost none of them can be traded for money). I also don’t worry about being put out of a job by prediction markets. Where the Bayesian stops, and says “every question beyond these is trivial or meaningless,” that’s where I’m just getting started.

But despite everything I’ve said, after years of diligent research into the future of the human race — reading hundreds of trillions of books and articles about climate change, overpopulation, Peak Oil, nuclear proliferation, transhumanism, AI, and every other conceivably relevant topic (what do you think I was doing, writing CS papers?) — I am finally prepared, this somber April 1st, to answer Lev’s question with the seriousness it deserves. Obviously my predictions can only be probabilistic, and obviously I can’t give you the deep reasons behind them — those would take years to explain. I shall therefore present the human future, circa 2100, in the form of a pie chart.

Thursday, November 1st, 2007

In a recent talk at MIT, Umesh Vazirani appealed to the famous Birthday Paradox to say that two random subsets of {1,…,N}, each of size o(√N), probably wouldn’t intersect each other. Of course we all understood what he meant, but it occurred to me that Umesh was actually appealing to the Birthday Unparadox: “If you put three people in a room, chances are no two of them will have the same birthday.”
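Umesh’s observation is easy to check empirically. Below is a Monte Carlo sketch (the universe size, subset size, and trial count are my own illustrative choices): two random subsets of {1,…,N} whose size is well below √N are disjoint the vast majority of the time, roughly a fraction e^(−k²/N) of it.

```python
import random

# Monte Carlo check of the "Birthday Unparadox": two random k-element
# subsets of {1,...,N}, with k well below sqrt(N), usually don't intersect.
def disjoint_fraction(N, k, trials=2000, seed=0):
    rng = random.Random(seed)
    universe = range(1, N + 1)
    disjoint = 0
    for _ in range(trials):
        a = set(rng.sample(universe, k))
        b = set(rng.sample(universe, k))
        if not (a & b):
            disjoint += 1
    return disjoint / trials

# N = 10^6 with k = 100: heuristically ~ e^(-k^2/N) = e^(-0.01) ~ 0.99 disjoint.
print(disjoint_fraction(10**6, 100))
```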

Once I realized that, I started seeing unparadoxes everywhere I looked:

The Banach-Tarski Unparadox: If you cut an orange into five pieces using a standard knife, then put them back together, the result will have exactly the same volume as the original orange.

Braess’ Unparadox: If you add an extra lane to a highway, one possible result will be to decrease congestion.

Hempel’s Unparadox: If you observe a bunch of ravens and find that all of them are black, this might increase your likelihood for the statement “All ravens are black.”

Russell’s Unparadox: The set of all sets that contain themselves as a member, might or might not contain itself as a member (either way is fine).

In the spirit of my highly-successful Best Umeshism and Best Anthropicism contests (remember those?), I now open the floor to you: come up with the best unparadox! The winner will receive absolutely nothing. (If you have to ask what the point is, this contest isn’t for you.)

### The Aaronson $25.00 Prize

Sunday, October 28th, 2007

[Update] For those of you who’ve been living in a non-wifi-enabled cave: four days ago, Stephen Wolfram awarded a $25,000 prize to a 20-year-old undergraduate named Alex Smith, for proving that a particular two-state, three-symbol Turing machine is universal. The prize was to celebrate the fifth anniversary of Wolfram’s paradigm-smashing, foundation-shaking masterpiece, A New Kind of Science. (More from Bill Gasarch’s blog, Nature, and Scientific American.)

Smith sounds like a swell guy who entered this contest for exactly the right reasons: he was at his parents’ place over summer break and had nothing better to do. He deserves the money, and I sincerely hope the CS theory community hasn’t heard the last from him.

Predictably, though, as soon as this story broke I started getting emails from journalists asking me about the far-reaching scientific implications of the new universality proof. In trying to give them an honest answer — one that wouldn’t be misunderstood, or spun to support a pre-existing storyline with which I disagreed — I inevitably came off like an ornery old sourpuss. From Scientific American:

Of course, there may be a reason the problem languished. “Finding the smallest universal [Turing machines] is a neat recreational pursuit,” quantum computation researcher Scott Aaronson of the Massachusetts Institute of Technology says, but “it’s no longer seen as connected to the central questions of the field.” …

“The impact of NKS on all the areas of computer science and physics I’m familiar with has been basically zero,” he says. “As far as I can tell, the main impact is that people now sometimes use the adjective ‘Wolframian’ to describe breathtaking claims for the trivial or well-known.” [Martin] Davis offers a sunnier take: “The book has a lot of beautiful pictures.”

And from Nature:

The solution isn’t hugely relevant to modern computer science, says Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. “Most theoretical computer scientists don’t particularly care about finding the smallest universal Turing machines,” he wrote in an e-mail. “They see it as a recreational pursuit that interested people in the 60s and 70s but is now sort of ‘retro’.”

Having partially degrumpified, in the remainder of this post I wish to offer something positive.

But first some background: a month after NKS came out, I wrote a review of it for the journal Quantum Information and Computation, in which I examined Wolfram’s claims about quantum mechanics and computational complexity, and explained what I saw as the problems with them. (Rather than rehash the review, I’ll just point you there if you’re interested.)

Today I’d like to celebrate the fifth anniversary of my critical review of NKS, by offering a $25 prize for stodgy, conventional work in the field of quantum complexity theory.

**The Aaronson $25.00 Challenge**

In NKS, Wolfram places himself among those computer scientists and physicists who doubt the possibility of quantum computers, not for any practical reason but as a consequence of their disbelieving quantum mechanics itself. As he writes on page 771:

Indeed within the usual formalism [of quantum mechanics] one can construct quantum computers that may be able to solve at least a few specific problems exponentially faster than ordinary Turing machines. But particularly after my discoveries in Chapter 9 [‘Fundamental Physics’], I strongly suspect that even if this is formally the case, it will still not turn out to be a true representation of ultimate physical reality, but will instead just be found to reflect various idealizations made in the models used so far.

Here, then, is the challenge:

If a quantum computer can efficiently solve a problem, can it also efficiently convince Wolfram that the solution is correct? More formally, does every language in the class BQP admit an interactive protocol where the prover is in BQP and the verifier is in BPP?

In other words: can quantum computers always “show their work”? It’s obvious, for example, that if a quantum computer spit out the factors of a 5,000-digit number, you wouldn’t have to believe quantum mechanics (or even know what it was) to check whether the answer was right. I’m asking whether every problem solvable by a quantum computer has the same property. And to make things fair to the quantum computer, I’ll let it give not just a static proof but also an interactive protocol, by which a distrustful polynomial-time classical verifier could become convinced, to arbitrarily high confidence, that the quantum computer knows the right answer.

(An example for the uninitiated: suppose you had two graphs G and H, and suppose you picked one of the graphs at random, randomly permuted its vertices, and gave the result to a quantum computer. And suppose the quantum computer could unfailingly tell you which graph you started with. Clearly this should convince you that G and H are not isomorphic — since if they were isomorphic, then the quantum computer couldn’t have done better than guessing! And this is true even though you never received a proof of non-isomorphism that you could hand to someone else.)
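The graph-isomorphism example above can be sketched in code. In this toy classical version (entirely my own illustration: the graphs, round count, and a brute-force distinguisher standing in for the quantum computer are all assumptions), the verifier secretly permutes one of two non-isomorphic graphs, and a prover that answers correctly round after round convinces the verifier that the graphs really are non-isomorphic.

```python
import itertools
import random

def relabel(graph, perm):
    """Apply a vertex relabeling to a graph given as a frozenset of edges."""
    return frozenset(frozenset(perm[v] for v in edge) for edge in graph)

def isomorphic(g1, g2, n):
    """Brute-force isomorphism test over all n! relabelings (fine for tiny n)."""
    return any(relabel(g1, p) == g2 for p in itertools.permutations(range(n)))

def brute_force_prover(challenge, G, H, n):
    """Stand-in for the quantum computer: say 0 if the challenge came from G."""
    return 0 if isomorphic(challenge, G, n) else 1

def verifier_round(G, H, n, prover, rng):
    """One round: secretly permute G or H, then check the prover's answer."""
    bit = rng.randrange(2)
    perm = list(range(n))
    rng.shuffle(perm)
    challenge = relabel(G if bit == 0 else H, perm)
    return prover(challenge, G, H, n) == bit

# A path 0-1-2-3 versus a star centered at 0: same edge count, not isomorphic.
G = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
H = frozenset({frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})})

rng = random.Random(0)
rounds = 50
wins = sum(verifier_round(G, H, 4, brute_force_prover, rng) for _ in range(rounds))
print(f"prover answered {wins}/{rounds} challenges correctly")
```

If G and H were isomorphic, no prover could beat 50% per round, so k consecutive correct answers leave the verifier roughly 1 − 2^(−k) confident in non-isomorphism, which is the soundness guarantee the challenge asks for in general.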

I’ll award $25 either for a proof that every quantum computation can be “counter-Wolframized,” or for an oracle relative to which some quantum computation provably can’t be. If both problems are solved, then I’ll award $25 for each. Every serious submission will be reviewed by a Prize Committee consisting of me. The Committee may also choose to award smaller prizes for partial results.