If you like math, and you don’t yet have a Math Overflow account, stop reading this post now (not right now, but by the end of the sentence) and set one up, before returning here to finish reading the post. Math Overflow is the real deal: something that I’ve missed, dreamed about, and told my friends someone ought to set up for the last fifteen years, and that now finally actually exists. (It was founded by Berkeley grad students and postdocs Anton Geraschenko, David Brown, and Scott Morrison.) If you have a research-related math problem you can’t solve, you can post it there and there’s a nontrivial chance someone will solve it (or at least tell you something new), possibly within eleven minutes. If you’re an ambitious student looking for a problem to solve, you can go there and find one (or a hundred).
To take one example, here’s a terrific complexity question asked by Timothy Gowers, about a notion of “average-case NP-completeness” different from the usual notions (if you think he’s asking about a well-studied topic, read the question more carefully). I didn’t have a good answer, so I wrote a long, irrelevant non-answer summarizing what’s known about whether there are average-case NP-complete problems in the conventional sense.
But my real topic today is the sensitivity versus block-sensitivity problem, which I recently posted to MO in a disguised (and, dare I say, improved) form.
For non-Boolean-function-nerds, sensitivity vs. block-sensitivity is a frustrating and elusive combinatorial problem, first asked (as far as I know) by Noam Nisan, and by Nisan and Szegedy, around 1991. Here’s a lovely paper by Claire Kenyon and Samuel Kutin that gives background and motivation as well as partial results.
Briefly, let f:{0,1}^n→{0,1} be a Boolean function, with n input bits and 1 output bit. Then given an input x = x_1…x_n to f, the sensitivity of x, or s_x(f), is the number of bits of x that you can individually flip to change the value of f. The sensitivity of f is s(f) = max_x s_x(f). Also, the block-sensitivity of an input x, or bs_x(f), is the maximum number of disjoint sets of bits of x (called “blocks”) such that flipping any one block changes the value of f, and the block-sensitivity of f is bs(f) = max_x bs_x(f). Clearly 1 ≤ s(f) ≤ bs(f) ≤ n for every non-constant Boolean function f. (bs(f) is at least s(f) since you can always take each block to have size 1.)
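Since everything here is finite, you can sanity-check tiny cases by sheer brute force. Here’s a minimal Python sketch (my own illustration, not from any of the papers; the encodings and helper names are made up for this post): inputs are n-bit integers, and blocks are bitmasks.

```python
def sensitivity(f, n):
    """s(f): the maximum, over all inputs x, of the number of single-bit
    flips that change f(x).  Inputs are n-bit integers; f returns 0 or 1."""
    best = 0
    for x in range(1 << n):
        fx = f(x)
        best = max(best, sum(f(x ^ (1 << i)) != fx for i in range(n)))
    return best


def block_sensitivity(f, n):
    """bs(f): the maximum, over all inputs x, of the largest number of
    pairwise disjoint blocks of bits whose joint flip changes f(x).
    Tries all 2^n - 1 nonempty blocks per input, so only for tiny n."""
    best = 0
    for x in range(1 << n):
        fx = f(x)
        # Sensitive blocks for x, as bitmasks.  A maximum disjoint family
        # can be assumed to consist of minimal sensitive blocks, since
        # every sensitive block contains a minimal one.
        sens = [B for B in range(1, 1 << n) if f(x ^ B) != fx]
        minimal = [B for B in sens
                   if not any(S != B and S & B == S for S in sens)]

        def pack(i, used):
            # Largest disjoint subfamily of minimal[i:] avoiding `used` bits.
            if i == len(minimal):
                return 0
            score = pack(i + 1, used)            # skip block i...
            if minimal[i] & used == 0:           # ...or take it
                score = max(score, 1 + pack(i + 1, used | minimal[i]))
            return score

        best = max(best, pack(0, 0))
    return best
```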
To give some examples, the n-bit OR function satisfies s(OR)=bs(OR)=n, since the all-zeroes input is sensitive to flipping any of the n input bits. Likewise s(AND)=bs(AND)=n, since the all-ones input is sensitive to flipping any of the bits. Indeed, it’s not hard to see that s(f)=bs(f) for every monotone Boolean function f. For non-monotone Boolean functions, on the other hand, the block-sensitivity can be bigger. For example, consider the “sortedness function”: a 4-input Boolean function f that outputs 1 if the input is 0000, 0001, 0011, 0111, 1111, 1110, 1100, or 1000 (that is, if the bits are in non-decreasing or non-increasing order), and 0 otherwise. Then you can check that bs(f) is 3, whereas s(f) is only 2.
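If you’d rather verify those numbers than take my word for them, feed the sortedness function to the brute-force checkers above (again my own encoding; the accepted set happens to be closed under bit reversal, so it doesn’t matter which end of the string maps to the low-order bit):

```python
# The 4-bit sortedness function, with inputs encoded as 4-bit integers.
ACCEPT = {0b0000, 0b0001, 0b0011, 0b0111, 0b1111, 0b1110, 0b1100, 0b1000}
sortedness = lambda x: int(x in ACCEPT)

print(sensitivity(sortedness, 4))        # 2
print(block_sensitivity(sortedness, 4))  # 3
```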
Here’s the question: What’s the largest possible gap between s(f) and bs(f)? Are they always polynomially related?
What makes this interesting is that block-sensitivity is known to be polynomially related to a huge number of other interesting complexity measures: the decision-tree complexity of f, the certificate complexity of f, the randomized query complexity of f, the quantum query complexity of f, the degree of f as a real polynomial, you name it. So if, as is conjectured, sensitivity and block-sensitivity are polynomially related, then sensitivity—arguably the most basic of all Boolean function complexity measures—ceases to be an outlier and joins a large and happy flock.
The largest known gap between sensitivity and block-sensitivity is quadratic, and is achieved by “Rubinstein’s function.” To define this function, assume for simplicity that n is an even perfect square, and arrange the input bits into a √n-by-√n square grid. Then we’ll set f(x)=1 if and only if there exists a row that has two consecutive 1s and all of its other entries equal to 0. You can check that bs(f)=n/2 (consider the all-zeroes input: each row contains √n/2 disjoint pairs of adjacent bits, and flipping any one pair changes f from 0 to 1), whereas s(f)=2√n (the worst case is when every row contains exactly one 1).
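And here’s the same sanity check for Rubinstein’s function, in the smallest interesting case n=16 (a 4-by-4 grid); again my own sketch. One caveat: the exhaustive block-sensitivity checker from before is hopeless at this size, so below we just certify the lower bound bs(f) ≥ n/2 at the all-zeroes input, while the sensitivity is still cheap to brute-force over all 2^16 inputs.

```python
def rubinstein(x, m):
    """Rubinstein's function on n = m*m bits (m even), with the n-bit
    integer x read as an m-by-m grid: returns 1 iff some row contains
    exactly two 1s and those two 1s are adjacent."""
    for i in range(m):
        row = (x >> (i * m)) & ((1 << m) - 1)
        if bin(row).count("1") == 2 and any((row >> j) & 3 == 3
                                            for j in range(m - 1)):
            return 1
    return 0


m = 4                           # so n = 16, an even perfect square
f = lambda x: rubinstein(x, m)
print(sensitivity(f, m * m))    # 8 = 2*sqrt(n); a few seconds of brute force

# bs(f) >= n/2, witnessed at the all-zeroes input: flipping any one of the
# n/2 disjoint adjacent pairs within a row turns f from 0 to 1.
pairs = [3 << (i * m + j) for i in range(m) for j in range(0, m, 2)]
assert len(pairs) == m * m // 2 and all(f(p) == 1 for p in pairs)
```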
It’s a reasonable guess that Rubinstein’s function gives pretty much the largest gap possible, and how hard could that possibly be to prove? Well, how hard could a white rabbit in front of a cave possibly be to kill?
I’ll confess to going on sensitivity versus block-sensitivity binges every couple of years since I first learned about this problem as an undergraduate at Cornell. The last binge occurred this weekend, triggered by the strange block-sensitivity properties of my counterexample to the GLN Conjecture. And that’s when it occurred to me to use the hyper-inter-network tools of Web 2.0, together with my power and influence here at Shtetl-Optimized, to unleash a new flood of activity on the problem. There are at least four factors that make this problem well-suited to a collaborative math project:
1. The statement can be understood by almost anyone. I could explain it to my parents.
2. It seems unlikely (though not impossible) that the solution will require any heavy-duty math. What seems needed, rather, is lots of creativity to come up with new ideas specific to the problem at hand, as well as diabolical examples of Boolean functions that refute those ideas.
3. Even though the problem has been around for 20 years, the relevant literature is very small (maybe half a dozen papers); it would take at most a day to learn everything known about the problem.
4. Despite 1-3, this is a real problem that a significant number of people would care about the answer to.
If you feel like you want a new angle on the problem—something that hasn’t already been explored to death, or even to serious injury—you can try my “geometric variant” of sensitivity vs. block sensitivity described on Math Overflow.
I’m calling this a “philomath project,” a term that pays tribute to the successful polymath projects popularized by (and carried out on) Timothy Gowers’ wonderful blog, but that avoids infringing on a registered trademark of GowersCorp.
So, here are the philomath project rules: do you have an idea about sensitivity vs. block sensitivity? Or a vague pseudo-idea? Or a proposal for an easier variant? Then post it here! Or go over to Math Overflow and post it there. Let’s see if a block of us acting in unison can flip this problem.