http://eccc.hpi-web.de/report/2015/082/

Section 3 deals with MAX E3LIN2.

Their algorithm starts with a uniformly random assignment (z_1, …, z_n).

Now if you find the best value y_1 for the first variable and return (y_1, z_2, …, z_n), you will satisfy on average Ω(sqrt(D_1)) more equations than a purely random assignment. (D_i is the number of equations variable i appears in.) Intuitively, with the other variables random, each of the D_1 equations containing x_1 independently prefers x_1 = 0 or x_1 = 1 with equal probability, so picking the majority value gains about the absolute value of a sum of D_1 random signs, which is Θ(sqrt(D_1)) in expectation.

In their algorithm, they compute all such “local improvements” y_i: the value such that (z_1, …, z_{i-1}, y_i, z_{i+1}, …, z_n) is the best assignment you can output if you are only allowed to modify the value of the i-th variable.
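The local-improvement step can be sketched concretely. This is a minimal illustration, not the paper's code, and the encoding of equations as (variable-triple, right-hand side) pairs is my own assumption:

```python
def local_improvements(n, equations, z):
    """For each variable i, find y_i: the value in {0, 1} that satisfies the
    most equations when every other variable keeps its value from z.
    Each equation is ((a, b, c), s), meaning x_a + x_b + x_c = s over GF(2)."""
    y = list(z)
    for i in range(n):
        # Net gain in satisfied equations from flipping z_i, others fixed at z.
        gain = 0
        for variables, rhs in equations:
            if i not in variables:
                continue
            current = sum(z[v] for v in variables) % 2 == rhs
            flipped = (sum(z[v] for v in variables) + 1) % 2 == rhs
            gain += int(flipped) - int(current)
        if gain > 0:
            y[i] = 1 - z[i]
    return y

# One unsatisfied equation x_0 + x_1 + x_2 = 0 under z = (1, 0, 0):
# flipping any single variable satisfies it, so every y_i differs from z_i.
print(local_improvements(3, [((0, 1, 2), 0)], [1, 0, 0]))  # [0, 1, 1]
```

Note that each y_i is computed against the original z, not against earlier flips; combining them into one good global assignment is the nontrivial part.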

Then they manage (bottom of page 6), through a nicely working combinatorial trick, to combine all those local improvements into a globally improved assignment (x_1, …, x_n), with some loss: in the paper, the global improvement over random is only a third of the sum of the individual local improvements over random. The trick works for all MAX EkLIN2 with odd k.

http://www.nature.com/nphoton/journal/v4/n10/full/nphoton.2010.168.html

Sorry for not bringing it up before – I guess there were a lot of different points going on in that discussion.

What bothers me a little about the conjectures is that there seems to be no mathematical principle behind them. For example, people have to ask you which demonstrations would or would not refute your conjectures (until we get to something really obvious, like factoring a 2048-bit number), since they are too vague for others to figure out on their own. It is like conjecturing that computers will never be as intelligent as people, and then, once computers beat people at chess, responding that computers still cannot recognize faces.
