## Four striking papers

In the past week or two, four striking papers appeared on quant-ph. Rather than doing my usual thing—envisioning a huge, meaty blog post about each paper, but then procrastinating on writing them until the posts are no longer even relevant—I thought I’d just write a paragraph about each paper and then open things up for discussion.

(1) Matt Hastings has announced the first provable superpolynomial black-box speedup for the quantum adiabatic algorithm (in its original, stoquastic version). The speedup is only quasipolynomial (n^{log(n)}) rather than exponential, and it’s for a contrived example (just like in the important earlier work by Freedman and Hastings, which separated the adiabatic algorithm from Quantum Monte Carlo), and there are no obvious near-term practical implications. But still! Twenty years after Farhi and his collaborators wrote the first paper on the quantum adiabatic algorithm, and 13 years after D-Wave made its first hype-laden announcement, this is (to my mind) the first strong theoretical indication that adiabatic evolution with no sign problem can *ever* get a superpolynomial speedup over not only simulated annealing, not only Quantum Monte Carlo, but *all* possible classical algorithms. (This had previously been shown only for a variant of the adiabatic algorithm that jumps up to the first excited state, by Nagaj, Somma, and Kieferova.) As such, assuming the result holds up, Hastings resolves a central question that I (for one) had repeatedly asked about for almost 20 years. Indeed, if memory serves, at an Aspen quantum algorithms meeting a few years ago, I strongly urged Hastings to work on the problem. Congratulations to Matt!
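For a sense of scale on "quasipolynomial" (a quick sketch of mine, taking log base 2 for concreteness): n^{log n} eventually beats any fixed polynomial, since its exponent keeps growing, yet it stays far below a true exponential like 2^n.

```python
import math

def quasipoly(n):
    """The quasipolynomial n^(log2 n)."""
    return n ** math.log2(n)

# n^(log2 n) loses to n^10 for small n, but eventually dominates it
# (and every other fixed polynomial), while never catching up to 2^n.
for n in (2**4, 2**8, 2**16):
    print(f"n={n}: n^(log2 n) = {quasipoly(n):.3g}, n^10 = {float(n)**10:.3g}")
```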

(2) In my 2009 paper “Quantum Copy-Protection and Quantum Money,” I introduced the notion of copy-protected quantum software: a state |ψ_{f}⟩ that you could efficiently use to evaluate a function f, but not to produce more states (whether |ψ_{f}⟩ or anything else) that would let others evaluate f. I gave candidate constructions for quantumly copy-protecting the simple class of “point functions” (e.g., recognizing a password), and I sketched a proof that quantum copy-protection of *arbitrary* functions (except for those efficiently learnable from their input/output behavior) was possible relative to a quantum oracle. Building on an idea of Paul Christiano, a couple weeks ago my PhD student Jiahui Liu, Ruizhe Zhang, and I put a preprint on the arXiv improving that conclusion, to show that quantum copy-protection of arbitrary unlearnable functions is possible relative to a *classical* oracle. But my central open problem remained unanswered: is quantum copy-protection of arbitrary (unlearnable) functions possible in the real world, with no oracle? A couple days ago, Ananth and La Placa put up a preprint where they claim to show that the answer is no, assuming that there’s secure quantum Fully Homomorphic Encryption (FHE) of quantum circuits. I haven’t yet understood the construction, but it looks plausible, and indeed closely related to Barak et al.’s seminal proof of the impossibility of *obfuscating* arbitrary programs in the classical world. If this holds up, it (conditionally) resolves *another* of my favorite open problems—indeed, one that I recently mentioned in the Ask-Me-Anything session!

(3) Speaking of Boaz Barak: he, Chi-Ning Chou, and Xun Gao have a new preprint about a fast classical way to spoof Google’s linear cross-entropy benchmark for *shallow* random quantum circuits (with a bias that degrades exponentially with the depth, remaining detectable up to a depth of say ~√log(n)). As the authors point out, this by no means refutes Google’s supremacy experiment, which involved a larger depth. But along with other recent results in the same direction (e.g. this one), it does show that *some* exploitable structure is present even in random quantum circuits. Barak et al. achieve their result by simply looking at the marginal distributions on the individual output qubits (although the analysis to show that this works gets rather hairy). Boaz had told me all about this work when I saw him in person—back when traveling and meeting people in person was a thing!—but it’s great to see it up on the arXiv.
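Here's a toy illustration of the marginals idea (mine, and much cruder than Barak et al.'s actual spoofer, which estimates the marginals classically without ever computing the ideal distribution): simulate a tiny shallow random circuit exactly, then compare the linear XEB score of an ideal sampler against a "spoofer" that samples each output bit independently from its marginal.

```python
import cmath, math, random

random.seed(0)
N = 4  # qubits; the statevector has 2**N amplitudes

def apply_1q(state, q, u):
    """Apply a 2x2 unitary u to qubit q of the statevector."""
    new = state[:]
    for x in range(len(state)):
        if not (x >> q) & 1:
            y = x | (1 << q)
            a, b = state[x], state[y]
            new[x] = u[0][0]*a + u[0][1]*b
            new[y] = u[1][0]*a + u[1][1]*b
    return new

def apply_cz(state, q1, q2):
    """Controlled-Z: flip the sign when both qubits are 1."""
    return [-amp if (x >> q1) & 1 and (x >> q2) & 1 else amp
            for x, amp in enumerate(state)]

def random_u():
    """A random 2x2 unitary (U3-style parametrization)."""
    th, ph, lm = (random.uniform(0, 2*math.pi) for _ in range(3))
    return [[math.cos(th/2), -cmath.exp(1j*lm)*math.sin(th/2)],
            [cmath.exp(1j*ph)*math.sin(th/2),
             cmath.exp(1j*(ph+lm))*math.cos(th/2)]]

def output_probs(depth):
    """Ideal output distribution of a shallow random circuit."""
    state = [1+0j] + [0j]*(2**N - 1)
    for layer in range(depth):
        for q in range(N):
            state = apply_1q(state, q, random_u())
        for a, b in ([(0, 1), (2, 3)] if layer % 2 == 0 else [(1, 2)]):
            state = apply_cz(state, a, b)
    return [abs(amp)**2 for amp in state]

def linear_xeb(sampler, ideal):
    """Linear cross-entropy benchmark of a sampler against the ideal p."""
    return 2**N * sum(q*p for q, p in zip(sampler, ideal)) - 1

p = output_probs(depth=2)
# Spoofer: read off each qubit's marginal, then sample bits independently.
marg = [sum(pi for x, pi in enumerate(p) if (x >> q) & 1) for q in range(N)]
spoof = [math.prod(marg[q] if (x >> q) & 1 else 1 - marg[q]
                   for q in range(N)) for x in range(2**N)]
print("XEB of ideal sampler:", linear_xeb(p, p))
print("XEB of spoofer      :", linear_xeb(spoof, p))
```

At shallow depth the qubit marginals are still biased, so the product distribution typically scores a positive XEB; as the depth grows the marginals flatten toward uniform and the spoofer's score decays, matching the exponential-in-depth degradation in the paper.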

(4) Peter and Raphaël Clifford have announced a faster classical algorithm to simulate BosonSampling. To be clear, their algorithm is still exponential-time, but for the special case of a Haar-random scattering matrix, n photons, and m=n input and output modes, it runs in only ~1.69^{n} time, as opposed to the previous bound of ~2^{n}. The upshot is that, if you want to achieve quantum supremacy using BosonSampling, then either you need more photons than previously thought (maybe 90 photons? 100?), or else you need a lot of modes (in our original paper, Arkhipov and I recommended at least m~n^{2} modes for several reasons, but naturally the experimentalists would like to cut any corners they can).
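For a rough back-of-the-envelope on what the improved base means for photon counts (ignoring all polynomial factors and constants, and with a made-up classical compute budget that is purely illustrative):

```python
import math

def photons_needed(base, budget_ops):
    """Smallest n with base**n exceeding a classical budget of budget_ops
    elementary operations (polynomial factors and constants ignored)."""
    return math.ceil(math.log(budget_ops) / math.log(base))

BUDGET = 1e18  # illustrative classical budget; pick your own supercomputer
print("old 2^n bound     :", photons_needed(2, BUDGET), "photons")
print("new (27/16)^n bound:", photons_needed(27/16, BUDGET), "photons")
```

Under this (arbitrary) budget, the base change from 2 to 27/16 pushes the photon count needed to stay out of classical reach from about 60 up to about 80, which is the sense in which "more photons than previously thought" are now required.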

And what about my own “research program”? Well yesterday, having previously challenged my 7-year-old daughter Lily with instances of comparison sorting, Eulerian tours, undirected connectivity, bipartite perfect matching, stable marriage, factoring, graph isomorphism, unknottedness, 3-coloring, subset sum, and traveling salesman, I finally introduced her to the P vs. NP problem! Even though Lily can’t yet formally define “polynomial,” let alone “algorithm,” I’m satisfied that she understands *something* of what’s being asked. But, in an unintended echo of one of my more controversial recent posts, Lily insists on pronouncing NP as “nip.”

Comment #1 May 13th, 2020 at 3:42 am

Can you tell us what other problems you urged Matt Hastings to look at? Just to get a preview of what results we might see in the next few years 😉

Comment #2 May 13th, 2020 at 6:37 am

The Clifford & Clifford paper looks very interesting. Are they using some unusual version of the $O$ notation? Their main theorem says that “The expected time complexity of Boson Sampling with $m=n$ is $O(n\rho^n + n^3)$ where $\rho=27/16$”. It seems a weird way to state the result, since the $n^3$ is of smaller order than the $n\rho^n$ and so it could be left out without changing the meaning of the statement. There are similar statements in the abstract and in other places in the paper….

Comment #3 May 13th, 2020 at 6:42 am

smart kid.

Comment #4 May 13th, 2020 at 9:10 am

Scott

From the abstract of the paper:

> We show a superpolynomial oracle separation between the power of adiabatic quantum computation with no sign problem and the power of classical computation.

So the paper only claims to show an oracle separation. To what extent is this meaningful in the real world? I mean, you can define a problem in terms of some oracle, but if in reality the only way to construct such an oracle would be to use some underlying model of the problem, then not being able to do anything with the oracle except make calls to it seems like a completely artificial restriction.

Comment #5 May 13th, 2020 at 10:11 am

Gerard #4: Right, that’s exactly why I wrote “black-box” in the post. This is *always* the tradeoff in this field: black-box lower bounds might or might not be indicative of what happens in the real world, but they’re close to the *only* unconditional lower bounds that we can hope to prove in the present state of complexity theory. In the case at hand, for Hastings’s separation to “matter in real life,” one would need to find explicit optimization landscapes that “instantiate” the behavior that he’s now shown in the black-box setting, and for which there isn’t (known or believed to be) a classical shortcut that takes advantage of the explicitness.

Comment #6 May 13th, 2020 at 10:30 am

Scott #5

> for which there isn’t (known or believed to be) a classical shortcut that takes advantage of the explicitness.

I think P = NP would likely invalidate that assumption and last I checked there was far from universal consensus on that question, so I’m not sure the impersonal “believed to be” is warranted.

More generally I think that these oracle or black box results are among the most confusing claims to the non-specialist so I almost think they should all be accompanied by a prominent disclaimer to the effect that the proof applies only to an abstraction that is not known to be directly applicable to real world problems. I realize the primary intended audience should be well aware of this but papers are frequently getting picked up by more general venues where they can create considerable confusion about what has or has not been shown.

Comment #7 May 13th, 2020 at 11:30 am

[…] Cross-Entropy Benchmarking in Shallow Quantum Circuits by Boaz Barak, Chi-Ning Chou, Xun Gao. See this nice post by Scott Aaronson mentioning the last two (and other striking […]

Comment #8 May 13th, 2020 at 12:17 pm

Gerard #6: Uhh, OK. I welcome you to go through 15 years of posts on this blog, talking about black-box and non-black-box results in quantum computing, and find me a single example where I elided the crucial distinction between the two. Honestly, this wouldn’t even crack my top 10 list of things the popular press gets egregiously wrong about quantum computing. And if P=NP, that would remove the majority of the motivation for building QCs in the first place, but we know that and it’s not like it’s some abstruse technical point about the black-box model. And I’ll bet everything I own that P≠NP, with only (say) $20,000 on the other side. 🙂

Comment #9 May 13th, 2020 at 12:32 pm

> “And I’ll bet everything I own that P≠NP, with only (say) $20,000 on the other side. 🙂”

What odds would you bet on you living to collect?

Comment #10 May 13th, 2020 at 2:44 pm

Scott #8: If it’s a fair bet, with fair odds, then you must own more than Bill Gates does 🙂

Comment #11 May 13th, 2020 at 3:54 pm

I feel pity for your daughter, even if she gets a Fields Medal at the age of 10.

Comment #12 May 13th, 2020 at 4:36 pm

Rainer #11: I mean, she won’t win a Fields Medal anytime soon, but she’s gone from 8 hours per day of American public schooling, to 1-2 hours per day of personalized tutoring (the math by me and the reading by my mom, a former reading specialist), with *recess the entire rest of the day* (her choice of dolls, legos, bicycling, trampoline, cartoons, video games…). Wouldn’t most kids consider that a pretty good trade?

Comment #13 May 13th, 2020 at 4:41 pm

Jacob #9: Hmm. It’s hard for me to imagine how P≠NP could be proved in my lifetime, but it was also hard for the civilized world to imagine how Trump could win the presidency, and Nate Silver was still wise to give that 1 in 3 odds. So, 1 in 3? 🙂

Comment #14 May 13th, 2020 at 5:37 pm

Pro-Lily, “NiP” way better!

Comment #15 May 13th, 2020 at 8:19 pm

Today, a full day after the lesson, I asked Lily to explain in her own words what NP-complete problems were.

“It’s stuff like packing the boxes, or visiting each vertex exactly once, or the traveling salesman.”

“And what’s special about those problems?”

“That they’re all basically the same. And that if you can solve any one of them, it means that the NP=P is true and also that you can steal a lot of money.”

To be fair, she also called my lessons “kind of boring.” 🙂

Comment #16 May 13th, 2020 at 9:37 pm

Wow and she is only 7. And dialog btw me and my gf, following your blog:

She: do you know what they are sayin?

Me: 1%

She: how you understand what he is talking about?

Me: I like this guy and if someone offer him “coffee” I understand he is the winner. Or more scientifically, if i see one smart guy say other smart guy “oh, thank you now make sense”, I take previous answer as better information but learned my lesson with wait till topic end. Because someone can shred previous answer anytime.. Except the first guy. My guy still undefeated 🙂

She: Don’t send this message!

Me: Nope, I will.. this blog prove, civilians never understand what real scientist talking about with reading their popular books. I thought, I became Quantum Physics expert after finishing “Brief History of Time” but now i see everything was illusion. We only learned few party tricks to show other “intellectuals” how we know about more than “Political Science” because we understand how Universe works and Quantum things and No-Cloning stufff and cat Necromancy…

Do you accept 47 y.old to attend Lily’s lessons? She is cool, I understand her better 🙂

Peace!

Comment #17 May 13th, 2020 at 10:59 pm

Hash #16: Were you drunk when you posted that? 🙂

What I’m doing, of course, is using Lily as my filter: if, with effort, I can get a given mathematical idea into the head of a pretty normal 7-year-old, have her not just blankly nod along but correctly repeat the idea back, ask pertinent questions about it, etc. etc., then surely it ought to be possible to do the same with Joe Public and Josie Journalist! Surely Joe and Josie can’t then look me in the eye, and tell me that what I successfully taught a first-grader is too complicated for them! So, this is my grand experiment. This is my main lockdown “research project.” If and when Lily laughs at the idea that quantum computers would just magically solve NP-complete problems in polynomial time by trying all the answers in parallel, without needing to choreograph any interference in the amplitudes … that’s when I’ll call my experiment a success. 🙂

Comment #18 May 14th, 2020 at 2:38 am

These discussions with Lily are great! You should consider recording these sessions. She herself could rewatch them, if nothing else…

The problem of K-12 CS education has its best chance of being solved with your sessions with Lily!

Once people realize how efficient and customized homeschooling is, it is hard to go back to the busy work of 8 hours at a school. What you are doing sounds like true homeschooling except with the pandemic’s constraints of being confined to the home. My homeschooler parent friends say their kids are fine in about 3 hours a day. The rest of the day, they pursue some extreme passion like chess or music… They use the best curriculum and teachers they can find for each subject.

Comment #19 May 14th, 2020 at 2:42 am

Scott #17

The fundamental difference is that (almost) all humans are geniuses until they reach adolescence. So that only works if Joe and Josie are under 12 🙂

Comment #20 May 14th, 2020 at 10:10 am

Scott, what video games does your daughter play? Did she pressure you to get video games, or were you a willing participant? Are video games a good addition to your household (perhaps video games contribute to cognitive development by teaching strategy, etc.)?

Comment #21 May 14th, 2020 at 10:18 am

Nothing to do with the thread, but I found the linked video to be really interesting.

As Scott recently wrote in a post, it’s likely that most of the technological progress we make from now on will be tied to CS.

I was reminded of this while watching this Stanford School of Engineering lecture on how computer science is used to fight the coronavirus – using AI based data mining of biomedical literature, 3d protein folding modeling, … to find candidate drugs against covid.

https://www.youtube.com/watch?v=vio-3FqfEF0

What’s mind-blowing is that all the methods covered in this video would be totally inconceivable to the scientists who were fighting the Spanish Flu of 1918 (even if they had known about viruses and DNA).

When you look at recent feats of engineering, many of them were part of the folklore for a long time: flying and the myth of Icarus, intelligent machines and the 18th century idea of an automaton that played chess, lasers and the idea of burning glass technology (redirect the sun light to create powerful destructive rays), advanced materials and the long tradition of metallurgy… people had no idea of how to do those things, but they could conceive of them.

On the other hand, many recent practical inventions that rely entirely on massive processing power and large scale digital devices (like an LCD screen) would have been totally impossible to imagine a mere 100 years ago.

E.g. the idea of a moving image, and then generating that moving image from scratch using a computer, all leading to virtual reality. Although there’s a clear chain of slow iterative improvements covering decades, it’s really the scaling of the technology that suddenly unlocked totally new unforeseen applications that are qualitatively very different from anything before them.

My college electronics professor used to sell us on the subject by saying that there is no other engineering discipline that covers such a wide range of scales: frequencies from 1 Hz to many GHz, voltages from one megavolt down to one microvolt, etc. All accessible with roughly the same tools. This led to the possibility of massive scaling that eventually spilled over into digital processing and CS.

For example in the 1980s personal computers had 48kB of memory, and now graphics cards have up to 24GB of memory, that’s a density increase of nearly 6 orders of magnitude.

Even the best experts in any field lack the imagination to realize the qualitative implications of such massive scaling (Bill Gates’s infamous 640 KB quote).

It’s probably linked to the fact that the human brain can’t deal very well with things that have exponential growth.

Comment #22 May 14th, 2020 at 12:20 pm

Dave #20: Lily mostly likes (1) iPad games, like “Kiddopia” and a game where she grows a worm longer and longer, and (2) Nintendo Switch games, like Super Smash Bros, which she plays at my brother’s house. I don’t know if it does anything for her mental development, but I know that

(a) it does something for my and Dana’s mental stability for her to be able to amuse herself part of the day, and

(b) I played even more video games when I was her age.

If it were up to me, she’d go through all the classic Nintendo games (Super Mario 3, Zelda, Final Fantasy 2…), which, with the passage of time, now strike me as having had the brilliance of Shakespeare. But it’s *all* better than vegging on the couch watching princess-and-pony cartoons.

Comment #23 May 14th, 2020 at 12:43 pm

Scott #15,

if my dad had taught me math in the same way, I am sure I would have got the Fields Medal at the age of 10 😉

Comment #24 May 15th, 2020 at 5:38 pm

Scott #22: Great, my son’s favorite game is one where he grows a snake longer and longer. Now I’m sure he’s on the right track! 🙂

Comment #25 May 15th, 2020 at 10:20 pm

Scott,

Let me suggest a slightly lower tech but perhaps more fun “math for Lily” father/daughter activity: factoring any integers that happen to come up.

I think factoring builds and retains basic brain math circuits. I practice all the time with licence plate numbers and odometer readings. (Not sure if you have been out of Boston long enough to drive and do math at the same time yet).

In classes, I often recommend these factoring activities as an easy way to get better in math. No doubt many students think I am wacky, but they already do because I get excited talking about generating functions, etc.

Comment #26 May 16th, 2020 at 2:52 am

James #2 – Clifford and Clifford intend to say that the implied constants in the big-O are such that if you run their algorithm for realistic $n$ you will see a significant part of the run time from the $n^3$ term. Of course this is an evil misuse of big-O…
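To illustrate how an explicitly kept lower-order term can matter at realistic sizes (with purely hypothetical constants — the actual constants are in Clifford & Clifford’s paper):

```python
RHO = 27 / 16  # the base from Clifford & Clifford's bound

# Purely hypothetical constants, chosen only to illustrate the point:
# if the n^3 term carries a much larger constant than the n*rho^n term,
# it dominates the running time at the moderate n used in experiments.
C_EXP, C_CUBIC = 1.0, 1e4

def t_exp(n):   return C_EXP * n * RHO**n
def t_cubic(n): return C_CUBIC * n**3

for n in (10, 20, 30, 40, 50):
    dominant = "n*rho^n" if t_exp(n) > t_cubic(n) else "n^3"
    print(f"n={n:2d}: dominant term is {dominant}")
```

With these made-up constants the cubic term dominates up to n ≈ 30, even though asymptotically it is negligible — which seems to be the behavior the authors wanted the reader to anticipate.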

Comment #27 May 16th, 2020 at 6:48 am

Incidentally …

☞ Numerical Incidence Properties

Comment #28 May 16th, 2020 at 3:07 pm

Would love to know how important this brand-new arXiv paper, “Classical Simulation of Quantum Supremacy Circuits,” is — it claims a tensor-network-based classical algorithm that is nearly comparable to the Sycamore quantum processor on the random circuit task.

Comment #29 May 16th, 2020 at 4:15 pm

Brandon #28: No, that’s not what they claim. They claim they can roughly match the performance that IBM said that they could get with Summit, except (1) using a tensor network algorithm, and (2) they actually benchmarked the performance on a smaller scale (the IBM paper didn’t). The part about matching Sycamore’s performance only applies to much easier circuits, not the full depth-20 ones.

Comment #30 May 18th, 2020 at 5:53 am

Scott,

This is probably the billionth time that someone is asking for your opinion on a proposed proof of P != NP, but as much as I wanted to avoid this, the following new proposed proof just seems too interesting to ignore (even if it turns out to be completely false): https://arxiv.org/abs/2005.04598

I’d be really interested to know what you (and other readers of this blog) think about that paper.

PS. Sorry for asking the forbidden question. 🙂

Comment #31 May 18th, 2020 at 7:20 am

Scott #29: I believe that they also claim that their method does not require as much space as the proposal using Summit. See the discussion at the bottom of page 3 and top of page 4. I think the claim is that a small increase in the number of qubits will make the methods that use secondary storage on Summit to perform state-vector updates impractical, but that their method can better accommodate an increase in the number of qubits. This is a result of using a tensor network algorithm, it is true, though one could call full state-vector updates an instance of a tensor network algorithm.

Comment #32 May 18th, 2020 at 12:46 pm

matt #31: Yes, thanks, you’re right. We discussed it today at our Zoom group meeting, and I now know that

(1) on the positive side, the memory requirements are a lot better than with IBM’s proposal (how much better? I’m still unclear on that), but

(2) on the negative side, you have to repeat the whole computation for each additional sample that you want.

In short, being a tensor network approach, this seems to inherit both the advantages and disadvantages of Johnnie Gray’s earlier proposal along the same lines (which I blogged about at the time). I guess the main difference is just that Gray never tested it out near the requisite scale?

Comment #33 May 18th, 2020 at 12:49 pm

A set theorist #30: Sorry, but the ground is strewn with the skeletons of failed attempts to separate P from NP using logic and descriptive complexity that (to me, anyway) looked indistinguishable from this one. More would be needed to push me over the threshold of spending any more time on it than I have just now.

Comment #34 May 18th, 2020 at 1:12 pm

Scott, has your group meeting discussed the question: is Sycamore actually an exponential speedup in an asymptotic sense? It seems to me that since they sample from a distribution which is approximated by a mixture of the correct distribution with probability p and the maximally mixed one with probability 1-p, and since p tends to zero exponentially in the spacetime volume, the number of samples that they need is exponentially large in the spacetime volume. Hence, the asymptotic run time of Sycamore seems to be exponential in spacetime volume, though several of the classical proposals are only exponential in the space, i.e., a smaller exponential.
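This point can be made quantitative with a rough sketch (illustrative parameters of my choosing, not Google’s actual calibration numbers): if the XEB signal is F ≈ (1 − ε)^V for per-gate error ε and spacetime volume V, then resolving it to a fixed significance takes on the order of 1/F² samples.

```python
import math

def samples_needed(gate_error, volume, sigmas=5.0):
    """Very rough sample count to resolve an XEB signal of size
    F ~ (1 - gate_error)**volume at `sigmas` standard deviations,
    assuming O(1) per-sample fluctuations. Illustrative only."""
    fidelity = (1 - gate_error) ** volume
    return math.ceil((sigmas / fidelity) ** 2)

# Illustrative per-gate error of 0.5%; watch the sample count explode
# exponentially as the spacetime volume grows.
for volume in (100, 500, 1000, 2000):
    print(volume, samples_needed(0.005, volume))
```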

Comment #35 May 18th, 2020 at 2:30 pm

matt #34: Yes, actually, that point came up a bunch of times in our group meeting, on this blog, and elsewhere. It’s precisely the reason why no one calls Sycamore a “scalable” QC, because clearly the signal degrades exponentially with depth until you have at least some fault-tolerance. It’s cool that one is able to eke out *some* advantage even before fault-tolerance, in a way that I wouldn’t know how to explain without saying something about exponential scaling. But I wouldn’t want anyone to pretend that this was itself scalable.

Comment #36 May 18th, 2020 at 3:02 pm

Hi Matt,

We’re glad that you’re interested in the paper! As Scott says, like many other proposals, our method is based on tensor network contraction. This indeed mitigates the hard memory limit, since we don’t have to store the state vector in memory, but we need to compute a batch of 64 amplitudes each time to generate one sample from the distribution. Fortunately, our contraction is pretty fast, taking only 10-15 minutes per sample. Unfortunately, we have to do this 2000 times, which gives the 20 day estimate.

The main algorithmic ingredient is essentially a method for ‘slicing’ the tensor network while introducing very little overhead. Although the full tensor network contraction algorithm requires about 100 PB of memory, we need to ‘slice’ it up so that each slice fits into 16 GB on each core. The key is that our new slicing method only increases the time complexity by about 5x over 25 slices (this overhead was previously expected to be much larger, on the order of 5000x).

This reduces the total contraction cost by a factor of about 1000x compared to Johnnie Gray and Stefanos Kourtis’s work, which was already a tremendous improvement on the original estimate, cutting the cost from 10,000 years to about a decade. It’s worth mentioning that you might be able to further cut these numbers down by introducing approximations that simplify the tensor networks. In any case, please feel free to reach out if you have any specific questions!
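As a toy illustration of the slicing idea (a miniature analogy, not the contraction code from the paper): fixing the shared index one value at a time turns a single large contraction into many small, independent, low-memory tasks whose results are summed at the end.

```python
import random

random.seed(1)
D = 4  # bond dimension of the toy tensors
A = [[random.random() for _ in range(D)] for _ in range(D)]  # A[i][k]
B = [[random.random() for _ in range(D)] for _ in range(D)]  # B[k][j]

# Unsliced contraction C[i][j] = sum_k A[i][k] * B[k][j]:
# needs all of A and B in memory at once.
full = [[sum(A[i][k] * B[k][j] for k in range(D)) for j in range(D)]
        for i in range(D)]

# Sliced contraction: fix the shared index k one value at a time. Each
# slice touches only one column of A and one row of B (an independent,
# low-memory task, e.g. one per core); slice results are accumulated.
sliced = [[0.0] * D for _ in range(D)]
for k in range(D):
    colA = [A[i][k] for i in range(D)]
    rowB = B[k]
    for i in range(D):
        for j in range(D):
            sliced[i][j] += colA[i] * rowB[j]

assert all(abs(full[i][j] - sliced[i][j]) < 1e-12
           for i in range(D) for j in range(D))
print("sliced result matches the unsliced contraction")
```

In a real tensor network the sliced index sits deep inside a huge contraction tree, which is why naive slicing can multiply the work enormously; the paper’s contribution is choosing slices that keep that overhead small.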

Cupjin

Comment #37 May 18th, 2020 at 3:27 pm

Regarding #34 and #35: there is still a nice question regarding the asymptotic quantum/classical gap for Sycamore-type computers. It is the following: consider the fidelity F(n) as a function of n (and the other parameters of the circuits), which, as mentioned above, decays exponentially in n. Is there an efficient randomized classical algorithm to produce a *single* bit-string with fidelity F(n)? (If the answer is negative, this would be a sort of weak asymptotic violation of the strong Church-Turing thesis without fault tolerance.)

Comment #38 May 19th, 2020 at 6:25 pm

Hello Scott and Matt (#35, #34),

As you pointed out, the exponential decay of fidelity does prevent the observed quantum advantage from scaling. We considered this exact question to identify the boundaries of how far (in terms of circuit depth and width) the quantum advantage extends, given the currently known classical algorithms (see a recent preprint with Daniel Lidar and Sergio Boixo at https://arxiv.org/abs/2005.02464). In direct response to Matt’s question “is Sycamore actually an exponential speedup in an asymptotic sense?”, the short answer is no, but we can be more precise about the regime where the advantage observed by Google holds. There’s some subtlety about which classical algorithms are feasible to run (due to hardware memory limitations and such), but even without worrying about these issues, there is an “island” of supremacy in the (width, depth) plane. The island boundaries expand exponentially in the fidelity, but the exponentially long time needed to collect samples from the quantum hardware imposes a practical cutoff in the same plane; at QEC-level fidelities, that cutoff coincides roughly with the number of qubits needed for surface-code error-corrected quantum computing.

Comment #39 May 20th, 2020 at 3:23 am

I’m sorry to have to post this here of all places, but a pet peeve of mine is the term stable *marriage*, rather than stable *matching*. The women students’ group at my university asked the algorithms lecturer about this a couple of years ago, and the change happened with no drama or problems.

The main technical point is that Gale-Shapley is a good choice for matching two unequal categories (such as students and schools) because you can justify your choice of which side goes first, as the result is not “symmetric” after all. Some of the students felt that the exercise “prove each woman gets their last choice (among the stable set), similar to how we showed in class that each man gets their first choice in this set” was an unnecessarily provocative wording; and it hurt no one to change the example to “students and internships”.

Comment #40 May 20th, 2020 at 9:42 am

Sam #39: The problem with calling it “stable matching” is that then people will confuse it even more with the superficially similar bipartite perfect matching problem.

When I teach this, I always mention something about how (straight) women get more of what they want in a world where they take initiative and propose to men—how’s that phrasing, OK?—and also about the extremely interesting same-sex case of the problem (where, e.g., a stable arrangement need no longer exist).
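For concreteness, here’s a minimal sketch of the proposer-optimal Gale-Shapley algorithm (my own illustration, with the generic names “proposers” and “reviewers” standing in for whichever framing one prefers):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Proposer-optimal stable matching. Each prefs dict maps a name
    to that side's full ranked list of the other side."""
    # rank[r][p] = position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(plist)}
            for r, plist in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> current proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best untried reviewer
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # r trades up; old partner is free
            match[r] = p
        else:
            free.append(p)                # r rejects p; p proposes further down
    return {p: r for r, p in match.items()}

props = {"A": ["X", "Y"], "B": ["Y", "X"]}
revs  = {"X": ["B", "A"], "Y": ["A", "B"]}
# Each proposer lands their first choice; each reviewer their last stable one.
print(gale_shapley(props, revs))
```

Whichever side proposes ends up with its best stable partner — exactly the asymmetry between the proposing and reviewing sides discussed above.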

In my extremely recent experience, my 7-year-old daughter was *way, way* more engaged by problems that involved girls and guys picking marriage partners than she was by problems that involved managers hiring secretaries or medical students getting matched to residencies. Who could’ve guessed? 🙂

I’m ready to draw a line in the sand here. If *this* material is offensive, despite all efforts to be inclusive and egalitarian, then *all* attempts to relate math to life are potentially offensive, and need to be avoided from now on, whenever we teach, in favor of pure abstraction. Fight me.

Comment #41 May 20th, 2020 at 6:41 pm

This maps to a chapter from the book “Montessori: The Science Behind the Genius” on Montessori education, by Dr. Lillard, a cognitive science professor at the University of Virginia. It is a fascinating book for parents. Here is something relevant to the most recent comments on this post:

https://youtu.be/vK6FnF3-3Tw

Comment #42 May 21st, 2020 at 1:30 am

@set theorist,

sorry to be vague, but that paper has too many warning signs to take seriously without much better exposition from the author. Maybe he rushed out a sloppy version 1.0 to stake priority, but if there is no 2.0 answering basic questions about such an approach, it will probably be ignored.

Comment #43 May 26th, 2020 at 12:18 pm

Eh, I’ll make that argument. Well, actually, no I won’t, because I don’t know how to argue it. But it just doesn’t seem right and I feel like I shouldn’t ignore it.

Then again, it’s too good an example to discard, so I’d probably more try to bracket it rather than go with something else. But it sounds like you’re maybe already doing that. My objection may be more to the common presentation rather than the example itself. So, on the whole, <shrug>.

Comment #44 May 28th, 2020 at 1:55 pm

@sniffnoy

I’m more bothered by stable marriage itself (not the term, but the real-life marital behavior) having been undermined for at least a century.

But yes, “stable marriage” the nom de task is pleasantly evocative and much better than alternatives, so line in the sand it is.

Comment #45 May 28th, 2020 at 2:13 pm

I just noticed that my reply to @set_theorist misunderstood which paper he referred to (another logic-based attempt on P vs. NP was posted on arXiv around the same time, and I thought he meant that other one). My comments are possibly a bit harsh for the actual paper referenced, though they still apply as they do to most such attempts. In line with what Scott wrote (though he may not endorse what I am about to say), general abstract methods like logic and descriptive complexity are suspicious in that they just don’t seem specific enough to the problem to serve as more than a dispensable shell for any claimed solution that uses them.

Comment #46 May 28th, 2020 at 2:28 pm

STEM Caveman #45:

general abstract methods like logic and descriptive complexity are suspicious in that they just don’t seem specific enough to the problem to serve as more than a dispensable shell for any claimed solution that uses them.

Not only do I agree; I couldn’t have said it better!