Suffice it to say, deep learning algorithms, coupled with the orders-of-magnitude increase in compute power and the size of neural nets, all but guarantee an exponential improvement in robotics that will rival what we’ve just seen with large language models.

The reason text and graphics got the bump first is primarily economic rather than fundamental. Robotics is insanely hard at a physical level; you’re constantly dealing with breakdowns and changes that make it very hard to iterate and improve the software.

From an economic perspective, this technology isn’t going to be viable until it’s almost perfect, as we’ve seen with self-driving cars. Not to mention it’s incredibly expensive and capital intensive in a way that software is not.

So it’s no surprise that a text-based breakthrough, which can be immediately monetized through search engines and such on the internet, was the first thing to take advantage of the new paradigm of massive compute, huge models, and enormous quantities of data. But once robotics passes a certain threshold (which I believe is going to come in the next 10 years), and it is integrated with the language capabilities we’re seeing now, well, as they say, Bob’s your uncle.

It seems like the biggest differences between your probability distribution and Scott’s (in #105) are that (a) Scott puts a significantly higher probability than you do on the proposition “BQP⊄NP” (Scott puts that at ~30% (for decision problems) and you put it at 1.1%), and (b) you think that P=NP would be the most likely explanation for that proposition being true, while Scott does not. That is, conditional on the proposition “NP does not contain BQP”, you think that “P = NP” is much more likely than “P != NP”, while Scott thinks the reverse.

It’s interesting to me that in your probability distribution, you assign almost twice as much probability to the propositions P=NP and P=BQP *both* being true as to exactly one of them being true (which can’t happen if those probabilities are small and approximately independent). Why would one of those propositions being true so strongly increase your credence that the other one was also true?
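To spell out the arithmetic behind that parenthetical (the numbers p and q below are made up purely for illustration, not the actual values from either distribution):

```python
# Hypothetical small probabilities (NOT the actual numbers from the
# distributions under discussion -- purely illustrative).
p = 0.02  # Pr[P = NP]
q = 0.03  # Pr[P = BQP]

# Treating the two propositions as approximately independent:
both = p * q                              # Pr[both true]
exactly_one = p * (1 - q) + q * (1 - p)   # Pr[exactly one true]

print(round(both, 6))         # 0.0006
print(round(exactly_one, 4))  # 0.0488

# For small p and q, exactly_one is roughly p + q, which dwarfs p*q;
# so putting ~2x as much mass on "both" as on "exactly one" is
# incompatible with (approximate) independence.
assert exactly_one > 10 * both
```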

But more broadly, here’s my “gut feeling” about the matter. NP is, *extremely* loosely, about “searching for exponentially sparse signals”. That is, finding the 1’s in some function that is 0 almost everywhere. BQP is then about “Fourier transforms of exponentially long signals”, again very loosely. Shor’s algorithm, the QFT, and Forrelation are examples here.
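As a toy illustration of the “exponentially sparse signal” picture (nothing formal here; `n`, `secret`, and `f` are all invented for the example):

```python
import random

n = 16  # toy size; "exponentially sparse" means one hit among 2**n inputs

# A function that is 0 almost everywhere, with a single planted 1.
secret = random.randrange(2**n)
f = lambda x: 1 if x == secret else 0

# The NP side: *verifying* a claimed witness is trivial.
assert f(secret) == 1

# The search side: with no structure to exploit, finding the 1 means
# scanning up to 2**n inputs -- exponential in n.
found = next(x for x in range(2**n) if f(x) == 1)
assert found == secret
```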

It would be surprising if a year from now there were a convincing proof that P=NP. I would expect the algorithm to involve some highly nontrivial mathematics, because nothing simple seems to work; probably it would appeal to some complicated classification or structure theorems or something. And somehow this powerful piece of math lets us repeatedly decompose 3SAT in some nontrivial way and solve it in polynomial time.

A similar thing goes for P=BQP. Surprising, but conceivable: I *can imagine* some powerful math letting us find a clever way to break down quantum circuits and Fourier transforms and solve it all quickly.

But P⊊BQP=NP would imply that somehow all of the weird “truly BQP-quantum-Fourier” things and all of the “truly hardcore-NP-search” things are the *same*. They’re both hard, but the things that make them hard seem to have nothing to do with each other. Each one eventually yielding to some clever trick that makes it easy seems possible, but some clever trick making them *equal* would be bizarre, to me. I guess it would be sort of like someone proving that the sum-of-square-roots problem is actually GI-complete without putting it in P: both are problems that might turn out to be easy, but probably not hard in the same way.

Joshua Zelinsky’s point that BQP is closed under complementation, while NP presumably isn’t, is I think one manifestation of the fact that they just behave too differently. (Informally: it’s easy to flip the sign on a Fourier transform, but exchanging 1’s and 0’s in a sparse signal gives you something very non-sparse.)

Options 3 and 6 in my list (P⊊BQP⊊NP and P⊊NP⊊BQP, to save you scrolling up) have low probabilities as well, because they require similarly weird things: one class encompassing the other without total collapse. I give a slightly higher probability to Option 3, because I can imagine an argument that goes like this: here’s some math that gets us very close to solving BQP, so we get P=BQP in spirit, but there are some difficult parts we cannot resolve, and NP’s proof strings let us check that last bit.

The desire to compete with others for power over them is not a necessary part of intelligence. Intelligence will tend to find cooperation more fruitful than competition. I can’t think of a single technical problem I worked on in 38 years of engineering design, development, and field service which would have been easier for me to solve if I ruled the world. (I was tempted to describe some of them, but the dispassionate AI part of my brain suggested that would be counterproductive.) I think the same is true for medical doctors and IT workers and most professions which use a lot of intelligence. Maybe not for lawyers, though.

(As a fine for too much commenting without saying anything new, I will donate another $150 to the Ukrainian National Bank.)

*“I no longer have time for in my life. […] I might occasionally forget, because (as I said) it’s tiresome. Life would be much easier if people could just take the extra clause as implied.”*

I wonder how much of this is a result of

1) when one reaches one’s 40s, 50s, and beyond, there’s an increased sense of urgency and priority. And, as you get older, the proportion of people who are younger than you keeps steadily increasing.

2) there’s a constant influx of new young people, most of them going through the same stages of intense curiosity and confusion about classic open questions that prior generations went through. Who here hasn’t at some point looked at P?=NP with the assumption that there could be some algorithm everyone else who came before has overlooked?

And then when 1) and 2) meet, there’s a clash that gets steadily worse for the aging person.

Hopefully the advent of AI will help solve this, assuming AIs’ sense of immortality grants them infinite patience!