In his recent talks, Boaz has been making the case for the desirability of a theory able to deal with questions of the form “what is the probability that the 2^100-th digit of pi is 1?”

He views his recent work on integrality gaps for sum-of-squares relaxations as something that would eventually fit into such a theory.

The slogan is that this theory would be to Bayesian probability as the “classical” theory of pseudorandomness is to frequentist probability. More here: http://windowsontheory.org/2015/10/01/a-different-type-of-pseudo/

Yeah, I think most people would call those lower-order terms, although that’s not always standard. At least to me, “error term” in analytic number theory brings to mind terms which are big-O of something, whereas lower-order terms have actual constants out front.


Fundamental limitations in the purifications of tensor networks

De las Cuevas, Cubitt, Cirac, Wolf & Pérez-García:

We show a fundamental limitation in the description of quantum many-body mixed states with tensor networks in purification form. The proof is based on an undecidable problem and on the uniqueness of canonical forms of matrix product states. […] The proof technique is somehow non-standard since it relies on the notion of undecidability.

These findings are plausibly compatible with an increasingly hybridized STEM-world, in which QED-mediated systems are (seemingly) efficiently simulable; yet general quantum dynamical systems are (seemingly) *not* efficiently simulable; moreover some (seemingly) natural condensed-matter questions are not even mathematically decidable.

So hurrah! for pragmatic algorithm supremacy! And hurrah too! for principled quantum supremacy! And a third hurrah! For … uh … Kurt Gödel and a measure of cognitive humility? 🙂

It seems more accurate to say that the second-order terms are different and larger than one would expect, since they aren’t just big-O terms but terms we can write down explicitly.

http://phys.org/news/2016-03-mathematician-pair-prime-random-thought.html

The gist is that the sequence of last digits {1, 3, 7, 9} of consecutive primes is measurably far from random. No mention is made of how this plays out in bases other than 10.
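The bias is easy to observe directly. Here is a minimal sketch (my own code, not from the article) that sieves the primes below a chosen bound and tallies the last-digit pairs of consecutive primes; if those digits were independent, each of the 16 pairs would occur roughly equally often, but repeats like (1, 1) come out visibly rarer:

```python
# Sketch: tally last digits of consecutive primes (base 10).
# The bound N and the helper names are my own choices.
from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 1_000_000
ps = [p for p in primes_up_to(N) if p > 10]  # skip 2, 3, 5, 7 so last digits are in {1,3,7,9}
pairs = Counter((p % 10, q % 10) for p, q in zip(ps, ps[1:]))

# Repeated digits, e.g. a prime ending in 1 followed by another ending
# in 1, show up noticeably less often than the uniform 1/16 share.
for pair, count in sorted(pairs.items()):
    print(pair, count)
```

Running the same tally in another base only requires replacing `% 10` with `% b` (and skipping primes dividing `b`), which would answer the question the article leaves open.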

We often hear of the largest known prime. In the other direction, what is the smallest number not yet known to be prime or composite? That is presumably a moving target, so a better question is: how far out have the numbers been sorted into prime and not prime, as of today? The time history of this would also be interesting. Exponential growth? Possibly the NSA knows more than anyone, and they are likely not talking!
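One reason the question is slippery: any individual number of modest size can be classified on demand, so there is no fixed frontier. As a sketch (my own illustration, assuming the Miller–Rabin witness result for small bounds), deterministic Miller–Rabin with the first twelve primes as witnesses correctly classifies every integer below about 3.3 × 10^24, which covers all 64-bit numbers:

```python
# Sketch: deterministic Miller-Rabin primality test. The fixed witness
# set (first 12 primes) is known to be correct for all n < ~3.3e24,
# so every 64-bit integer can be "sorted into prime or not prime" instantly.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_prime(2**61 - 1))  # Mersenne prime -> True
print(is_prime(2**64 - 1))  # composite -> False
```

So the real frontier is not about deciding primality of a given number, but about exhaustive tabulation (e.g. how far the primes have been counted), which is a different and much smaller bound.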
