That IACR preprint

Update (April 19): Apparently a bug has been found, and the author has withdrawn the claim (see the comments).
For those who don’t yet know from their other social media: a week ago the cryptographer Yilei Chen posted a preprint, eprint.iacr.org/2024/555, claiming to give a polynomial-time quantum algorithm to solve lattice problems. For example, it claims to solve the GapSVP problem, which asks to approximate the length of the shortest nonzero vector in a given n-dimensional lattice, to within an approximation ratio of ~n^4.5. The best approximation ratio previously known to be achievable in classical or quantum polynomial time was exponential in n.
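For reference, here is the standard statement of GapSVP as a promise problem (the generic definition with approximation factor γ, not anything specific to the preprint's parameters):

```latex
\[
  \textsc{GapSVP}_{\gamma}:\quad
  \text{given a basis } B \text{ of the lattice } \mathcal{L}(B)=\{Bx : x\in\mathbb{Z}^n\}
  \text{ and } d>0,\ \text{decide whether}
\]
\[
  \lambda_1(\mathcal{L}(B)) \le d
  \qquad\text{or}\qquad
  \lambda_1(\mathcal{L}(B)) > \gamma(n)\,d,
\]
\[
  \text{where } \lambda_1 \text{ is the length of the shortest nonzero lattice vector
  (instances in between are excluded by the promise).}
\]
```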
If it’s correct, this is an extremely big deal. It doesn’t quite break the main lattice-based cryptosystems, but it would put those cryptosystems into a precarious position, vulnerable to a mere further polynomial improvement in the approximation factor. And, as we learned from the recent NIST competition, if the lattice-based and LWE-based systems were to fall, then we really don’t have many great candidates left for post-quantum public-key cryptography! On top of that, a full quantum break of LWE (which, again, Chen is not claiming) would lay waste (in a world with scalable QCs, of course) to a large fraction of the beautiful sandcastles that classical and quantum cryptographers have built up over the last couple decades—everything from Fully Homomorphic Encryption schemes, to Mahadev’s protocol for proving the output of any quantum computation to a classical skeptic.
So on the one hand, this would substantially enlarge the scope of exponential quantum speedups beyond what we knew a week ago: yet more reason to try to build scalable QCs! But on the other hand, it could also fuel an argument for coordinating to slow down the race to scalable fault-tolerant QCs, until the world can get its cryptographic house into better order. (Of course, as we’ve seen with the many proposals to slow down AI scaling, this might or might not be possible.)
So then, is the paper correct? I don’t know. It’s very obviously a serious effort by a serious researcher, a world away from the P=NP proofs that fill my inbox every day. But it might fail anyway. I’ve asked the world experts in quantum algorithms for lattice problems, and they’ve been looking at it, and none of them is ready yet to render a verdict. The central difficulty is that the algorithm is convoluted, and involves new tools that seem to come from left field, including complex Gaussian functions, the windowed quantum Fourier transform, and Karst waves (whatever those are). The algorithm has 9 phases by the author’s count. In my own perusal, I haven’t yet extracted even a high-level intuition—I can’t tell any little story like for Shor’s algorithm, e.g. “first you reduce factoring to period-finding, then you solve period-finding by applying a Fourier transform to a vector of amplitudes.”
So, the main purpose of this post is simply to throw things open to commenters! I’m happy to provide a public clearinghouse for questions and comments about the preprint, if those studying it would like that. You can even embed LaTeX in your comments, as will probably be needed to get anywhere.
Unrelated Update: Connor Tabarrok and his friends just put a podcast with me up on YouTube, in which they interview me in my office at UT Austin about watermarking of large language models and other AI safety measures.
Comment #1 April 16th, 2024 at 12:26 pm
Mahadev used LWE as a way to instantiate her black box, didn’t she? That is, she needed a post-quantum protocol with nice adaptive hard-core bit properties, and she proved that LWE has these properties, I think, but she also indicated that other post-quantum protocols might fit the bill for her purposes (much as Yamakawa and Zhandry use SHA256 for their random oracle). Or is it that there are no other candidates besides LWE presently known to satisfy her requirements?
Also, if this holds up, does it give any application to other, non-cryptographic and potentially industrially significant problems? Knowing how to factor numbers is neat, but it doesn’t solve any particularly impressive other problems that your average engineer would be interested in, AFAIK. For example, does anything cool like Graph Isomorphism reduce to the particular GapSVP problem addressed by Chen?
Comment #2 April 16th, 2024 at 12:33 pm
Mark Spinelli #1: Yes, Mahadev used LWE (she needed a quantum-resistant trapdoor claw-free 2-to-1 function). Again, the new preprint doesn’t claim to solve LWE with good enough parameters to break schemes like Mahadev’s, but it would put them in a much more precarious position.
No, I don’t know of any non-cryptographic commercial applications of problems like SVP and LWE — maybe someone else does?
Comment #3 April 16th, 2024 at 12:45 pm
I could be wrong, but since Mahadev’s protocol (and related quantum protocols based on LWE) require the adaptive hardcore bit property, I believe that the parameters for which that’s currently known to be true would fall into the range of this new algorithm (and so, the protocol would be quantum-insecure if this new algorithm is correct). Though maybe someone more qualified could clarify whether that’s actually the case.
It’s also worth mentioning that proof-of-quantumness protocols (which aim to test for quantum advantage) are unaffected by this result, since for those protocols one is merely interested in soundness against classical poly-time algorithms. It’s only for LWE-based protocols where one wants quantum soundness, like verification, that this becomes a problem.
Finally, there was a subsequent preprint by Omri Shmueli pointing out a bug in Chen’s paper: https://eprint.iacr.org/2024/583.pdf. But Chen claims on his website that this is actually not an issue: http://www.chenyilei.net/.
Comment #4 April 16th, 2024 at 12:56 pm
One can view LWE as defining a very particular coding-theoretic problem. One is given the data of a matrix \(A\), corresponding to the basis of a random lattice. One then gets \(As+e\), where \(As\) is an encoding of the message \(s\) into the lattice \(A\), and \(e\) is concentrated error (which one can view as being a noisy channel). Recovering the LWE secret \(s\) is then equivalent to decoding this encoded message.
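As a minimal numerical sketch of this view (all parameters here are toy values I made up, far too small for security):

```python
import random

# Toy LWE instance: A is an m x n matrix over Z_q, and b = A s + e (mod q).
# n, m, q, err_bound are illustrative only.
n, m, q, err_bound = 4, 8, 97, 2

random.seed(0)
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]       # public matrix
s = [random.randrange(q) for _ in range(n)]                           # secret message
e = [random.randrange(-err_bound, err_bound + 1) for _ in range(m)]   # small noise

# The "codeword" b = A s + e mod q: recovering s from (A, b) is search-LWE,
# i.e. decoding the message s after transmission through a noisy channel.
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# Sanity check: b - A s mod q recovers exactly the small error e (mod q).
residual = [(b[i] - sum(A[i][j] * s[j] for j in range(n))) % q for i in range(m)]
print(residual)  # each entry is e[i] mod q, i.e. lies in {0,1,2} or {95,96}
```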
Random lattices are known to have minimum distance within a constant factor of optimal (whp), so the above is a good code whp. That being said, we can get within a polylog factor (and actually even better than this, \(\log(n)^\epsilon\) for any \(\epsilon>0\)).
There’s another issue with the above coding-theoretic interpretation of LWE though. LWE is defined with respect to modular arithmetic. Coding theory likes viewing things defined in a bounded subset of \(\mathbb{Z}^n\). I believe (but haven’t talked to an actual coding theorist) one can view LWE as selecting the bounded subset \([-q/2,q/2]^n\). This choice of bounded subset (“shaping region”) is not particularly performant. Typically one would want something that is closer to being spherical, as you choose a shaping region to minimize the Euclidean norm of your encoded vectors (physically, this corresponds to minimizing the power requirement of transmission).
This is to say that a quantum algorithm for search LWE probably implies slightly-better-than-we-currently-have algorithms for channel coding in the presence of Gaussian noise, but the gains are relatively small compared to a decade ago (when classical algorithms were much worse).
Comment #5 April 16th, 2024 at 1:49 pm
RandomOracle #3: Oh! I’d assumed that if the algorithm could directly break Mahadev’s scheme or anything similarly dramatic, Chen would’ve mentioned it. Again, happy to get clarification from someone who knows more.
Comment #6 April 16th, 2024 at 1:51 pm
Mark #4: Notwithstanding the comedy of spending tens of billions of dollars to build a giant fault-tolerant quantum computer able to run this monster of an algorithm just to get slightly better decoding for error-correcting codes under Gaussian noise … yes, that’s a fair point. 😀
Comment #7 April 16th, 2024 at 2:42 pm
Can I get a baby-sized explanation of the GapSVP problem? As stated it sounds trivial (why is it not simply the lattice spacing?)
Comment #8 April 16th, 2024 at 2:52 pm
alex #7: That’s exactly what it is … in the special case that the basis vectors for your lattice are all orthogonal and of the same length.
In general, though, a lattice can be the set of all integer linear combinations of any set of real basis vectors. And you might be given as input a “bad” basis, consisting of long vectors, which hides the existence of much shorter nonzero vectors (which can be produced by taking some crazy integer linear combination of the “bad” vectors, in 5000- or whatever-dimensional space). Indeed, that’s exactly what happens in lattice-based cryptosystems. Finding the short nonzero vectors is then tantamount to breaking the cryptosystem.
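To make this concrete, here’s a toy sketch in two dimensions (the numbers are made up, and real cryptographic lattices live in hundreds of dimensions, where this kind of brute force is hopeless): a “bad” basis of long vectors whose lattice nonetheless contains a much shorter vector, found by exhaustive search over small integer combinations.

```python
from itertools import product

# A deliberately "bad" 2D basis: both vectors are long and nearly parallel,
# but some small integer combination of them is short.
b1 = (105, 103)
b2 = (104, 102)

def norm2(v):
    """Squared Euclidean length."""
    return v[0] ** 2 + v[1] ** 2

# Brute-force the shortest nonzero vector over small coefficients.
# In n dimensions this search is exponential in n, which is the whole point.
best = None
for c1, c2 in product(range(-5, 6), repeat=2):
    if (c1, c2) == (0, 0):
        continue
    v = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
    if best is None or norm2(v) < norm2(best):
        best = v

print(best)  # a vector of squared length 2, hidden by the long "bad" basis
```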
Comment #9 April 16th, 2024 at 3:47 pm
It’s worth clarifying that even deciding if a lattice has all vectors orthogonal and of the same length is not easy. This is generally called the “Lattice Isomorphism Problem” (specifically testing isomorphism with Z^n). See https://eprint.iacr.org/2021/1548 .
As for an explanation as to why (gap)SVP is hard, it really only appears in high dimensions, and naive “geometric” reasoning in low dimensions does not illuminate the high-dimensional case.
Instead, thinking about it algebraically as, given A, computing
$$
\lambda_1(A) := \min_{x\in\mathbb{Z}^n\setminus\{0\}}\| Ax\|_2,
$$
is fairly useful. If we were minimizing over some continuous domain (say a subset of \(\mathbb{R}^n\)) one would expect to be able to solve this efficiently using calculus-inspired methods.
Over a discrete domain, it becomes hard for similar reasons that linear programming is poly-time computable, but integer linear programming is NP hard.
Comment #10 April 16th, 2024 at 4:01 pm
Mark #9: Yeah, I was careful to say the basis vectors given as input are orthogonal — otherwise finding the orthogonal basis could indeed be hard.
Comment #11 April 16th, 2024 at 5:50 pm
Have there been any previous “plausible” claims of quantum algorithms for any of this family of lattice problems? Or is the first claim of its kind to come from a researcher in the field?
Comment #12 April 16th, 2024 at 5:53 pm
Mark #6: I find it interesting, from my small understanding of LWE (everything I have read today), that search LWE and decision LWE are equivalent under certain conditions: https://en.wikipedia.org/wiki/Learning_with_errors#Decision_version
I wonder, albeit naively, whether there is a certain niche in decision problems that this algorithm could fill, should this result stand the test of public scrutiny.
Decision applications, if possible, might have commercial value?
Comment #13 April 16th, 2024 at 6:04 pm
Izzy Grosof #11: No, I don’t remember any previous claim to solve LWE/GapSVP quantumly that seemed anywhere near as plausible—maybe someone else does?
Comment #14 April 16th, 2024 at 7:03 pm
Shor claimed a solution (with worse approximation factor, for a mildly different lattice problem), joint with Lior Eldar in 2016. See for example
https://crypto.stackexchange.com/questions/41731/new-quantum-attack-on-lattices-or-shor-strikes-again .
It was also discussed some on this blog
https://scottaaronson.blog/?p=2996 .
Comment #15 April 16th, 2024 at 7:26 pm
Why are all encryption schemes in classical P? Why can’t anyone imagine an encryption scheme not in classical P but in BQP?
Comment #16 April 16th, 2024 at 7:29 pm
Scott (or anyone else), could you please elaborate on exactly what you mean by “break the main lattice-based cryptosystem” and (separately) make it “vulnerable to a mere further polynomial improvement in the approximation factor”?
My naive understanding was that any algorithm that allows for decryption in a number of time steps polynomial in the size of the public key (or perhaps polynomial in the number of time steps required for encryption) would constitute a “break”, because it would break the qualitative asymmetry between the amounts of time required for encryption and for decryption that underlies any strong cryptosystem. Is this wrong? Exactly what (quantitative) polynomial improvement in the approximation factor would constitute a “break”?
Thanks! Sorry for the naive question.
Comment #17 April 16th, 2024 at 7:36 pm
Izzy, Scott: There was the Eldar Shor result, but it only seemed plausible for a day or so until Regev pointed out the mistake: https://arxiv.org/abs/1611.06999 🙂
Comment #18 April 16th, 2024 at 7:43 pm
Mark #14 and RandomOracle #17: Yeah, you’re right, that was the most plausible prior attempt. But it didn’t stand for long, and (if I recall correctly) was for a very weird variant of Bounded Distance Decoding.
Comment #19 April 16th, 2024 at 7:47 pm
Ted #16: There are two completely different polynomials in play, the running time and the approximation ratio. We’re concerned here with the approximation ratio, which has to be below some threshold (often ~√n, iirc) in order to threaten a given lattice-based cryptosystem at all.
Could anyone tell us, concretely, how much the new paper’s approximation ratios for GapSVP and LWE would need to be improved in order to break known cryptosystems? I looked through the paper for a direct and explicit answer to that question but didn’t find it.
Comment #20 April 16th, 2024 at 8:23 pm
Scott #19 Parameters are in https://nigelsmart.github.io/LWE.html
Comment #21 April 16th, 2024 at 8:50 pm
On one hand, if true, this is a very exciting development that suggests exponential quantum speedups might be possible for a much wider range of problems than previously known.
I do, however, worry about quantum computers being developed and used primarily to break cryptosystems. I had assuaged such fears, in myself and my peers, by pointing out that post-quantum cryptography was fairly mature and already being rolled out. However, I think all of those schemes are lattice-based, and if this result is correct, their status is in doubt.
I know there are non-lattice-based proposals for post-quantum cryptography but, in my admittedly limited understanding, they are much less mature and not currently ready for deployment. I’m also not sure how much to trust that they are resistant to quantum attacks.
This raises the specter that scalable quantum computers are built before we can roll out post-quantum systems, and therefore we essentially lack secure public key cryptography.
Quantum cryptography might help but in the immediate future it will likely be easier for governments to restrict access to or monitor the usage of quantum communications.
This of course would make quantum computers much more attractive to the 3 letter agencies funding its development. But it would not be good for people hoping that quantum computers could improve human flourishing.
Overall, I’m still optimistic that quantum simulation could bring about worthwhile advancement in human knowledge and welfare. But potentially losing public key cryptography, even temporarily, is a bitter pill to swallow.
Comment #22 April 16th, 2024 at 9:31 pm
QMA(2) #20: Thanks! But that website (whose 1990s-style design brings back many fond memories) states everything in terms of LWE, which involves not only n but also other parameters α and q. Chen’s paper has a paragraph that does something similar. I was wondering what approximation ratio for GapSVP, in terms of n, would suffice to break lattice-based cryptosystems—does anyone know that? Is it indeed around √n?
Comment #23 April 16th, 2024 at 10:50 pm
Scott #19,
Thanks, that’s helpful.
I could be wrong, but it seems to me that the ultimate measure of security for any public-key cryptosystem is the amount of computational resources required to compute the private key from a given public key. I’m sure that this is not a simple question to answer, but how does the runtime (or some other measure of computational resources) depend on the approximation ratio? Is there some kind of “phase transition” where, if the approximation ratio drops below some threshold (~√n or whatever), then the required runtime to compute the private key suddenly changes from exponential in n to polynomial in n?
Comment #24 April 16th, 2024 at 11:19 pm
After looking more closely at Chen’s paper and the Mahadev and Brakerski et al papers on verification and proofs of quantumness respectively, I’m pretty confident now that Chen’s result would indeed break the quantum soundness of those schemes.
The key point is that Chen’s algorithm works for an LWE instance with \( m > \Omega(n \log q) \) and \( q \in \Omega((\alpha \cdot q)^4 m^2) \). Both of these conditions are satisfied for Mahadev’s protocol. It’s really the second condition that’s important, since the first one is almost always satisfied in LWE-based schemes. Mahadev uses the same trapdoor claw-free function as in the Brakerski et al paper (this one: https://arxiv.org/pdf/1804.00640.pdf). In that paper it’s mentioned (on page 19, second-to-last paragraph) that q is essentially exp(n). In particular, this means that it is indeed the case that \( q \in \Omega((\alpha \cdot q)^4 m^2) \) as required for Chen’s result.
Comment #25 April 16th, 2024 at 11:38 pm
Ted #23: Yes, there appear to be phase transitions in the hardness of SVP depending on the approximation ratio, although it’s a major open problem to pin down exactly where they happen. For example, SVP with constant (or even slightly super-constant) approximation ratio is known to be NP-hard. SVP with √n approximation ratio still seems to be hard (?), but it’s almost certainly not NP-hard, since Aharonov and Regev showed that it’s in NP∩coNP. And SVP with some exponential approximation ratio can be done in classical polynomial time using the LLL algorithm.
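In summary form (writing γ for the approximation ratio; the exact crossover points are open):

```latex
\[
\begin{aligned}
\gamma = O(1):&\quad \text{NP-hard} \\
\gamma \approx \sqrt{n}:&\quad \text{in } \mathsf{NP}\cap\mathsf{coNP}\ \text{(Aharonov--Regev), so presumably not NP-hard} \\
\gamma = 2^{\Theta(n)}:&\quad \text{classical polynomial time (LLL)}
\end{aligned}
\]
```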
Comment #26 April 17th, 2024 at 6:03 am
RandomOracle #24:
Even if you break soundness with Chen’s result, you are still using a quantum computer to do it. Ergo you’ve demonstrated quantumness.
Indeed Kahanamoku-Meyer et al. use a function based on \(x^2 \bmod N\) in their protocol, which is broken by Shor. See here: https://arxiv.org/abs/2104.00687. So it’s kind of win-win with proofs of quantumness.
Comment #27 April 17th, 2024 at 7:45 am
You say in the article that the attack would “lay waste” to FHE, but I don’t think that’s quite right. Unlike PQC schemes like Kyber, FHE has utility because of its homomorphic properties, and, since no one actually has a quantum computer capable of running this algorithm, FHE would still be encryption that resists classical attacks and allows for computation on encrypted data. That’s a pretty useful tool, given the non-existence of gate model quantum computers large enough to run attacks.
Comment #28 April 17th, 2024 at 8:44 am
Mark #26: Yes, I mentioned that proofs of quantumness are unaffected by this result. But Mahadev’s protocol is a protocol for delegating and verifying general (poly time) quantum computations. For that, you do require quantum soundness. And since that soundness is based on LWE, the prover could cheat using Chen’s algorithm.
Comment #29 April 17th, 2024 at 8:53 am
Scott #19 / #22,
I am not an expert, so this is probably a gross oversimplification. What I’ve gleaned so far is that the approximation factor would need to be below O(n^2.5) to be interesting/worrying for NTRU schemes, and the relevant regime for Kyber (ML-KEM) and Dilithium (ML-DSA) is probably somewhere around O(n) to O(n^2). Of course, concrete attacks will depend on the leading coefficients as well.
Comment #30 April 17th, 2024 at 8:57 am
“So then, is the paper correct? I don’t know. […] The central difficulty is that the algorithm is convoluted, and involves new tools that seem to come from left field, including complex Gaussian functions, the windowed quantum Fourier transform, and Karst waves (whatever those are).”
As I noted before, one striking property of the field seems to be that even experts can’t easily make heads or tails of many of the proposed quantum algorithms that are supposed to be “practical” (i.e. supposed to leverage quantum supremacy). If the algorithms are meant to stay impractical, who cares…
That raises the question: even if QCs become a reality, they’ll probably be a rare resource for quite some time (more like the LHC or the JWST), and it seems that the difficulty of crafting/understanding an algorithm will become a serious bottleneck in deciding how to put them to the best use. Like, who will be savvy enough to decide how to allocate the compute time? (Of course if it’s the NSA owning one, we all know what they’ll be focusing on.)
A naive question regarding the correctness of this particular result: is the minimum size instance to test this algorithm small enough that it could be simulated (very inefficiently) on classical machines?
Comment #31 April 17th, 2024 at 9:09 am
fred #30: Judging the correctness of proposed classical algorithms has often been equally difficult, taking weeks or months rather than days. This seems like just a special case of the “math is hard” principle. 😀
Comment #32 April 17th, 2024 at 9:16 am
alex #7: My answer is yes, the solution to the short vector problem is “simply” the minimum lattice spacing. The essence of the problem is that in high dimensions, for a general shape of lattice, that very thing is far from obvious, to the point that it is taken to be computationally intractable.
As Scott says, if you are only visualizing a rectilinear lattice with an orthogonal basis, then in that case it’s trivial. However, already in two dimensions, a general lattice has a parallelogram unit cell rather than a rectangular unit cell. In two dimensions the solution is easy to visualize and easy to compute, but you might not call it outright trivial. The shortest vector certainly doesn’t have to be one of the three non-zero corners of the first parallelogram unit cell that you are given.
In three dimensions, for a general lattice, it is now harder to visualize the shortest vector, but with practice you can see it and there is a fast algorithm.
In four dimensions, it now takes a lot of practice to intuit the shortest vector of a general lattice; it doesn’t seem obvious at all. Nonetheless there is a fast algorithm.
In fact, in any fixed dimension d, there is a “fast” algorithm to completely solve the short vector problem for d-dimensional lattices. For instance, if L is a sublattice of the standard integer lattice, so that L has some basis of integer vectors, then you can solve SVP in polynomial time in the bit complexity of the d x d generator matrix. You can thus call SVP “fixed-parameter tractable” if the dimension d is the parameter.
As d increases, any known “fast” algorithm degrades inexorably. Even approximate SVP looks intractable for a wide range of approximation levels. For a very strict standard of approximation — within a constant factor of optimal — SVP is known to be NP-hard. One of the key ranges of interest in cryptography and CS theory is SVP with poly(d) approximation.
There is a supremely important algorithm called LLL (Lenstra-Lenstra-Lovasz) that runs in uniform polynomial time that solves approximate SVP, and calculates an approximately optimal lattice basis, with an approximation factor of O(2^d). It was originally published as a subroutine for a polynomial-time algorithm to factor polynomials with rational coefficients. As the authors sensed might happen, the “subroutine” became far more important than that first cool and already-interesting application.
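To make the d = 2 case concrete: the classical Lagrange-Gauss reduction (the two-dimensional ancestor of LLL) provably finds an exactly shortest nonzero vector of a 2D lattice in polynomial time. A minimal sketch, with a made-up “bad” basis of long, nearly parallel vectors:

```python
def lagrange_gauss(u, v):
    """Reduce a 2D lattice basis; returns (shortest, other), with the first
    output an exactly shortest nonzero vector of the lattice spanned by u, v."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        # Keep u as the shorter of the two vectors.
        if dot(u, u) > dot(v, v):
            u, v = v, u
        # Subtract the nearest-integer multiple of u from v,
        # much like one step of the Euclidean algorithm.
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# A "bad" basis of long, nearly parallel vectors hiding a short vector:
u, v = (105, 103), (104, 102)
short, other = lagrange_gauss(u, v)
print(short)  # (1, 1), of squared length 2
```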
Comment #33 April 17th, 2024 at 10:28 am
Given that QCs to do Shor’s algorithm to break current cryptography are not even on the drawing board (I mean, we don’t know how to do reliable fault tolerance), it seems that worrying about breaking post-quantum crypto is akin to worrying about overpopulation on Mars!
Comment #34 April 17th, 2024 at 10:34 am
Those of you who hang out on formerly-known-as-twitter might have seen this already, but I thought I’d let you know that I set up a Discord server to try and bridge the gap between lattice people and quantum-algorithms people currently working through the paper (I’m somehow somewhere in between the two fields myself). It received 400 members over the past two days, which is exactly one order of magnitude above what I was expecting, and there are some interesting technical discussions already happening there. Invite link if anyone wants to join: https://discord.gg/84TCQUTzBx
Comment #35 April 17th, 2024 at 10:45 am
Prasanna #33
I think it’s more a matter for the encryption community to get ideas, as early as possible, about what they should be focusing on, long term.
Comment #36 April 17th, 2024 at 10:57 am
Hans Heum #34: Thanks so much! I’ve never been big on Discord, but that sounds like a great resource for those who are.
Comment #37 April 17th, 2024 at 11:02 am
Prasanna #33: Oh, QCs to do Shor’s algorithm are certainly on the drawing board! And many users (especially, e.g., in the intelligence community) don’t want to use a cryptosystem now if there’s a decent chance it’ll be compromised in a decade or two. So I’d say that this is less like overpopulation on Mars than like overpopulation in Texas: maybe not a pressing current concern but you could totally imagine it becoming one!
Comment #38 April 17th, 2024 at 12:01 pm
@Scott: I’d like to make two points.
1. As Ian #27 says, it’s important to say that even if this paper stands up to scrutiny, all of lattice-based cryptography is still *classically* secure (as far as we know, the usual caveat). Case in point: Shor’s quantum algorithm for factoring and discrete log hasn’t given us any idea whatsoever towards an improved classical algorithm for these problems.
To me, lattice-based cryptography is much more important because it can construct primitives like fully homomorphic encryption that have no other known instantiations (save one, from indistinguishability obfuscation, but I’ll let that slide for now) and much less so because of its supposed post-quantum security. But I will admit I am a little biased.
(Incidentally, the fact that FHE has no other known — even remotely efficient — instantiations aside from LWE does keep me awake at night.)
2. The algorithms that will be irreparably affected are the ones that *necessarily* have to assume the post-quantum security of LWE, such as quantum cryptographic protocols.
An example is indeed Urmila and Zvika et al.’s quantum protocols. However, much of that machinery relies on generic primitives (e.g. trapdoor claw-free functions with an adaptive hardcore bit) which have other instantiations. Indeed, we now know that plain TCFs are sufficient, and that the adaptive hardcore bit property is a red herring — this is from upcoming work of Metger-Natarajan-Zhang (https://calendar.csail.mit.edu/events/280870) building on https://eprint.iacr.org/2022/400.pdf.
TCFs can now be constructed based on other, presumed post-quantum secure, assumptions, in particular ones related to group actions. An example is https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/439880/main.pdf?sequence=1
Comment #39 April 17th, 2024 at 12:06 pm
Ian #27 and Vinod #38: Fine. I thought it was obvious from context that I was talking about the security of FHE, etc. in an eventual world with scalable QCs, but I edited the post to make that 100% clear.
Comment #40 April 17th, 2024 at 12:12 pm
@Scott: Thank you! Of course, you and I perfectly understand the state of affairs, but I have been inundated with so many questions in the last week about the classical (in)security of FHE that I feel like this had to be said. Thanks again.
Comment #41 April 17th, 2024 at 12:53 pm
My guess is that the algorithm will not hold up. Still, a climb down from n^4.5 to n^a for some constant a doesn’t seem totally irrational: https://twitter.com/huckbennett/status/1596760051678863361/photo/1
Comment #42 April 17th, 2024 at 3:14 pm
Scott #25 and Jim #29: Last question, I promise!
(I think) I understand the situation with GapSVP: there appears to be a critical power $p$ at which the problem of solving GapSVP to within an approximation ratio of $n^p$ transitions between being solvable in polynomial time and not. We can say with high confidence that $p > 0$, but there were no known upper bounds on $p$ until this new preprint, which (if correct) implies that $p \leq 4.5$.
In comment #19, Scott says that the approximation ratio “has to be below some threshold (often ~√n, iirc) in order to threaten a given lattice-based cryptosystem at all.” Am I correct that this is an independent threshold from the phase-transition threshold $p$ for the SVP, and that the cryptosystem will be broken if someone demonstrates (constructively) that $p$ is below that threshold? Does this threshold also represent a sharp phase transition, or is it just a “rule of thumb” for when the resources required to break the lattice-based cryptosystem become concerningly low? How exactly are they defining a “break” of the cryptosystem?
(P.S. \\( and \\) don’t seem to be working for me for inline TeX.)
Comment #43 April 17th, 2024 at 4:15 pm
Ted #42: To do inline TeX, use two dollar signs, like so: $$ x^2 $$.
The approximation ratio below which SVP is useful for cryptography need not coincide with the ratio above which it’s solvable in polynomial time, but we can at least say that the former is less than or equal to the latter.
Comment #44 April 17th, 2024 at 6:18 pm
FWIW, I’ve believed for years that this was coming. Lattice problems have long been known to be reducible to DSP, which is a hidden-subgroup problem over one of the closest-to-Abelian non-Abelian groups imaginable–and of course hidden subgroups over Abelian groups are quantum-polytime. Moreover, we already knew that DSP has a subexponential-time algorithm (Thanks, Greg #32!), and it’s just a weird quirk in the lattice-to-DSP reduction that kept us from putting the two together to get a subexponential-time algorithm for lattice problems. Finally, Regev gave a quantum-polytime algorithm for DSP that requires a subset sum oracle, in which the latter isn’t necessary to actually solve the problem–only to clean up some garbage at the end. So the obstacles to a quantum-polytime algorithm for lattice problems never seemed at all fundamental to me.
Interestingly, Chen includes in his paper a comment along the lines of, “you may be wondering why this method doesn’t solve DSP directly–well, it turns out that if you try it in one dimension rather than n, you don’t get any information out.” So I’m now wondering whether just skipping the last step in the lattice-to-DSP reduction, and trying to solve a vector version of DSP instead of the standard one, is the key to making it work…
Comment #45 April 17th, 2024 at 7:22 pm
Dan Simon #44: Thanks for the insights! Are you the Dan Simon, of Simon’s algorithm? If so, it’s an honor finally to “meet” you. I still consider it one of the great open problems of quantum complexity theory to find a non-oracular application for your algorithm.
Comment #46 April 17th, 2024 at 8:09 pm
Dan Simon #44: “So the obstacles to a quantum-polytime algorithm for lattice problems never seemed at all fundamental to me” … that statement seems to be loaded without any caveats, no? Such an unrestricted algorithm would place NP in BQP, right?
Comment #47 April 17th, 2024 at 9:26 pm
Scott#45: Honor’s all mine–good luck finding a use for it… :^)
QMA(2) #46: Sorry, I was being terse–by “lattice problems”, I meant the constellation of mutually equivalent lattice problems that all the practical-looking post-quantum cryptosystems are based on. So no, I don’t expect NP to be in BQP…
Comment #48 April 18th, 2024 at 1:06 am
Prasanna #33: Wikipedia defines overpopulation as “a phenomenon in which a species’ population becomes larger than the carrying capacity of its environment”. What is the carrying capacity of Mars? Probably zero. So in some sense Mars will become overpopulated the moment we land the first human on it!
Comment #49 April 18th, 2024 at 6:54 am
Jenny #48: LOL, that might be the best response to the “overpopulation on Mars” line I’ve heard!
Comment #50 April 18th, 2024 at 8:08 am
Jenny #48
The year we successfully land the first human on Mars is also the year we build a QC that can run Shor’s algorithm 🙂 I mean, Earth’s carrying capacity for useful QCs is also currently zero.
Comment #51 April 18th, 2024 at 11:01 am
For those confused about how lattice-based encryption works, Veritasium has a primer starting at 18m17s in this video:
https://youtu.be/-UrdExQW0cs?si=glMgmnRYS1gL0nL0&t=1097
(As Scott mentioned last time it came up, that whole video is excellent, totally belying the absurdly clickbait-y title!)
Comment #52 April 18th, 2024 at 3:31 pm
So, if correct, and if this is a problem for which QCs have a huge advantage over classical machines, just like prime factorization, does it mean there could be a “path” between this and prime factorization? (Similar to the equivalence between all NP-complete problems.)
Comment #53 April 18th, 2024 at 3:54 pm
fred #52: There actually is an indirect connection between factoring and lattice problems. In Oded Regev’s recent breakthrough paper on factoring using quantum circuits of size ~O(n^{3/2}), for example (one of the first improvements over Shor’s algorithm), he points out that his circuits could be improved even further–to nearly linear size–if there were a fast classical algorithm for lattice problems to handle the postprocessing.
I don’t know of any reductions in the other direction, from lattice problems to factoring.
Comment #54 April 18th, 2024 at 4:44 pm
Scott #53: “..reductions .. from lattice problems to factoring” is a nice question. Again, it should come with a caveat–otherwise NP problems would reduce to factoring, I think. Still, it’s good to ask whether there are any other obstructions that apply only to LWE-flavored problems.
Comment #55 April 18th, 2024 at 4:48 pm
QMA(2) #54: I had in mind GapSVP and the like with approximation ratios that put them into classes like NP∩coNP and presumably prevent them from being NP-hard. Even then we don’t know of reductions from them to purely number-theoretic problems like factoring.
Comment #56 April 18th, 2024 at 8:33 pm
Dan Simon #44 – “It’s just a weird quirk in the lattice-to-DSP reduction that kept us from putting the two together to get a subexponential-time algorithm for lattice problems.”
But I have never thought of this as necessarily a weird quirk. First, the classical sieve algorithms that fully solve the short vector problem can also be thought of as subexponential (more precisely, stretched exponential with a square root in the exponent) in the bit complexity of the short vector, even if they are exponential in the dimension of the lattice. Second, the quantum algorithms for the dihedral hidden subgroup problem (not just my original one, but also three variations found by Regev, then me, then Peikert) are also sieve algorithms that are similar to classical sieve algorithms for lattices.
Thus, it could be natural for quantum algorithms for DHSP to put DHSP on a common footing with SVP/CVP. What seems at first like a weird quirk could instead be an intrinsic mathematical “conspiracy”.
In keeping with this, an advance in lattice algorithms as big as the one claimed in this paper might well (if correct) lead to an automatic improvement for DHSP as well. In fact, Chen’s disclaimer on this point that you quote makes me wonder. DHSP can also be thought of as a hidden shift problem. My intuition has been that the hidden shift problem on ℤ^d has a somewhat dimension-independent character.
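For intuition, here is a toy classical caricature (my own illustration, not taken from any of the papers mentioned) of the label-combination step these sieve algorithms share: repeatedly pair up random labels that agree on their low-order bits and keep the differences, which clears those bits at the cost of an exponentially shrinking pool.

```python
import random

def toy_sieve(n_bits=16, pool_size=4096, batch_bits=4, seed=0):
    """A classical caricature of a Kuperberg-style sieve's combine step:
    bucket labels by their low-order bits, pair them up within buckets,
    and keep differences, which are divisible by ever-larger powers of 2."""
    rng = random.Random(seed)
    pool = [rng.randrange(1 << n_bits) for _ in range(pool_size)]
    cleared = 0
    while cleared + batch_bits < n_bits and len(pool) > 1:
        cleared += batch_bits
        buckets = {}
        for v in pool:
            buckets.setdefault(v % (1 << cleared), []).append(v)
        pool = []
        for vals in buckets.values():
            # paired labels agree on the low `cleared` bits,
            # so those bits cancel to zero in the difference
            for a, b in zip(vals[::2], vals[1::2]):
                pool.append(abs(a - b))
    return cleared, pool

cleared, pool = toy_sieve()
# every survivor is divisible by 2^cleared; the pool shrinks by
# roughly half per round -- the sieve's characteristic tradeoff
print(cleared, len(pool), all(v % (1 << cleared) == 0 for v in pool))
```

The quantum algorithms do this combination on phase labels attached to qubits, of course; the point of the toy is only the bookkeeping, which is indeed the same “merge items that agree in low-order positions” pattern as classical lattice sieves.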
Comment #57 April 18th, 2024 at 10:04 pm
He posted today that he has been informed of a bug that he doesn’t know how to fix:
http://www.chenyilei.net/
Comment #58 April 18th, 2024 at 10:08 pm
Looks like a (potentially unfixable) bug has been found: http://www.chenyilei.net/.
Comment #59 April 18th, 2024 at 10:33 pm
Looks like it is broken. See updated paper.
https://eprint.iacr.org/2024/555.pdf
Comment #60 April 19th, 2024 at 1:46 am
The eprint page now contains an author’s note about finding a bug in the paper.
Comment #61 April 19th, 2024 at 2:38 am
From Yilei Chen’s webpage yesterday:
Note: Update on April 18: Step 9 of the algorithm contains a bug, which I don’t know how to fix. See Section 3.5.9 (Page 37) for details. I sincerely thank Hongxun Wu and (independently) Thomas Vidick for finding the bug today. Now the claim of showing a polynomial time quantum algorithm for solving LWE with polynomial modulus-noise ratios does not hold. I leave the rest of the paper as it is (added a clarification of an operation in Step 8) as a hope that ideas like Complex Gaussian and windowed QFT may find other applications in quantum computation, or tackle LWE in other ways.
Comment #62 April 19th, 2024 at 2:51 am
Chen posted an update at https://eprint.iacr.org/2024/555 saying that the original result is invalid due to a bug in Step 9. I hadn’t seen any comment mentioning this update, so I thought it might be useful to point it out.
Comment #63 April 19th, 2024 at 4:06 am
Here is the latest update as of April 19. It seems there is a serious bug in the paper. To quote Y. Chen:
> Step 9 of the algorithm contains a bug, which I don’t know how to fix. See the updated version of eprint/2024/555 – Section 3.5.9 (Page 37) for details. I sincerely thank Hongxun Wu and (independently) Thomas Vidick for finding the bug today.
A bit more elaboration on crypto.stackexchange:
https://crypto.stackexchange.com/a/111465/106306
Comment #64 April 19th, 2024 at 5:54 am
https://twitter.com/mfesgin/status/1781165487709364478 Can anything like this technique work? I doubt it had any chance.
Comment #65 April 19th, 2024 at 6:30 am
Seems like there is a serious bug in the algorithm based on Chen’s comment here http://www.chenyilei.net/ .
Comment #66 April 19th, 2024 at 6:44 am
@Fred #52: Back in the days before exact SVP (in the Euclidean norm) was known to be NP-hard (that came from Ajtai’s work in 1997-98, under randomized reductions), Len Adleman had a manuscript showing a reduction from the exact SVP to factoring, using the “Schnorr-Adleman prime number lattice” (see https://projects.cwi.nl/crypto/risc_materials/2021_05_20_micciancio_slides.pdf for an exposition). Ajtai’s NP-hardness result, a few years later, made the statement of Adleman’s reduction trivial. Nevertheless, the Adleman construction is super nice and inspired future NP-hardness results.
It is a great open question to show a reduction from (say) √n-approximate SVP to factoring, as Scott #55 says (many have banged their heads against it with nothing to show).
On a different note, a work of Sotiraki, Zampetakis and Zirdelis shows that a very lattice-y problem is PPP-complete, which immediately means that factoring reduces to it (as factoring is in the class PPP). See here for more: https://arxiv.org/pdf/1808.06407.pdf. So this suggests, but does not prove, that √n-approximate SVP may be PPP-hard (although it is unlikely to be NP-hard).
Comment #67 April 19th, 2024 at 7:39 am
I remember reading that there was no reason to believe that there shouldn’t be a quantum algorithm for lattice problems when I first got interested (as a lay person) in quantum computation 15ish years ago.
So I’m a bit curious why it’s still the most popular candidate for post-quantum cryptography.
Are there really no problems known to lie outside of BQP that could be used for building a public-key cryptosystem, even if only in principle? I’m aware that cryptography is very difficult.
Comment #68 April 19th, 2024 at 10:22 am
polylog
“Even if only in principle. I’m aware that cryptography is very difficult.”
I think a huge requirement for a practical cryptographic algorithm is that it has to be really fast. When every single HTTPS request going over the internet carries an encrypted payload, you really have to be using something that’s blazing fast.
Comment #69 April 19th, 2024 at 10:41 am
It’s kind of funny to refer to the mistake as a “bug”, considering that bugs are coding errors in some implementation of a correct algorithm and are fixable.
Anyway, I guess the cryptography community can relax now… 😀
Comment #70 April 19th, 2024 at 12:44 pm
Vinod #66 What do you mean by “Len Adleman had a manuscript showing a reduction from the exact SVP to factoring”? I thought it is other way around.
Comment #71 April 19th, 2024 at 2:29 pm
QMA(2)
yea, it’s a bit confusing.
In
https://projects.cwi.nl/crypto/risc_materials/2021_05_20_micciancio_slides.pdf
They wrote:
> (Adleman 1995) Attempt to give a rigorous proof that factoring reduces to SVP
> – Maybe SVP is not NP-hard
> – Can we prove it is at least as hard as factoring?
> – Attempt to turn Schnorr’s algorithm into a formal reduction
Maybe some of the “confusion” is that one can often express a problem whose complexity isn’t well understood as an instance of an NP-hard problem. For example, prime factorization can easily be expressed as a 3-SAT problem.
But even though 3-SAT is NP-hard/complete, that’s really a worst-case statement, and no one really knows what it means in practice: many instances (possibly depending on the particular solver) can actually be solved reasonably well, and others can’t. Either way, it still doesn’t tell you much about the actual theoretical complexity of prime factorization.
So reducing something to a class of problems that’s theoretically hard to solve really doesn’t say much. E.g., one could hope to use 3-SAT solvers to break RSA (for a given number of bits), and even that wouldn’t say much in terms of theoretical hardness.
And then what’s typically referred to here as “crack-pottery” is the much stronger claim, reducing something to a class of problems that’s theoretically easy to solve.
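To make “factoring as SAT” concrete, here is a minimal sketch (my own toy illustration, using a naive blocking-clause encoding rather than the practical circuit-to-CNF translation): the bits of the two factors become Boolean variables, and one clause rules out each assignment whose product isn’t N.

```python
from itertools import product

def factor_to_cnf(N, bits=2):
    """Toy reduction from factoring to SAT: variables 1..bits encode
    factor p, variables bits+1..2*bits encode factor q, and a blocking
    clause forbids each assignment whose product is not N."""
    nvars = 2 * bits
    clauses = []
    for assign in product([0, 1], repeat=nvars):
        p = sum(assign[i] << i for i in range(bits))
        q = sum(assign[bits + i] << i for i in range(bits))
        if p * q != N:
            # clause satisfied iff the assignment differs somewhere
            clauses.append([-(i + 1) if assign[i] else (i + 1)
                            for i in range(nvars)])
    return nvars, clauses

def brute_force_sat(nvars, clauses):
    """Tiny SAT 'solver' by exhaustive search, just for the demo."""
    for assign in product([False, True], repeat=nvars):
        if all(any(assign[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return assign
    return None

nvars, clauses = factor_to_cnf(9, bits=2)
model = brute_force_sat(nvars, clauses)
p = sum(model[i] << i for i in range(2))
q = sum(model[2 + i] << i for i in range(2))
print(p, q)  # decodes to the factors 3 and 3
```

The blocking-clause encoding is exponential in the number of bits, so it only illustrates the direction of the reduction; real reductions encode a multiplication circuit into CNF of polynomial size, which doesn’t change the point fred is making about what such reductions do and don’t tell you.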
Comment #72 April 19th, 2024 at 6:22 pm
QMA(2) #70 indeed, I meant a reduction from factoring to exact SVP.
Comment #73 April 19th, 2024 at 9:03 pm
[…] More information about this can be found in a copy of the paper here, a statement posted on the author’s website here and in additional comments posted on Scott’s Aaronson’s Shtetl Optimized blog here. […]
Comment #74 April 20th, 2024 at 9:01 am
Though this claim evaporated quite fast, the trail it left behind will linger for a long time. The superb insights on this blog, especially, make it a real hub for research progress–all the more reason why such a thing should exist and thrive.
Comment #75 April 20th, 2024 at 1:44 pm
My conjecture: Any quantum attack on LWE which goes by approximating SVP to polynomial factors can be classicalized.
Comment #76 April 21st, 2024 at 8:38 pm
https://i.imgur.com/bSfS1wT.jpeg
Comment #77 April 22nd, 2024 at 4:25 pm
Sorry for this off-topic post, Dr AA, but I need advice if someone has input. I have fraternal twin daughters who are 12 years of age. One has very limited mathematical aptitude, while the second is at say the 99.9th (not sure exactly) percentile. The one with aptitude just received a 90 on the entrance exam to an elite STEM school, where the average score of 900 hopeful students was 40. We do not live in the US. I have been using the Singapore Math curriculum at home, but that ends at eighth grade–solving two simultaneous equations, solving quadratics, etc., so call it Algebra 1 equivalent. I am not sure where to go next. I am considering the Art of Problem Solving series for Geometry and then Pre-Calc, but have zero teaching experience and so would be thankful for any input anyone has.
I had a disagreement with my second daughter this evening about Taylor Swift’s native non-musical intelligence.
I haven’t posted for a while and so would like to express my admiration for Netanyahu continuing to pursue the military objective in Gaza in spite of international pressure. I can only hope that continues. I would be willing to go into harm’s way to defend Israel.
Comment #78 April 25th, 2024 at 8:27 am
OhMyGoodness
not sure it helps (maybe too advanced?) but MIT (and other universities) has a lot of online courses (undergraduate and graduate), on youtube
https://www.youtube.com/user/mit
and on site (often with videos, notes, exercises)
https://ocw.mit.edu/search/?s=department_course_numbers.sort_coursenum
(there’s a category for Mathematics only)
Comment #79 April 25th, 2024 at 11:59 am
fred #78
Thank you for taking the time to provide this and I will check it out.
Comment #80 April 30th, 2024 at 12:51 pm
Scott #8,
I appreciate your efforts to explain the complexities of the GapSVP (Gap Shortest Vector Problem) in lattice-based cryptography, but I believe there are a few points where clarification might be beneficial.
While you accurately describe the challenges posed by a “bad” basis, it’s important to clarify that finding short nonzero vectors isn’t strictly equivalent to “breaking the cryptosystem.” The hardness of breaking most lattice-based cryptosystems, especially those based on the Learning With Errors (LWE) problem or its variants, doesn’t only rely on the difficulty of finding the shortest vector but rather on solving slightly easier problems like finding a somewhat short vector, which still remains computationally challenging.
Your comment suggests that these shorter vectors are “hidden” by the longer basis vectors and require “crazy integer linear combinations” to find. This might give the impression that such vectors are almost random or less systematic in nature, which isn’t the case. The algorithms designed to find short vectors, like the LLL (Lenstra–Lenstra–Lovász) algorithm or BKZ (Block Korkin-Zolotarev) reduction, are highly structured and exploit the geometric properties of lattices rather than relying on arbitrary combinations.
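As a tiny concrete illustration of how structured such reductions are, here is the Lagrange–Gauss algorithm, the two-dimensional ancestor of LLL (the example basis is my own): in rank 2 it provably finds a shortest nonzero lattice vector, by repeatedly subtracting the nearest-integer multiple of the shorter basis vector from the longer one.

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction of a rank-2 integer lattice basis.
    Returns a basis whose first vector is a shortest nonzero vector."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]
    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        # size-reduce: subtract the nearest-integer projection of b2 onto b1
        mu = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

# a deliberately "bad" basis for the lattice generated by (1,2) and (3,-1)
short, other = gauss_reduce((14, 7), (19, 10))
print(short)  # a shortest lattice vector, here (-1, -2)
```

Nothing about this is “crazy” or random: each step is a geometric projection, which is exactly the structure that LLL and BKZ generalize to higher dimensions (where, of course, shortest vectors are no longer guaranteed).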
While the scenario you describe is pertinent in theoretical discussions, practical lattice-based schemes often deal with structured lattices where the basis is not merely “bad” but structured in a way that makes certain computational problems feasible while keeping others (like finding the shortest vector) hard. This distinction is crucial for understanding the real-world application and security of lattice-based systems.
I hope these points help clarify the discussion a bit further!