HSBC unleashes yet another “qombie”: a zombie claim of quantum advantage that isn’t
Today, I got email after email asking me to comment on a new paper from HSBC—yes, the bank—together with IBM. The paper claims to use a quantum computer to get a 34% advantage in predictions of financial trading data. (See also blog posts here and here, or numerous popular articles that you can easily find and I won’t link.) What have we got? Let’s read the abstract:
The estimation of fill probabilities for trade orders represents a key ingredient in the optimization of algorithmic trading strategies. It is bound by the complex dynamics of financial markets with inherent uncertainties, and the limitations of models aiming to learn from multivariate financial time series that often exhibit stochastic properties with hidden temporal patterns. In this paper, we focus on algorithmic responses to trade inquiries in the corporate bond market and investigate fill probability estimation errors of common machine learning models when given real production-scale intraday trade event data, transformed by a quantum algorithm running on IBM Heron processors, as well as on noiseless quantum simulators for comparison. We introduce a framework to embed these quantum-generated data transforms as a decoupled offline component that can be selectively queried by models in low-latency institutional trade optimization settings. A trade execution backtesting method is employed to evaluate the fill prediction performance of these models in relation to their input data. We observe a relative gain of up to ∼34% in out-of-sample test scores for those models with access to quantum hardware-transformed data over those using the original trading data or transforms by noiseless quantum simulation. These empirical results suggest that the inherent noise in current quantum hardware contributes to this effect and motivates further studies. Our work demonstrates the emerging potential of quantum computing as a complementary explorative tool in quantitative finance and encourages applied industry research towards practical applications in trading.
As they say, there are more red flags here than in a People’s Liberation Army parade. To critique this paper is not quite “shooting fish in a barrel,” because the fish are already dead before we’ve reached the end of the abstract.
They see a quantum advantage for the task in question, but only because of the noise in their quantum hardware? When they simulate the noiseless quantum computation classically, the advantage disappears? WTF? This strikes me as all but an admission that the “advantage” is just a strange artifact of the particular methods that they decided to compare—that it has nothing really to do with quantum mechanics in general, or with quantum computational speedup in particular.
Indeed, the possibility of selection bias rears its head. How many times did someone do some totally unprincipled, stab-in-the-dark comparison of a specific quantum learning method against a specific classical method, and get predictions from the quantum method that were worse than whatever they got classically … so then they didn’t publish a paper about it?
If it seems like I’m being harsh, it’s because to my mind, the entire concept of this sort of study is fatally flawed from the beginning, optimized for generating headlines rather than knowledge. The first task, I would’ve thought, is to show the reality of quantum computational advantage in the system or algorithm under investigation, even just for a useless benchmark problem. Only after one has done that, has one earned the right to look for a practical benefit in algorithmic trading or predicting financial time-series data or whatever, coming from that same advantage. If you skip the first step, then whatever “benefits” you get from your quantum computer are overwhelmingly likely to be cargo cult benefits.
And yet none of it matters. The paper can, more or less, openly admit all this right in the abstract, and yet it will still predictably generate lots of credulous articles in the business and financial news about HSBC using quantum computers to improve bond trading!—which, one assumes, was the point of the exercise from the beginning. Qombies roam the earth: undead narratives of “quantum advantage for important business problems” detached from any serious underlying truth-claim. And even here at one of the top 50 quantum computing blogs on the planet, there’s nothing I can do about it other than scream into the void.
Update (Sep. 26): Someone let me know that Martin Shkreli, the “pharma bro,” will be hosting a conference call for investors to push back on quantum computing hype. He announced on X that he’s offering quantum computing experts $2k each to speak in his call. On the off chance that Shkreli reads this blog: I’d be willing to do it for $50k. And if Shkreli were to complain about my jacking up the price… 😄
Comment #1 September 26th, 2025 at 12:01 am
I will say that, as in all fields, somehow you've got to “survive.” The paper is a pile of crap, but both institutions weighed risk vs. reward and the ratio was <1. The audience targeted is not the scientific community. Unfortunately. I have experience in this “industry”; happy to explain more.
Comment #2 September 26th, 2025 at 5:38 am
Llor #1: Is there more to explain here that I don’t already understand? That wasn’t made clear enough by all the credulous pieces in the business/financial news, repeating the “34% better” part, and ignoring the fact that whatever benefit they saw can have nothing to do with quantum computational speedup?
Comment #3 September 26th, 2025 at 6:45 am
I was hoping you’d write a post about this! I saw the story yesterday and immediately knew there was nothing there but no one would trust me. Now I have a reference I can send to people…
Comment #4 September 26th, 2025 at 7:01 am
Papers like this are fueling speculation in quantum-computing-adjacent stocks. The hype train is at full steam. Several very intelligent friends of mine with outstanding analytical skills have surprised me with news of taking substantial new stakes in QC stocks. I’ve tried to warn them off, as it still isn’t clear to me how QC is going to translate into effects on the real economy.
From what I can tell, the most promising application of QC is still just modeling QM. Which is great! But it is more along the lines of pure research and not clearly a reason to buy QC stocks.
Might be good to update the cartoon “The Talk” with a target audience of would-be QC investment speculators?
Comment #5 September 26th, 2025 at 7:01 am
On the bright side, this gives us all a small, brief reprieve from narratives of “AI advantage for important business problems” detached from any serious underlying truth-claim.
Variety is the spice of life, right?
Comment #6 September 26th, 2025 at 7:30 am
Agree with the critique. I am on terrible ground because I don’t really understand the paper properly, but I fished one good idea from the barrel (ha ha ha), which is the concept of using a QC to represent conditionality: the representation can’t feasibly be stored in a classical memory, but can be represented in a QC, and then the QC can compute (somewhat mysterious to me so far) quantum features that provide information about the problem.
Of course, the fact that the simulated noiseless version failed (worse than pure classical) while the noise-filled version gave near-perfect results is a *small* issue. It indicates that the derived features are useless, but “something else cool happened.” Still, the idea of extracting small hints of information as a pre-processor is somewhat appealing to me.
Shame it didn’t work, shame that the authors didn’t make that clear but presented a large claim instead, but I am glad that we have the paper to think about. If they had said this is a negative result then I would have been happy, but probably would never have looked at it.
Comment #7 September 26th, 2025 at 7:47 am
Fergus #3: Yeah, in this case, there’s hardly anything I need to explain that isn’t already confessed in the paper itself! All my post needs to do is to create a situation of common knowledge that no, you’re not just imagining things; there really is no quantum fire beneath this smoke; it’s indeed just a scientifically risible attempt to juice HSBC’s stock price or whatever (which might well have succeeded at its purpose; I couldn’t say).
Comment #8 September 26th, 2025 at 8:01 am
Anon QC speculator friend #4: Talking about the prospects of quantum computing in terms of which stocks to buy or short, is almost exactly like talking about cultivating character traits in terms of what will get you laid.
Like, yes, one would hope that becoming a better, more virtuous human being will ultimately lead to success with the opposite sex, as sometimes it does. But if instead it turned out that the Dark Triad traits maximized mating success, that discovery wouldn’t make evil good or good evil.
In the same way, one would hope that the quantum computing companies that are actually advancing the science and engineering, and refraining from making ridiculous claims, will ultimately create value for their investors. But if they don’t, and if instead people get rich by investing in obvious scams (as sometimes happens), then the scams were still scams, and the truth was still the truth.
Comment #9 September 26th, 2025 at 8:13 am
Noah Motion #5:
On the bright side, this gives us all a small, brief reprieve from narratives of “AI advantage for important business problems” detached from any serious underlying truth-claim.
From my standpoint, the trouble with “AI advantage for important business problems” narratives is that nowadays, many of them might well be true! So at the very least, one needs to carefully examine any given claim, looking at what exactly was done and at empirical data about the results.
Quantum speedup, by contrast, is so subtle and specialized, and in a way that’s so routinely raped (for lack of a better term) in corporate press releases, that one merely needs to learn to recognize various “tells” that prove that that’s what’s happening.
Comment #10 September 26th, 2025 at 8:23 am
Financial institutions are eager to turn QCs into the same “easy win” LLMs have been, as soon as possible, to be in the lead this time.
But privately the execs will admit they have no idea what QCs are about; they’re just sure there must be something they can hype to their stockholders…
This HSBC BS will force many other players to come up with their own BS ASAP.
Comment #11 September 26th, 2025 at 8:25 am
Simon #6:
I am on terrible ground because I don’t really understand the paper properly, but I fished one good idea from the barrel (ha ha ha), which is the concept of using a QC to represent conditionality: the representation can’t feasibly be stored in a classical memory, but can be represented in a QC, and then the QC can compute (somewhat mysterious to me so far) quantum features that provide information about the problem.
The thing is, even the one good idea you say you took isn’t really true without careful qualification. Yes, n qubits take 2^n amplitudes to describe, and those amplitudes could encode information about conditional probabilities in some complicated distribution D that would take ~2^n classical bits to write down. But none of that is useful unless you can learn the properties of D that you care about by measuring the state — and not only that, but also unless those properties couldn’t be learned just as easily using, for example, classical Monte Carlo methods. This requires choreographing a pattern of interference that concentrates amplitude on the right answer while canceling out the amplitudes of the wrong answers.
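To make the interference point concrete, here’s a toy numpy sketch (purely illustrative, nothing to do with the paper’s method): applying a Hadamard gate twice returns |0⟩ with certainty, because the two amplitude paths to |1⟩ cancel destructively while the paths to |0⟩ add constructively.

```python
import numpy as np

# Hadamard gate: creates, then un-creates, a superposition via interference
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])   # the state |0>
superpos = H @ ket0           # equal amplitudes on |0> and |1>
back = H @ superpos           # paths to |1> cancel; paths to |0> add

print(np.round(superpos**2, 3))  # measurement probs after one H: [0.5 0.5]
print(np.round(back**2, 3))      # after two H's: [1. 0.]
```

Designing an algorithm so that this cancellation happens on the *wrong answers to your problem*, at scale, is the hard part — and it’s exactly what a “noise helped us” result sidesteps.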
Comment #12 September 26th, 2025 at 8:36 am
A much more minor point than their head-smackingly stupid claim that they only see any effect when they add a random noise realization to their output, but the statement:
“We observe a relative gain of up to ∼ 34% in out-of-sample test scores”
strikes me as a carefully constructed way to phrase a result to get a big sounding number without clearly explaining what they actually are saying.
What is the “test score”? Accuracy? Precision? Taking a step back, I am also not even sure what kind of task this is — classification or regression? “Fill probability” sounds like a continuous variable from 0 to 1, but then they are talking about “fill prediction performance” — is the change from “fill probability” in their motivation to “fill prediction” in their results significant in terms of what they are testing?
Plus they are talking about “relative gain.” If their metric is accuracy, I’d expect them to talk about “absolute gain,” so 34% would mean (say) the accuracy went from 50% to 84%. But “relative gain” sounds like they actually are saying they went from 50% to 1.34 * 50% = 67%. Did they just choose a way to do the baseline comparison to get a bigger number?
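The two readings are easy to pin down numerically. A small sketch with hypothetical scores (chosen only to show that both readings produce the same headline “34”):

```python
def absolute_gain_pts(base, new):
    # percentage-point difference: the generous reading of "34%"
    return round((new - base) * 100, 1)

def relative_gain_pct(base, new):
    # percent change relative to the baseline: what "relative gain" suggests
    return round((new / base - 1) * 100, 1)

# Hypothetical scores, just to illustrate the ambiguity
print(absolute_gain_pts(0.50, 0.84))  # 34.0 points: 50% -> 84%
print(relative_gain_pct(0.50, 0.67))  # 34.0 percent: 50% -> 67%, a much smaller jump
```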
Then there are the wiggle words “a gain of up to” — so they are only reporting their best result, which they admit in the abstract was subject to a noise fluctuation not present in a noiseless simulation. If they wanted to communicate their results in a way robust to random fluctuations, you’d think they’d report at least a mean or median improvement over the experiments they tried, in addition to the maximum improvement.
And they haven’t included any measure of whether this effect is statistically significant. Not even an indication of how many trials they did.
Ay yi yi.
Comment #13 September 26th, 2025 at 8:48 am
You are NOT screaming in the void.
Comment #14 September 26th, 2025 at 8:51 am
Scott #8,
I should clarify that I’m not a speculator. In terms of your analogy I’m sexually abstinent when it comes to speculating on QC 🙂
I just worry about all my friends out there in the wild world of QC speculation having adverse consequences. That’s why I propose an update to “The Talk” about why abstinence is the way to go when it comes to QC stocks. The sex isn’t all that sexy despite the hype! LOL
Comment #15 September 26th, 2025 at 8:56 am
My interpretation of the paper was that their relatively simple ML algos were overfitting on their training data, and adding in noise (which happened to be quantum noise in this case) was improving results. But then they said they had an AUC of 0.99, and that seems completely absurd. How is that even possible? Extreme p-hacking? Did it pick up the size/shape of the training data?
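For context, the noise-as-regularizer effect alluded to here is classical folklore: training least squares on inputs corrupted by Gaussian noise of variance σ² is equivalent, in expectation, to ridge regression with penalty n·σ². A minimal numpy sketch (my own illustration, unrelated to the paper’s pipeline) demonstrating the equivalence by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 5, 0.3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.1, size=n)

# Ridge solution with penalty lambda = n * sigma^2
lam = n * sigma**2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Monte Carlo: average the normal equations over noisy copies of X.
# E[Xn' Xn] = X'X + n*sigma^2*I, so the minimizer of the expected
# noisy-input loss is exactly the ridge solution.
trials = 5000
A = np.zeros((d, d))
b = np.zeros(d)
for _ in range(trials):
    Xn = X + rng.normal(scale=sigma, size=X.shape)
    A += Xn.T @ Xn
    b += Xn.T @ y
w_noisy = np.linalg.solve(A / trials, b / trials)

print(np.max(np.abs(w_ridge - w_noisy)))  # tiny: input noise acts like ridge
```

Nothing in this effect is quantum: any zero-mean noise source with the right variance would regularize the same way.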
Comment #16 September 26th, 2025 at 10:11 am
Scott, thank you for your responsible take on these reports. I’m deeply concerned about how this bubble could eventually affect our community. VC funding will dry up, today or tomorrow, and if it dries up abruptly, it will set off cascading defunding from all the agencies, without which quantum science can’t keep up the pace or depth it truly deserves!
Comment #17 September 26th, 2025 at 10:25 am
Anon QC speculator friend #14: I mean, I’m not invested in QC either, but that’s partly just so that I retain my intellectual independence. I don’t actually have blanket advice of abstinence, in either the sexual realm or the quantum investing one. If some investor genuinely believes in some particular QC effort, and they’re willing to take a large gamble, why not?
I’d just hope to see some narrative that involves
(1) actual quantum hardware that exists or could plausibly be built,
(2) running an actual quantum algorithm that’s been discovered,
(3) to get an actual speedup over the best known classical methods,
(4) for some problem that actually matters for the company or its customers or clients.
Is it too much to ask that people don’t skip any of these four steps?
Comment #18 September 26th, 2025 at 11:04 am
Thank you for your service in debunking this unfortunate paper. I have passed on a link to this blog to a few people who had “shared” the announcement on social media.
Comment #19 September 26th, 2025 at 11:11 am
I would love to see somebody from HSBC hop on here and respond.
Scott, you are a legendary brain box, thank you for all your blog writing.
Comment #20 September 26th, 2025 at 11:54 am
dan wrote:
“I would love to see somebody from HSBC hop on here and respond.”
Very unlikely… corporations (especially banks) just don’t do that!!
Comment #21 September 26th, 2025 at 4:12 pm
IBM has a specific definition of “quantum advantage” (https://arxiv.org/abs/2506.20658), and the paper does not reference it. Scott, I wonder if you used the word “advantage” intentionally as part of your critique of the paper, or if it is just a sign that IBM’s definition of “quantum advantage” is not gaining wider usage?
Comment #22 September 26th, 2025 at 4:17 pm
Warbler #21: No, I don’t care about (or even remember) this or that company’s idiosyncratic definition of “quantum advantage.” I just mean the obvious thing: that you are beating the best anyone can figure out how to do with a classical computer for the same task, by a wide margin, for reasons of quantum computational speedup.
Comment #23 September 26th, 2025 at 5:12 pm
They aren’t using an exact simulator (since they use >100 qubits), but an MPS. I don’t really think there is any quantum advantage in there, but if there were, one might expect something like this.
Comment #24 September 26th, 2025 at 5:17 pm
Hi Scott,
Could you recommend some open CS problems that a sensory-deprived LLM could think about for a few hours at a time? Not as intractable as P vs. NP, but something that, e.g., Claude Opus or GPT-5 can understand and at least have a nonzero chance of solving or advancing on?
Comment #25 September 26th, 2025 at 5:31 pm
Oleg S #24: Pretty much any small problem that would arise in the ordinary course of research, anything one would give to a grad student for example, is a candidate here, so much so that it’s hard to know where to start. And when examples do arise for me, I’ll probably just feed them to GPT myself rather than posting here! In fact, stay tuned for my next blog post, which will precisely be about a case where my collaborator and I used GPT5-Thinking to help us with a research problem in quantum complexity theory.
Comment #26 September 26th, 2025 at 5:36 pm
Scott,
“Is it too much to ask that people don’t skip any of these four steps?”
Well, are you aware of a viable example of an investment that plausibly meets all four criteria AND the necessary fifth:
(5) will solving said problem feasibly lead to lucrative profits?
I’m abstinent not because I’m “religiously” opposed to QC investments, but because I can’t honestly think of a single viable investment that even plausibly meets all five criteria. I guess I just haven’t met “the one” yet. 😛
Comment #27 September 26th, 2025 at 6:56 pm
Is it possible that there is some practical advantage here even if there is no theoretical one? After all, there is no complexity advantage to neural networks, but they are obviously an outstanding fit for certain problems.
Comment #28 September 26th, 2025 at 7:38 pm
For what it is worth, I read this blog *in prison* and had four copies of the book! I am writing an investor-perspective piece titled “Quantum Computing Since Yesterday”
Comment #29 September 26th, 2025 at 7:48 pm
Do you think Grover-type square root speedups will be important in machine learning / AI applications within the next 30 years or so? Will we have enough fault-tolerant qubits by then for Grover?
And, do you think quantum computers could give us anything better than Grover for any machine learning applications?
Sorry if this is phrased badly; not an expert.
Comment #30 September 26th, 2025 at 10:44 pm
Martin Shkreli #28: I’m excited about your quantum conference call. I don’t count myself an “expert”, but I know enough to know that the quantum industry and its investors need to face a reality check. You are in a unique position to do this. Even if that means I lose my job, I would rather die a scientist than live to see myself become a hack. I hope the markets reward you for bringing this information to their attention. Big fan, btw.
Comment #31 September 27th, 2025 at 11:03 am
From what I understand, the noise in quantum hardware was actually useful for simulating algorithmic trading.
Scott, you argue that this is simply B.S. because there isn’t a genuine quantum computational advantage — which is true.
In my opinion, you’re only looking at it through the lens of a complexity theorist.
Does the advantage always need to come in the form of computational speedup?
This reminds me of when I was reproducing Jens Eisert’s work on using noise to improve training by escaping barren plateaus. Check out his work here: https://arxiv.org/pdf/2210.06723
(Note that I was able to reproduce his result for the parameterised circuit that they chose in the paper)
The same barren plateaus occur during noiseless simulation, but with device noise, we are able to escape them.
While this might not be considered a quantum computational speedup, it’s still valuable for training.
So if it really turns out that quantum noise is useful for some subroutine in algorithmic trading, it’s still a meaningful result, no?
UTILITY is UTILITY at the end of the day — it doesn’t need to take the form of computational speedup.
If there are problems where quantum noise is genuinely useful, that’s a good thing no?
Interested to hear your opinion Scott
Comment #32 September 27th, 2025 at 11:08 am
Martin Shkreli #28:
1. I’m honored that you would read my book in prison.
2. I know that you must have positive qualities, because one of the most wonderful women I ever met (who happens to be my former student and OpenAI colleague) chose you. She must’ve seen something in you that most of the world hasn’t.
3. Congratulations on the birth of your child.
4. Were you sincere when you told the court that you regret what you did with Daraprim? If you weren’t, do you have any case for how such an action would make the world better or at least not worse?
5. I’m serious about my offer to join your call and talk about quantum computing, for an amount that’s surely peanut change to you. I’ll even donate half of it to good causes.
Comment #33 September 27th, 2025 at 11:16 am
Julian #29: Briefly—
I find it very hard to say what will happen in the “next 30 years” — maybe superintelligence will take over the world and kill all humans, and if it wants to build quantum computers to run Grover on, it surely will.
I can say that Grover eventually becomes a net win for optimization and ML problems, but with currently known methods for doing fault-tolerance, it only seems to happen after N (the number of candidate solutions) is in the trillions or so. That’s the problem.
Yes, there are probably at least some super-Grover speedups, some of the time, for some problems in optimization and ML. That’s one of the great research topics in quantum algorithms, with the latest frontiers being the algorithms called Yamakawa-Zhandry and DQI (Decoded Quantum Interferometry). We should absolutely try to figure out more while being honest about what we’ve found.
Comment #34 September 27th, 2025 at 11:42 am
Dikshant Dulal #31: If someone gets “UTILITY” only from the noise in a quantum computation, rather than from the quantum computation itself, then either
(1) they’ve discovered something that overturns decades of quantum computing theory, or maybe even a century of quantum mechanics itself — so let them make that case explicitly, or else
(2) if they had looked systematically, they would’ve found similar “UTILITY” from the noise in chicken entrails or sunspots or all sorts of other things that aren’t quantum at all. But they didn’t look systematically, because their goal was to link what they’re doing to the word “quantum” in people’s minds—which reveals the fundamental dishonesty of the whole enterprise.
Furthermore, all this seems sufficiently obvious to me, that a person would need to invest nontrivial effort in order to not understand it!
Comment #35 September 27th, 2025 at 2:23 pm
The obvious question I’d have for any HSBC people on this thread is whether they’ve also seen an advantage in noisy simulators. If they have, then this kind of unexpected result reminds me of “GradIEEEnt half decent” (http://tom7.org/grad/): by exploiting associative floating-point error accumulation, linear networks can learn nonlinear functions, i.e., a bug in the implementation of the machine. If they haven’t, then it’s worth finding out whether there’s some similar unexpected gain from nonlinear effects of the quantum noise, and after that, whether the noise can be classically modeled.
Comment #36 September 27th, 2025 at 2:25 pm
You are right that a noisy simulator baseline would clarify if the uplift was just noise acting like regularization. But simulating 109 qubits with realistic noise is practically impossible today, and simplified noise models would not reflect the messy behavior of real Heron hardware. The authors stuck to noiseless sim versus real hardware because that is reproducible and grounded in reality. Even if the gain comes from noise, showing it on actual market data and actual chips is still meaningful.
Thanks for being skeptical. Here is your silver medal. Gold is always reserved for the bold who actually build something for others to talk about.
Comment #37 September 27th, 2025 at 2:37 pm
Scott is missing the engineering point. Simulating 100+ qubit PQFM circuits with realistic noise is not just hard, it is impossible with present classical resources. The only noisy sims available today use toy channels like depolarizing or amplitude damping. Those do not capture Heron’s coupler graph, correlated drift, crosstalk, or readout asymmetries. Comparing that kind of cartoon to hardware would have been meaningless.
The authors did the only defensible thing: show noiseless simulation to isolate the transform, then show hardware to capture the real distribution. The improvement is not hype, it is an empirical observation that hardware noise yields statistically useful structure. That is what the data says.
If someone claims it is just noise, then they need to show a 109 qubit noisy simulator calibrated to Heron that reproduces the result. Until then, calling it a qombie is rhetoric, not science.
Comment #38 September 27th, 2025 at 2:39 pm
Hm, true about the IBM scale having limitations. Maybe it’s possible to simulate the more abstract gates instead, since the system shouldn’t have deep entanglement with the hex lattice and the swap overhead. And if a noiseless sim does run, noise can be injected cleanly. I would not restrict to regularization; it could be any nonlinear effect for classification.
Comment #39 September 27th, 2025 at 2:42 pm
Dennis #36: Ooooh! Does my creation of the quantum computational and quantum information supremacy protocols demonstrated by Google, USTC, Xanadu, and now Quantinuum over the past 6 years count for a gold medal from you?
Comment #40 September 27th, 2025 at 3:00 pm
Just FYI, IBM Watson #37 is a troll. I know this because the same phrasing (“calling it a qombie is rhetoric, not science”) was submitted in multiple comments from multiple pseudonyms, as in an astroturf campaign.
Comment #41 September 27th, 2025 at 3:49 pm
I read the paper. So what is the issue here? Doing an actual experiment with chemicals is far better than simulating chemical reactions on a computer. They experimented on QC hardware; that counts for something. We have to understand that ultimately we are converting quantum to classical, so it is going to be difficult to assess quantum noise through a classical system…
I’m eager to see their next research outcome.
Comment #42 September 27th, 2025 at 4:20 pm
Rodriguez #41: Thousands of people are now doing experiments on real QCs. The question is always, what did we learn from this experiment? If you want to learn something new, there are stringent requirements:
(1) Do something where you’re not 100% sure in advance what the outcome is going to be (or whether it’s going to work).
(2) Rule out any explanations for the observed effect other than quantum mechanics.
(3) Compare to the best that could be done classically for the same task.
(4) Bend over backwards to communicate the results honestly.
…
I’ll leave it as an exercise for the reader which of these ground rules were observed in this instance and which were violated.
Comment #43 September 28th, 2025 at 10:37 am
Scott, the paper never claims a quantum advantage, right? They basically say “we have a not-well-understood ‘advantage,’ in the sense of a 34% gain, which we need to understand better.” It’s completely stupid to believe this is a result of “quantum noise,” I agree. But the claim is not a supremacy claim, is all I am saying. The media (with the companies involved) overdid it. The arXiv preprint is just an average-to-bad paper, of the sort that appears all the time on arXiv.
From my email you can easily check who I am, and I am glad to provide more info and context 🙂
Comment #44 September 28th, 2025 at 10:46 am
Llor #43: I would’ve had no objection whatsoever to people putting out a paper saying, “hey, we found that noise in a quantum circuit seems to improve our financial predictions, relative to when there’s no noise. We don’t understand what’s going on here; maybe someone else can help explain this.”
In actual reality, however, HSBC (and, alas, IBM) launched a marketing campaign to give people the impression that they did see a clear quantum advantage for a finance problem, and the media then dutifully reported that, and that’s what 99.9% of the world saw, and this sort of thing is now so predictable that one has to assume it was the intent, so that’s what I was responding to.
Comment #45 September 28th, 2025 at 11:08 am
Scott,
“I would’ve had no objection whatsoever to people putting out a paper saying, “hey, we found that noise in a quantum circuit seems to improve our financial predictions, relative to when there’s no noise. We don’t understand what’s going on here; maybe someone else can help explain this.”
^ this is the reality.
“In actual reality, however, HSBC (and, alas, IBM) launched a marketing campaign to give people the impression that they did see a clear quantum advantage for a finance problem, and the media then dutifully reported that, and that’s what 99.9% of the world saw, and this sort of thing is now so predictable that one has to assume it was the intent, so that’s what I was responding to.”
This is what the media interpreted, and the promo video also did not help. (1) The video was not needed for scientific reasons; there are other reasons behind it. (2) Even there, unless memory fails, “quantum advantage” is never claimed, as it usually is by the IBMs and IonQs and Xanadus and Googles.
That being said I am keen to provide more context which I think you’d like 🙂
Comment #46 September 28th, 2025 at 12:48 pm
This is not a lab curiosity with cherry-picked data. It is production-scale intraday corporate bond RFQ data, the kind of dataset that never shows up in textbook exercises. The authors ran the same backtesting pipeline that an actual trading desk uses on more than 143,000 RFQs across 3,400 bonds. The only change was the feature representation: classical, noiseless quantum simulation, or quantum hardware generated.
On your checklist:
Uncertainty of outcome? Yes. The unexpected finding was that noisy quantum hardware outperformed both classical features and noiseless quantum transforms. That result was not known in advance.
Alternative explanations? Controlled. The quantum part was isolated as a black-box data transform. The ML models and evaluation were unchanged.
Best classical baseline? Used. Logistic regression, gradient boosting, random forests, and feed-forward neural nets are not weak benchmarks. They are the models production desks actually deploy.
Honesty? Clear. The authors explicitly say they cannot fully explain why the noise helps, they do not claim general theory, they emphasize this is empirical. That is intellectual honesty, not hype.
And here is the larger point. Who is promoting the view that quantum simulators are the “same” as quantum computers, and why? If quantum computers are real, they must diverge exponentially from simulators on problems that require entanglement, exactly as Feynman argued. If they do not diverge, then both simulators and hardware are nothing but piles of extremely pricey junk. No divergence means both are just doing classical analog wave computing in two of the most wasteful ways imaginable: fragile quantum waves when robust classical waves would suffice, or digital emulation of analog dynamics at massive computational cost.
What we have here is precisely that divergence. The noiseless simulation gave no advantage. The noisy quantum hardware gave a measurable edge on real-world data. That is not trivial. It means the hardware is probing a regime the simulators cannot replicate efficiently.
So what did we learn? That noisy near-term quantum devices, when used as feature generators, can deliver performance gains on financial time series that neither classical methods nor simulators provided. That is new, non-obvious, and valuable. In trading, a few basis points of predictive edge are the difference between making markets and losing them.
The lesson is not that the ground rules were ignored. The lesson is that noise itself may encode useful structure, and ignoring this is not skepticism, it is denial.
Comment #47 September 28th, 2025 at 4:04 pm
Scott, if you got the 50k Shkreli deal, would you consider donating part of the sum to humanitarian aid for Gaza children?
Comment #48 September 28th, 2025 at 6:36 pm
Leia #46,
“The lesson is that noise itself may encode useful structure, and ignoring this is not skepticism, it is denial.”
—
“The first principle is that you must not fool yourself and you are the easiest person to fool.” – Feynman
Comment #49 September 28th, 2025 at 6:46 pm
MK #47: Yes, absolutely. If I get the deal, I’ll donate some of it to the Gaza Humanitarian Foundation, to make sure it isn’t commandeered by Hamas, as pretty much all other Gaza aid is. Thanks for the suggestion.
Comment #50 September 28th, 2025 at 7:20 pm
I am not clear on how noise can reveal structure. By definition, noise comes from a random distribution. Could that distribution be shaped differently by the market structure if the noise is generated by a genuinely quantum circuit rather than by some classical protocol? Maybe. Would that be quantum advantage? It would be, even under Scott’s definition, because there would be a speedup compared to a classical machine trying to generate the same noise distribution. However, I do not see that this was properly isolated in the paper. First, different classical noise distributions were not tried with a variety of classically generated features, nor was the noise level adjusted to see whether the effect is tunable. Also, what about comparisons to other regularization techniques? If this is an effect of noise, I would like to see the impact of error mitigation: does the effect go away when mitigation is applied? There is an interesting result here, but it seems that, in an effort to make it marketable, a lot of extra variables were introduced to make it seem “real world.” Subsequent studies should focus on just this one question, with a variety of ML benchmark datasets.
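To make the classical-noise control concrete, here is a minimal numpy sketch on synthetic data (my own toy construction, nothing from the paper’s pipeline or dataset): injecting zero-mean Gaussian noise into the features of a linear model is, as the number of augmented copies grows, equivalent to ridge regularization with penalty n·σ². So a fair baseline would sweep classical noise levels and standard regularizers before crediting quantum hardware.

```python
import numpy as np

# Synthetic regression data (a stand-in for "ML benchmark datasets";
# nothing here uses the paper's RFQ data).
rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

sigma = 0.3  # classical noise level injected into the features

# Plain least squares on the original features.
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression with penalty n * sigma^2: the known large-sample
# equivalent of training on Gaussian-noise-augmented features.
w_ridge = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ y)

# Least squares on K noise-augmented copies of the features,
# accumulated through the normal equations.
K = 2000
A = np.zeros((d, d))
b = np.zeros(d)
for _ in range(K):
    Xk = X + sigma * rng.standard_normal((n, d))
    A += Xk.T @ Xk
    b += Xk.T @ y
w_aug = np.linalg.solve(A / K, b / K)

# w_aug tracks w_ridge closely, and both are shrunk relative to w_ols:
# classical feature noise is already acting as a regularizer.
```

The point is not that this explains the paper’s result, only that “noise improved the score” has a mundane classical mechanism that any control experiment must rule out first.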
Comment #51 September 28th, 2025 at 8:27 pm
“It would be interesting to apply error mitigation and see if the result converges to the simulated one — if it does, then the observed effect likely comes from noise. We could then reintroduce various classical noise distributions to test whether the same structure appears or if it fundamentally requires a quantum device. If the latter, that would suggest the noise distribution itself carries quantum structure that’s hard to reproduce classically.”
Comment #52 September 28th, 2025 at 9:39 pm
Sometimes we put God in the gaps, sometimes we put quantum noise.
What is the best rigorous result we have for quantum noise leading to algorithmic improvement? Do we have any? Even if we squint?
Comment #53 September 28th, 2025 at 11:59 pm
Hi there, I’ve loved reading your blogs to get a more sane perspective on the QC industry. I’m going into the equity research field, and I’ll be focused on semiconductors and quantum computing. For my quantum computing coverage, I think it would be really helpful to have 5-10 books or articles on the subject to ground my research and frame any “advancements.” Do you have any recommendations for where to start? Ideally, a few sources on quantum algorithms and a few on quantum complexity theory would be great. I’m a physics major taking quantum mechanics 1 and 2, so I have a bit of background on the science behind it.
Comment #54 September 29th, 2025 at 8:08 am
C’mon folks. If, “noise itself may encode useful structure” in QM we’re talking about potential Nobel prizes for the authors of this paper in the future. We’re talking about beyond current understanding of QM. We’re talking about a revolution in physics brought on by an HSBC/IBM paper.
Again,
“The first principle is that you must not fool yourself and you are the easiest person to fool.” – Feynman
Scott, I’m starting to think that this isn’t a knowing con by HSBC/IBM, but rather they have fallen for their own crackpottery because they wish it were so so so soooo SOOOO MUCH!
Comment #55 September 29th, 2025 at 12:49 pm
There is no God in the noise. There is no “let’s try to find structure in the noise.”
If it were mere noise, repeated experiments would have given positive results on some runs and negative results on others. They did not, and this point needs to be noted. The point of further research then becomes:
Added quantum power will look a lot like noise from our classical perspective. However, if it is quantum, it’s a form of noise that no classical digital computer of any size can replicate. It would be noise generated at the classically inaccessible level of quantum entanglement.
The experiments need to be repeated with different data systems and other financial systems. Or perhaps another domain. Maybe the folks get a Nobel Prize and Scott becomes a jealous hippy. Or maybe it turns out to be a dud and Scott becomes an ecstatic teenager screaming: I told you so. Only time will tell. As scientists, we can make no conclusion on the 1st set of research outcomes.
Comment #56 September 29th, 2025 at 1:59 pm
#53 GD,
Just buy his book. It’s advertised in the side banner; not sure exactly what kind of hand-feeding you are looking for here.
Comment #57 September 30th, 2025 at 1:35 pm
The main suspect is their event-matching procedure, which seems to me to be essentially a shrinkage procedure that opens the gate to look-ahead biases and data leakage.
Comment #58 September 30th, 2025 at 4:45 pm
From the paper: “Also, further attempts through quantum simulations with artificially induced noise do not reproduce the beneficial quantum hardware results, thus demanding further investigation.”
Comment #59 October 1st, 2025 at 10:30 am
Dikshant Dulal #31: You do not need quantum noise to make noise useful. An ensemble model with random initialisation can achieve the same outcome. This raises the question of what we are actually attributing the utility to. As someone mentioned, this is about survival in the organisation rather than a significant breakthrough.
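The ensemble point above can be sketched in a few lines of numpy (a toy construction of my own, on synthetic data): average the predictions of randomly initialised random-feature models, and by convexity the ensemble’s test squared error can never exceed the average member’s. Classical randomness already “helps” in this guaranteed way, with nothing quantum involved.

```python
import numpy as np

# Synthetic 2-D classification task: points inside a circle are positive.
rng = np.random.default_rng(1)
n = 400
X = rng.uniform(-2.0, 2.0, (n, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 2.0).astype(float)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

def fit_random_feature_model(seed, k=30, lam=1e-2):
    """Random tanh features (the 'random initialisation') + ridge readout."""
    r = np.random.default_rng(seed)
    W, b = r.standard_normal((2, k)), r.standard_normal(k)
    F = np.tanh(Xtr @ W + b)
    w = np.linalg.solve(F.T @ F + lam * np.eye(k), F.T @ ytr)
    return lambda A: np.tanh(A @ W + b) @ w

members = [fit_random_feature_model(s) for s in range(20)]
preds = np.array([m(Xte) for m in members])        # shape (20, 100)
mse_each = np.mean((preds - yte) ** 2, axis=1)     # per-member test MSE
mse_ensemble = np.mean((preds.mean(axis=0) - yte) ** 2)
# mse_ensemble <= mse_each.mean() always holds (Jensen's inequality),
# so averaging over random seeds is a free, purely classical gain.
```

So before attributing a gain to quantum hardware noise, one would want to show the gain survives against seed-averaged classical ensembles of the same baselines.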
Comment #60 October 1st, 2025 at 1:43 pm
I’ve seen this effect before in prior work: you can appear to gain an advantage on a noisy QPU if your target distribution is symmetric. In that case, the hardware noise effectively just averages your operator expectation over noisy states — nothing more exotic. To me, this suggests they may have overlooked a simple modeling assumption such as normality. With financial data, that’s often a perfectly reasonable baseline. In fact, on page 11 they even acknowledge the normality of features, but never connect the dots — perhaps avoiding the conclusion that nothing uniquely “quantum” is being extracted.
If the output features had clear non-normality or conditional structure, then a QPU could be justified. But even in that case, methods like Vine-Copulas or Tensor Networks could likely capture nonlinear dependencies just as effectively.
This might also explain why their ML baselines underperformed: they were overfitting to noisy market data under complex distributional assumptions, whereas out-of-sample the system likely regressed back to the normal regime. A simple statistical benchmark would make this immediately clear.
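A back-of-the-envelope check of the averaging mechanism described above (purely illustrative numbers, not the paper’s operators): averaging a function over symmetric zero-mean feature noise just convolves it with the noise kernel. For a quadratic, the Gaussian-smoothed value is exactly x² + σ², a shift any classical model reproduces trivially; for a linear function, the averaging changes nothing at all.

```python
import numpy as np

rng = np.random.default_rng(2)
x, sigma, N = 1.0, 0.5, 200_000
eps = sigma * rng.standard_normal(N)  # symmetric, zero-mean "hardware noise"

# Monte Carlo average of f(x + eps) for f(x) = x^2 ...
noisy_mean = np.mean((x + eps) ** 2)
# ... versus the closed-form Gaussian-smoothed value.
closed_form = x ** 2 + sigma ** 2

# For a linear f, the same averaging is a no-op in expectation:
linear_shift = np.mean(3.0 * (x + eps)) - 3.0 * x  # ~ 0
```

If the observed “advantage” is of this smoothing kind, it is a regularization effect on near-normal features, consistent with the normality the authors note on page 11.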
Comment #61 October 3rd, 2025 at 12:59 am
[…] claim drew sharp criticism from top academics. Quantum complexity theorist Scott Aaronson dismissed the paper as a “qombie”—a zombie claim of quantum advantage that refuses to die. He […]
Comment #62 October 3rd, 2025 at 1:05 pm
Hi Scott. They obviously neglected to try “Metal Machine Music” by Lou Reed (the album version, not the classical acoustic version). That noise makes everything better.
Comment #63 October 3rd, 2025 at 3:47 pm
I’ve heard rumours we may not be submitting it to a peer reviewed journal now. I think the scientific community should try to make sure that happens.
Comment #64 October 3rd, 2025 at 3:53 pm
The thing is, even if the noise does contribute, it still has to beat classical, and that was not exhaustively tested. Not even systematically tested. I am already getting notes from people who think they can beat it classically. So they have a high hill to climb. But let them climb it. The one error they made was promoting the crap out of it before the scientific community outside of IBM had a chance to digest it. That looks a bit deceptive.
Comment #65 October 8th, 2025 at 1:45 am
Seems to me that the “noise” has a look-ahead bias, especially given the linear decrease of improvement over a relatively stable baseline during the blinding window.
In terms of business value, predicting accurately the 0% fill and 100% fill buckets is pretty useless, given that traders adjust their RFQ answers based on whether they really want to trade (or not to trade) a certain bond. The “RF with quantum generated data” on Figure 13 actually gives a worse probability estimation in the range that would be of interest to a trader (between a 25% to 75% fill probability).
In any case, if HSBC decided to publish it, it means the marketing value of doing quantum stuff is higher than the derived business value. Probably a team in a hurry to publish something before year-end review/compensation as well.