## “Is China Ahead in the Quantum Computing Race?”

Please enjoy an hourlong panel discussion of that question on YouTube, featuring yours truly, my former MIT colleague Will Oliver, and political scientist and China scholar Elsa Kania. If you’re worried that the title sounds too sensationalistic, I hope my fellow panelists and I will pleasantly surprise you with our relative sobriety! Thanks so much to QC Ware for arranging the panel (full disclosure: I’m QC Ware’s scientific adviser).

### 70 Responses to ““Is China Ahead in the Quantum Computing Race?””

1. Job Says:

I don’t find these incremental iterations on the supremacy experiment (with lower fidelity, no less) particularly exciting.

We know the way forward is through error correction and higher fidelity.

China’s attempts look to me like an effort to stay relevant rather than a push ahead.
But with large-scale government sponsorship, they could close the gap.

Personally, I don’t see a path to credible commercialization of quantum computing within the next decade.
IMO, that’s the weakest point in the US’s current strategy of letting tech companies drive progress and innovation in the field.

2. OhMyGoodness Says:

I enjoyed the discussion and admire your scrupulous honesty in regards to quantum computing. It would be personally profitable for you to join the hype crowd and promote God on a chip or Djinn in a machine. You promote what you consider reasonable expectations for the technology.

3. Someone with mildly insane questions Says:

bb.pdf has been a mindworm in the back of my head ever since I laid eyes on it, and I’ve got some silly and possibly crazy questions to ask about the BBs. I hope it’s not too much of an inconvenience:

(1) Do you have any suspicions surrounding what (obviously non-conservative) extensions to ZFC might bag a few extra busy beavers (ie have a few more values that can, nominally, be calculated)? NBG, for example, isn’t far enough to prove another busy beaver beyond whatever ZFC’s limit is (because NBG is a conservative extension). Any intuition on whether, say, Tarski-Grothendieck would add another beaver? Or something even more obscure, for that matter?
(2) Is there any idea of what an extension of BB beyond the natural numbers would look like? What sort of computing question might give us a definition for fractional (or real) input? What sort of craziness would BB describe if extended to transfinite ordinals? What would a good place to start be if someone wanted to mess around with Turing machines in clearly fantastical (but probably amusing) transfinite territory?
(3) Is there any legitimate reason you can think of to use a hypothetical real-input real-valued BB(x) to crush other functions in limit problems, or is the prospect merely entertaining?

4. Scott Says:

Someone with mildly insane questions #3:

1) No, I don’t really—partly because I don’t think we’re at the limit yet of what can be learned about BB values in standard ZFC (or even PA)! If we did ever hit that limit, though (which might be very far in the future), the obvious next thing to do would be to run through all the set theorists’ standard large-cardinal axioms—particularly any that were known to have interesting arithmetical consequences.

2) Holy shit that brings back memories! When I was 16 years old, I spent a week or two trying to figure out a natural, smooth generalization of the BB function to the real numbers. I never found one, and I’m now skeptical that such a thing exists. Even if it did exist, it might exist only for a machine model that you’d specially designed for that purpose, and I don’t know what questions about anything else it would help to answer. Let me know if you have ideas though!

3) Probably merely entertaining (but who knows?)

5. Scott Says:

OhMyGoodness #2:

I enjoyed the discussion and admire your scrupulous honesty in regards to quantum computing.

Thanks! Although I try to be scrupulously honest in regards to everything. 😀

It would be personally profitable for you to join the hype crowd and promote God on a chip or Djinn in a machine.

LOL, mind if I steal those phrases sometime?

6. Scott Says:

Job #1: Besides being less vulnerable to spoofing, an additional reason I’d like to see quantum supremacy experiments with higher fidelities is that they’re continuous with the larger goal of getting to fault-tolerant QC.

7. fred Says:

The panel should probably have been called “Is China also ahead in the Quantum Computing Race?”.

Given that the AI race is a “Winner Takes All” situation, the rest doesn’t really matter…

8. fred Says:

In the talk

“Similar to the first flight by the Wright Brothers, and I think it’s a great analogy because, look, that first flight maybe flew 200 feet, it went maybe 30 miles per hour, and it didn’t herald the widespread adoption of airplanes, right? It still took DECADES to the point where people were using airplanes either for commercial travel or military uses.”

Decades?… A really poor grasp of aviation history.

First flight by the Wright Brothers: Dec 1903.

First commercial airline in the US: Jan 1914.
https://en.wikipedia.org/wiki/St._Petersburg%E2%80%93Tampa_Airboat_Line

Fully functional “modern” WW1 fighting squadrons: 1916
https://en.wikipedia.org/wiki/Jagdstaffel_11

9. Gerard Says:

fred #7

Is there any reason to think that China is ahead in AI ?

I haven’t heard any results out of China comparable to those from DeepMind or OpenAI.

Do they have any researchers at a level comparable to Hinton, LeCun, Schmidhuber, etc. ?

10. Scott Says:

fred #8: Yeah, I also noticed that he said “decades” to mean “a little more than one decade” (although for air travel to become a routine part of life for millions of people—that really was decades).

11. fred Says:

Gerard #9

it depends what metric one uses.

and given that a lot of the research (both in the US and China) is secret, hard to know for sure…

https://www.cbsnews.com/news/tech-giant-eric-schmidt-warns-china-is-catching-up-to-u-s-in-a-i/

But it’s not about some purely academic pissing contest: real-world, practical experience is a big deal, and China for sure has no qualms about rolling out AI on a large scale. All of this is usable for military purposes, and in many cases with the help of Google and MSFT.

12. fred Says:

Scott #10

yea, I didn’t mean to nitpick, but I do think that the analogy to airplanes is probably one of the worst, because almost no other technology got adopted as fast. Humanity dreamed about heavier-than-air flight for thousands of years, and then once it was demonstrated it became viable very, very quickly.

It’s hard to imagine rolling out any truly revolutionary technology in less than a decade, for practical reasons.

A better analogy is maybe the adoption of electricity.

“Although electricity had been known to be produced as a result of the chemical reactions that take place in an electrolytic cell since Alessandro Volta developed the voltaic pile in 1800, its production by this means was, and still is, expensive. In 1831, Michael Faraday devised a machine that generated electricity from rotary motion, but it took almost 50 years for the technology to reach a commercially viable stage. In 1878, in the United States, Thomas Edison developed and sold a commercially viable replacement for gas lighting and heating using locally generated and distributed direct current electricity. “

But this happened so long ago that no one currently alive has a feel for it.

So maybe a good analogy is going from the first thermonuclear fusion bomb to the ongoing effort to build a clean commercial fusion reactor (tokamaks and such)… but when things truly take decades to achieve, the general audience isn’t aware of it.

13. Mg Says:

Interesting to see the QC race described in terms of a race between superpowers, like the US-USSR space race. I find it quite poetic that both of them are, in a way, races to achieve something metaphysically transcendent. As I understand it, for most of human history the moon (along with all the other objects in the sky) was thought about in mystical terms; 20th-century people just went there and stood on it. Nowadays, if we find anything metaphysically mysterious it’s the universe as a whole, and QC engineering attempts to control a weird but fundamental aspect of it that was only recently discovered (barely a century ago 😉 ).

Do we know much about the crypto-practical implications other than “people will need to update and will probably mess it up a bit”? If it happened that either the US or China has a state secret QC engineering effort and they achieved breaking RSA just today, how big of a deal would that be? How much ultra-important stuff was encrypted with RSA (and transmitted in a way such that anyone interested can listen and save it)?

14. Scott Says:

Mg #13:

Do we know much about the crypto-practical implications other than “people will need to update and will probably mess it up a bit”?

That’s … actually not a bad summary at all. There’s already some push to update to quantum-resistant cryptosystems; I expect it will become a scramble if and when fault-tolerant QC looks like it’s on the horizon.

If it happened that either the US or China has a state secret QC engineering effort and they achieved breaking RSA just today, how big of a deal would that be?

I think it would be a somewhat big deal—with one piece of evidence being that the NSA seems to have spent hundreds of millions of dollars on special classical hardware for breaking 1024-bit Diffie-Hellman—a capability that they obliquely referred to in an internal presentation to intelligence-community clients that was part of the Snowden leaks. If this wasn’t a big deal then why go to such efforts?

At the same time, it’s crucial to understand that in today’s world, people’s confidential data regularly gets leaked online for a thousand prosaic reasons that don’t even require any break of the underlying cryptosystem. That reality surely limits the effect that a break of public-key crypto would have.

One last remark: quantum computing is a small enough, talkative enough field that it seems almost impossible to me that China or anyone else could have a crash program to build a scalable QC, without everyone else noticing all the top researchers who’d suddenly disappeared. (The Manhattan Project is the exception that proves the rule here: that was successfully kept secret only because of wartime censorship and, of course, the lack of any social media at the time!)

15. Gerard Says:

Scott

One thing that surprised me was the claim made that world governments are spending 20 billion dollars on QC research in 2021. That seems like an enormous amount of money given that the entire LHC project only cost about \$9 billion.

Do you think that number is accurate ? If so who is spending it and where is it going ?

16. ike Says:

now that you mention it, there _were_ a few top researchers with fatal hiking accidents…

17. Scott Says:

Gerard #15: Yeah, given the dramatic increase over the past 5 years, it might now be about the right ballpark, if you add up all the physics, engineering, and CS research projects all over the world that are motivated in one way or another by quantum computing and information (including what would once have just been called “experimental many-body physics”).

High-energy physics is an outlier in that so much of the field’s experimental effort is by necessity concentrated in a single facility (the LHC), but even there, the figure would be higher if we added up all the costs all over the world rather than just the direct cost of the LHC.

18. fred Says:

Scott #17

And QC development requires better control of the quantum-to-classical transition, and can therefore probably also eventually lead to huge improvements in classical chip fabrication (where unwanted quantum behavior creeps in when we try to miniaturize too much, or because of small defects).

19. Ian Says:

Sorry for the non-sequitur, but I’m curious how the chess playing is going with your son? That was fun to read about a while back 🙂

20. Scott Says:

Ian #19: Alas, he sort of hit a wall in his chess abilities. And then, instead of playing obsessively until he bashed through the wall, as I’d hoped he would do, he simply gave up and switched back to brainless iPad games and cartoons, and now only plays chess if I goad him to. I’d love to enroll him in a chess club or in lessons, although maybe only after he’s able to get vaccinated. I’d be curious if anyone else has any suggestions!

21. Someone with a moderately insane idea Says:

Scott #4:

(2) I have a somewhat vague idea in my head, though I too have suspicions as to whether BB(x) : R -> R actually exists at all as a smooth function.

But I do have an only moderately insane and only probably [I think] infeasible idea for doing calculus on it.

What if we (in a completely impersonal sense) could prove that a large-ish class of real functions had a continuous extension in an especially large “number” set from whatever form of nonstandard analysis? And *then* prove that a smaller but still pretty large-ish class have a smooth approximation in them (where the additive error is infinitesimal)?

And then do something similar for large-ish classes defined on the rationals, integers, and natural numbers?

The surreals come to mind, just for the hilarity of it and the fact that there are some interesting (but very painstaking and slow) advances towards making surreal analysis actually feasible and useful (Swaminathan and Rubinstein-Salzedo 2014).
[paper is at [http://logicandanalysis.org/index.php/jla/article/viewFile/210/97/] for anyone with the time to waste reading through it]

[Let’s just assume we’re using the surreals for now – but although I think they’re Conway’s most beautiful creation, I really don’t care: I’m basically okay with using any transfinite nonstandard analysis “number” set as long as it *works*.]

To do that, we’d need to somehow make the surreals behave, and frankly I think the only way to do *that* without spending something like 500 researcher-years proving a bunch of tiny incremental results would be to somehow cheat. You’d need to find a way to throw out all the poorly behaved functions and then do existence theorems on what remains.

Well, someone *has* done that for the real numbers (although I’m a little suspicious of how useful it is when the reals are already pretty well-behaved): smooth infinitesimal analysis.

And it turns out you don’t need to abandon the law of excluded middle on the stuff we actually care about [set theory and all the stuff we use it for] to do it. Apparently, topos theory lets you do a very sophisticated transfer principle to (statements about) the reals normal mathematicians use from (statements about) the smooth infinitesimal analysis version of the reals.

[The book I found this in is Moerdijk and Reyes 1991: Models for Smooth Infinitesimal Analysis – I plan on trying to actually understand it, starting by learning the parts of Evan Chen’s Napkin that I don’t already know, moving on to harder prereq books from there, and then actually reading that topos-theory monstrosity from cover to cover.]

[Note: I’m not asking anyone here to read M&R, even skimming the potentially useful stuff is time-consuming.]

Can we cheat on the surreals similarly to how M&R cheat on the reals? I have no idea, but it would probably be fun to try and find out.

But if it is possible, you could take a nice smoothed-out “surreal numbers” thingy in which calculus is easy-ish and show that a bunch of real things we actually care about (and maybe a whole bunch of physics abuses of notation, since I’m already asking for a pony on a silver platter here) actually exist in it (or have approximations in it).

But if you could, then maybe, just maybe, you can prove a smooth approximation [on something sort-of kind-of like the reals] exists for the BB function. [And maybe I’ll get a free pony tomorrow.]

Then you could try and see if there’s any way you can turn that smooth approximation into a smooth real extension (or even a succession of increasingly differentiable extensions). I have my doubts.

Just how insane and unlikely to work is this idea?

22. Job Says:

Have you thought about the double-slit experiment from an information perspective?

I’m asking because an interference pattern is consistent with loss of information and I’ve been trying to make sense of this.

For example, suppose we take a circuit that passes along n bits unchanged.
If we block one of the bits and plot the result on a line, we’ll get gaps, where the gap size depends on the bit that was blocked (looks like interference).
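Job’s picture of blocking one bit can be sketched in a few lines (a toy illustration only; the n-bit circuit and the choice of blocked bit are hypothetical, not a claim about the actual physics):

```python
# Toy model: a "circuit" that passes n bits through unchanged,
# except that one chosen bit is blocked (forced to 0).
n = 4  # number of bits (arbitrary choice for illustration)

def outputs_with_blocked_bit(k):
    """Output values reachable when bit k of the input is forced to 0."""
    return sorted({x & ~(1 << k) for x in range(2 ** n)})

# Blocking the least-significant bit leaves only even values (gaps of width 1);
# blocking a higher bit leaves wider, periodic gaps -- band-like structure.
print(outputs_with_blocked_bit(0))  # [0, 2, 4, ..., 14]
print(outputs_with_blocked_bit(2))  # [0, 1, 2, 3, 8, 9, 10, 11]
```

Plotted on a line, the reachable outputs form periodic bands whose spacing depends on which bit was blocked, which is the resemblance to interference being pointed at.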

Now, in the experiment, on the receptor side we do lose one bit of information in the two-slit case (which slit the particle went through).
I mean, if the receptor were in the business of plotting the data corresponding to the trajectory taken by the particle (all n bits of it), rather than simply the particle’s final position, then we’d have to see interference gaps (there’s one less bit, you can’t plot information you don’t have).

I kinda like this because, as bizarre as quantum interference is, from an information perspective it seems sensible.

Similarly, another aspect of the experiment that’s not as unintuitive as it appears is the fact that we get the gaps when there’s a second slit.
E.g., we say: how can more ways to reach a point result in the point not being reached?

But we could make the same comment for f(x)=x and g(x)=x+1.
They can produce every output, so how could a combination of these two functions produce fewer outputs?

But all we have to do is alternate between f and g with the right period, e.g.:
f(0)=0
g(1)=2
f(2)=2
g(3)=4
f(4)=4
g(5)=6

In this case we lose the least-significant bit and get “interference” bands.
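The alternation above can be checked directly (a minimal sketch of Job’s own f and g, nothing more):

```python
# f(x) = x and g(x) = x + 1 are each surjective on the integers,
# but alternating them with period 2 (f on even inputs, g on odd)
# yields only even outputs: the least-significant bit is lost,
# leaving gaps ("bands") at every odd value.
def combined(x):
    return x if x % 2 == 0 else x + 1

outputs = sorted({combined(x) for x in range(8)})
print(outputs)  # [0, 2, 4, 6, 8]
```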

I’m wondering if there’s a more elegant interpretation of QM out there that captures this information perspective.

23. OhMyGoodness Says:

Scott #5
You flatter me. And of course.

Scott #20
I adopted a game-theory min-max strategy and limited access to iPads and cartoons by edict. 🙂 My reasoning was that if those weren’t possible choices, then choice quality would necessarily improve. If you come up with any new strategies, please share them in a post. Human evolution hasn’t prepared our children for rational iPad use. 🙂

In the long run I’m not sure that the Bobby Fischer approach was good: locked in a small apartment every day with only a chess board for company.

24. Gerard Says:

Job #22

I don’t fully follow what you are trying to say but I want to point out a few things I suspect you may be confused about.

1) Interference is not specific to QM, it’s a property of any system in which waves linearly superimpose. Classical electromagnetism predicts the 2-slit interference pattern for monochromatic light just as well as QM does. The problem that QM addresses is that when you look closely you see that the EM field is quantized (ie. it consists of discrete particles called photons). The mysterious part is how waves that can interfere can control the probabilities of detection of discrete particles.

2) > We say, how can more ways to reach a point result in the point not being reached?

The exact same thing happens with classical EM theory. Another manifestation of this kind of effect can be seen with 3 polarizing filters. If you look through 2 filters turned 90 degrees to each other no light will pass but if you insert a third filter between them at a 45 degree angle some light will pass through the three filters despite the fact that all three of the filters are blocking part of the EM field. Again this is predicted perfectly by classical EM theory.

3) > I’m wondering if there’s a more elegant interpretation of QM out there that captures this information perspective.

Again I’m not completely following your ideas but it sounds like what you are suggesting would be some kind of hidden variable theory. The problem with those is that Bell’s Theorem shows that any such theory that is “locally realistic” would violate the experimentally confirmed predictions of QM.
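Gerard’s three-filter example (point 2 above) can be checked numerically with classical Malus’s law, which says a polarizer at relative angle θ scales the intensity by cos²θ (a sketch under the usual idealization of lossless filters):

```python
import math

def through(intensity, relative_angle_deg):
    """Malus's law: intensity transmitted by an ideal polarizer
    at the given angle relative to the light's polarization."""
    return intensity * math.cos(math.radians(relative_angle_deg)) ** 2

I0 = 1.0  # light already polarized by the first filter (at 0 degrees)

# Two crossed filters (0 then 90 degrees): essentially nothing gets through.
print(through(I0, 90))  # ~0.0 (zero up to floating-point rounding)

# Insert a 45-degree filter between them (0 -> 45 -> 90): a quarter gets through.
print(through(through(I0, 45), 45))  # ~0.25
```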

25. Joshua Zelinsky Says:

@Scott #14,

“The Manhattan Project is the exception that proves the rule here: that was successfully kept secret only because of wartime censorship and, of course, the lack of any social media at the time!”

The USSR did figure out what was going on, and one of their first major tipoffs was a drastic reduction in published nuclear papers in the US. Georgy Flyorov wrote a letter to Stalin pointing out this absence of publications, which was a major influence on the USSR starting its own atomic bomb project. See https://en.wikipedia.org/wiki/Georgy_Flyorov . So one question to ask is whether we see any similar signs of a downturn in QC publications from China? The answer seems to be no.

26. Raoul Ohio Says:

Anyone know if the “Quantum Brilliance” room temperature laptop computer works?

27. William Gasarch Says:

Many years ago I heard that China joining the World Trade Organization, trading more, and having a freer economy would lead to political freedom. That has not happened, but I wonder if it eventually will.

I’ve also heard that a country that is repressive politically will also (perhaps unintentionally) be repressive for new ideas in science. That also does not seem to have happened.

A few options:
1) The correlations between free markets, free science, free political thought are lower than I thought.

2) While in other countries the correlation might be high, China is doing something to make it all work.

3) Not enough time has passed for the correlation to take effect.

4) Something else?

28. Gerard Says:

William Gasarch #27

> I’ve also heard that a country that is repressive politically will also (perhaps unintentionally) be repressive for new ideas in science.

That doesn’t seem to always hold. The Soviet Union had many very prominent physicists and mathematicians. In fact NP-completeness was co-discovered by a Soviet mathematician, Leonid Levin.

29. Scott Says:

Gerard #28: Indeed … and Leonid was constantly in trouble with the Soviet authorities, and fled to the US as soon as he could! Of course many others stayed.

30. fred Says:

Joshua #25

it didn’t help that Klaus Fuchs, who joined the Manhattan Project in late 1943, was actually a soviet spy!
And he wasn’t the only one either:
https://en.wikipedia.org/wiki/Manhattan_Project#Soviet_spies

31. Job Says:

Gerard #24,

I take the double-slit experiment to be unintuitive as a whole (reconciling particle and wave; I file it under quantum interference).

I’m saying, the aspect of the experiment that is most intuitive to me is that, on the receptor side, once we remove access to one bit of information, we get interference bands.

Because that also happens when e.g. we remove a random bit from a circuit.
E.g. take a circuit implementing f(x)=x and then have it drop a random bit of x, you’ll see a pattern of gaps (unless you happen to pick the most-significant bit).

If we start from that perspective, do we necessarily end up with a hidden-variables theory?
Or do we get another interpretation of QM.

32. Gerard Says:

Job #31

First I would like to point out that the gaps you would get from dropping a bit don’t look much like physical interference patterns, which are continuous functions formed from superpositions of sines and cosines.

Beyond that, the overall picture of what you’re proposing still eludes me. Where would this circuit reside: inside the detector? Inside the photon? Or are you basically talking about the simulation hypothesis (which pretty much throws away both locality and realism)?

33. Job Says:

Gerard #32,

The observation is that data loss and interference bands go hand in hand.

There’s no need for an actual circuit anywhere (or simulation theory) for data loss to happen.
In the experiment, there is information making its way from the emitter to the detector.

And it’s plausible that there is less information reaching the detector in the double-slit case, because there are two possible paths. There is one bit of doubt, if you like.

I think you’re taking the circuit example too literally?

34. Alice Says:

Scott, thanks for sharing the link to the panel discussion. I learned quite a bit from it.

Like Raoul Ohio, I would be interested in your take on Quantum Brilliance and their white paper (see #26 above). They claim that (1) their qubits work at room temperature, (2) their coherence times are at least ten times longer than those of the state-of-the-art quantum supremacy experiments, and (3) the whole system can be improved rapidly by standard semiconductor manufacturing methods. I haven’t read more than the newspaper article, but at first glance this sounds like crazy hype.

35. Gerard Says:

Job #33

> The observation is that data loss and interference bands go hand in hand.

If you’re speaking purely abstractly it’s true that information plays a key role in QM. For example in the double-slit experiment if you introduce something that detects which slit a particle passed through (ie. a gain of information) then the interference pattern is lost.

I don’t see how that observation leads to any new interpretation of QM though.

36. Scott Says:

Raoul Ohio #26 and Alice #34: If you read the article, it makes clear (though not, alas, in the headline!) that the thing being described does not actually exist yet. It’s a proposal for something that a newly-formed startup wants to build. It might be an exciting proposal—I’m genuinely not sure—but right now, there are so damn many hyper-ambitious and hyper-aggressive QC hardware proposals that one can’t even keep track of them all. That’s why, in commenting on hardware developments on this blog, I usually wait until there’s some claim on the table about what’s already been done! 🙂

37. Surreal Number Enthusiast Says:

William Gasarch #27

Historical experience suggests a synthesis of (3) and (4). For example, there’s a strong case to be made that modernization made Germany into the liberal country it is today. Unfortunately, that process involved two apocalyptic wars that devastated not only it, but the entire European continent and plenty of the world as a whole.

There’s very little evidence to suggest that liberalization is a neat and predictable process as opposed to, say, a very messy stochastic process.

Liberalism’s own history – emerging in a Western Europe utterly exhausted by centuries of costly, extremely frequent, and at times apocalyptic sectarian warfare – suggests that it’s gonna be much easier to spread to countries whose populaces are similarly fed up with similar violent insanity – provided that they are actually socially developed enough to properly maintain liberal institutions.

Germany and Japan finally became stable liberal democracies in the ashes of an identitarian war of annihilation that they started and lost. I really really hope that’s not what happens with China.

38. Phil H Says:

It does all seem a bit cold war stylee to be having a discussion about China without having anyone from a relevant Chinese group participating. Were they not available/able to chat confidently in English? To the extent that this is part of the broader venture of open science, it seems like it would be better to have everyone around the table.

39. Tamás V Says:

Scott #36: On the Quantum Brilliance website, they even say that the “Gen1 Model is now commercially available”, with a picture showing the quantum hardware connected to a laptop. Alas, there is no link as to where and how one could buy the device, which I find disappointing 🙁

https://quantumbrilliance.com/quantum-brilliance-hardware

40. Scott Says:

Phil H #38: The same question came up on my Facebook, so let me repost my answer from there.

That’s a good question! I just showed up for the panel; I had nothing to do with its composition. But I suppose one possible answer is that I wouldn’t want to put a researcher under the power of the Chinese government into a position where speaking their mind might get them into trouble.

41. fred Says:

Phil H, Scott:

“It does all seem a bit cold war stylee to be having a discussion about China without having anyone from a relevant Chinese group participating”

“The same question came up on my Facebook,”

42. fred Says:

by “our internet”, I meant the rest of the world (mostly).

That’s really the one point where I think China lost its way during its rise: they would have been in a really good position if they had opened up more, especially when it comes to information and the internet. That would have gone a long way, even if they had kept very one-sided measures to protect their own market (leading to IP theft, which most Western companies don’t even seem to worry about all that much, as long as they get a piece of the Chinese pie, in the name of short-term returns).

Of course, the CCP never felt it could do that: having been a dictatorship for almost 100 years, they can’t help but be totally clueless about how they’re perceived from the outside, and totally paranoid about their own population one day overthrowing them (a dictatorship can never be sure whether it’s actually doing a good job!). Instead, Xi is going back to closing the country in on itself and to a cult of personality with Mao-style propaganda.

In that way, China is really just like North Korea in every way (except they’re the first economic power of the world), rather than like Vietnam, which is also a hardcore communist country, but people have pretty free access to the internet (at least so far… Vietnam is also rising, and it’s always easier for a paranoid dictatorship to appear to be opening up during that phase of growth).

43. JimV Says:

William Gasarch at 24 commented “The mysterious part is how waves that can interfere can control the probabilities of detection of discrete particles.”

None of my business, but I always like to point out that:

1) Continuous systems are the limits of discrete systems as the discrete increment goes to zero, e.g. finite-difference equations have the same solution forms as differential equations of the same form.

2) Actual waves in gases and liquids are composed of discrete particles (molecules).

Therefore it seems to me that what is not mysterious in a continuous system should not be mysterious in a discrete system either, but perhaps there are exceptions.
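JimV’s point (1) can be illustrated numerically: the finite-difference analogue of the oscillator equation y'' = −y reproduces cos(t) ever more closely as the step size shrinks (a sketch with arbitrary step counts):

```python
import math

def discrete_cos(t_max, n):
    """Solve the finite-difference analogue of y'' = -y
    with y(0) = 1, using n steps of size h = t_max / n."""
    h = t_max / n
    y_prev, y = 1.0, 1.0 - h * h / 2  # y[0] = cos(0), y[1] ~ cos(h)
    for _ in range(n - 1):
        # central difference: (y[k+1] - 2 y[k] + y[k-1]) / h^2 = -y[k]
        y_prev, y = y, 2 * y - y_prev - h * h * y
    return y

# The discrete solution converges to the continuous one as h -> 0.
for n in (10, 100, 1000):
    print(n, abs(discrete_cos(1.0, n) - math.cos(1.0)))
```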

44. Gerard Says:

JimV #43

Actually the comment you were referring to was mine.

> “The mysterious part is how waves that can interfere can control the probabilities of detection of discrete particles.”

The context of that quote was that I was responding to part of another comment that I read (perhaps mistakenly) as suggesting that interference itself was an intrinsically quantum-mechanical phenomenon. It wasn’t really meant to be a one-sentence summary of everything that is strange in QM (in my defense, that comment also mentioned Bell’s Theorem).

While there might be a sense in which classical matter waves could be seen to “control the probabilities of detection of discrete particles”, there are other aspects of quantum phenomena that they would probably have much more trouble explaining. In particular in QM the position and momentum operators are non-commuting. That means that once you observe the position of a particle at some definite location you lose all information about its momentum and so become unable to predict its future positions.

45. Gabriel Says:

Someone #3 and Scott #4: A few years ago I offered the following idea on what might be the “correct” way of smoothing out the log* function (which is the inverse of the fast-growing “tower” function): https://mathoverflow.net/questions/236969/a-way-to-smooth-out-the-log-function
I don’t know of any progress on this question (if you know of any progress, let me know).
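For concreteness, here is the discrete log* function Gabriel is referring to (the iterated base-2 logarithm; the open question in his link is how to smooth this integer-valued staircase into a natural real-valued function):

```python
import math

def log_star(x):
    """Iterated logarithm: how many times log2 must be applied
    before the value drops to at most 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

print(log_star(2))      # 1
print(log_star(16))     # 3  (16 -> 4 -> 2 -> 1)
print(log_star(65536))  # 4  (65536 -> 16 -> 4 -> 2 -> 1)
```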

46. Job Says:

> While there might be a sense in which classical matter waves could be seen to “control the probabilities of detection of discrete particles”, there are other aspects of quantum phenomena that they would probably have much more trouble explaining. In particular in QM the position and momentum operators are non-commuting.

Operator non-commutativity comes up in a classical context not infrequently.

For example, in real-time collaboration (where there is no locking), software systems have to account for concurrent non-commuting operations using specific techniques (such as operation transformation).

Otherwise consistency is broken and you ultimately lose the ability to make sense of the data.
I don’t know that that’s different from the situation on the quantum side.
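As a toy sketch of what I mean (the function names and the transformation rule are my own, simplified from real operational-transformation systems): two concurrent inserts into a shared string don't commute, and an OT-style position shift restores convergence.

```python
# Toy illustration of non-commuting edits in real-time collaboration,
# plus a minimal operational-transformation-style fix.
def insert(s, pos, ch):
    return s[:pos] + ch + s[pos:]

def transform(pos, other_pos):
    """Shift an insert position to account for a concurrent insert."""
    return pos + 1 if other_pos <= pos else pos

doc = "abc"
# Two concurrent edits: X at position 1, Y at position 2.
# Applied naively in different orders, the replicas diverge:
naive1 = insert(insert(doc, 1, "X"), 2, "Y")  # "aXYbc"
naive2 = insert(insert(doc, 2, "Y"), 1, "X")  # "aXbYc"

# With transformation, both replicas converge to the same document:
site1 = insert(insert(doc, 1, "X"), transform(2, 1), "Y")
site2 = insert(insert(doc, 2, "Y"), transform(1, 2), "X")
print(naive1 != naive2, site1 == site2)
```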

IMO the biggest difference between classical and quantum interference is in the number of dimensions and/or universality – it’s why we can’t use classical interference to power Shor’s algorithm.

But I don’t think it’s a trivial distinction – I’m still trying to understand where the line is exactly.

47. Zeb Says:

Gabriel #45: If you slightly modify the question of finding super-logarithms by replacing $$\log(x)$$ by $$\ln(1+x)$$, then I have a fairly elegant way to get an analytic interpolation of $$\log^*$$: https://math.mit.edu/~notzeb/logstar.pdf

We could modify the scheme you came up with by using the sequence $$n, \lfloor e^n - 1\rfloor, \lfloor e^{e^n - 1} - 1\rfloor, \ldots$$ in place of the sequence $$n, 2^n, 2^{2^n}, \ldots$$ – I’m curious about whether the resulting function would match the one I described, but I’m not sure where I’d even begin trying to prove such a thing.

48. Gerard Says:

Job #46

So I’ve only been trying to clear up what appeared to me to be some rather fundamental misconceptions, not teach a course on quantum mechanics, which I’m not qualified to do, not having studied the subject for 30 years.

In QM non-commutativity of operators implies an uncertainty principle relationship between those operators. You can choose to diagonalize the state space with respect to one or the other of those operators but not both at the same time.
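A minimal finite-dimensional sketch of that non-commutativity (my own illustration, not a full QM treatment): the Pauli X and Z matrices don't commute, which is why no single basis diagonalizes both.

```python
# The Pauli X and Z matrices don't commute: XZ != ZX.
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(X, Z))  # [[0, -1], [1, 0]]
print(matmul(Z, X))  # [[0, 1], [-1, 0]]
```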

I’m aware that non-commutativity is an important property in many other types of systems but I’m not sure that suggests any deep connection with QM.

I don’t mean to say that there is no merit in your reflections but if you really want to pursue such ideas I would offer two suggestions.

1. Make sure you have a very solid understanding of quantum mechanics as it is understood by physicists according to the current paradigms.

2. You would need to go far beyond rather vague analogies between QM and some classical systems to the point of laying out a framework for an alternate theory that could, at least conceivably, reproduce the very well tested experimental predictions of QM.

49. Job Says:

Gerard #48,

I mentioned how non-commutativity emerges in real-time collaboration because I think it’s super interesting.

I have to say, that was a disappointingly non-engaging reply.

50. Chaoyang Lu Says:

Hi Scott, very interesting discussions. This blog reminds me of the interview a few weeks ago with a journalist from Europe. Not sure if s/he published my answers. But here are the main Q & A.

1. Journalist: “… With the huge investment from Chinese government…”
Me: Sorry to interrupt, you just mentioned “huge investment”. There was a rumor about China putting in ~10 billion USD back in ~2017. The truth is, however: one, the real budget is only about 10% of that rumor (so the expected funding for the whole nation may only compete with a single company in the US, you know which one), and two, even that 10% has not been allocated yet. So we have been using more traditional or conventional funding…

2. Journalist: “… Really? Then how can you guys achieve the recent exciting results?”
Me: Yes the superconducting business is very costly. We make the best use of the money. Clear and strong leadership from Prof. Pan. Build a minimal size and complementary group with people from different backgrounds (Xiaobo Zhu, Chengzhi Peng, many others including myself). Collaboration and communication, many brilliant young minds… etc.

3. Journalist: “… I heard that China is doing quantum computing research secretly.”
Me: As far as I know, all the leading groups are publishing their results in scientific journals. For us, the tradition is to post our latest results on arXiv as early as possible. We cannot wait for the community to read them. For instance, our discovery of a new high-performance superconducting qubit (arXiv:2109.00994), which may inspire other groups.

4. Journalist: “… With these latest superconducting qubit development, can we say that now China is ahead of US? What is your group doing now?”
Me: In terms of quantum advantage, we have stronger results at the moment. But in general, I think the US is ahead. They were the first to reach the post-classical computing point, and they must be developing a series of new nanofabrication and control technologies and focusing on the next milestone: 1000 qubits and error correction, which other leading groups will also shoot for.

5. Journalist: “… Is there a competition between the US and China in quantum computing?”
Me: I would like to quote the saying that the competition is more like one between humans and nature, not between countries.

51. Gabriel Says:

Zeb #47: Interesting. So you’re defining log*(x) as log*(x) := lim_{n->infty} (2/ell^(n)(1) – 2/ell^(n)(x)), where ell(y) := ln(1+y) and ell^(m) denotes m-fold composition. Then you define e* as the functional inverse of log*. And then that allows you to define z-fold composition of ell for real z by ell^(z)(x) := e*(log*(x) – z). Right?

How would you define a continuous version of the log** function?
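For what it's worth, that limit can be checked numerically. Here is a quick Python sketch (my own; it truncates the limit at a finite n, so the defining identity only holds up to truncation error):

```python
import math

def ell(y):
    """ell(y) = ln(1 + y)."""
    return math.log(1.0 + y)

def log_star(x, n=2000):
    """Approximate log*(x) = lim_n [2/ell^(n)(1) - 2/ell^(n)(x)],
    truncating the limit at n iterations."""
    a, b = 1.0, x
    for _ in range(n):
        a, b = ell(a), ell(b)
    return 2.0 / a - 2.0 / b

# Defining property: one application of ell decrements log* by 1.
print(log_star(1.0))                               # 0 by construction
print(log_star(ell(5.0)) - (log_star(5.0) - 1.0))  # ~0 up to truncation error
```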

52. Zeb Says:

Gabriel #51: That’s right.

I didn’t think as far as how to try to compute $$\log^{**}$$, but if we wanted to try out the same sort of scheme, I suppose the first step would be to modify the $$\log^*$$ function so that it has a fixed point $$x_0$$ such that $$(\log^*)'(x_0) \le 1$$. Since my $$\log^*$$ is concave, and since shifting its output by an additive constant seems to be pretty meaningless, maybe the natural thing to do is to find the point $$x_0$$ where $$(\log^*)'(x_0) = 1$$ exactly, to consider the function $$L(x) = \log^*(x + x_0) - \log^*(x_0)$$, and compute the asymptotics of $$L^{\circ n}(x)$$ as $$n \rightarrow \infty$$ for $$x > 0$$?

It seems likely that the same exact arguments go through – perhaps the whole procedure works for smoothly iterating any unbounded Bernstein function $$f : [0,\infty) \rightarrow [0,\infty)$$ with $$f(0) = 0$$ and $$f'(0) \le 1$$, producing a new function $$F : (0, \infty) \rightarrow (-\infty, \infty)$$ which has a completely monotone derivative and satisfies $$F(f(x)) = F(x) – 1$$.

53. Chris J. Says:

How much money and effort is being spent on improving the quantum computing experiments compared to what’s being spent on improving the classical spoofing algorithms? Is it a fair race in terms of where resources are directed?

54. Scott Says:

Chris J. #53: That’s a strange comparison, because spending resources to improve classical spoofing algorithms mostly just means paying people to sit and think — and furthermore, those are often the same people who also work on improving the quantum supremacy experiments, or whatever else they might have an idea about.

In terms of hardware, I believe several million dollars have already been spent on classical cycles for the purpose of simulating quantum supremacy experiments. Admittedly, that’s compared to maybe 100 million spent on quantum supremacy overall — but it’s not clear how much additional classical cycles would tell us that we don’t already know. After all, the whole point is to find a faster classical algorithm that uses fewer cycles!

55. Chris J. Says:

Scott #54:

That’s a strange comparison, because spending resources to improve classical spoofing algorithms mostly just means paying people to sit and think

Right, but if people knew in advance that there would be, e.g., several million-dollar bounties available over time for modest improvements in the classical algorithms, I think many more mathematicians and computer scientists who wouldn’t otherwise do so could be drawn to sit and think with their spare cycles. And unlike, say, the Millennium Prize Problems, a challenge like this seems a lot more achievable for mere mortals. (There are lots of possibilities for rewards with 100 million dollars!)

56. Scott Says:

Chris J. #55: Your idea of cash bounties for classically simulating quantum supremacy experiments is actually really interesting — I like it! A few thoughts though:

(1) The economist Robin Hanson and others have advocated science-via-cash-prizes for years, but for better or worse, this is not the way existing grant agencies are set up. The prize would need to be sponsored by some deep-pocketed donor outside the usual channels.

(2) It’s been only two years since Google’s quantum supremacy experiment, and less than a year since USTC’s first announcement, and we’ve already seen a bunch of attacks chipping away at both — so it’s not as if there’s no motivation already. On the other hand, a prize could be a great way to generate publicity around the goal of classical spoofing and get more people involved.

(3) Most importantly, the prize would need to be EXTREMELY carefully designed. We’ve already seen many debates around exactly what counts as “classical spoofing” — it’s happened repeatedly, for example, that a classical algorithm beats the supremacy experiment on one specific benchmark, but then loses as soon as we switch to a slightly different benchmark. And I’m less than thrilled, I admit, about the likely prospect of awarding millions of dollars to people basically for discovering loopholes in the prize rules, and thereby creating a public impression that quantum supremacy experiments have been killed even if they haven’t been. Once cash prizes are involved, it becomes harder to appeal to common sense and “good sportsmanship.”

57. Job Says:

TBH, I never really cared for the term “spoofing”.

Because it suggests that there’s no room for practical applications, even though it might result in a useful approximation of a NISQ machine.

And it’s true that there aren’t many practical applications for NISQ machines either, but we look upon them more favorably.

But I do like how the emergence of “classical spoofing” promotes study and research into classical techniques in a quantum context without people rolling their eyes.

58. Job Says:

That’s in contrast to “quantum supremacy” that prevents people from researching non-classical techniques in a quantum context without people rolling their eyes. 🙂

59. M Says:

Scott 56, Chris J 55: I think there are a lot of issues with devising a cash bounty system for this kind of task.

For factoring integers, someone posts an integer and a reward for factoring it. When someone produces the factors, the poster can easily verify by multiplying them together, and award the bounty. For spoofing quantum supremacy experiments, in contrast, verification is as expensive as classical simulation (if I understand correctly). So how do you know that the samples are correct in order to award the bounty?

I suppose you can offer bounties for *algorithms* instead of actual samples. But then you need to be convinced of the correctness of the algorithm, which, by necessity, requires a rigorous proof of correctness rather than just a demonstration. Such verification is much harder than simply multiplying integers, and will likely take time and peer review. What happens if a more complicated, harder-to-understand algorithm comes out first, but then a simpler algorithm comes out later and is verified before the first one? Who wins the bounty?

Moreover, in the factoring case, there’s a simple binary yes/no outcome of verification. For spoofing algorithms, you have to set a threshold for how good the algorithm must be. An easy threshold is to look at poly time vs. super-poly time. But it is entirely conceivable that there is a super-poly-time algorithm that can spoof at the sizes currently being considered. Would such an algorithm count? If so, now you have to do a bunch of resource estimation in order to decide what counts and what doesn’t. You might be able to come up with something meaningful, but it won’t be elegant.

60. Scott Says:

M #59: Yes, those are all issues. With the existing quantum supremacy experiments, we’re still in a regime (50-60 qubits) where in principle everything can still be verified classically, but it’s so expensive in computer time that in practice it’s not always done. For that and other reasons (including the ones I mentioned in my point 3), it seems like maybe you’d want a committee that awards a prize for “the most exciting work on classically simulating quantum supremacy experiments,” or some such, accepting whatever subjectivity that might entail.

61. fred Says:

Scott,

Suppose I have 3 qubits that are in a state that would give one of the triplets {001},{010},{100} with equal probability.

Is it possible to apply a unitary transformation on those 3 qubits that would change their state in such a way that if we now measure the 3 qubits we would get one of the complementary triplets {000},{011},{101},{110},{111}? I.e. a sort of NOT/complement operation.

62. fred Says:

In the above, I meant a series of generic gates that work for any state of the qubits (I guess technically there’s always a specific transformation that works just for one specific state, like the example I gave).

63. Scott Says:

fred: No, I don’t think such a unitary transformation U will exist, under any reasonable formalization of what you’re asking for. If, for example, we said that U has to map a uniform superposition over S to a uniform superposition over the complement of S, whenever neither S nor its complement is empty, then U is completely determined by its behavior on sets of size 1, but then one can check that it’s not unitary, nor does it work even on sets of size 2. If, on the other hand, we said U has to map any equal superposition over S to some equal superposition over the complement of S, then when S is larger than its complement, a dimension mismatch implies that U can’t possibly be unitary.
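To make the first case concrete, here is a quick numerical sketch for 2 qubits (dimension 4, my own illustration). Determining U on singletons as above gives a matrix with zero diagonal and 1/sqrt(3) off the diagonal, and its Gram matrix has nonzero off-diagonal entries, so U is not unitary:

```python
import math

d = 4  # two qubits
s = 1.0 / math.sqrt(d - 1)
# Candidate U on singletons: U|i> = uniform superposition over the
# complement of {i}, i.e. zero diagonal, 1/sqrt(3) off the diagonal.
M = [[0.0 if i == j else s for j in range(d)] for i in range(d)]

# Gram matrix M^T M (M is real); unitarity would require the identity.
MtM = [[sum(M[k][i] * M[k][j] for k in range(d)) for j in range(d)]
       for i in range(d)]
print(MtM[0][0])  # 1: the columns are unit vectors...
print(MtM[0][1])  # 2/3: ...but not orthogonal, so M is not unitary
```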

64. fred Says:

Scott #63

I see, thanks so much!
That makes sense; otherwise I think it would be easy to use this to build an efficient quantum algorithm for NP-hard problems.

65. fred Says:

I’m still having some confusion on this.

I know that a qubit can be either 0 or 1, or anywhere “in between”, and is always measured as 0 or 1 (just like a classical bit).
But, in the case of qubits, isn’t it possible to “extend” the model to account for the fact that a qubit could have no value at all? I.e. neither 0 nor 1?

Imagine we have 3 qubits in state {001}, isn’t it possible to create 3 other qubits also in state {001}, but with a sort of opposite “phase” to the first triplet (like “anti-qubits”), so that, at the point of final measurement, the wave functions of the two triplets cancel out and nothing is measured? And, if the second triplet is encoding the entire solution space, then we end up with a wave function encoding the complementary solution space?
After all, if you consider a QC algorithm that’s supposed to solve some problem on n qubits, and, at the end, you measure those n qubits, what happens when the problem you’re solving has no solution? So that the final measurement would lead to no answer at all, i.e. no string of 0s or 1s? Do you always have to account for this with an extra qubit encoding whether the solution is valid or not (there is no solution)?

66. fred Says:

A taste of the things you now find on various official NY state websites:

covid vaccination form at https://forms.ny.gov/s3/vaccine

https://i.imgur.com/iYueHfR.png

covid data stats at https://www1.nyc.gov/site/doh/covid/covid-19-data.page#transmission

https://i.imgur.com/2psz9HS.png

67. Job Says:

fred #65,

You’ll probably just end up with whatever the initial state was.

For example, if the QC starts in the all-zero state and we then apply one Hadamard gate to each qubit, we get an even superposition over all possible states.

If we then apply a second Hadamard gate to each qubit, we’ll get back to the all-zero state – that’s because the Hadamard gate is its own inverse (so HH=I).
But we can also interpret this circuit as causing every non-zero state to cancel out via destructive interference.

Ultimately, interference is another way by which a QC can manipulate a probability distribution, but it still needs to add up to 1.
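Here is a minimal state-vector simulation of that cancellation (the helper function is my own, hypothetical): one round of Hadamards spreads |000> into a uniform superposition, and a second round makes every nonzero amplitude cancel.

```python
import math

def hadamard_all(state, n):
    """Apply one Hadamard gate to each of n qubits (state has 2**n amplitudes)."""
    state = state[:]
    for q in range(n):
        mask = 1 << q
        for i in range(1 << n):
            if not i & mask:
                a, b = state[i], state[i | mask]
                state[i] = (a + b) / math.sqrt(2)
                state[i | mask] = (a - b) / math.sqrt(2)
    return state

n = 3
zero = [1.0] + [0.0] * (2 ** n - 1)
once = hadamard_all(zero, n)   # uniform superposition: every amplitude 1/sqrt(8)
twice = hadamard_all(once, n)  # back to |000>: all other amplitudes cancel
print(twice)
```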

After all, if you consider a QC algorithm that’s supposed to solve some problem on n qubits, and, at the end, you measure those n qubits, what happens when the problem you’re solving has no solution?

It depends on the problem and what the circuit is doing. E.g. you might just get a random useless string.

Here’s a concrete example of how I tried (and failed) to solve Graph Isomorphism with a QC – maybe it’s helpful.
I started with the observation that if two graphs A and B are isomorphic, then the set of graphs we get from permuting A is the same as what we get for B (they’re just in a different order).

That means that we can assemble a circuit that implements a c-to-1 function (where c is even) when A and B are isomorphic, and 1-to-1 otherwise.
The circuit just implements f(m,i) and outputs the ith permutation of A when m==0 and the ith permutation of B when m==1.

The states will be of the form |m,i,o>, where m is a single bit, i is the permutation index and o is the output (the permuted graph).
When A and B are isomorphic, we get pairs like this:

|0,0001001,00011>
|1,0100101,00011>

So the output register (o) is the same, but the input registers (m and i) are different.
That seems perfect, when A and B are isomorphic we can pair up states for destructive interference.

One way to get interference is by adding Hadamard gates to the input register (right at the end).
But independently of how we do it, we want the pairs to interfere consistently and predictably.

Let’s say that, after interference, the input register always has an even number of 1s.
Then we’ll know that A and B are not isomorphic if we see an odd number of 1s in the register.
If it’s always even (over a number of runs) then we’ll have high confidence that the graphs are isomorphic.

Unfortunately, the kind of interference we get (e.g. by just adding Hadamard gates at the end) is not consistent or predictable (at least not in a trivial way).

It’s like sampling positions from one of many random double-slit experiments whose interference patterns are all totally different.
In that case, we’re just getting random useless values.

That’s why it’s important to orchestrate a pattern of interference.
Ultimately, there needs to be some exploitable structure.
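The starting observation (that permuting A and B yields the same set of graphs exactly when they're isomorphic) can be checked classically on tiny graphs. A quick sketch, with graph choices of my own:

```python
from itertools import permutations

def orbit(edges, n):
    """The set of all vertex relabelings of an n-vertex graph (edge list)."""
    return {frozenset(frozenset((p[u], p[v])) for u, v in edges)
            for p in permutations(range(n))}

A = [(0, 1), (1, 2), (2, 3)]  # a path on 4 vertices
B = [(1, 0), (0, 2), (2, 3)]  # the same path, relabeled: isomorphic to A
C = [(0, 1), (0, 2), (0, 3)]  # a star: not isomorphic to A

print(orbit(A, 4) == orbit(B, 4))  # True: outputs collide across m=0 and m=1
print(orbit(A, 4) == orbit(C, 4))  # False: the two orbits share no graphs
```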

68. fred Says:

Job #67

thanks, very interesting.

69. fred Says:
70. OhMyGoodness Says:

fred #7 #69

If China suddenly goes dark then the international community needs to launch an immediate full spectrum strike against their AI research centers. There can be no hesitation. The AI must be contained in a reasonably sized subsystem and interred in a Faraday cage. These measures will allow a safe trial before the World Court in the Hague for crimes against humanity. Otherwise all organic life on Earth will be eradicated except in the Harlan Ellison scenario (I Have No Mouth, and I Must Scream) where one person is retained by the AI and modified physically and then tortured for eternity.
