My problem is that even though I find the questions raised by scalable quantum computing important and interesting, I have read neither Nielsen and Chuang nor even Mermin in depth. Even though the article by Waintal is only 6 pages and feels easy to read, it uses examples like “topological surface code”, “superconducting transmon”, or “semiconducting singlet-triplet double quantum dot qubit” which are not explained in Nielsen and Chuang or Mermin. I would not normally have looked at such an article, but comment discussions on SciRate are so rare that the time spent reading Michel Dyakonov's defense and this referenced article was small.

So in order to be able to give at least some sort of reply, I have now tried to read (in depth) chapter 7 “Quantum computers: physical realization” from Nielsen and Chuang. This was an enjoyable experience, and I especially liked the NMR approach with its direct use of statistical ensembles and the density matrix. But in the end, all of the examples presented in that chapter are obviously unable to allow scalable quantum computing:

As these examples demonstrate, coming up with a good physical realization for a quantum computer is tricky business, fraught with tradeoffs. All of the above schemes are unsatisfactory, in that none allow a large-scale quantum computer to be realized anytime in the near future. However, that does not preclude the possibility, and in fact …

So it is an interesting fact that today’s schemes for quantum computing are no longer obviously unable to allow scalable quantum computing. Waintal’s sophisticated discussion of where they might still have issues limiting their scalability shows just how much the field of scalable quantum computing has advanced.

And still those excellent texts will likely remain (for a long time) the place for the uninitiated reader to learn the basics, even though they don’t explain a single physical scheme which would allow scalable quantum computing.

I think I got the connection. Wow!

In retrospect it should not be surprising that ML meets DP: if the model you train is good, then it must get rid of most or all information about which particular set of examples you’re training your model on (except that this set was taken from some larger population defined as obeying the law your model implements). So DP and ML: same fight in disguise. And it’s even almost the same math:

-DP would say take a database (a two-dimensional matrix where one row = one user, one column = one field), multiply each row by the query (a collection of weights, one per column), transform that into a probability (sum and exponentiate the result for each row, divide by the total if you wish), and finally pick one at random according to its probability.

-ML would say the database is actually a minibatch of inputs to some neuron (a two-dimensional matrix where one row = one example, one column = one feature), the weights for the query are the synaptic weights for this neuron (it’s even the same name!), the exponentiation is an activation function (you might try some others we found, DP), and the only difference is we don’t pick one winner from the batch. Well, maybe we should?

I think I can see how a quantum theorist would see this: the database/inputs is a collection of physical systems one can measure (a two-dimensional matrix where one row = one system, one column = one possible measurement basis), the multiplication stuff is actually a rotation of the measurement basis, and when you actually perform the measurement the result comes out at random according to an exponentiation (the squared amplitudes). That’s neat.
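The shared math in the DP and ML views above can be sketched in a few lines of NumPy. This is only an illustrative toy (the matrix, weights, and sizes are made up, and "exponentiate and normalize" is just a softmax), not anyone's actual mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database/minibatch: one row = one user (DP) or one example (ML),
# one column = one field (DP) or one feature (ML).
X = rng.normal(size=(5, 3))
w = rng.normal(size=3)  # the "query" weights, a.k.a. the synaptic weights

scores = X @ w                                 # multiply each row by the weights
probs = np.exp(scores) / np.exp(scores).sum()  # exponentiate, divide by the total

# DP view: pick one row at random according to its probability.
winner = rng.choice(len(X), p=probs)

# ML view: keep the whole vector of activations instead of sampling a winner.
activations = probs
```

Up to the choice of scoring function, the sampling step is the exponential mechanism of DP, while dropping it leaves the familiar softmax activation of ML.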

So first, do you see anything wrong with the picture above? Second, it seems that there is no place for entanglement in this picture. Would it be wrong to stipulate that it defines columns versus rows? (e.g. no entanglement between two rows)

No, DNA computing is a fundamentally different proposition from quantum computing, because ultimately, it’s an exotic way to implement the familiar class P (i.e., do classical computation).

OK so it’s DNA computing.

I realize it’s still just P. The part that reminds me of a QC is the output part, since it requires amplifying the signal of whichever unit encounters the solution.

Hey and natural selection could play the role of quantum interference. Both constructive and destructive, in a very real sense.

But do you think bio-chemistry itself will be used as a computing platform?

For example, you can amplify a DNA sequence using a process called PCR. It splits up the DNA and fills in each half. You end up with 2^n copies, after n steps.
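The doubling arithmetic above is simple enough to write down; a minimal sketch (the function name and starting count are my own, purely illustrative):

```python
def pcr_copies(n_cycles: int, start: int = 1) -> int:
    """Each PCR cycle splits every double strand and fills in both halves,
    doubling the number of copies; n cycles give start * 2**n copies."""
    return start * 2 ** n_cycles

# After 30 cycles, a single template yields about a billion copies.
print(pcr_copies(30))  # → 1073741824
```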

Since DNA is more or less programmable and can be scanned/manipulated (using proteins like cas9, used in CRISPR kits), there’s the potential to enable computing with large amounts of parallelism.

And it’s not unlike quantum computing in that you’d need to find a way to amplify a solution once it’s encountered – enough that you can read it out.

Relative to a QC, it’s far more resource intensive – it’s explicit exponentiality – but it has similar dynamics.
