This is the first installment of a four-part series.
On December 3, Science magazine published a scientific paper by Chinese scientists on the results of experiments with a prototype quantum computer.
It was widely reported in the media that the Chinese system needed only 200 seconds to carry out a computation that would take over two billion years using the fastest supercomputer existing today.
The experiments were designed and carried out by a top-level research group led by Pan Jianwei and Lu Chaoyang of the University of Science and Technology of China in Hefei. Pan is one of the most famous Chinese physicists today, referred to once in Nature magazine as the “Father of the Quantum.”
Pan received his PhD in physics from the University of Vienna, working under the prominent Austrian quantum physicist Anton Zeilinger. Earlier, the same research group under Pan’s leadership developed the world’s first “quantum satellite,” which in 2017 for the first time demonstrated so-called “photon entanglement” over a distance of thousands of kilometers.
Pan, Lu and their colleagues have named their quantum computing device “Jiuzhang,” after Jiuzhang Suanshu (the Nine-Chapter Book on Computation, 九章算术), a classic Chinese treatise on mathematics compiled about 2,000 years ago in the period of the Eastern Han Dynasty.
The Jiuzhang is a technological tour-de-force. It uses high-power femtosecond laser pulses in combination with nonlinear crystals to generate individual photons of light, which pass through an array of 300 beam splitters and 75 mirrors, followed by an array of 100 high-efficiency superconducting detectors.
The whole system must be “tuned” and adjusted with fantastic precision to maintain so-called phase coherence, which is required for the experiment.
This is very impressive, to say the least. But where do we stand now after the reported success of the Jiuzhang?
I asked Scott Aaronson, one of the world’s leading experts on quantum computing, to comment on the Chinese results and to put them into perspective in terms of the ongoing global race to develop usable quantum computers. The interview that follows was conducted on December 3.
Aaronson is Professor of Computer Science at the University of Texas at Austin and director of its Quantum Information Center.
Among other things, he is the co-inventor, together with his student Alex Arkhipov, of “boson sampling,” a specific quantum computational task that was employed by the Chinese researchers (in modified form) as a means to demonstrate the superiority of quantum computing over conventional computers in terms of speed.
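In boson sampling, the probability of detecting photons in a given output pattern is proportional to the squared magnitude of the "permanent" of a matrix derived from the optical network, a quantity believed to be extremely hard for classical computers to calculate. The following is a minimal illustrative sketch of the permanent itself (just the mathematical object, not the experiment), using a made-up 3×3 matrix:

```python
import itertools

def permanent(matrix):
    """Permanent of a square matrix: defined like the determinant,
    but with every term added (no alternating signs)."""
    n = len(matrix)
    total = 0
    for perm in itertools.permutations(range(n)):
        term = 1
        for i, j in enumerate(perm):
            term *= matrix[i][j]
        total += term
    return total

# A made-up 3x3 complex submatrix standing in for part of an
# interferometer's transformation (purely illustrative numbers).
A = [[0.5 + 0.1j, 0.2,        0.3j],
     [0.1,        0.4 - 0.2j, 0.5],
     [0.3,        0.2j,       0.1 + 0.1j]]

# In boson sampling, the probability of a detection pattern is
# proportional to |Perm(A)|^2 for the corresponding submatrix.
p = abs(permanent(A)) ** 2
print(p)
```

This brute-force sum over all permutations grows factorially with the matrix size, which is exactly why large instances overwhelm classical machines.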
Besides his academic research and teaching, Aaronson is a frequent public lecturer and author of a fascinating, wide-ranging book entitled Quantum Computing Since Democritus.
Readers should be forewarned that, given the topic of quantum computing, some concepts in the interview will be unfamiliar. The most important thing for the uninitiated is to have at least a rough idea about what is meant by a “qubit” and a “quantum computation.”
For them, the following introductory indications might be helpful. (Knowledgeable readers are welcome to skip over the following paragraphs to the interview).
A qubit is a physical system which, when measured with a suitable instrument, is always in either one or the other of two possible states – analogous to the 0 and 1 or “on” and “off” of a digital element in a conventional computer but with a very big difference.
When not disturbed by a measuring apparatus, a qubit can exist in a kind of combination of the two basic states, known in quantum physics as a “superposition.” One might also say that the system exists simultaneously in both states, but with different relative “weights.”
These are revealed by how often we get one, compared with the other, of the two possible “answers” when we make repeated measurements. In these so-called quantum systems, repeated experiments give not a single answer, but instead a random-like set of answers displaying a specific statistical pattern.
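As a rough illustration, the statistical pattern from repeated measurements can be mimicked with a toy classical simulation. This is not real quantum hardware, just a sketch assuming a superposition whose weights give a 30/70 split between the two possible answers:

```python
import random
from collections import Counter

# For a toy superposition a|0> + b|1>, the measurement
# probabilities are |a|^2 and |b|^2 (here assumed 0.3 and 0.7).
p_one = 0.7

# Repeated measurements yield a random-looking sequence of 0s and
# 1s; only the statistics reveal the underlying weights.
outcomes = Counter(1 if random.random() < p_one else 0
                   for _ in range(10_000))
print(outcomes)  # roughly 7,000 ones and 3,000 zeros
```

Any single measurement gives just a 0 or a 1; the superposition weights only emerge from the pattern of many repetitions, which is the point made above.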
The additional freedom provided by superposition of states permits qubits to carry vastly more information than the simple digital “0” or “1” of a classical bit. Qubits can be realized in a great variety of ways, for example in the form of photons of light, where the two base states correspond to left-handed versus right-handed, or vertical versus horizontal, polarization.
One can also use electrons, which have two opposite so-called spin states. Additionally, ions, atoms and tiny currents in superconducting chips can be made to function as qubits.
The fun starts when we prepare two or more qubits by putting each of them into one of its base states and having them interact with well-characterized systems and/or with each other in some controlled fashion and sequence, and then measure the output with detectors. This is essentially the method of quantum computation.
The interactions result in new, transformed superpositions of states of the qubits. Crucially, they undergo so-called “quantum interference” – sometimes reinforcing each other, sometimes canceling each other out, according to the paradoxical laws of quantum physics. This is the unique, defining feature of quantum computers, which distinguishes them absolutely from classical computers.
The “output” – the statistical array of states we obtain in the final measurements – will generally be very different from the original inputted array of states. We have done a “quantum computation” that generates an output for every input in the manner described.
Choosing the sources, interactions and detectors and their spatial layout might be regarded as a kind of equivalent to programming a conventional computer. However, owing to the key role played by the quantum interference principle, the mathematics is entirely different.
In a popular 2008 article in Scientific American, Aaronson explained, “A good quantum computer algorithm ensures that computational paths leading to a wrong answer cancel out and that paths leading to a correct answer reinforce.”
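A minimal worked illustration of this cancellation: applying the standard Hadamard transformation to a qubit twice returns it exactly to its starting state, because the two computational paths leading to the other outcome carry opposite signs and wipe each other out. A short sketch in plain 2×2 matrix arithmetic:

```python
import math

s = 1 / math.sqrt(2)
# Hadamard transformation: sends |0> into an equal superposition.
H = [[s,  s],
     [s, -s]]

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = [1.0, 0.0]        # start in |0>
state = apply(H, state)   # equal superposition of |0> and |1>
state = apply(H, state)   # paths to |1> carry +1/2 and -1/2: they cancel

print(state)  # ≈ [1.0, 0.0]: the amplitude for |1> has interfered away
```

The two paths into the second outcome contribute amplitudes of opposite sign (destructive interference), while the paths into the first reinforce, exactly the mechanism Aaronson describes.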
It is easy to see that with quantum computation we have entered a completely different universe from that of conventional digital computers. There is no simple correspondence between the two. However, it can be shown that in principle quantum computers can carry out any computation that can be done on a conventional computer.
Realizing quantum computation poses enormous technological challenges. A major problem is to suppress all kinds of “noise” and disturbances from the outside, which could destroy the delicate correlations in the system and render the measurements useless.
Starting in the 1990s, error correction methods have been devised to reduce the effect of the noise by encoding the quantum information redundantly and processing the measurement data accordingly.
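As a loose classical analogy (real quantum error correction is considerably more subtle, since it must protect superpositions without directly measuring them), the simplest error-correcting code stores one bit as three copies and recovers it by majority vote:

```python
def encode(bit):
    """Repetition code: store one logical bit as three copies."""
    return [bit, bit, bit]

def decode(bits):
    """Majority vote recovers the logical bit despite one flip."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1            # noise flips the first copy
print(decode(codeword))     # 1: the single error is corrected
```

Quantum codes pursue the same redundancy idea, but spread the information across many entangled qubits so that errors can be detected and undone without destroying the superposition.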
The first teaser installment of the interview follows:
Jonathan Tennenbaum: Dr Aaronson, the announcement of the quantum computing experiment in China has caused great excitement around the world, not least of all in China itself. I would like you to comment from your own standpoint on the new results and what they mean for the effort to realize quantum computers.
Scott Aaronson: Well, you know, this is still a developing story. People are still trying to poke holes in this and we will be seeing where it goes. But the context is that there was a race to demonstrate what was called “quantum supremacy,” which basically just means showing that you can do something faster with a quantum computer than we can do with any existing classical computer.
Crucially, the computation does not have to be useful. We’re not talking about a useful machine or a scalable machine or a universal machine. I use the analogy of the Wright brothers’ first airplane, which gave a proof of principle. This is an answer to the skeptics, who said that you would never get any speedup with a quantum computer.
People started talking about this goal about a decade ago. The term “quantum supremacy” was coined by John Preskill. I know it’s now regarded by some as not politically correct.
JT: The Chinese authors use the term “computational advantage” of quantum computing compared with conventional digital computers.
SA: What happened a decade ago was that my students and I wrote a paper, and some other people had papers, that said: Look, if your goal is just to demonstrate a quantum speedup and it doesn’t have to be useful, then we think that this could be done with about 50 qubits, or in the case we are talking about, 50 photons.
So we had a theoretical proposal for what you can do, namely to look at what we call a sampling problem.
JT: This is the now-famous boson sampling problem.
SA: Yes. These are not problems with a single right answer, the way people may have heard of Shor’s algorithm.
Part two of the four-part interview will be published soon. Check Asia Times regularly for the next installment.