Chinese researchers from the University of Science and Technology of China check on the operation of a quantum computer at a lab in Shanghai in a file photo. Image: Xinhua

This is the second installment of a four-part series. Read part one here.

Jonathan Tennenbaum: Dr Aaronson, the announcement of the quantum computing experiment in China has caused great excitement around the world, not least of all in China itself.  I would like you to comment from your own standpoint about the new results and what they mean for the effort to realize quantum computers.

Scott Aaronson: Well, you know, this is still a developing story. People are still trying to poke holes in this, and we will be seeing where it goes. But the context is that there was a race to demonstrate what was called “quantum supremacy”, which basically just means showing that you can do something faster with a quantum computer than we can do with any existing classical computer.

Crucially, the task does not have to be useful. We’re not talking about a useful machine or a scalable machine or a universal machine. I use the analogy of the Wright brothers’ first airplane, which gave a proof of principle. This is an answer to the skeptics, who said that you would never get any speedup with a quantum computer.

People started talking about this goal about a decade ago. The term “quantum supremacy” was coined by John Preskill.  I know it’s now regarded by some as not politically correct.

JT: The Chinese authors use the term “computational advantage” of quantum computing compared with conventional digital computers.

SA: What happened a decade ago was that my students and I wrote a paper, and some other people had papers, that said: look, if your goal is just to demonstrate a quantum speedup and it doesn’t have to be useful, then we think that this could be done with about 50 qubits or, in the case we are talking about, 50 photons.

So we had a theoretical proposal for what you can do, namely to look at what we call a sampling problem.

JT: This is the now-famous boson sampling problem.

SA: Yes. These are not problems with a single right answer, unlike, say, Shor’s algorithm, which people may have heard of.

JT: The problem of factorizing a given large integer into its prime factors.

SA: Right. That was the famous one from 26 years ago. The sampling proposal is very different from that. We’re not looking for a single right answer. We just specify a distribution over possible outputs and ask the quantum computer to produce samples from that distribution.

In this case, if we had 50 qubits, for example, then the output could be any 50-bit string. There are about a quadrillion of them. And so you will probably never see the same output twice, however long you run the experiment.
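
To put rough numbers on that, here is a quick back-of-the-envelope check (our own illustrative arithmetic, not figures from the paper), treating the outputs for simplicity as if they were uniformly distributed over all 50-bit strings:

```python
# Quick arithmetic behind "about a quadrillion" and "never see the same output twice".
# Simplifying assumption: outputs are uniform over all 50-bit strings; the real
# distribution is non-uniform, but the conclusion barely changes.
D = 2 ** 50                                   # number of possible 50-bit strings
print(f"{D:.3e}")                             # ~1.126e+15, roughly a quadrillion

M = 1_000_000                                 # suppose we collect a million samples
expected_repeats = M * (M - 1) / (2 * D)      # birthday-problem estimate
print(f"{expected_repeats:.1e}")              # ~4.4e-04: a repeated output is very unlikely
```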

But now comes the crucial point. In an experiment, not all of the outputs are equally likely. So after we see a bunch of outputs – it could be a relatively small number, a few thousand or a few million of them – we can do some statistics on them and check: are these consistent with having been sampled from that distribution?

The theoretical questions that we had to answer were, first of all, how do you check with your classical computer that the right distribution was being sampled? The best answer we have for that question still involves doing a brute force calculation with your classical computer.
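
As a toy illustration of that kind of check (this is not either group’s actual verification pipeline, just the general idea in miniature): brute-force the ideal probabilities for a small instance, then see whether the observed samples land disproportionately on the outcomes that the ideal distribution says are heavy. The “ideal” distribution below is a random stand-in, and the score is a cross-entropy-style statistic of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # tiny stand-in "device": 10 bits, 1024 outcomes
ideal = rng.exponential(size=2 ** n)     # stand-in for brute-forced ideal probabilities
ideal /= ideal.sum()

device_samples = rng.choice(2 ** n, size=5000, p=ideal)   # a faithful sampler
spoofer_samples = rng.integers(0, 2 ** n, size=5000)      # a "spoofer" sampling uniformly

def score(samples):
    # average ideal probability of the observed outcomes (a cross-entropy-style statistic)
    return ideal[samples].mean()

print(score(device_samples), score(spoofer_samples))
# The faithful sampler scores markedly higher than the uniform spoofer,
# which is the kind of separation such a benchmark looks for.
```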

Quantum computer experiment at the Functional Micro-Nanocircuits Laboratory, Bauman Institute, Moscow. Source: Wikimedia 

JT: At the end of their paper, the Chinese researchers say it would take the largest supercomputers available today about two billion years to get the same result that they got in two hundred seconds.

SA: Well, that has to be interpreted with a grain of salt because that’s using the best algorithm [for the supercomputer] that they could think of. This is exactly the question for theorists. What we are pretty confident about is that if the experiment were done perfectly, then any classical algorithm is going to take an amount of time which increases like 2 to the Nth power, where N is the number of photons.

JT: So that goes up very fast as the number of photons increases.

SA: Yeah. So we’re very confident that a classical computer would need that kind of exponential scaling to simulate the ideal version of this experiment.
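
To get a feel for that scaling, a quick piece of illustrative arithmetic (ours, not the paper’s cost model):

```python
# If the classical cost grows like 2**N, each additional photon doubles the workload.
for n in (30, 50, 70):
    print(n, f"2**{n} = {2 ** n:.2e}")
print("70 photons vs 50 photons:", 2 ** 70 // 2 ** 50)   # 1,048,576 times harder
```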

Now, one trouble is that in the real world, the experiment is noisy. So then you have to say, well, maybe it’s easier for a classical computer to simulate a noisy version of the experiment.

And also you never directly see the distribution that is being sampled. All you see are some specific samples, to which you then apply some test. So you have to ask whether it might be easier for a classical computer to spoof that test.

JT: That is the benchmarking the Chinese group talks about in its report.

SA: Right. They apply a benchmark to their samples, and they say that this benchmark distinguishes their samples from the simplest alternative hypotheses about what could be going on – hypotheses that do not involve quantum supremacy.

Now that the paper is out, what’s happening is that all the skeptics are taking a crack at it, asking: maybe there are other things that could be going on that are not quantum supremacy but would still pass your test.

JT: The experiments display impressive technological virtuosity on the part of the Chinese researchers. How would you compare their work with what is being done in other parts of the world?

SA: It’s a good question. The first thing to say is that this is the second claimed demonstration of quantum supremacy. The first one was a year ago, by Google, at a lab Google has in Santa Barbara. They had a completely different engineering approach, using superconductors.

They fabricated a chip and put it in a dilution refrigerator and took it down to a temperature of about 10 millikelvins [just thousandths of a degree above absolute zero]. That makes the chip superconducting. Then you have currents that can flow in two different states, representing zero or one. Because the chip is superconducting, those currents can also flow in a superposition of the two.

Google has been working on these superconducting qubits for six years or so. In 2015 they decided: We want to go for quantum supremacy. Not everyone has even been trying for this. Some people said that we should just skip over this. This is just a demonstration that only scientists care about. We should just go straight for useful applications.

But it is important to prove the reality of the phenomenon in question before you start hawking it to the customers.

JT: You wouldn’t have wanted the Wright brothers to start taking passengers right away.

SA: Exactly. When you see a lot of this stuff – either from D-Wave or to some extent from IBM, booking customers for the quantum computer that they’re still building – it feels a bit like the Wright brothers booking passengers.

Anyway, so Google said, in 2015: We want to do quantum supremacy with 50 or so superconducting qubits, and we want to do something like boson sampling, but we’re not going to use bosons.

They could have actually used their superconducting qubits to control photons, like microwave photons, and that would probably have been another hundred million dollars of engineering.

They said: We’re just going to build what we know how to build, which is superconducting qubits. And we’re going to leave it to you to adapt your theory to that. And so, around 2017 a student and I, along with others, did the theoretical work.

JT: So the Chinese experiment is a kind of independent confirmation of what Google did.

SA: Yes, [the Google experiment] was inspired by boson sampling but it was different. It even had some advantages over boson sampling. The output is more completely random-looking, which is good if you’re trying to evade simulation by a classical computer.

So, in 2019, they announced that they had done this with a chip called Sycamore that they built with 53 qubits.

It wasn’t obvious whether, after that, anyone was still going to bother with boson sampling, which was our original proposal. But in China, the group of Chaoyang Lu said: We’re still doing this. And at about the same time as Google’s supremacy announcement, they announced that they had demonstrated boson sampling with 14 detected photons.

JT: Now they say they are getting something like 70 photons.

SA: I think the average is about 50, and their largest events have about 70. The number of photons varies from one detection to the next. So it’s a similar regime to the Google experiment, basically, because in both cases the difficulty for a classical computer goes like 2 to the N, as far as we know.

JT: What would be the significance of this from a technological standpoint?

SA: It is a strong piece of evidence in favor of photonic quantum computing, that this is a viable path.

Ion trap used in quantum computer experiment. Source: Wikimedia

JT: Was that doubted before?

SA: Yes. Look, I wasn’t sure either. I thought: Maybe people will just give up on that. There were smart people that said, OK, it’s a nice theory, but you’re not going to be able to scale this beyond ten or so photons.

JT: Because it is too hard to control the noise.

SA: Exactly. The fundamental problem with scaling these experiments is that in order to see the quantum interference effect that you’re looking for, you need all the photons to arrive at the detectors at the same time.

The main problem is you need a huge amount of control over the system. You need your photon sources to each spit out a photon exactly when you tell them to. We do have lots of single photon sources, but typically they are non-deterministic; they will spit out a photon when they feel like it.

People know how to get what’s called a heralded source – meaning, whenever it happens to spit out a photon, it actually spits out two photons that are flying in opposite directions. This is useful because as soon as you detect one, then you know there is one going the other way.

But the problem is that when you have got like 50 or a hundred of these sources, what’s the chance that they’re going to all spit out a photon at exactly the same time? It decreases exponentially with the number of sources.
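
A rough illustration of that exponential decrease (the firing probability here is an assumed number, not a measured one): if each heralded source fires on a given pulse with probability p, the chance that all k of them fire together is p to the k.

```python
# Assumed per-pulse firing probability for a single heralded source (illustrative only).
p = 0.5
for k in (10, 50, 100):
    print(k, f"probability all {k} sources fire together: {p ** k:.1e}")
# With 100 sources the chance is ~8e-31 per pulse -- essentially never,
# even at millions of pulses per second.
```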

People needed to figure out how to deal with that – to get more control, and also to improve the boson sampling proposal itself so that it would work with this kind of source. The experimentalists had some of the key ideas here.

They said: We can improve on what Aaronson and Arkhipov did – we can retrospectively define the input state to involve whichever sources happened to generate a photon.
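
A sketch of why that retrospective trick helps (scattershot-style reasoning, using the same assumed 50% firing probability as above; this is not the Jiuzhang team’s exact scheme): if you accept a pulse whenever enough sources fired and simply record which ones they were, usable events occur on nearly every pulse instead of almost never.

```python
from math import comb

p, k = 0.5, 100                 # assumed firing probability, number of heralded sources
# Probability that at least 40 of the 100 sources fire on a given pulse:
prob_at_least_40 = sum(comb(k, j) * p ** j * (1 - p) ** (k - j) for j in range(40, k + 1))
print(f"{prob_at_least_40:.3f}")   # ~0.98 -- versus ~8e-31 if all 100 had to fire at once
```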

JT: So what would you say in summary, in terms of what the Chinese have accomplished with the Jiuzhang?

SA: Let me put it this way. I think whether or not it ends up being a demonstration of quantum supremacy that resists all the attacks of skeptics, it is clearly an impressive experiment. Even the skeptics of the experiment agree about that.

Part three of the four-part series will be published soon. Check Asia Times regularly for the next installment.