In the wee hours of Wednesday morning, IBM gave an unwary world its first publicly accessible quantum computer. You may worry that it’s time to tear up your passwords and throw away your encryption, because all is now lost. But it is probably a bit early to call time on the world as we know it. You see, the whole computer is just five qubits.

This may sound like some kind of publicity stunt; perhaps it’s IBM’s way of winning back some attention from D-Wave’s quantum computing efforts. But a careful look shows that serious science underpins the announcement.

The IBM system is, on a very superficial level, similar to that of D-Wave. Both systems use superconducting quantum interference devices as qubits (quantum bits). But that’s where the similarity ends. As IBM points out, its quantum computer is a universal quantum computer, and D-Wave’s is not.

Another big difference: IBM can address and measure the state of each qubit individually. The company can measure (and has measured) all the critical features of its device. If you want to know how long a qubit stays in a given state, IBM can tell you. IBM has even shown that addressing multiple qubits at random does not disturb the state of the others too much. Big Blue is truly building its quantum computer from the ground up, while making sure the technology meets real-world requirements.

And we know quite a bit about the hardware. In 2015, IBM released a detailed schematic of how the circuit is put together and how it is connected to the outside world. The schematic probably isn’t detailed enough for me to build the circuit in my local clean room, but I bet my friends down the hall who work in the field could. Disclosing details like this is what the industry calls “doing science.”

I was lucky enough to get a copy of the article about the device. The main focus of the hardware right now isn’t computing (which is meaningless with five qubits), but making sure the device computes reliably thanks to good error correction.

## Let me get my red pen

Modern memory and communication systems would not function without error correction. The essential idea is to build some redundancy into the data so that you can *always* tell when data is corrupted and sometimes resolve the issues without having to ask for information again.

A common scheme in classical communications is to take a group of bits, say four, and perform a series of mathematical operations on them (usually exclusive-or operations) to produce a single extra bit of data. (Note that I’m not outlining any exact scheme here, so the precise figures vary slightly from code to code.) All five bits are sent to the receiver, which performs exactly the same operations.

The receiver can always tell whether there is an error in any of the four data bits by checking whether the operations return the same value for the fifth bit. Depending on how the operations are structured, one error can be corrected outright and a second detected. In short, at a cost of about a quarter of your communication channel’s capacity, you can ensure that almost no fatal errors get through.
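To make the idea concrete, here is a toy Python sketch of the detection half of the scheme: a single parity bit computed by exclusive-or over four data bits. This is an illustration only, not any real link’s code; correcting (rather than merely detecting) errors takes the more structured schemes hinted at above.

```python
# Toy sketch: one parity bit over four data bits.
# The sender appends the XOR of the data bits; the receiver recomputes
# the XOR over all five bits. A nonzero result reveals a flipped bit.

def add_parity(bits):
    """Append one parity bit (XOR of all data bits) to a list of 0s and 1s."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def check(word):
    """Return True if the received word passes the parity check."""
    parity = 0
    for b in word:
        parity ^= b
    return parity == 0

sent = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert check(sent)                # no error: check passes

corrupted = sent.copy()
corrupted[2] ^= 1                 # flip one bit in transit
assert not check(corrupted)       # the single-bit error is detected
```

Note that a lone parity bit only *detects* a single flip; pinpointing which bit flipped, so it can be corrected, is where the structured ordering of operations comes in.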

There are, of course, a ton of implementation details beyond the math that you need to consider to make this work. But it does work. When I briefly worked in the microwave link industry, we expected about one bit in a billion to be wrong with error correction turned off; with error correction enabled, the expectation was better than one bit in a trillion.

However, all this applies to classical communications and computing. The quantum world, as usual, is a completely different story.

## For quantum error correction you need a quantum red pen

Quantum computing has a problem with error correction, a much worse problem than classical computers have. Let’s put it in perspective. One option for a qubit is a superconducting quantum interference device, where the typical energy difference between a one and a zero is on the order of 10^{-24} joules. For a classical system, we can choose any energy difference we want by setting thresholds for voltages and/or currents, so we set it to something convenient, usually on the order of a volt.

To get close to that same 10^{-24} J difference between a one and a zero, a chip would have to operate with a gate voltage of about ten microvolts. Classical and quantum computers thus operate on energy scales that differ by a factor of at least 100,000.
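A quick back-of-envelope check of these energy scales, assuming a single elementary charge moved through the stated voltages (rough figures for illustration, not device specifications):

```python
# Rough energy-scale comparison: one elementary charge across each voltage.
e = 1.602e-19                 # elementary charge, coulombs

quantum_energy = e * 10e-6    # ~10 microvolt scale -> joules
classical_energy = e * 1.0    # ~1 volt scale -> joules

print(f"quantum scale:   {quantum_energy:.1e} J")    # ~1.6e-24 J
print(f"classical scale: {classical_energy:.1e} J")  # ~1.6e-19 J
print(f"ratio: {classical_energy / quantum_energy:.0f}")  # ~100000
```

So the tiny quantum energy gap lands right at the 10^{-24} J figure above, while a volt-scale classical signal sits five orders of magnitude higher.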

To make matters worse, a qubit’s state is not a one or a zero but a probability of producing a one or a zero when measured, and this probability evolves over time. I can take two qubits, set them identically, and measure them some time later. After repeating this many times, I should find that the two probability distributions are the same. That’s the theory; in practice, the qubits experience slightly different environments, so after a while one will have drifted slightly from the other in an unpredictable way.

So in addition to a bit-flip error, you can also have a phase-flip error: an error in the *relation* between two qubits rather than an error in the value of a particular qubit.

And this makes error correction for a quantum circuit very difficult, and very, very necessary. The first classical processors could be implemented with minimal (or no) error correction; quantum computing requires sophisticated error correction from the start. It’s not just a matter of flipping a bit back: you need to know how the different qubits evolve differently over time and how you can correct for this *before measuring them*.

Let me give you an idea of how difficult this is. A typical qubit can have a lifetime of about 50 microseconds and a coherence time of 20 microseconds. What does this mean? It means that once your qubits are set up, you must apply error correction within the first 20 microseconds and complete one step of your computation within 50 microseconds. That doesn’t seem so bad, right?

But there is always a ‘but’. The operations that manipulate a qubit, whether to perform a logical operation or to correct an error, involve microwave pulses of a certain amount of energy. They can be short, sharp pulses or long, slow pulses, as long as the area under the pulse stays the same. Unfortunately, short, sharp pulses cause problems, so typical pulses are 50 to 60 nanoseconds long. Since everything has to be done within 50 microseconds, that gives you a total budget of about 1,000 operations for computation and error correction combined. This makes it an extra-difficult problem.
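In round numbers, the budget works out like this (figures taken from the paragraphs above, in nanoseconds to keep the arithmetic exact):

```python
# Timing budget, in round numbers from the article.
lifetime_ns  = 50_000   # qubit lifetime, ~50 microseconds
coherence_ns = 20_000   # coherence time, ~20 microseconds
pulse_ns     = 50       # one microwave control pulse, ~50 nanoseconds

# Computation plus error correction must fit within the lifetime...
ops_budget = lifetime_ns // pulse_ns
print(ops_budget)               # 1000 pulses, total

# ...and the first round of error correction must land within coherence.
first_correction_budget = coherence_ns // pulse_ns
print(first_correction_budget)  # 400 pulses
```

A thousand operations sounds like a lot until you remember it has to cover the error correction too.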

## Get rid of bad neighbors

The researchers at IBM have tackled this particularly difficult problem. To make things easier, they use a cluster of four data qubits linked to a fifth qubit, the syndrome qubit. The state of the syndrome qubit depends on the state of all the other qubits. This connectivity is used to determine the rate at which the qubits introduce errors into each other, and that knowledge is then used to correct errors before the computation completes.

It took me a bit of time to figure out the system, but this is basically how it works. The qubits are electromagnetic waves that oscillate back and forth in an electronic device. A small amount of each wave is coupled out and mixed with the wave from the neighboring qubit. If the two qubits are in the same state, their electromagnetic waves combine in phase and deliver a strong signal to the syndrome qubit. But if the two qubits are out of phase, the waves cancel and deliver no signal to the syndrome qubit. The second pair of qubits does the same thing, providing its own combined signal to the syndrome qubit. This is called interference.

The interference between the four data qubits determines the state of the syndrome qubit. This can then be used to correct some of the errors that pop up in the four data qubits (e.g., a flip of a single qubit). Essentially, the state of the syndrome qubit determines where to send the signal that fixes the error.
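Here is a cartoon of that interference in Python, modeling each coupled-out signal as a small complex-valued wave. This illustrates the wave addition only; it is not a simulation of IBM’s device.

```python
import cmath

def coupled_signal(phase):
    """The small wave coupled out of a qubit, with the given phase (radians)."""
    return 0.1 * cmath.exp(1j * phase)

# Pair in the same state: equal phases -> waves reinforce at the syndrome qubit.
same = coupled_signal(0.0) + coupled_signal(0.0)

# Pair in opposite states: phases pi apart -> waves cancel.
opposite = coupled_signal(0.0) + coupled_signal(cmath.pi)

print(abs(same))      # 0.2  (constructive interference: strong signal)
print(abs(opposite))  # ~0.0 (destructive interference: no signal)
```

The syndrome qubit effectively reads off which pairs arrive in phase, which is what lets it flag a single flipped data qubit.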

To demonstrate how effective this error correction scheme is, the researchers focused on errors introduced by crosstalk. Crosstalk occurs when a qubit that is *not* involved in the current operation causes the value of a qubit that *is* involved to flip. An example: the value of one of the four qubits is used to conditionally flip the value of a second. The two remaining qubits are in a set state but are not involved in this particular operation. However, because all the qubits are linked, the two bystanders can introduce an error by causing the control or data qubit to change state unexpectedly.

The crucial thing about IBM’s system is that it can correct the error *during* the calculation. That is, the researchers don’t read out the qubit values and then try to do some error correction afterward. Instead, they expect a bystander qubit to generate a wave that interferes and causes an error, and they remedy this by understanding how the error is introduced and undoing it before it causes a problem.

Simply put, the researchers don’t know the state of any of the qubits, but they do know how to flip states, and they know the rate at which qubits cause each other to flip. They use that knowledge for error correction.

The operation takes a certain time. During this time, the state of the bystander qubit interferes via an electromagnetic wave of a certain phase and amplitude. The researchers wait a moment and then flip the bystander qubit. This keeps the amplitude the same, but reverses the phase, turning constructive interference into destructive interference. After waiting the same amount of time, the net effect of the bystander qubit is exactly nothing, because the reversed phase undoes everything the qubit did to its neighbors initially.

Then the researchers flip the bystander back into its original state, so that it too can be used without errors. By carefully sequencing the operational pulses and these error-correcting pulses, the researchers can significantly reduce the likelihood of errors.
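The refocusing trick can be caricatured as a phase that accumulates at some fixed rate and is then unwound by the mid-sequence flip. The rate and step count here are arbitrary; this is a sketch of the bookkeeping, not the actual qubit dynamics.

```python
# Toy model of refocusing: the bystander's influence accumulates as a phase
# at a fixed rate; flipping the bystander reverses the sign of that rate,
# so the second half of the wait undoes the first half.

rate = 0.25    # arbitrary phase accumulation rate (radians per step)
steps = 100

phase = 0.0
for _ in range(steps):    # first half: the error phase builds up
    phase += rate

# flip the bystander: same amplitude, reversed phase
for _ in range(steps):    # second half: the phase unwinds
    phase -= rate

print(phase)   # 0.0 -- the bystander's net effect cancels
```

The same idea, applied with real pulses on real qubits, is what turns the bystander’s constructive interference into destructive interference and back to a net nothing.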

## How can I play with this?

IBM’s announcement did not say where you can experience this beautiful new device, but a quick web search revealed that you can access it here (registration required). Apparently IBM really wants sign-ups from people with a serious need for quantum computing, so I haven’t actually tried it myself.

Five qubits is still too few to do anything useful anyway. But if you think you’ll have a good reason to use quantum computing in the future, I suggest you sign up and play, because IBM has a very aggressive scaling timeframe: it expects to have between 50 and 100 qubits within the next decade, and IBM says useful things can be done with 50 qubits. That means there should be usable qubit counts within five years and toys that can do fun tricks a few years after that.

If you want to be ready, it might be best to spend some time understanding the system now.