[Image: a chip surrounded by complicated supporting hardware.]

Today, quantum computing company D-Wave announces the availability of its next-generation quantum annealer, a specialized processor that uses quantum effects to solve optimization and minimization problems. The hardware itself isn’t much of a surprise — D-Wave discussed the details months ago — but D-Wave spoke to Ars about the challenges of building a chip with more than a million individual quantum devices. And the company is tying the release of the hardware to the availability of a new software stack that acts a bit like middleware between the quantum hardware and classical computers.

Quantum annealing

Quantum computers built by companies like Google and IBM are general-purpose, gate-based machines. They can solve any problem and should show dramatic speedups for specific classes of problems, or at least they will once the qubit count is high enough. At the moment, these quantum computers are limited to a few dozen qubits and have no error correction. Bringing them up to the scale needed poses a series of difficult technical challenges.

D-Wave’s machine is not for general use; it’s technically a quantum annealer, not a quantum computer. It performs calculations that find low-energy states for various configurations of the hardware’s quantum devices. As such, it only works if a computational problem can be translated into an energy minimization problem that fits one of the possible configurations of the chip. That’s not as restrictive as it might sound, since many forms of optimization can be translated into an energy minimization problem, including things like complex planning problems and protein structures.

It is easiest to think of these configurations as a landscape with a series of peaks and valleys, where solving a problem is the equivalent of searching the landscape for the lowest valley. The more quantum devices there are on D-Wave’s chip, the more thoroughly it can sample the landscape. So increasing the number of qubits is absolutely critical to the utility of a quantum annealer.
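To make that concrete, here is a minimal sketch in plain Python (not D-Wave’s actual software) of the kind of energy minimization an annealer performs. The biases and couplings are invented purely for illustration, and an exhaustive search stands in for the hardware’s physical exploration of the landscape.

    # Minimal Ising-style energy minimization: find the spin assignment
    # that minimizes E(s) = sum_i h[i]*s_i + sum_(i,j) J[i,j]*s_i*s_j.
    # The h and J values below are hypothetical.
    from itertools import product

    h = {0: 0.5, 1: -0.3, 2: 0.2}      # per-qubit biases (made up)
    J = {(0, 1): -1.0, (1, 2): 0.8}    # pairwise couplings (made up)

    def energy(spins):
        """Energy of one configuration; spins[i] is -1 or +1."""
        e = sum(h[i] * s for i, s in enumerate(spins))
        e += sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)
        return e

    # Brute force over all 2^3 configurations; this is only feasible for
    # a handful of variables, which is why sampling the landscape with
    # real hardware (and more qubits) matters.
    best = min(product((-1, 1), repeat=len(h)), key=energy)
    print("lowest-energy configuration:", best, "energy:", energy(best))

An annealer does not enumerate configurations like this; the physical system relaxes toward its low-energy states, and more qubits mean more of the landscape can be represented and searched at once.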

This simplicity is part of the appeal of D-Wave’s approach, as it is much easier to add qubits to a quantum annealer; the company’s current offering has 2,000 of them. There is also the matter of fault tolerance. While errors in a gate-based quantum computer typically result in a useless output, failures on a D-Wave machine usually mean that the answer it returns is a low-energy state, just not the lowest one. And for many problems, a reasonably optimized solution can be good enough.

What has been less clear is whether the approach offers real advantages over algorithms run on classical computers. For gate-based quantum computers, researchers have already worked out the math to demonstrate the potential for quantum supremacy. That is not the case for quantum annealing. In recent years, there have been a number of instances where D-Wave’s hardware showed a distinct advantage over classical computers, only to see a combination of algorithm and hardware improvements on the classical side erase the difference.

Across generations

D-Wave hopes the new system, which it calls Advantage, can demonstrate a significant performance difference. Prior to today, D-Wave offered a 2,000-qubit quantum annealer. The Advantage system scales that number up to 5,000. Just as critically, those qubits are interconnected far more extensively. As mentioned above, problems are structured as a specific configuration of connections among the machine’s qubits. If no direct connection between two of them is available, some qubits must be used to establish the connection and are thus not available for problem-solving.

The 2,000-qubit machine had a total of 6,000 possible connections between its qubits, an average of three for each of them. The new machine brings that total to 35,000, an average of seven connections per qubit. Obviously, this makes it possible to map many more problems onto the hardware without dedicating qubits to establishing connections. A white paper shared by D-Wave indicates it works as expected: bigger problems fit into the hardware, and fewer qubits need to be used as bridges to connect other qubits.
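As a rough illustration of what using a qubit as a bridge means, the sketch below reuses the same brute-force approach with made-up numbers (again, not D-Wave’s tooling). Two variables need to interact, but the physical qubits holding them share no coupler, so a third qubit is chained to one of them with a strong ferromagnetic coupling and carries the interaction instead.

    # Hypothetical example: qubits 0 and 2 need to interact but have no
    # direct coupler, so qubit 1 is spent as a bridge. A strong negative
    # (ferromagnetic) chain coupling forces qubit 1 to copy qubit 0.
    from itertools import product

    h = {0: 0.5, 1: 0.0, 2: -0.3}      # qubit 1 carries no bias of its own
    J = {(0, 1): -2.0,                 # chain: qubit 1 mirrors qubit 0
         (1, 2): 0.8}                  # the 0-2 interaction, routed via 1

    def energy(spins):
        e = sum(h[i] * s for i, s in enumerate(spins))
        e += sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)
        return e

    best = min(product((-1, 1), repeat=len(h)), key=energy)
    assert best[0] == best[1]          # the chain held: the bridge agrees
    print(best, energy(best))

With seven couplers per qubit instead of three, fewer variables need chains like this, which is exactly the benefit the white paper describes.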

Each qubit on the chip takes the form of a loop of superconducting wire called a Josephson junction. But there are many more than 5,000 Josephson junctions on the chip. “The bulk of that is involved in superconducting control circuits,” Mark Johnson, D-Wave’s processor lead, told Ars. “They’re really like digital-to-analog converters with memory that we can use to program a particular problem.”

To get the level of control needed, the new chip has more than a million Josephson junctions in total. “Let’s put that in perspective,” Johnson said. “My iPhone has a processor with billions of transistors on it. So in that sense it’s not much. But if you’re familiar with superconducting integrated circuit technology, it’s way out on the edge of the curve.” Connecting everything also required more than 100 meters of superconducting wire – all on a chip about the size of a thumbnail.

While this is all made using standard silicon fabrication tools, that’s just a handy substrate – there are no semiconductor devices on the chip. Johnson couldn’t go into details about the manufacturing process, but he was willing to talk about how these chips are made more generally.

This is not TSMC

One of the major differences between this process and standard chip production is volume. Most of D-Wave’s chips are located in its own facility and accessible to customers through a cloud service; only a handful are bought and installed elsewhere. That means the company doesn’t have to make a lot of chips.

When asked how many it makes, Johnson laughed and said, “I’m going to end up like that guy who predicted the world would never need more than five computers,” before going on to say, “I think we can achieve our business goals with a dozen of these or fewer.”

If the company were making standard semiconductor devices, that would mean processing a single wafer and calling it a day. But D-Wave considers it a sign of progress that it now gets a useful device out of every wafer. “We’re constantly pushing way beyond the comfort zone of what you could have at a TSMC or an Intel, where you’re looking at how many nines you can get in your yield,” Johnson told Ars. “If we had that kind of yield, we probably didn’t push hard enough.”

Much of that push came in the years leading up to this new processor. Johnson told Ars that the higher connectivity required new process technology. “[It’s] the first time we’ve made a significant change to the technology node in about 10 years,” he told Ars. “Our fab cross-section is much more complicated. It has more materials, it has more layers, it has more types of devices and more steps in it.”

In addition to the complexity of making the device itself, the fact that it operates at temperatures in the millikelvin range also adds to the design challenges. As Johnson noted, any wire coming into the chip from the outside world is a potential conduit for heat that must be minimized — again, a problem most chip makers don’t have to deal with.
