IBM scientists have unveiled two crucial advances toward the creation of a practical quantum computer: an effective way to detect and correct quantum errors, and the design of a silicon chip that can scale up to house a large number of entangled quantum bits.
The power of quantum computing
Transistors in classical computers can only shrink so far. The current generation of transistors is 14 nanometers in size, meaning that only about thirty silicon atoms fit between the transistor's "source" and "drain," the two ends of the electronic switch. Once that number shrinks to only about four or five silicon atoms, the uncertainty brought on by quantum mechanical effects will make it impossible for such a switch to function properly. Electrons will spontaneously and randomly jump from one end to the other in unpredictable ways, creating a current even when the switch is off.
The idea behind quantum computers – first advanced by Richard Feynman in 1981 – is to harness quantum effects rather than treat them as an obstacle. This is done not by building a more advanced transistor, but by exploiting the much greater potential of quantum information itself.
In the weird and wonderful world of quantum computing a quantum bit, or qubit, can assume two values (0 and 1) at the same time. When two or more qubits are linked in a special "entangled" state, this property extends out and the power of qubits grows exponentially. Ten fully entangled qubits would be able to store as much information as 1,024 classical bits; 33 qubits could store one gigabyte; and 300 fully entangled qubits would store as many classical bits as there are atoms in the universe.
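The scaling quoted above follows from the fact that n fully entangled qubits span a state space of 2^n basis states, so describing them classically takes 2^n bits. A quick back-of-the-envelope check (a sketch for illustration, not anything from IBM's work):

```python
def classical_bits_equivalent(n_qubits: int) -> int:
    """Classical bits needed to enumerate the 2**n basis states
    of n fully entangled qubits."""
    return 2 ** n_qubits

print(classical_bits_equivalent(10))  # 1024 classical bits
# 2**33 bits is 2**30 bytes, i.e. one gigabyte:
print(classical_bits_equivalent(33) // (8 * 2**30))  # 1
```

At 300 qubits, `2**300` is on the order of 10^90, comfortably beyond the roughly 10^80 atoms usually estimated for the observable universe.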
Crucially, however, although the information the qubits contain grows exponentially, we'd still be able to manipulate it using a number of operations that grows only polynomially with the number of qubits. In other words, for the right problems, the speedup is exponential in a very literal sense.
A quantum computer would not be universally faster for just any algorithm, but it would show dramatic speedups for searching and manipulating big data, cryptography and cryptanalysis, analyzing protein folding to design better drugs, simulating the early Universe, and providing much more accurate weather forecasting, among many other things.
Qubits are finicky
Our success in creating a practical quantum computer will largely depend on our ability to keep all qubits in the very delicate entangled state and correct mistakes effectively and reliably.
Data downloaded from the internet or stored in our hard drives goes through algorithms that detect and correct so-called "bit flips," which happen when a bit erroneously changes its value from 1 to 0 or vice versa.
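The simplest classical scheme of this kind is a repetition code: store each bit three times and take a majority vote on readout. This minimal sketch illustrates the idea (real storage and network codes are more sophisticated):

```python
def encode(bit: int) -> list:
    """Store one logical bit as three physical copies."""
    return [bit, bit, bit]

def correct(codeword: list) -> int:
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return 1 if sum(codeword) >= 2 else 0

word = encode(1)
word[0] ^= 1          # a bit flip corrupts one copy: [0, 1, 1]
print(correct(word))  # majority vote still decodes to 1
```

The catch, as the next paragraphs explain, is that this strategy relies on freely copying and reading bits, and quantum mechanics forbids both.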
Errors happen very rarely in classical computers, but they are a core issue for a quantum computer. The entangled qubits are much more delicate, and can be severely affected by small changes in temperature and electromagnetic radiation. Quantum bits are subject to bit flips too, but they add another dimension of possible errors, referred to as "phase flips," which affect the way in which the states are entangled. To make things even worse, the act of reading a qubit in order to correct it collapses its quantum state into either a 0 or a 1.
So far, researchers have only been able to address either bit flips or phase flips, but never both at the same time.
In what could be a very significant advance for the world of quantum computing, IBM researchers have now found a way to detect both types of quantum errors at the same time, and have demonstrated their advance on an actual four-qubit chip they have created.
The circuit is based on a square lattice of four superconducting qubits on a chip that is roughly a quarter of an inch (6 mm) in size. The qubits are split into two data qubits, which carry the actual information, and two so-called "syndrome qubits," which are independent (not entangled) and perform the error checking on the two data qubits.
There is a very good reason for the qubits to be laid out in a lattice. In order for the qubits to be read without being destroyed, the researchers adopted an error-correcting technique that spreads quantum information across multiple qubits, but – crucially – only to their nearest neighbors.
Previous error correction techniques laid out the qubits in a linear array, and therefore could only correct for either bit flips or phase flips, but not both. The lattice layout gives each qubit more neighbors, meaning that both types of error correction can take place simultaneously.
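The division of labor between data and syndrome qubits can be caricatured with purely classical bookkeeping: one type of check fires on an odd number of bit flips among its neighbors, the other on an odd number of phase flips. The sketch below tracks error flags only; real syndrome extraction is a quantum circuit, and the qubit names here are invented for illustration:

```python
# Hypothetical labels: two data qubits, each tracked with
# (bit_flip, phase_flip) error flags.
errors = {"D1": {"bit": False, "phase": False},
          "D2": {"bit": False, "phase": False}}

def z_syndrome(errs) -> bool:
    """Z-type parity check: fires on an odd number of bit flips."""
    return errs["D1"]["bit"] ^ errs["D2"]["bit"]

def x_syndrome(errs) -> bool:
    """X-type parity check: fires on an odd number of phase flips."""
    return errs["D1"]["phase"] ^ errs["D2"]["phase"]

errors["D1"]["bit"] = True     # a stray bit flip on one data qubit
errors["D2"]["phase"] = True   # and a phase flip on the other
print(z_syndrome(errors), x_syndrome(errors))  # both checks fire: True True
```

The point of the lattice is that each data qubit can neighbor both kinds of check at once, so neither error type goes unseen.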
The square chip was designed and manufactured using standard silicon fabrication techniques, and the researchers say that they expect they will be able to show effective error correction even on a scaled-up version of the chip that handles more qubits.
If this is true, then the main remaining obstacle to a practical quantum computer could be to reliably produce superconducting bits that have low enough error rates for the IBM technique to be effective.
The advance is described in today's issue of the scientific journal Nature Communications.
Source: IBM