The story so far: The appeal of quantum computers (QCs) is their ability to leverage quantum physics to solve problems that are too complex for computers that use classical physics. The 2022 Nobel Prize in Physics was awarded for work that rigorously tested quantum entanglement and paved the way for its applications in computing – a testament to the contemporary importance of QCs. Several institutes, companies, and governments have invested in the development of quantum computing systems, ranging from software to solve various problems to electromagnetics and materials science that expand their hardware capabilities. In 2021 alone, the Indian government launched a national mission to study quantum technologies with an allocation of ₹8,000 crore; the army opened a quantum research center in Madhya Pradesh; and the Department of Science and Technology co-launched another facility in Pune. Given the wide range of applications, understanding what QCs really are is crucial to avoiding the misinformation surrounding them and developing more realistic expectations.
How does a computer use physics?
A macroscopic object – like a ball, a chair or a person – can only be in one place at a time; this location can be accurately predicted; and the effects of the object on its surroundings cannot be transmitted faster than the speed of light. This is the classical experience of reality.
For example, you can watch a ball fly through the air and trace its trajectory according to Newton’s laws. You can predict exactly where the ball will be at any given time. If the ball hits the ground, you’ll see it do so after the time it takes light to travel from the ball to your eyes.
Quantum physics describes reality on the subatomic scale, where objects are particles like electrons. At this scale, you cannot pinpoint the location of an electron. You can only know that it will be present in a given volume of space, with a probability attached to each point in the volume – like 10% at point A and 5% at point B. When you probe that volume, you might find the electron at point B. If you repeatedly probe this volume, you will find the electron at point B 5% of the time.
There are many interpretations of the laws of quantum physics. One is the “Copenhagen interpretation,” which Erwin Schrödinger popularized using a thought experiment he devised in 1935. There is a cat in a closed box with a flask of poison. There is no way to tell if the cat is alive or dead without opening the box. Until then, the cat is said to exist in a superposition of two states: alive and dead. When you open the box, you force the superposition to collapse into a single state. The state to which it collapses depends on the probability of each state.
Similarly, when you probe the volume, you force the superposition of electron states to reduce to one based on the probability of each state. (Note: This is a simplistic example to illustrate a concept.)
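The repeated-probing idea can be sketched as a toy simulation. Each "probe" below samples one outcome from the illustrative 10%/5% figures in the text (these numbers are the article's example, not real physics), mimicking how each measurement collapses to a single result while the long-run frequencies match the probabilities:

```python
import random

# Toy model: probing a region where the electron turns up at
# point A with probability 0.10, at point B with probability 0.05,
# and elsewhere otherwise. Each probe yields exactly one outcome.
def probe(rng):
    r = rng.random()
    if r < 0.10:
        return "A"
    elif r < 0.15:
        return "B"
    return "elsewhere"

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 100_000
hits_b = sum(probe(rng) == "B" for _ in range(trials))
print(f"Found at B in {hits_b / trials:.1%} of probes")  # close to 5%
```

Any single probe gives a definite answer; only the statistics over many probes recover the 5% figure.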
The other “experiment” relevant to quantum computing is entanglement. When two particles are entangled and then separated by an arbitrary distance (even greater than 1,000 km), making an observation on one particle, and thus causing its superposition to collapse, will instantly cause the other’s superposition to collapse particle as well. This phenomenon appears to violate the notion that the speed of light is the ultimate speed limit of the universe. In other words, the superposition of the second particle will collapse into a single state in less than three hundredths of a second, which corresponds to the time it takes light to travel 1,000 km. (Note: The “many worlds” interpretation has grown in popularity over the Copenhagen interpretation. There is no “collapse” here, which automatically removes some of these confusing issues.)
How would a computer use superposition?
The bit is the fundamental unit of a classical computer. Its value is 1 if a corresponding transistor is on and 0 if the transistor is off. The transistor can be in only one of two states at any given time – on or off – so a bit can have only one of two values at a time, 0 or 1.
The qubit is the fundamental unit of a QC. It is typically a particle like an electron. (Google and IBM are known to use transmons, where bound electron pairs oscillate between two superconductors to denote the two states.) Some information is encoded directly on the qubit: if an electron’s spin points up, it means 1; if the spin points down, it means 0.
But instead of being 1 or 0, the information is encoded in a superposition: say, 45% 0 plus 55% 1. This is totally different from the two separate states of 0 and 1 and is a third type of state.
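The "45% 0 plus 55% 1" state can be written down concretely. In the standard formalism, the percentages are the squared magnitudes of two amplitudes; this minimal sketch keeps the amplitudes real and positive for simplicity (in general they can be negative or complex):

```python
import math

# Hypothetical single-qubit state  a|0> + b|1>,
# where the squared amplitudes give the measurement
# probabilities: 45% for 0 and 55% for 1.
a = math.sqrt(0.45)  # amplitude of |0>
b = math.sqrt(0.55)  # amplitude of |1>

# A valid qubit state must be normalised: a^2 + b^2 = 1.
assert math.isclose(a**2 + b**2, 1.0)
print(f"P(0) = {a**2:.2f}, P(1) = {b**2:.2f}")
```

The normalisation check is what makes this a single legal state rather than two separate classical values.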
Qubits are entangled to ensure they work together. If a qubit is probed to reveal its state, some or all of the other qubits will collapse as well, depending on the computation being performed. The computer’s final output is the state in which all qubits have collapsed.
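As a rough illustration of entangled qubits collapsing together, here is a toy simulation of a maximally entangled ("Bell") pair: each joint measurement yields 00 or 11 with equal probability, never 01 or 10, so measuring one qubit fixes the other's outcome. (This reproduces only the outcome statistics, not the underlying quantum dynamics.)

```python
import random

# Toy model of the Bell state (|00> + |11>) / sqrt(2):
# the joint state collapses to 00 or 11 with equal probability,
# so the two qubits' measurement outcomes always agree.
def measure_bell_pair(rng):
    outcome = rng.choice([0, 1])  # collapse of the joint state
    return outcome, outcome      # both qubits report the same value

rng = random.Random(7)
results = [measure_bell_pair(rng) for _ in range(10_000)]
assert all(q1 == q2 for q1, q2 in results)  # perfectly correlated
frac_ones = sum(q1 for q1, _ in results) / len(results)
print(f"Fraction of 11 outcomes: {frac_ones:.2f}")  # near 0.50
```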
A qubit can encode two states. Five qubits can encode 32 states. A computer with N qubits can encode 2^N states – whereas a computer with N transistors can only encode 2×N states. Thus, a computer based on qubits can access more states than a computer based on transistors, and thus access more ways of calculation and solutions to more complex problems.
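The counting argument above is easy to check numerically – the exponential 2^N quickly dwarfs the linear 2×N comparison the article draws:

```python
# State-space growth: N qubits span 2**N basis states, while the
# article's comparison credits N transistors with only 2*N states.
for n in (1, 5, 10, 50):
    print(f"N={n:>2}: qubits -> {2**n} states, transistors -> {2*n}")

# Sanity check against the text: five qubits encode 32 states.
assert 2**5 == 32
```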
How come we don’t use them?
Researchers have demonstrated the basics, using QCs to model the binding energy of hydrogen bonds and to simulate a wormhole model. But to solve most practical problems – like finding the shape of an undiscovered drug molecule, autonomous space exploration, or factoring large numbers – they face tough challenges.
A practical QC requires at least 1,000 qubits. Today’s largest quantum processor has 433 qubits. There is no known theoretical limit on larger processors; the barrier is one of engineering.
Qubits exist in superposition only under specific conditions, notably at very low temperatures (~0.01 K), with protection against radiation and physical shock. Tap your finger on the table and the states of the qubits sitting on it might collapse. Hardware or electromagnetic defects in the circuitry between the qubits could also “corrupt” their states and skew the final result. Researchers have yet to build QCs that completely eliminate these disturbances even in systems of a few tens of qubits.
Correcting errors is also tricky. The no-cloning theorem states that it is impossible to clone a qubit’s state perfectly, which means engineers cannot copy a qubit’s state into a classical system to work around the problem. One solution is to entangle each qubit with a group of error-correcting physical qubits. A physical qubit is a system that mimics a qubit. But reliable error correction requires that each qubit be attached to thousands of physical qubits.
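A classical analogue conveys the redundancy idea behind such codes. The sketch below is the classical repetition code: one logical bit is spread across three physical bits and recovered by majority vote. Quantum codes cannot copy the state outright (no-cloning), so they spread it across entangled physical qubits instead, but the redundancy-plus-majority principle is the same:

```python
import random

def encode(bit):
    # One logical bit -> three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, p_flip, rng):
    # Each physical bit flips independently with probability p_flip.
    return [b ^ (rng.random() < p_flip) for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit unless 2+ bits flipped.
    return int(sum(bits) >= 2)

rng = random.Random(0)
trials = 20_000
p = 0.05  # per-bit flip probability
errors = sum(decode(noisy_channel(encode(1), p, rng)) != 1
             for _ in range(trials))
print(f"Logical error rate: {errors / trials:.4f}")
```

With p = 0.05, the logical error rate is roughly 3p²(1−p) + p³ ≈ 0.007 – well below the raw 5% – which is why redundancy pays off as long as the per-component error rate stays small.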
Researchers also need to create QCs that don’t amplify errors when more qubits are added. This challenge is linked to a fundamental problem: unless the error rate is kept below a certain threshold, more qubits will only increase the informational noise.
Practical QCs will require at least hundreds of thousands of qubits, working with superconducting circuits that we have not yet built, apart from other components such as firmware, circuit optimization, compilers, and algorithms that exploit the possibilities of quantum physics. Quantum supremacy itself – a QC doing something a classical computer cannot – is therefore at least decades away.
The billions invested in this technology today are based on speculative profits, while companies promising developers access to quantum circuits in the cloud often offer physical qubits with notable error rates.
The interested reader can build and simulate rudimentary quantum circuits using IBM’s “Quantum Composer” in the browser.