Quantum computers are here to stay. Ten years from now, they will be widely adopted across many industries, just as the transistors and lasers invented during the first quantum revolution were. But before any far-reaching application of the technology can take place, the industry has yet to demonstrate its practical usefulness.

Recently, IBM announced a 433-qubit quantum computer, by far the largest on the market. At the same time, the company has reported outstanding improvements in the performance of its processors. While this is an unprecedented advance in the technology, the computational power of the device remains difficult to harness due to its susceptibility to errors. Indeed, most of the disruptive quantum algorithms, such as Grover's search or Shor's factoring, rely on fault tolerance: they cannot be implemented successfully unless the underlying hardware is practically error-free. To reach such a high level of computational accuracy, error-correction algorithms leveraging massive redundancy must be used. Such algorithms achieve practically error-free computing with imperfect devices, but they require millions of qubits with operational error rates orders of magnitude smaller than current ones [1], far beyond the hardware capabilities of the near term.
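The redundancy idea behind error correction can be illustrated with the simplest classical analogue, a three-bit repetition code (a minimal sketch for intuition only; real quantum hardware needs genuinely quantum codes such as the surface code, and the function name below is purely illustrative): encode one logical bit in three physical copies and decode by majority vote, so any single faulty copy is corrected.

```python
import random

def logical_error_rate(p, trials=100_000, seed=0):
    """Estimate the logical error rate of a 3-bit repetition code
    when each physical bit flips independently with probability p."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:  # majority vote fails only when 2 or 3 copies flip
            failures += 1
    return failures / trials
```

For a physical error rate p = 0.05, the analytic logical rate is 3p² − 2p³ ≈ 0.007, already an order of magnitude below the physical rate; adding more redundancy suppresses errors further, which is why fault-tolerant schemes trade enormous qubit overhead for near-perfect logical operations.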

So there is a long way to go before quantum processing units (QPUs) reach fault tolerance at the sizes necessary to tackle any relevant computational problems. But do we have to wait for that distant fault-tolerant era to reap the rewards? Couldn't we find uses for the still-evolving devices and strive for use cases and business impact already in the near term?

In the past ten years, a lot of effort has been dedicated to developing algorithms for noisy near-term quantum processors. These efforts culminated a few years ago in the demonstration of quantum advantage, in which a quantum processor outperformed the best classical computers at a non-trivial (but useless) task [2]. The demonstration later led to the development of an efficient classical method competitive with the performance of the quantum hardware, reopening the race to establish quantum advantage [3].

A demonstration of any useful quantum advantage, however, is still missing, even if potential algorithms to this end have been suggested. For example, the variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA) can make use of small, noisy devices by reducing the quantum computational load with a hybrid scheme that implements some parts of the algorithm on ordinary classical processors.
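The hybrid loop can be sketched in a few lines, assuming an exact state-vector simulation stands in for real hardware (a toy single-qubit problem, not a production VQE): the "quantum" side prepares a parameterized state and returns the energy ⟨ψ(θ)|H|ψ(θ)⟩, while a classical optimizer, here plain gradient descent with the parameter-shift rule, updates θ.

```python
import numpy as np

# Toy single-qubit Hamiltonian (Pauli X); its exact ground energy is -1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def ansatz(theta):
    """Ry(theta)|0> = [cos(theta/2), sin(theta/2)]: the 'quantum' state preparation."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)>; on a QPU this would be estimated
    from measurement samples rather than computed exactly."""
    psi = ansatz(theta)
    return psi @ H @ psi

def vqe(theta=0.3, lr=0.4, steps=100):
    """Classical outer loop: gradient descent using the parameter-shift rule,
    which obtains the exact gradient from two shifted energy evaluations."""
    for _ in range(steps):
        grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
        theta -= lr * grad
    return theta, energy(theta)

theta_opt, e_min = vqe()
```

The division of labor is the point: the quantum device only evaluates energies of shallow parameterized circuits, a task noisy hardware can manage, while all the optimization logic stays on the classical side.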

The difficulty, however, lies in scaling such approaches to relevant problem sizes. Existing algorithms work well for very few qubits (<5) but struggle for increasingly larger systems. For VQE, the flagship algorithm for quantum chemistry calculations, this means limiting the application to extremely small molecules that are easily computable with existing classical methods. If such algorithms are to be useful in the context of, e.g., drug development, scaling to larger qubit numbers is required.

Indeed, the use of near-term quantum computers for any relevant problem requires overcoming three roadblocks:

©Algorithmiq 2021 | Design by Bitflow