Blog

Harnessing Tensor Networks for Error Mitigation in Quantum Computing

As the field of quantum computing rapidly advances, a primary challenge remains: suppressing errors and increasing the number of logical qubits. The long-term goal is to achieve fault-tolerant quantum computing by implementing error-correcting codes, which can suppress noise provided physical error rates lie below a certain threshold. In the near term, however, where we don’t yet have the luxury of large-scale, fault-tolerant quantum systems, we must rely on error mitigation techniques to reduce the impact of noise on quantum computations. These methods are particularly crucial for medium-depth quantum circuits, which are increasingly used for quantum simulations of molecules, chemical binding affinities, and complex quantum dynamics.

Is error mitigation useful for near-term quantum computing?

Recent discussions within the quantum computing community, particularly in light of some new no-go theorems (e.g., arXiv:2210.11505 and arXiv:2407.12768), have raised concerns about the feasibility of near-term quantum computing. These discussions often emphasise that error mitigation in quantum computing comes with exponential costs—a factor typically considered in resource estimates for specific error mitigation strategies—and argue that this might result in a “loss of scalable quantum advantage.” However, this does not rule out practical quantum advantage, particularly if error rates \( \epsilon \) are sufficiently low, as observed with the latest generation of quantum devices (see table below for typical error rates and qubit counts N).

Other authors argue that noisy quantum circuits can be simulated classically with high accuracy, implying that error mitigation might be unnecessary [arXiv:2407.12768]. However, this view relies on a particular definition of “scaling” that may not align with practical approaches in error mitigation.

Algorithmiq’s CSO Guillermo García-Pérez explains this clearly:

“This article is looking at the cost of simulating very large circuits (e.g., very deep) with a fixed error rate. But that’s not the scaling one cares about. We want to simulate the largest affordable circuits on a given device. What is affordable is determined by the measurement overhead, which is in turn dictated by the average number of errors in the circuit (\(\epsilon N L \)). If one fixes this number, one can increase the circuit size (\(NL\)) as the error rates (\(\epsilon\)) decrease with improvements in the hardware. Fixing the measurement overhead, the scaling analysis shows that, as quantum technology improves, the challenge of simulating even noisy circuits classically becomes exponentially harder. In short, as the technology improves, we will reach the scale at which the circuits cannot be classically simulated but can be efficiently error mitigated, which will enable practical quantum advantage.”
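The scaling argument above can be made concrete with a toy calculation. As a hedged illustration (not the exact overhead formula of any specific method), assume the measurement overhead grows exponentially in the expected number of errors \(\epsilon N L\), with some method-dependent constant in the exponent. Then fixing \(\epsilon N L\) fixes the overhead, so a tenfold reduction in error rate buys a tenfold larger circuit volume at the same measurement cost:

```python
import math

def measurement_overhead(epsilon, n_qubits, depth, c=2.0):
    """Toy model: overhead grows exponentially in the expected number
    of errors eps * N * L; the constant c is method-dependent and
    chosen here purely for illustration."""
    return math.exp(c * epsilon * n_qubits * depth)

# Today's hypothetical device: eps = 1e-3, 100 qubits, depth 100,
# so eps * N * L = 10 expected errors per circuit run.
today = measurement_overhead(1e-3, 100, 100)

# Hardware improves 10x (eps = 1e-4): keeping eps * N * L fixed at 10
# allows a 10x deeper circuit at the SAME measurement overhead.
improved = measurement_overhead(1e-4, 100, 1000)

print(f"overhead today:    {today:.3e}")
print(f"overhead improved: {improved:.3e}")
```

The point is not the particular numbers but the trade: at fixed overhead, classical simulation of the (now much larger) circuit becomes exponentially harder while the quantum-side mitigation cost stays constant.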

Given these ongoing debates, it’s essential to explore innovative methods like tensor networks for error mitigation, which could provide a significant edge in overcoming these challenges. Let’s dive deeper into how this approach can be leveraged to push the boundaries of what near-term quantum devices can achieve.

The role of error mitigation in quantum computing

Error mitigation strategies are crucial for improving the utility of near-term quantum devices. Some methods are agnostic to the specific nature of the noise, offering a universal approach. However, knowing the exact noise model generally allows for more efficient error removal. Algorithmiq’s Tensor-network Error Mitigation (TEM) method is a hybrid quantum-classical algorithm designed to perform noise mitigation entirely in the classical post-processing stage [arXiv:2307.11740, arXiv:2403.13542]. In TEM, the output state of the quantum computer is measured using informationally-complete (IC) measurements [arXiv:2401.18049, arXiv:2407.02923], and the results are then processed through a tensor network that represents the inverse of the noise channel affecting the quantum processor.

TEM has several advantages over purely classical tensor network methods. For instance, in TEM, the tensor network doesn’t need to account for the quantum state itself or the observable’s evolution in the Heisenberg picture. Instead, it models the inverse of the noise channel, which approaches identity as the noise decreases. This means that the classical computational complexity required by TEM also decreases with decreasing noise levels, making it a more efficient approach than classical-only methods for certain scenarios.
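A minimal sketch of this idea, using a single-qubit depolarizing channel in the Pauli transfer matrix (PTM) representation (a deliberately simple stand-in for the device’s actual noise model): the inverse map that classical post-processing would apply is well defined even though it is not a physical channel, and it visibly approaches the identity as the noise strength goes to zero.

```python
import numpy as np

def depolarizing_ptm(p):
    """Pauli transfer matrix of a single-qubit depolarizing channel:
    the identity component is preserved, the Bloch vector is shrunk
    by a factor (1 - p)."""
    return np.diag([1.0, 1 - p, 1 - p, 1 - p])

def inverse_noise_map(p):
    """The (non-physical) inverse map applied classically in
    TEM-style post-processing; it tends to the identity as p -> 0."""
    return np.linalg.inv(depolarizing_ptm(p))

for p in (0.1, 0.01, 0.001):
    deviation = np.linalg.norm(inverse_noise_map(p) - np.eye(4))
    print(f"p = {p:6.3f}  ||inverse - identity|| = {deviation:.4f}")
```

The shrinking distance to the identity is exactly why the classical complexity of representing the inverse channel as a tensor network drops as hardware noise drops.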

Why is Tensor Network Error Mitigation (TEM) the optimal error mitigation method?

TEM is not just another error mitigation technique—it is optimal with respect to theoretical bounds, meaning that no method can achieve a smaller measurement overhead. The measurement overhead refers to the number of additional measurements required to perform efficient error mitigation, a critical factor in the feasibility of quantum computations. In the table below, we see how the measurement overhead of different error mitigation methods compares for N-qubit circuits with depth L, showing the advantageous scaling of TEM [arXiv:2403.13542].

[Table: measurement overhead bounds of different error mitigation methods for N-qubit circuits of depth L]

TEM has the potential to enable quantum advantage in complex scenarios, such as a 100×100 circuit (100 qubits, depth 100, or roughly 5000 entangling gates) with noise rates at the level of IBM’s Heron processors. This capability marks a significant milestone, as it enables quantum simulations and computations that were previously unattainable due to noise limitations.

In addition to its optimal measurement efficiency, TEM significantly improves the accuracy and reliability of digital quantum simulations, making quantum algorithms more precise and dependable. This enhancement is crucial for pushing the boundaries of what current quantum technology can achieve, opening the door to experiments that were once beyond reach.

Another key advantage of TEM is its cost-effectiveness. Since TEM handles noise mitigation entirely in the post-processing stage, there is no need to add extra circuits to the quantum computer, which not only reduces costs but also minimises the risk of introducing additional errors due to the imperfections of quantum devices. This reduction in effective cost per experiment makes TEM a highly attractive option for quantum researchers and industry practitioners alike.

Tensor networks in error mitigation

The complexity of the noise-mitigating tensor network is directly linked to its bond dimension—the key parameter that quantifies the computational complexity of the classical post-processing. Interestingly, as the noise in the quantum device decreases, the bond dimension required for effective error mitigation also decreases. By truncating the least significant terms in the tensor network, we can focus on mitigating the most critical noise components while keeping the computational complexity manageable.

This approach is vital because, as quantum computing continues to evolve, noise remains the primary hurdle to overcome. Error mitigation methods like TEM will have a significant impact on the development and practical application of quantum technologies in the near term.

In April 2024, Algorithmiq organised a strategic workshop, Quantum Now, bringing together major actors in quantum computing: IBM, AWS, Google, Q-Ctrl, Phasecraft, Caltech, EPFL, ICFO, Nvidia, and many others. Its aim was to discuss in detail the resources needed to achieve quantum advantage and value with near-term quantum computers. The outcomes of this meeting are summarised in a perspective paper, soon to be announced to the public, containing the combined input of all these key players.

In this paper, we debunk commonly held myths about near-term quantum computing and identify the use cases and applications that are possible with current hardware using best-in-class error mitigation methods. We show that applications in quantum chaos, many-body physics, Hubbard dynamics, and small-molecule chemistry simulations, requiring circuit volumes ranging from 100×100 to 100×10000, can be implemented with the most powerful error mitigation methods at error rates typical of current devices. We also argue that existing near-term algorithms can provide practical quantum advantage at this scale.

Universality and industrial relevance for quantum computing

In the industrial context, computational resources are often deployed with utility-driven goals. Algorithms that can solve a broad class of problems without customisation are highly valued, as they reduce research and development costs and accelerate production cycles. However, while problem-specific solutions can outperform generic ones, they are often too costly and time-consuming to implement on a large scale.

This is where the universality of quantum algorithms becomes important. For quantum computing to offer a real advantage, it must provide superior performance within this universal framework compared to classical algorithms. For instance, in many-body quantum dynamics, developing a universal algorithm that can handle various interactions and initial states without customisation is key to achieving practical quantum advantage.

Classical tensor network simulations are currently the best universal method for these types of problems, but they are limited by available computational resources. As problem complexity grows, classical simulations must shift from exact to approximate methods, which can lead to significant errors, especially in highly entangled quantum states.

In contrast, a hybrid quantum-classical approach like TEM can offer more accurate estimations with less computational overhead, especially in scenarios where classical methods struggle.

Connecting error mitigation and error correction

Interestingly, there is a connection between error mitigation and error correction in quantum computing. TEM’s sampling cost, when compared to a universal lower bound, suggests that TEM with an optimal bond dimension can have a similar effect as a quantum error correction code with a low distance (e.g., distance 3) [arXiv:2403.13542].

This connection hints at a potential trade-off between error correction and error mitigation, which could accelerate the development of large-scale quantum algorithms. As quantum processors become more sophisticated, combining error correction with advanced error mitigation techniques could extend the scale and accuracy of quantum simulations.

At Algorithmiq, we are working towards the full integration of error mitigation and error correction, since we believe this will be one of the key challenges of the pre-fault-tolerant era and a necessary cornerstone on the way to fully fledged fault-tolerant quantum computing.

Why error mitigation in quantum computing matters

The bright future of quantum computing relies heavily on effective error mitigation techniques. With approaches like TEM, we can push the boundaries of what near-term quantum devices can achieve, bringing us closer to practical and advantageous quantum computations. As the field progresses, the interplay between error correction and error mitigation will likely play a critical role in the evolution of quantum technologies, helping to overcome the noise barrier and unlock the full potential of quantum computing.

Author

Prof. Sabrina Maniscalco

Contributions

TEM is part of Algorithmiq’s software platform Aurora. Elements of TEM are included in patents filed by Algorithmiq Ltd with the European Patent Office and the US Patent Office. Sergei Filippov and Guillermo García-Pérez conceived the algorithm. Sergei Filippov, Matea Leahy, Matteo Rossi, Boris Sokolov, Guillermo García-Pérez, Ramón L. Panadés-Barrueta, Francesca Pietracaprina, Ludmila Botelho, and Roberto Di Remigio Eikås implemented the algorithm.