
While the amount of innovation and technical advancement in the Quantum Computing (QC) realm has been incredible over the past 12 months or so, it is still very hard to quantify the power of existing QCs, to compare one QC against another, or to compare a QC against a classical computer.
Last week (February 23rd) IonQ boldly announced that their latest 32-qubit QC, named Aria, “has achieved a record 20 algorithmic qubits and has furthered its lead as the most powerful quantum computer in the industry…” But is this actually true? In November 2021 IBM unveiled their Eagle 127-qubit quantum processor. Isn’t 127 a lot more than 32? What gives here?
In this post I suggest an annual “Quantum Games” or world Olympics in order to spur innovation and friendly competition as well as collaboration. I’ll describe this in more detail towards the end of this post, but first let’s set out some of the parameters of QC so that these games have added context.
The Guts of Quantum Computation
While digging into all the details of the full quantum stack and the various types of algorithms being written and run on existing QCs is beyond the scope of this post, some background will be helpful in understanding the nuances involved in building, operating and measuring QCs.
The fundamental core of a QC involves qubits, which can be electrons, atoms, photons or other tiny elements. Storing such tiny elements in a given state, then precisely manipulating and measuring them, has significant challenges including maintaining near absolute-zero temperatures and/or vacuums. Over the past 12 months or so, QC companies have gone from creating machines with tens of qubits to machines with hundreds of qubits, with many predicting that this “order of magnitude increase” can be repeated each year. It is generally thought that we will need to implement ~1,000,000 physical qubits in order to achieve consistent quantum advantage (i.e., when QCs can surpass classical computers performing real-world applications), so if that cadence of 10x improvement per year can be maintained, quantum advantage could be achieved within 4-5 years.
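To put that cadence in perspective, here is a back-of-the-envelope sketch in Python. The ~100-qubit starting point, the 10x annual growth rate and the 1,000,000-qubit target are assumptions taken from the rough figures above, not measured facts.

```python
# Rough projection: starting from ~100 physical qubits and assuming a
# sustained 10x increase per year, how long until ~1,000,000 qubits?
# (Both the starting point and the growth rate are illustrative assumptions.)

current_qubits = 100        # assumed order of magnitude for today's machines
target_qubits = 1_000_000   # commonly cited threshold for consistent quantum advantage
growth_per_year = 10        # assumed "order of magnitude per year" cadence

years = 0
qubits = current_qubits
while qubits < target_qubits:
    qubits *= growth_per_year
    years += 1

print(f"~{years} years to reach {target_qubits:,} physical qubits")  # -> ~4 years
```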
However, there are many other factors in building and implementing QCs beyond simply the number of qubits. QCs derive most of their computational advantages from the principles of Superposition and Entanglement, but because qubits are so sensitive and fragile, any noise in the system threatens to sabotage the computational power via decoherence. Therefore, in the current NISQ (noisy intermediate-scale quantum) environment a lot of the qubits are earmarked for error correction. In addition, due to the no-cloning property of quantum mechanics, there is no “quantum RAM” and therefore some of the qubits need to be allocated to storage overhead (i.e., noting the result of a predecessor calculation in an algorithm). Without digging into all of the technical detail, you can think of QCs as needing to address all of the following:
- Placing all the physical qubits in an initial state, including requisite cryogenics, vacuums, microwave pulses and/or laser pulses, etc.
- Manipulating the various qubits to establish Superposition
- Applying gates to the qubits to program algorithms, including entangling certain qubits
- Applying error correction overhead to confirm the algorithms are performing the desired calculations before decoherence
- Applying compiler-level logic as well as various other layers in the QC stack
- Measuring the readouts of the tens of thousands of “shots” of each algorithm run (QC measurement outcomes are probabilistic, so each circuit must be run many times and the results aggregated to estimate the answer; see the short sketch after this list)
- Resetting the system between calculations
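To make the “shots” item concrete, here is a minimal sketch with no quantum hardware or SDK involved: it simulates repeating a probabilistic measurement many times and tallying the outcomes. The 0.8 outcome probability is an arbitrary placeholder, not a real circuit result.

```python
import random

# Minimal illustration of why QC results are averaged over many "shots":
# each run of a circuit yields a probabilistic 0/1 outcome, so the quantity
# of interest is estimated from the frequency of outcomes over many repetitions.
# The probability below is an arbitrary placeholder, not a real circuit result.

P_ONE = 0.8        # hypothetical probability that a measured qubit reads "1"
NUM_SHOTS = 10_000

counts = {"0": 0, "1": 0}
for _ in range(NUM_SHOTS):
    outcome = "1" if random.random() < P_ONE else "0"
    counts[outcome] += 1

estimate = counts["1"] / NUM_SHOTS
print(f"Counts: {counts}, estimated probability of '1': {estimate:.3f}")
```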
While the above list is not meant as an actual blueprint, it is intended to give some sense of the various activities underway in a working QC. There are performance bottlenecks and areas for performance enhancement in each of these activities. Let’s categorize them for ease of further discussion.
Key QC Performance Metrics
There are four core functions or parameters of performance needed to measure QC power:
- Scale, or the total number of qubits
- Quality, or the ability of the qubits to implement circuits before errors enter the system
- Speed, or the number of circuits that can be implemented in a given time
- Context, or the type of calculation being measured. Some benchmarks focus on the physical system and others on applications; some focus on simulation and others on optimization, etc.

[1] Based on BCG analysis which included many other competing benchmarks. See References for link.
Here is a bit more color on each of these four proposed benchmarking strategies.
IBM: Has proposed a three-pronged set of metrics: the number of qubits, quantum volume (an indication of the quality of circuits and how faithfully they are implemented) and speed as measured in CLOPS (circuit layer operations per second), which indicates how many circuits can run in a given time. While this seems like a fairly straightforward and objective set of metrics, the criticism has been that the metrics are based on a random set of gates (the theory being this keeps them objective), and therefore they don’t factor in real-world usage.
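As a simplified illustration only, and not IBM’s exact statistical protocol, the sketch below shows how quantum volume (reported as 2^n for the largest n-qubit, n-layer random-circuit class a machine runs successfully) and a rough CLOPS figure could be derived from benchmark outputs. The pass/fail results, circuit counts and timing are hypothetical placeholders.

```python
# Simplified illustration, not IBM's full statistical protocol.
# Quantum Volume (QV) is reported as 2**n for the largest width n at which
# random n-qubit, n-layer circuits "pass" (heavy-output probability above ~2/3).
# The pass/fail results below are hypothetical placeholders.
passed_widths = {2: True, 3: True, 4: True, 5: True, 6: False, 7: False}

largest_passing = max(n for n, ok in passed_widths.items() if ok)
quantum_volume = 2 ** largest_passing
print(f"Quantum Volume: {quantum_volume} (log2 QV = {largest_passing})")

# CLOPS (circuit layer operations per second), conceptually: total circuit
# layers executed divided by the wall-clock time taken. The counts and timing
# below are placeholders, not measurements.
num_circuits = 1_000                  # hypothetical number of circuits run
layers_per_circuit = largest_passing  # QV-depth layers per circuit
total_seconds = 3.5                   # hypothetical wall-clock time

clops = num_circuits * layers_per_circuit / total_seconds
print(f"CLOPS (approximate): {clops:,.0f} layers per second")
```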
QED-C: The US Quantum Economic Development Consortium, which was established as a result of 2018’s National Quantum Initiative Act, has developed a suite of application-oriented benchmarks targeted at practical use cases and based on executing common quantum algorithms and programs. Given that these benchmarks were derived from industry input, this seems like a broadly validated set of measurements.
IonQ: Has proposed #AQ, or algorithmic qubits, as the yardstick, and has used this standard to perform apples-to-apples comparisons with other leading QC makers. They claim that by using the series of algorithmic benchmarks developed by QED-C they are featuring important real-world algorithms, and that by distilling the results into a single metric (#AQ) they offer an easy measurement to track and compare. They claim that having an #AQ of 20 means they can execute a reference quantum circuit over 20 qubits that contains over 400 (20 x 20) entangling gate operations and expect the results to be correct with meaningful confidence. Below is their latest announcement with the metric shown for their latest Aria machine compared to Quantinuum’s Model H1.1, IBM’s Falcon and Rigetti’s Aspen M-1, with the size of the rectangle outlined in pink denoting the QC’s “size”.

SupermarQ: Just last week Super.tech released SupermarQ, another application-centric benchmarking suite for QCs. The target applications mirror real-world problems in a variety of domains such as finance, chemistry, energy and encryption.
While these are some useful ways to consider measuring QC performance, it is important to realize that these firms are battling over very modest performance yardsticks relative to the eventual potential of QC. If we assume a scale of 1-100 where 100 is a robust QC that consistently achieves quantum advantage, current machines are roughly in the 5-10 range, so arguing whether a given machine is a 5 out of 100 or an 8 out of 100 is not that meaningful in a practical sense.
That said, in addition to the metrics proposed in the table above, there are other proposed benchmarking strategies including Mirror Circuits by Sandia National Labs, Quantum LINPACK by UC Berkeley, and Q-Score by Atos, among others. In fact, to provide standards against which to measure quantum computing progress and drive current research toward specific goals, DARPA announced its Quantum Benchmarking program. Its aim is to re-invent key quantum computing metrics, make those metrics testable, and estimate the quantum and classical resources needed to reach critical performance thresholds.
For now, my advice is to use caution when describing the power of a given Quantum Computer. While the number of qubits is important, it is not the only important metric. Focusing just on numbers of qubits is like assessing the performance of a high-end automobile solely by the number of cylinders in the engine. Clearly there are many other factors that impact drivability and performance, and a similar analogy applies to QC.
So Let the Games Begin!
Given that some benchmarks favor optimization strategies, some favor simulation, some focus on contrived theoretical tasks and others try to reflect real-world applications, some are great at 2-qubit gates but not at larger entanglements, etc., it unfortunately does not look like a universally accepted standard will be agreed upon in the near future. So instead, what if there were an annual contest like a global QC decathlon? I think it would be reasonably easy to agree on a set of measurement algorithms, similar to those proposed by QED-C. Different entrants could compete to achieve the fastest correct results in several different categories of algorithms and problems, with the start-and-stop times agreed upon and a panel of experts to arbitrate any discrepancies among entrants. Gold, Silver and Bronze medals could be awarded for each category, with an overall “best in show” award to the team that wins the most individual events or achieves the highest overall score.
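For illustration only, here is a toy sketch of how the medal tally and “best in show” selection might work; the teams, events and placements are entirely hypothetical.

```python
from collections import Counter

# Toy sketch of tallying medals and picking an overall winner for the proposed
# "Quantum Games." Teams, events, and placements are entirely hypothetical.
MEDAL_POINTS = {"gold": 3, "silver": 2, "bronze": 1}

# event -> {medal: team}
results = {
    "optimization": {"gold": "Team A", "silver": "Team B", "bronze": "Team C"},
    "simulation":   {"gold": "Team B", "silver": "Team A", "bronze": "Team C"},
    "encryption":   {"gold": "Team A", "silver": "Team C", "bronze": "Team B"},
}

golds = Counter()
points = Counter()
for placements in results.values():
    for medal, team in placements.items():
        points[team] += MEDAL_POINTS[medal]
        if medal == "gold":
            golds[team] += 1

# "Best in show": most gold medals (individual events won), with total points
# as the tiebreaker for overall score.
best = max(points, key=lambda team: (golds[team], points[team]))
print(f"Medal points: {dict(points)}; best in show: {best}")
```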
I’ll nominate myself as one of the judges. I’d certainly love a front-row seat to watch the players compete, each bringing out the best in the others. What do you think?
Disclosure: I have no beneficial positions in stocks discussed in this review, nor do I have any business relationship with any company mentioned in this post. I wrote this article myself and express it as my own opinion.
References:
Langione, Bobier, Krayer, Park and Kumar, “The Race to Quantum Advantage Depends on Benchmarking,” Boston Consulting Group, published February 23, 2022.
IonQ press release entitled “IonQ Aria Furthers Lead As World’s Most Powerful Quantum Computer,” issued February 23, 2022.
“IBM Quantum breaks the 100-qubit processor barrier,” International Business Machines, November 16, 2021.
Cross, Bishop, Sheldon, Nation and Gambetta, “Validating quantum computers using randomized model circuits,” Physical Review A, October 11, 2019.
“Driving quantum performance: more qubits, higher Quantum Volume, and now a proper measure of speed,” International Business Machines, accessed February 27, 2022.
If you enjoyed this post, please visit my website and enter your email to receive future posts and updates: http://quantumtech.blog
Russ Fein is a venture investor with deep interests in Quantum Computing (QC). For more of his thoughts about QC please visit the link above. For more information about his firm, please visit Corporate Fuel. Russ can be reached at russ@quantumtech.blog.