
QSimBench: An Execution-Level Benchmark Suite for Quantum Software Engineering

Bisicchia, Giuseppe (co-first author); Bocci, Alessandro (co-first author); Brogi, Antonio
2025-01-01

Abstract

Reproducibility and comparability remain fundamental challenges in the empirical evaluation of Quantum Software Engineering (QSE) tools. These challenges arise from the intrinsic probabilistic behavior of quantum programs and the variability introduced by quantum hardware and simulator backends. Existing quantum benchmarks typically offer only static circuit definitions, leaving researchers responsible for data generation and execution and thus hindering the creation of standardized, reproducible experiments. In this paper, we introduce QSimBench, a novel, open-source benchmark suite and accompanying Python library designed to address these limitations. QSimBench provides over 20 million precomputed shot-level execution traces: 20,000 measurements each, across 14 quantum algorithms, multiple circuit sizes (4–15 qubits), and six Qiskit-based backends, including both ideal and noise-injected simulations. Each configuration includes not only detailed measurement histories but also the original circuit in OpenQASM, the backend's noise model, and configuration metadata, ensuring full transparency and enabling in-depth analysis. QSimBench empowers researchers to conduct scalable, reproducible, and statistically rigorous evaluations of QSE tools, such as those for monitoring, orchestration, and statistical post-processing, without the need for expensive or difficult-to-reproduce quantum executions. The benchmark is distributed with a lightweight Python API that facilitates fast data retrieval, flexible sampling strategies, and seamless integration into experimental pipelines. QSimBench is publicly available, aiming to foster collaborative and reproducible research within the quantum software community.
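The core idea described above is that precomputed shot-level traces can be resampled deterministically instead of re-executing circuits. A minimal sketch of that resampling pattern is shown below; the trace data and the `sample_shots` helper are hypothetical illustrations, not the actual QSimBench API.

```python
import random
from collections import Counter

def sample_shots(trace, n, seed=0):
    """Draw a reproducible sub-sample of n shots from a precomputed trace.

    A fixed seed makes the experiment repeatable: the same seed always
    yields the same sub-sample from the same trace.
    """
    rng = random.Random(seed)
    return rng.sample(trace, n)

# Hypothetical precomputed trace: shot-level bitstring outcomes,
# e.g. from a noisy 2-qubit Bell-state execution.
trace = ["00"] * 480 + ["11"] * 470 + ["01"] * 30 + ["10"] * 20

subsample = sample_shots(trace, 100, seed=42)
counts = Counter(subsample)
# Empirical output distribution of the sub-sample.
probs = {k: v / len(subsample) for k, v in sorted(counts.items())}
```

Because the sampling is seeded, two research groups running the same configuration obtain identical measurement sequences, which is what makes shot-level benchmarks comparable across studies.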


Use this identifier to cite or link to this record: https://hdl.handle.net/11568/1342255
