
Real-time responses to epidemics: a Reinforcement-Learning approach

Gemignani G.; Landi A.; Pisaneschi G.; Manfredi P.
2026-01-01

Abstract

Open-loop optimal control applied to epidemic outbreaks is a valuable tool for developing control principles and informing future preparedness guidelines. A drawback of this approach is its assumption of complete knowledge of both transmission dynamics and the effects of policy measures. As a result, such methods lack responsiveness to real-time conditions, since they do not integrate feedback from the evolving epidemic state. Overcoming this requires a closed-loop approach. We propose a novel closed-loop method for real-time social distancing responses using a general Reinforcement Learning (RL)-based decision-support framework. It enables adaptive management of social distancing policies during an epidemic, balancing direct health costs (e.g., hospitalizations, deaths) against the indirect (economic, social, psychological) costs of prolonged interventions. The framework builds on, and is compared with, a COVID-19 model previously used for open-loop assessments, capturing key disease characteristics such as asymptomatic transmission, healthcare saturation, and quarantine. We test the framework by evaluating optimal real-time responses to a severe outbreak under varying priorities assigned by public authorities to indirect costs. Thanks to closed-loop adaptability, the full spectrum of policy strategies (elimination, suppression, and mitigation) emerges depending on the cost prioritization. The framework supports timely, informed decisions by governments and health authorities during current or future pandemics.
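To illustrate the closed-loop idea described in the abstract, the sketch below couples a toy discrete-time SIR model with a tabular Q-learning agent that picks a social-distancing level each step, trading off infection prevalence (direct health cost) against intervention intensity (indirect cost). All names, parameters, state discretization, and the cost structure are assumptions for illustration only; they are not the COVID-19 model or the RL algorithm used in the paper.

```python
import random

ACTIONS = [0.0, 0.5, 0.8]  # fraction by which distancing reduces transmission

def sir_step(s, i, beta=0.3, gamma=0.1, distancing=0.0):
    """One step of a normalized SIR model under a given distancing level."""
    new_inf = beta * (1.0 - distancing) * s * i
    rec = gamma * i
    return s - new_inf, i + new_inf - rec

def discretize(i, bins=10):
    """Map infection prevalence in [0, 1) to a coarse state index."""
    return min(int(i * bins), bins - 1)

def train(episodes=500, horizon=100, w_indirect=1.0, alpha=0.1,
          gamma_rl=0.95, eps=0.1, seed=0):
    """Tabular Q-learning balancing direct (prevalence) and indirect
    (distancing-intensity) costs; w_indirect weights the latter."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(10)]
    for _ in range(episodes):
        s, i = 0.99, 0.01
        for _ in range(horizon):
            st = discretize(i)
            # epsilon-greedy action selection over distancing levels
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda k: q[st][k]))
            s, i = sir_step(s, i, distancing=ACTIONS[a])
            # reward penalizes prevalence (health cost) plus weighted
            # distancing intensity (economic/social/psychological cost)
            r = -(i + w_indirect * ACTIONS[a])
            st2 = discretize(i)
            q[st][a] += alpha * (r + gamma_rl * max(q[st2]) - q[st][a])
    return q

q = train()
# greedy policy: distancing level chosen in each prevalence band
policy = [max(range(len(ACTIONS)), key=lambda k: q[s][k]) for s in range(10)]
```

Varying `w_indirect` plays the role of the cost prioritization discussed in the abstract: a low weight pushes the learned policy toward strong, elimination-like distancing, while a high weight favors lighter, mitigation-like responses.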
Gemignani, G.; D'Onofrio, A.; Landi, A.; Pisaneschi, G.; Manfredi, P.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1348007
