Decentralized Q-Learning for Workload Offloading in Urgent Edge Computing Scenarios
Valerio Besozzi, Marco Danelutto, Patrizio Dazzi, Matteo Della Bartola, Luca Ferrucci, Jacopo Massa
2025-01-01
Abstract
In urgent edge computing scenarios, where rapid decision-making is paramount, traditional centralized methods often struggle with dynamic workloads. This paper introduces a decentralized Q-learning framework for workload offloading, enabling individual edge nodes to make adaptive decisions based solely on local observations and limited neighbor communications. By incorporating a cooperative mechanism, in which agents weight neighbor Q-values according to their current load, the proposed method improves task allocation under both normal and emergency conditions. Simulation results indicate improved responsiveness and resource efficiency, laying the groundwork for future studies on scalable, decentralized learning algorithms in real-world critical environments.
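
To make the cooperative mechanism concrete, the following is a minimal sketch of what a load-weighted Q-value exchange between edge agents could look like. It is not the authors' implementation: the class, its parameters (alpha, gamma, epsilon, beta), and the inverse-load weighting rule are all assumptions chosen for illustration; the paper's exact state representation and weighting scheme may differ.

```python
import numpy as np

class EdgeAgent:
    """One edge node running tabular Q-learning over (local state, action) pairs.

    Hypothetical sketch: state/action spaces and hyperparameters are assumed,
    not taken from the paper.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.load = 0.0  # current utilization in [0, 1], shared with neighbors

    def act(self, state, rng):
        # Epsilon-greedy selection over the purely local Q-table.
        if rng.random() < self.epsilon:
            return int(rng.integers(self.Q.shape[1]))
        return int(np.argmax(self.Q[state]))

    def local_update(self, s, a, reward, s_next):
        # Standard Q-learning temporal-difference update from local observations.
        td_target = reward + self.gamma * np.max(self.Q[s_next])
        self.Q[s, a] += self.alpha * (td_target - self.Q[s, a])

    def cooperative_update(self, neighbors, beta=0.5):
        # Blend neighbor Q-tables into the local one, weighting lightly
        # loaded neighbors more heavily (an assumed inverse-load scheme).
        weights = np.array([1.0 - n.load for n in neighbors])
        if weights.sum() <= 0:
            return  # all neighbors saturated: keep the local table unchanged
        weights /= weights.sum()
        neighbor_avg = sum(w * n.Q for w, n in zip(weights, neighbors))
        self.Q = (1.0 - beta) * self.Q + beta * neighbor_avg
```

Under this reading, each node learns from its own task outcomes via local_update and periodically calls cooperative_update over its neighborhood, so a heavily loaded neighbor's Q-values contribute little to offloading decisions while an idle neighbor's contribute more.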


