Game-Theoretic Reinforcement Learning for Task Optimization Under Time-Sensitive Constraints

Dazzi, Patrizio
2025-01-01

Abstract

In critical scenarios such as disaster response or real-time monitoring, efficient and adaptive task scheduling is essential. This paper presents a decentralized framework for urgent edge computing based on Nash Q-learning, a game-theoretic reinforcement learning technique. Tasks are allocated among agents according to strategies derived from multi-agent interactions modeled as a Markov Game. The method promotes decentralized cooperation and improves responsiveness by adapting to dynamic conditions. We provide a formal model and outline its applicability to various edge environments.
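
The record stops at the abstract, so what follows is only a minimal illustrative sketch of the technique it names: two-agent Nash Q-learning (in the style of Hu and Wellman) on a toy task-allocation Markov Game, written in Python. The TaskGame environment, its reward shape, and the pure-strategy equilibrium solver are assumptions made for illustration, not the paper's actual model. The defining step is the update Q_i(s, a0, a1) <- (1 - alpha) * Q_i(s, a0, a1) + alpha * (r_i + gamma * NashQ_i(s')), where NashQ_i(s') is agent i's payoff at a Nash equilibrium of the stage game defined by the Q-tables at the next state s'.

import numpy as np

# Toy 2-agent Markov Game: each step, both agents pick one of two tasks.
# Matching the currently "urgent" task pays more; picking the same task
# as the other agent incurs a congestion penalty. The urgent task drifts
# at random to mimic the dynamic conditions the abstract mentions.
class TaskGame:
    def __init__(self, rng):
        self.rng = rng
        self.n_states, self.n_actions = 2, 2
        self.state = 0  # which task is urgent right now

    def step(self, a0, a1):
        urgent = self.state
        r0 = 1.0 if a0 == urgent else 0.2
        r1 = 1.0 if a1 == urgent else 0.2
        if a0 == a1:  # both grabbed the same task: congestion penalty
            r0 -= 0.5
            r1 -= 0.5
        self.state = int(self.rng.integers(self.n_states))  # urgency drifts
        return self.state, (r0, r1)

def stage_nash(q0, q1):
    """Pure-strategy Nash equilibrium of the stage game (q0, q1): returns
    the joint action and both agents' payoffs. Falls back to the
    welfare-maximizing joint action when no pure equilibrium exists
    (a simplification; the full algorithm solves for mixed equilibria)."""
    for a0 in range(q0.shape[0]):
        for a1 in range(q0.shape[1]):
            if q0[a0, a1] >= q0[:, a1].max() and q1[a0, a1] >= q1[a0, :].max():
                return (a0, a1), q0[a0, a1], q1[a0, a1]
    a0, a1 = np.unravel_index(np.argmax(q0 + q1), q0.shape)
    return (a0, a1), q0[a0, a1], q1[a0, a1]

def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    env = TaskGame(rng)
    S, A = env.n_states, env.n_actions
    # As in Nash Q-learning, each agent tracks Q-tables for BOTH agents,
    # indexed by (state, a0, a1); here they are shared for brevity.
    Q = [np.zeros((S, A, A)), np.zeros((S, A, A))]
    s = env.state
    for _ in range(steps):
        if rng.random() < eps:  # joint epsilon-greedy exploration
            a0, a1 = int(rng.integers(A)), int(rng.integers(A))
        else:                   # otherwise play the stage-game equilibrium
            (a0, a1), _, _ = stage_nash(Q[0][s], Q[1][s])
        s2, (r0, r1) = env.step(a0, a1)
        # Nash Q-update: bootstrap on the equilibrium value of the NEXT
        # stage game rather than on an independent max over own actions.
        _, nash0, nash1 = stage_nash(Q[0][s2], Q[1][s2])
        Q[0][s, a0, a1] += alpha * (r0 + gamma * nash0 - Q[0][s, a0, a1])
        Q[1][s, a0, a1] += alpha * (r1 + gamma * nash1 - Q[1][s, a0, a1])
        s = s2
    return Q

if __name__ == "__main__":
    Q = train()
    print("Agent 0 stage game at state 0:")
    print(Q[0][0])

Two shortcuts in this sketch are worth flagging: a pure-strategy equilibrium need not exist in a general-sum stage game, so the solver falls back to the welfare-maximizing joint action where a faithful implementation would use a mixed-equilibrium solver (e.g., Lemke-Howson), and exploration is drawn jointly rather than independently per agent.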

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1331827