Game-Theoretic Reinforcement Learning for Task Optimization Under Time-Sensitive Constraints
Dazzi, Patrizio
2025-01-01
Abstract
In critical scenarios like disaster response or real-time monitoring, efficient and adaptive task scheduling is essential. This paper presents a decentralized framework using Nash Q-learning, a game-theoretic reinforcement learning technique, for urgent edge computing. Tasks are allocated among agents based on strategies derived from multi-agent interactions modeled as a Markov Game. The method promotes decentralized cooperation and improves responsiveness by adapting to dynamic conditions. We provide a formal model and outline its applicability to various edge environments.
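To illustrate the kind of update rule the abstract refers to, the following is a minimal sketch of a Nash Q-learning loop for decentralized task allocation. It is not the authors' implementation: the two edge agents, the toy task-queue environment, the shared reward, and the fully cooperative stage game (under which the Nash value of the next state reduces to the maximum joint-action value) are all illustrative assumptions, and names such as `step` and `choose_joint_action` are hypothetical placeholders.

```python
import random
from collections import defaultdict
from itertools import product

ACTIONS = [0, 1]            # each agent either defers (0) or accepts (1) the task
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# One Q-table per agent, indexed by (state, joint_action).
Q = [defaultdict(float), defaultdict(float)]

def nash_value(q, state):
    # Cooperative approximation of the stage-game Nash value:
    # the best value over all joint actions at `state`.
    return max(q[(state, ja)] for ja in product(ACTIONS, ACTIONS))

def choose_joint_action(state):
    # Epsilon-greedy over joint actions using agent 0's table
    # (any shared tie-breaking rule would do in this toy setting).
    if random.random() < EPS:
        return (random.choice(ACTIONS), random.choice(ACTIONS))
    return max(product(ACTIONS, ACTIONS), key=lambda ja: Q[0][(state, ja)])

def step(state, joint_action):
    # Hypothetical environment: reward is highest when exactly one agent
    # accepts the urgent task; the state is the remaining queue length.
    accepted = sum(joint_action)
    reward = 1.0 if accepted == 1 else -0.5
    next_state = max(state - accepted, 0)
    return next_state, (reward, reward)    # shared reward for both agents

for episode in range(500):
    state = 5                              # five pending urgent tasks
    while state > 0:
        ja = choose_joint_action(state)
        next_state, rewards = step(state, ja)
        for i in range(2):
            # Nash Q-learning update: bootstrap from the (approximate)
            # Nash value of the next state's stage game.
            target = rewards[i] + GAMMA * nash_value(Q[i], next_state)
            Q[i][(state, ja)] += ALPHA * (target - Q[i][(state, ja)])
        state = next_state
```

In a general-sum setting the `nash_value` step would instead solve for an equilibrium of the stage game at the next state, which is where the game-theoretic machinery of the paper comes in.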


