
Enhancing Wildlife Monitoring in Variable Network Coverage Areas with Deep RL-based SD-WANs

Borgianni L.;Bua C.;Ghadir S.;Ghadir D.;Giordano S.
2024-01-01

Abstract

Interactions between humans and wildlife are critical and call for innovative solutions to mitigate potential conflicts. The regions involved often suffer from inconsistent and limited network coverage, which is essential for real-time surveillance and immediate intervention. The success of services such as wildlife monitoring, poaching prevention, and emergency response depends heavily on reliable, continuous connectivity. Software-Defined Wide-Area Networking (SD-WAN) is a promising technology that allows different access communication technologies to be integrated at low cost. In this study, we apply Reinforcement Learning (RL) techniques to SD-WAN tunnel selection to improve network performance and reliability in rural areas. We implemented a custom SD-WAN environment to simulate network conditions and used a Deep Q-Network agent to learn an optimal tunnel selection policy in a typical variable-coverage scenario, where traditional Traffic Engineering methods struggle in such a dynamic environment.
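The approach summarized in the abstract — a simulated SD-WAN environment in which an RL agent learns which tunnel to use as link quality varies — could be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the paper uses a Deep Q-Network, whereas this stand-in uses tabular Q-learning over a coarse discretized state, and all names and parameters (`TunnelEnv`, the latency ranges, the reward definition) are assumptions made for illustration.

```python
import random

class TunnelEnv:
    """Toy SD-WAN environment: each candidate tunnel's latency drifts over time,
    mimicking variable coverage. Lower latency is better."""
    def __init__(self, n_tunnels=3, seed=0):
        self.rng = random.Random(seed)
        self.n_tunnels = n_tunnels
        self.reset()

    def reset(self):
        self.latency = [self.rng.uniform(20, 200) for _ in range(self.n_tunnels)]
        return self._state()

    def _state(self):
        # Coarse observation: index of the currently best (lowest-latency) tunnel.
        return min(range(self.n_tunnels), key=lambda i: self.latency[i])

    def step(self, action):
        reward = -self.latency[action]           # penalize the chosen tunnel's latency
        for i in range(self.n_tunnels):          # link quality drifts each step
            self.latency[i] = min(300, max(10, self.latency[i] + self.rng.uniform(-15, 15)))
        return self._state(), reward

def train(episodes=200, steps=50, alpha=0.1, gamma=0.9, eps=0.1):
    """Epsilon-greedy tabular Q-learning (a simplified stand-in for the paper's DQN)."""
    env = TunnelEnv()
    q = [[0.0] * env.n_tunnels for _ in range(env.n_tunnels)]  # Q[state][action]
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a = env.rng.randrange(env.n_tunnels) if env.rng.random() < eps \
                else max(range(env.n_tunnels), key=lambda i: q[s][i])
            s2, r = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# With this coarse state (index of the lowest-latency tunnel), the learned
# greedy policy should tend to pick the tunnel the state points at.
policy = [max(range(3), key=lambda a: q[s][a]) for s in range(3)]
print(policy)
```

A real SD-WAN controller would observe richer state (per-tunnel latency, loss, jitter) and need a function approximator such as a DQN, since the state space is continuous; the tabular version above only conveys the tunnel-selection loop.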
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1347651
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 3
  • Web of Science: 3