
Leveraging Explainability of AI-Based Intrusion Detection Systems in a Federated Environment

Pietro Ducange; Michela Fazzolari; Francesco Marcelloni; Giustino Claudio Miglionico
2025-01-01

Abstract

The increasing complexity of cyber threats has necessitated the adoption of Intrusion Detection Systems (IDSs), which often rely on Artificial Intelligence (AI) models to detect malicious activities in network traffic. In distributed environments, interconnected private sub-networks pose additional challenges, as threats can spread while evading local detection. However, the use of AI models in these systems raises issues related to transparency, privacy, and data security. In this context, this study proposes a decentralized IDS based on Federated Learning (FL) and Explainable Artificial Intelligence (XAI) techniques. The network traffic classification model is based on a Multi-Layer Perceptron (MLP) neural network, while the SHapley Additive exPlanations (SHAP) method is employed to provide interpretable explanations of the system's decisions. Fed-SHAP, based on federated fuzzy c-means clustering, is used to build a global SHAP background dataset from which consistent and reliable explanations can be generated. The system is evaluated in realistic scenarios with non-Independent and Identically Distributed (non-IID) network data. Experimental results demonstrate that the proposed federated approach achieves accuracy comparable to that of centralized models while ensuring data protection and explanation consistency in federated environments.
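The abstract states that Fed-SHAP derives a global SHAP background dataset via federated fuzzy c-means clustering. As a rough illustration of the underlying idea — not the paper's implementation, which is federated across clients — the following sketch runs plain (local) fuzzy c-means and uses the resulting centroids as a compact background set; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Local fuzzy c-means; the c centroids can serve as a compact
    SHAP background dataset. Illustrative sketch only -- the paper's
    Fed-SHAP runs a federated variant across private clients."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random membership matrix U: rows sum to 1 (soft assignments)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # weighted centroids: v_j = sum_i u_ij^m x_i / sum_i u_ij^m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances of every sample to every centroid, clipped to avoid /0
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :])
                       ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centroids, U

# usage: two well-separated clusters of "traffic" feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
background, memberships = fuzzy_c_means(X, c=2)
```

The c centroids summarize the data distribution in a handful of points, which is exactly the role a SHAP background dataset plays (e.g. as the `data` argument of `shap.KernelExplainer`); sharing centroids instead of raw samples is what makes a federated construction privacy-friendly.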
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1344232
Warning! The data shown have not been validated by the university.
