Fed-XAI: Federated Learning of Explainable Artificial Intelligence Models

José Luis Corcuera Bárcena;Mattia Daole;Pietro Ducange;Francesco Marcelloni;Alessandro Renda;Fabrizio Ruffini;Alessio Schiavo
2022-01-01

Abstract

The current era is characterized by the increasing pervasiveness of applications and services based on data processing, often built on Artificial Intelligence (AI) and, in particular, Machine Learning (ML) algorithms. Extracting insights from data has become so common in the daily life of individuals, companies, and public entities, and so relevant for market players, that it has become an important matter of interest for institutional organizations; indeed, ad hoc regulations have been proposed. One important aspect is the capability of applications to address the data privacy issue. Additionally, depending on the specific application field, paramount importance is given to the possibility for humans to understand why a certain AI/ML-based application provides a specific output. In this paper, we discuss the concept of Federated Learning of eXplainable AI (XAI) models, in short Fed-XAI, purposely designed to address these two requirements simultaneously: AI/ML models are trained with the simultaneous goals of preserving data privacy (the Federated Learning (FL) side) and ensuring a certain level of explainability of the system (the XAI side). We first introduce the motivations underlying FL and XAI, along with their basic concepts; then, we discuss the current status of this field of study, providing a brief survey of approaches, models, and results. Finally, we highlight the main future challenges.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1161330