Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson’s Disease Progression

Corcuera Barcena J. L.; Ducange P.; Marcelloni F.; Renda A.; Ruffini F.
2023-01-01

Abstract

Services based on Artificial Intelligence (AI) are becoming increasingly pervasive in our society. At the same time, we are witnessing a growing awareness of the ethical aspects and the trustworthiness of AI tools, especially in high-stakes domains such as healthcare. In this paper, we propose the adoption of AI techniques for predicting Parkinson’s Disease progression with the overarching aim of meeting the urgent need for trustworthiness. We address two key requirements of trustworthy AI, namely privacy preservation in learning AI models and their explainability. Regarding the former, we consider the rather common case of medical data originating from different health institutions, assuming that the data cannot be shared due to privacy concerns. To address this constraint, we leverage federated learning (FL) as a paradigm for collaborative model training among multiple parties without any disclosure of private raw data. Regarding the latter, we focus on highly interpretable models, i.e., models whose decisions humans can understand. An extensive experimental analysis carried out on the well-known Parkinson Telemonitoring dataset shows that the proposed approach, based on FL of fuzzy rule-based systems, achieves data privacy and interpretability simultaneously. Results are reported for different data partitioning scenarios, and the interpretable-by-design model is also compared with an opaque neural network model.
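
The abstract describes federated learning only at a conceptual level, as collaborative training without exchanging raw patient records. Purely as an illustrative sketch, and not the paper's actual procedure (which federates fuzzy rule-based systems rather than a generic parameter vector), a FedAvg-style server-side aggregation of locally trained parameters might look as follows; the function and variable names here are hypothetical:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style aggregation).

    client_params: list of 1-D NumPy arrays of identical shape, one per client.
    client_sizes:  list of local training-set sizes, used as aggregation weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()             # normalize weights to sum to 1
    stacked = np.stack(client_params)    # shape: (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)

# Hypothetical example: three institutions, each holding a different number of records
local_models = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
local_sizes = [120, 80, 200]
global_model = federated_average(local_models, local_sizes)
print(global_model)  # aggregated global parameters; raw local data never leaves a client
```

In such a scheme only model parameters leave each institution, which is what enables the collaboration described in the abstract without disclosing private raw data.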
2023
ISBN: 978-3-031-44063-2
ISBN: 978-3-031-44064-9


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1214148

Citations
  • PubMed Central: not available
  • Scopus: 2
  • Web of Science: 0