Increasing trust in AI through privacy preservation and model explainability: Federated Learning of Fuzzy Regression Trees

Corcuera Barcena, J. L.; Ducange, P.; Marcelloni, F.; Renda, A.
2025-01-01

Abstract

Federated Learning (FL) lets multiple data owners collaborate in training a global model without any violation of data privacy, a crucial requirement for enhancing users’ trust in Artificial Intelligence (AI) systems. Despite the significant momentum recently gained by the FL paradigm, most existing approaches in the field neglect another key pillar of the trustworthiness of AI systems, namely explainability. In this paper, we propose a novel approach for FL of fuzzy regression trees (FRTs), which are generally acknowledged as highly interpretable by-design models. The proposed FL procedure is designed for the scenario of horizontally partitioned data and is based on the transmission of aggregated statistics from the clients to a central server, which runs the tree induction procedure. We show that the proposed approach faithfully approximates the ideal case in which the tree induction algorithm is applied to the union of all local datasets, while still ensuring privacy preservation. Furthermore, the FL approach improves generalization capability compared to the local learning setting, in which each participant learns its own FRT based only on its private, local dataset. The adoption of linear models in the leaf nodes ensures a competitive level of performance, as assessed by an extensive experimental campaign on benchmark datasets. The analysis of the results covers both the accuracy and the interpretability of the FRTs. Finally, we discuss the application of the proposed federated FRT to the task of Quality of Experience forecasting in an automotive case study.
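The abstract does not disclose which aggregated statistics the clients transmit, so the following Python sketch is only illustrative. It assumes variance-reduction-based tree induction over a fixed fuzzy partition of each feature, with clients sending membership-weighted counts and target moments; all function names and the triangular partition are our own assumptions, not the paper's specification.

```python
import numpy as np

def triangular(a, b, c):
    """Triangular membership function with core at b and support [a, c]."""
    def mu(x):
        x = np.asarray(x, dtype=float)
        rise = np.clip((x - a) / (b - a), 0.0, 1.0)
        fall = np.clip((c - x) / (c - b), 0.0, 1.0)
        return np.minimum(rise, fall)
    return mu

def local_split_statistics(X, y, feature_idx, fuzzy_sets):
    """Client side: membership-weighted target statistics for each candidate
    fuzzy set on one feature. Only these aggregates -- never the raw
    samples -- would leave the client."""
    stats = []
    for fs in fuzzy_sets:
        mu = fs(X[:, feature_idx])            # membership of each local sample
        stats.append((mu.sum(),               # fuzzy cardinality
                      (mu * y).sum(),         # weighted sum of targets
                      (mu * y ** 2).sum()))   # weighted sum of squared targets
    return stats

def server_weighted_variance(per_client_stats):
    """Server side: merge the clients' statistics per fuzzy set and score the
    candidate split by the membership-weighted variance of the child nodes,
    as in variance-reduction tree induction (lower is better)."""
    merged = [tuple(map(sum, zip(*client_tuples)))
              for client_tuples in zip(*per_client_stats)]
    total = sum(n for n, _, _ in merged)
    score = 0.0
    for n, s, ss in merged:
        if n > 0:
            score += (n / total) * (ss / n - (s / n) ** 2)  # E[y^2] - E[y]^2
    return score
```

Along the same lines, the linear models in the leaf nodes could be fitted without sharing raw data by aggregating membership-weighted normal-equation terms; again, this is an illustrative assumption rather than the paper's stated procedure.

```python
def local_leaf_gram(X, y, mu):
    """Client side: membership-weighted normal-equation terms for one leaf.
    Only the (d+1)x(d+1) Gram matrix and the (d+1)-vector are transmitted."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    return Xb.T @ (mu[:, None] * Xb), Xb.T @ (mu * y)

def server_fit_leaf(grams):
    """Server side: sum the clients' contributions and solve for the leaf's
    linear coefficients; a small ridge term keeps the system well conditioned."""
    A = sum(g for g, _ in grams)
    b = sum(v for _, v in grams)
    return np.linalg.solve(A + 1e-8 * np.eye(A.shape[0]), b)
```

Because sums of sufficient statistics commute with taking the union of datasets, the server's split scores and leaf coefficients in this sketch coincide with those computed on the pooled data, which is consistent with the abstract's claim that the federated procedure faithfully approximates centralized tree induction.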


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1270693
