Enhancing Privacy and Utility in Federated Learning: A Hybrid P2P and Server-Based Approach with Differential Privacy Protection

Corbucci L.; Monreale A.; Pellungrini R.
2024-01-01

Abstract

Federated Learning has recently been adopted in several contexts as a solution for training a Machine Learning model while preserving users' privacy. Even though it avoids data sharing among the participants involved in the training, it is commonly used in conjunction with a privacy-preserving technique such as Differential Privacy (DP) because of residual privacy risks. Unfortunately, the application of privacy protection strategies often degrades the model's performance. Therefore, in this paper, we propose a framework that allows the training of a collective model through Federated Learning using a hybrid architecture that enables clients to mix, within the same learning process, collaboration with (semi-)trusted entities and collaboration with untrusted participants. To reach this goal, we design and develop a process that exploits both the classic Client-Server and the Peer-to-Peer training mechanisms. To evaluate how our methodology impacts model utility, we present an experimental analysis using three popular datasets. Experimental results demonstrate the effectiveness of our approach in reducing the model accuracy degradation caused by the use of DP by up to 32% in some cases.
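To make the idea of the hybrid architecture more concrete, the following is a minimal, hypothetical Python sketch of one training round in which (semi-)trusted clients first aggregate their updates peer-to-peer within their group, while updates that reach untrusted parties are protected with DP-style clipping and noise. All function names, the group structure, and the exact placement of the noise are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_update(weights, client_data):
    """Placeholder for one round of local training (hypothetical).
    In practice this would run SGD on the client's private data."""
    return weights + 0.01 * np.random.randn(*weights.shape)

def p2p_group_average(client_updates):
    """Clients in a (semi-)trusted group average their updates peer-to-peer,
    so no individual update leaves the group in the clear."""
    return np.mean(client_updates, axis=0)

def add_dp_noise(update, clip_norm=1.0, noise_multiplier=1.0):
    """Gaussian-mechanism-style protection (clipping plus calibrated noise)
    applied only to what is shared with untrusted parties. Parameter values
    are illustrative."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return clipped + noise

def hybrid_round(global_weights, trusted_groups, untrusted_clients, data):
    """One round of the hybrid scheme: P2P aggregation inside trusted groups,
    DP-protected individual updates from untrusted clients, then
    FedAvg-style averaging at the (untrusted) server."""
    contributions = []
    for group in trusted_groups:
        updates = [local_update(global_weights, data[c]) for c in group]
        # Inside the group, updates are exchanged peer-to-peer without DP noise;
        # only the group aggregate sent to the server is protected.
        contributions.append(add_dp_noise(p2p_group_average(updates)))
    for c in untrusted_clients:
        # Clients interacting directly with the untrusted server protect
        # their individual update with DP.
        contributions.append(add_dp_noise(local_update(global_weights, data[c])))
    return np.mean(contributions, axis=0)
```

Because noise is added once per trusted group rather than once per client, the aggregate that reaches the server carries proportionally less perturbation, which is the intuition behind the reduced accuracy degradation reported in the abstract.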

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1309609