Can Contributing More Put You at a Higher Leakage Risk? The Relationship Between Shapley Value and Training Data Leakage Risks in Federated Learning

Zuziak, Maciej (Co-first author; Member of the Collaboration Group); Rinzivillo, Salvatore (Supervision)
2025-01-01

Abstract

Federated Learning (FL) is a crucial approach for training large-scale AI models while preserving data locality, eliminating the need for centralised data storage. In collaborative learning settings, ensuring data quality is essential, and in FL, maintaining privacy requires limiting the knowledge accessible to the central orchestrator, which evaluates and manages client contributions. Accurately measuring and regulating the marginal impact of each client's contribution requires specialised techniques. This work examines the relationship between one such technique, Shapley Values, and a client's vulnerability to membership inference attacks (MIAs). Such a correlation would suggest that the contribution index could reveal high-risk participants, potentially allowing a malicious orchestrator to identify and exploit the most vulnerable clients. Conversely, if no such relationship is found, it would indicate that contribution metrics do not inherently expose information exploitable for powerful privacy attacks. Our empirical analysis in a cross-silo FL setting demonstrates that leveraging contribution metrics in federated environments does not substantially amplify privacy risks.
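For readers unfamiliar with the contribution index named in the abstract: the Shapley Value is the standard cooperative-game-theoretic measure of a participant's marginal contribution. As a sketch of how it applies here (the choice of utility function is an assumption, not taken from this abstract), for a federation N of clients and a coalition utility v(S), such as the validation accuracy of the model aggregated from the clients in S, the contribution of client i is the weighted average of its marginal gains over all coalitions:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
\Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```

Computing this exactly requires evaluating v over all 2^|N| coalitions, which is why cross-silo settings with few clients, as studied in this work, are the typical scenario for exact Shapley-based contribution evaluation.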
Files for this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1334532
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a