Federated Objective: Assessing Client Truthfulness in Federated Learning

Garofalo, Marco; Villari, Massimo
2024-01-01

Abstract

Federated Learning (FL) aims to train artificial intelligence models without sharing private raw data, thereby preserving privacy and security. Typically, all participating FL clients are assumed to act honestly toward developing an accurate model. However, some clients may behave deceptively, manipulating their data to bias the model's predictions and degrade its generalization ability. This paper addresses fairness in FL from the perspective of client truthfulness. We introduce Federated Objective (FedObj), a novel aggregation method designed to minimize the impact of malicious clients and thereby improve the model's robustness to such behavior. Our results show that FedObj achieves state-of-the-art performance in standard scenarios and outperforms conventional strategies when deceptive clients are involved. Because it is resilient to the misleading practices of malicious FL clients, FedObj is a valuable approach for the collaborative development of trustworthy and fair AI systems.
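To make the aggregation problem concrete: this record does not spell out FedObj's internal mechanism, so the sketch below is only a minimal, hypothetical illustration of the class of robust aggregators the abstract describes, using a coordinate-wise trimmed mean as a stand-in for FedObj's actual rule. All names and parameters here (fedavg, trimmed_mean_aggregate, trim_ratio) are illustrative assumptions, not the paper's API.

# Illustrative sketch only: a generic robust aggregator (coordinate-wise
# trimmed mean) contrasted with plain FedAvg. This is NOT FedObj itself,
# whose mechanism is not described in this record.
import numpy as np

def fedavg(client_updates, client_sizes):
    """Standard FedAvg: average client updates weighted by dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(client_updates), axis=0, weights=weights)

def trimmed_mean_aggregate(client_updates, trim_ratio=0.2):
    """Drop the largest and smallest trim_ratio fraction of values per
    coordinate before averaging, bounding the influence any single
    (possibly deceptive) client can exert on the global model."""
    stacked = np.sort(np.stack(client_updates), axis=0)  # sort per coordinate
    k = int(len(client_updates) * trim_ratio)
    trimmed = stacked[k:len(stacked) - k] if k > 0 else stacked
    return trimmed.mean(axis=0)

# Toy example: 8 honest clients plus 2 clients submitting poisoned updates.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(8)]
malicious = [np.full(4, 50.0) for _ in range(2)]  # grossly inflated updates
updates = honest + malicious

print("FedAvg:      ", fedavg(updates, [100] * 10))       # dragged toward 50
print("Trimmed mean:", trimmed_mean_aggregate(updates))   # stays near 1.0

The contrast shows why bounded-influence aggregation matters in this setting: a handful of deceptive clients can arbitrarily shift a weighted average, while a trimmed statistic discards the extreme updates before averaging.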

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1324849

Citations
  • Scopus: 2