Federated Objective: Assessing Client Truthfulness in Federated Learning
Garofalo, Marco Primo; Villari, Massimo
2024-01-01
Abstract
Federated Learning (FL) aims to train artificial intelligence models without sharing private raw data, thereby preserving privacy and security. Typically, it is assumed that all participating FL clients act honestly to develop an accurate model. However, some clients may behave deceptively, manipulating their data to bias the model's predictions and degrade its generalization ability. This paper addresses fairness in FL from the perspective of client truthfulness. We introduce Federated Objective (FedObj), a novel aggregation method designed to minimize the impact of malicious clients and thereby improve the model's robustness to such behavior. Our results show that FedObj achieves state-of-the-art performance in standard scenarios and outperforms conventional strategies when deceptive clients are involved. Because it is resilient to the misleading practices of malicious FL clients, FedObj is a valuable approach for the collaborative development of trustworthy and fair AI systems.
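Note: the abstract does not describe FedObj's actual aggregation rule. The sketch below is only a generic illustration of why robust aggregation matters against deceptive clients, contrasting plain FedAvg (a weighted mean, which a single poisoned update can shift arbitrarily) with a coordinate-wise median baseline. All names here (fedavg, median_aggregate) are hypothetical and are not the paper's method.

    # Minimal sketch, assuming client updates are flat NumPy parameter vectors.
    # Coordinate-wise median is a standard robustness baseline, NOT FedObj.
    import numpy as np

    def fedavg(client_updates, client_weights):
        """Standard FedAvg: weighted mean of client parameter vectors."""
        weights = np.asarray(client_weights, dtype=float)
        weights = weights / weights.sum()
        return np.average(np.stack(client_updates), axis=0, weights=weights)

    def median_aggregate(client_updates):
        """Coordinate-wise median: bounds the influence any single
        malicious client can exert on each parameter."""
        return np.median(np.stack(client_updates), axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(8)]
        malicious = [np.full(4, 100.0) for _ in range(2)]  # poisoned updates
        updates = honest + malicious
        print("FedAvg :", fedavg(updates, [1.0] * len(updates)))
        print("Median :", median_aggregate(updates))

With two of ten clients submitting inflated updates, the weighted mean is dragged to roughly 20.8 per coordinate, while the median stays near the honest value of 1.0.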


