
Beyond the Checklist: Rethinking Trustworthiness in AI Systems

Melis, Beatrice;
2025-01-01

Abstract

Trustworthiness is often operationalized through a defined set of requirements (e.g., robustness, reliability, transparency, explainability, fairness, accountability, privacy) that systems are expected to meet in order to be considered trustworthy. With the advent of Artificial Intelligence (AI), the concept of trustworthiness has evolved significantly, expanding beyond purely technical dimensions to encompass ethical and legal nuances. Despite these broadened considerations, the checklist-oriented approach, in which machines are deemed trustworthy upon fulfilling a predetermined set of criteria, remains prevalent. Although this approach might appear natural and beneficial, it risks oversimplifying the inherent complexity of trustworthiness. This paper examines that approach with a dual purpose. First, it argues that the conceptual ambiguity currently surrounding trustworthiness impedes effective interdisciplinary communication and risks promoting superficial compliance, i.e., "ethics-washing", whereby systems are labeled trustworthy primarily on the basis of their technical performance without genuinely addressing their ethical implications and broader societal impacts, or vice versa. Second, it argues that trustworthiness cannot be merely an intrinsic system attribute: it is fundamentally context-dependent, user-relative, and dynamically shaped within human-machine interactions. In this light, the paper also introduces the notion of perceived trustworthiness, which highlights the role of subjective evaluations shaped by contextual and individual factors; these aspects collectively influence trust in AI-powered systems. By integrating technical rigor, ethical awareness, and user-centric evaluation, the paper seeks to refine conceptual clarity, particularly within the domain of requirements engineering.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1332948
