Trustworthy AI for infrastructure monitoring: a blockchain-based approach

Severino, Fabio; Canciani, Andrea; Gervasi, Vincenzo; Natali, Agnese; Salvatore, Walter
2024-01-01

Abstract

In the field of Artificial Intelligence (AI), there is an increasing focus on enhancing trustworthiness, especially in critical sectors such as the management of civil infrastructure. This paper proposes a framework based on Hybrid Distributed Ledger Technology (Hybrid-DLT) as a technological solution for improving trustworthiness. We detail three applications in the sector of critical infrastructure maintenance: Explainable AI (XAI) for risk classification, structural defect recognition, and real-time monitoring through IoT. The proposed approach employs tamper-resistant ledgers to track key processes such as dataset collection, model training, and inference generation, thereby ensuring non-repudiation of recorded actions and enabling auditability. We demonstrate how this strengthens the explainability mechanisms of AI models and enables the production of verifiable data lineage and certified inferences. The framework can be applied to existing AI solutions, enhancing their trustworthiness.
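The abstract describes anchoring the AI lifecycle (dataset collection, model training, inference generation) on a tamper-resistant ledger to obtain non-repudiation and verifiable lineage. The paper's actual Hybrid-DLT implementation is not reproduced here; the following is a minimal, self-contained Python sketch of the general idea, assuming a simplified hash-chained log in place of a real distributed ledger. All names (ProvenanceLedger, record, verify) are hypothetical illustrations, not the authors' API.

import hashlib
import json
import time

def _digest(payload: dict) -> str:
    # Deterministic SHA-256 digest of a JSON-serializable record.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    # Append-only, hash-chained log of AI lifecycle events.
    # Each entry commits to the previous one, so later tampering breaks
    # verification -- a simplified stand-in for anchoring the same
    # digests on a Hybrid-DLT ledger as the paper proposes.

    def __init__(self):
        self.entries = []

    def record(self, event: str, artifact_hash: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "event": event,  # e.g. "dataset", "training", "inference"
            "artifact_hash": artifact_hash,
            "timestamp": time.time(),
            "prev": prev,
        }
        entry = {**body, "entry_hash": _digest(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Re-derive every hash and check that the chain is unbroken.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev"] != prev or _digest(body) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

# Example: commit the three lifecycle stages named in the abstract.
ledger = ProvenanceLedger()
ledger.record("dataset", hashlib.sha256(b"raw inspection images").hexdigest())
ledger.record("training", hashlib.sha256(b"model weights v1").hexdigest())
ledger.record("inference", hashlib.sha256(b"risk class + explanation").hexdigest())
assert ledger.verify()

Auditing a model's output then reduces to checking that the inference entry's chain verifies and that its artifact hash matches the delivered result; on an actual ledger the digests would additionally be timestamped and non-repudiable by third parties.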
Files associated with this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1284688
Warning: the data displayed have not been validated by the university.

Citations
  • Scopus: 0