Trustworthy AI for infrastructure monitoring: a blockchain-based approach
Severino, Fabio; Canciani, Andrea; Gervasi, Vincenzo; Natali, Agnese; Salvatore, Walter
2024-01-01
Abstract
In the field of Artificial Intelligence (AI), there is an increasing focus on enhancing trustworthiness, especially in critical sectors such as the management of civil infrastructure. This paper proposes the adoption of a framework based on Hybrid Distributed Ledger Technology (Hybrid-DLT) as a technological solution for improving trustworthiness. We detail three specific applications in the sector of critical infrastructure maintenance: Explainable AI (XAI) for risk classification, structural defect recognition, and real-time monitoring through IoT. The proposed approach employs tamper-resistant ledgers to track key processes such as dataset collection, model training, and inference generation, thereby ensuring non-repudiability of recorded actions and enabling auditability. We demonstrate how this strengthens the explainability mechanisms of AI models and enables the production of verifiable data lineage and certified inferences. Our framework can be applied to existing AI solutions, enhancing their trustworthiness.
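To make the abstract's core idea concrete, the sketch below illustrates one plausible way a tamper-evident, hash-chained ledger could record AI lifecycle events (dataset collection, model training, inference generation) so that any later alteration of a record breaks the chain and is detectable on audit. This is a minimal illustration under our own assumptions, not the paper's Hybrid-DLT framework: all class and function names (`ProvenanceLedger`, `_digest`, the event labels) are hypothetical, and a real deployment would also need digital signatures per actor to achieve non-repudiability rather than mere tamper evidence.

```python
import hashlib
import json
import time


def _digest(payload: dict) -> str:
    """Deterministic SHA-256 digest of a JSON-serializable payload."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class ProvenanceLedger:
    """Append-only record list; each record commits to its predecessor's hash."""

    def __init__(self):
        self.records = []

    def append(self, event_type: str, artifact_hash: str, actor: str) -> dict:
        record = {
            "index": len(self.records),
            "timestamp": time.time(),
            "event_type": event_type,        # e.g. "dataset", "training", "inference"
            "artifact_hash": artifact_hash,  # hash of the dataset/model/output itself
            "actor": actor,                  # who acted; true non-repudiability would
                                             # additionally require the actor's signature
            "prev_hash": self.records[-1]["hash"] if self.records else "0" * 64,
        }
        record["hash"] = _digest({k: v for k, v in record.items() if k != "hash"})
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and link; returns False if any record was altered."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if r["prev_hash"] != prev or r["hash"] != _digest(body):
                return False
            prev = r["hash"]
        return True


if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.append("dataset", hashlib.sha256(b"raw sensor data").hexdigest(), "lab-A")
    ledger.append("training", hashlib.sha256(b"model weights v1").hexdigest(), "lab-A")
    ledger.append("inference", hashlib.sha256(b"risk class: high").hexdigest(), "model-v1")
    assert ledger.verify()  # chain intact -> recorded lineage is auditable
```

In this toy model, auditability follows from `verify()` replaying the hash chain: a downstream party can confirm that a certified inference traces back, unmodified, to the recorded dataset and training events that produced it.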