Building neural networks’ latent space to extract instance-based explanations for sleep staging
Gagliardi, Guido;Alfeo, Antonio Luca;Cimino, Mario G. C. A.;Valenza, Gaetano;De Vos, Maarten
2025-01-01
Abstract
Sleep disorders and their diagnosis are a significant public health concern. Automated sleep stage classification using deep learning models has shown promising results, but these models often lack transparency and interpretability. In this study, we propose an eXplainable Artificial Intelligence (XAI) approach to enhance the interpretability of cutting-edge deep learning sleep stage classification models. The proposed approach consists of a three-step framework: (i) employing contrastive learning to order a neural network latent space based on input similarity; (ii) mining meaningful instances from that space; and (iii) explaining those instances via a customized XAI methodology. In this way, we extract human-comprehensible insights about the model's decision-making process, enhancing the applicability of the proposed approach in real-world clinical scenarios. The explanations provided highlight high- and low-representative sleep epochs of each sleep phase. These sleep epochs are analyzed considering both the single sleep epoch and the sequence of adjacent sleep epochs used for sleep phase classification. Our approach preserves the original model's performance, improves its interpretability, and confirms that the network's decision-making process is valid even from the perspective of a physician.
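To illustrate step (ii), instance mining in a contrastively ordered latent space, the sketch below ranks embedded sleep epochs of each class by cosine similarity to their class centroid: epochs closest to the centroid serve as high-representative instances, and those farthest serve as low-representative ones. This is a minimal sketch under assumptions of ours (centroid-based mining, cosine similarity, a generic `mine_representative_instances` helper); it is not the paper's exact procedure.

```python
import numpy as np

def mine_representative_instances(embeddings, labels, k=1):
    """For each class, rank instances by cosine similarity to the
    class centroid in the latent space. Returns, per class, the k
    most similar (high-representative) and k least similar
    (low-representative) instance indices."""
    # L2-normalise embeddings so dot products are cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    result = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = emb[idx].mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        sims = emb[idx] @ centroid          # cosine similarity to centroid
        order = np.argsort(sims)            # ascending similarity
        result[c] = {
            "high": idx[order[-k:]][::-1],  # closest to the centroid
            "low": idx[order[:k]],          # farthest from the centroid
        }
    return result
```

The mined high-representative epochs are prototypical of their sleep phase, while low-representative ones lie near class boundaries, which makes both informative targets for a subsequent XAI explanation step.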


