XAIMed: A Diagnostic Support Tool for Explaining AI Decisions on Medical Images
Mattia Daole, Pietro Ducange, Francesco Marcelloni, Giustino Miglionico, Alessandro Renda, Alessio Schiavo
2024-01-01
Abstract
Convolutional Neural Networks have demonstrated high accuracy in medical image analysis, but the opaque nature of such deep learning models hinders their widespread acceptance and clinical adoption. To address this issue, we present XAIMed, a diagnostic support tool specifically designed to be easy for physicians to use. XAIMed supports diagnostic processes involving the analysis of medical images through Convolutional Neural Networks. Besides the model prediction, XAIMed also provides visual explanations using four state-of-the-art eXplainable AI methods: LIME, RISE, Grad-CAM, and Grad-CAM++. These methods produce saliency maps that highlight the image regions most influential for a model's decision. We also introduce a simple strategy for aggregating the different saliency maps into a unified view, which reveals a coarse-grained level of agreement among the explanations. The application features an intuitive graphical user interface and is designed in a modular fashion, thus facilitating the integration of new tasks, new models, and new explanation methods.
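The abstract does not detail the aggregation strategy, so the following is only a minimal sketch of one plausible approach: normalize each method's saliency map, binarize it at a threshold, and count per-pixel votes to obtain a coarse agreement map. The function name, the threshold value, and the min-max normalization are illustrative assumptions, not XAIMed's actual implementation.

```python
import numpy as np

def aggregate_saliency_maps(maps, threshold=0.5):
    """Combine per-method saliency maps into a coarse agreement map.

    `maps` is a list of 2-D arrays (e.g. from LIME, RISE, Grad-CAM,
    Grad-CAM++), all resized to the same resolution as the input image.
    Each map is min-max normalized, binarized at `threshold`, and the
    binary masks are summed, so each pixel value counts how many methods
    mark that region as salient (0 .. len(maps)).
    """
    agreement = np.zeros_like(np.asarray(maps[0], dtype=np.float64), dtype=np.int32)
    for m in maps:
        m = np.asarray(m, dtype=np.float64)
        rng = m.max() - m.min()
        # Guard against constant maps to avoid division by zero.
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        agreement += (norm >= threshold).astype(np.int32)
    return agreement
```

Under these assumptions, regions where the count equals the number of methods indicate full agreement among the explanations, while low counts flag regions highlighted by only one or two methods.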