Explainability of a CNN for breast density assessment
Scapicchio C.; Lizzi F.; Fantacci M. E.
2021-01-01
Abstract
Deep neural network explainability is a critical issue in Artificial Intelligence (AI). This work aims to develop a method to explain a deep residual Convolutional Neural Network (CNN) that automatically classifies mammograms into breast density classes. Breast density, a risk factor for breast cancer, is defined as the amount of fibroglandular tissue relative to fat tissue visible on a mammogram. We studied the explainability of the classifier to understand the reasons behind its predictions, since, with its deep multi-layer structure, it acts as a black box. As there is no well-established method, we explored several possible analyses and visualization techniques. The main results were an improvement in classification accuracy and a contribution to assessing trust in the model, which is fundamental for a potential application in clinical practice.
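As an illustration of the kind of visualization technique the abstract refers to, the sketch below computes a Grad-CAM-style heatmap for a ResNet-based classifier, highlighting the image regions that most influence the predicted density class. This is a minimal, hypothetical example: the ResNet-18 backbone, the four-class head, the chosen target layer, and the input size are assumptions for illustration, not the configuration or the specific method used in the paper.

```python
# Illustrative sketch only: Grad-CAM-style visualization for a ResNet density
# classifier. Model, layer choice, and 4-class setup are assumptions, not the
# authors' actual configuration.
import torch
import torch.nn.functional as F
from torchvision import models


def grad_cam(model, image, target_layer, class_idx=None):
    """Return a normalized Grad-CAM heatmap for a single image of shape (1, C, H, W)."""
    activations, gradients = [], []

    def fwd_hook(module, inp, out):
        activations.append(out)                      # feature maps of the target layer

    def bwd_hook(module, grad_in, grad_out):
        gradients.append(grad_out[0])                # gradients w.r.t. those feature maps

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()      # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()

    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]                       # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)                   # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))          # weighted combination
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)                         # upsample to input size
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalize to [0, 1]
    return cam.squeeze().detach(), class_idx


if __name__ == "__main__":
    # Hypothetical setup: ResNet-18 adapted to 4 breast density classes.
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 4)
    dummy_mammogram = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed mammogram
    heatmap, predicted = grad_cam(model, dummy_mammogram, model.layer4)
    print(f"Predicted density class: {predicted}, heatmap shape: {tuple(heatmap.shape)}")
```

In practice the heatmap would be overlaid on the mammogram to check whether the network attends to fibroglandular tissue rather than to irrelevant regions, which is one way such visualizations can contribute to assessing trust in the model.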