LIME-Assisted Automatic Target Recognition With SAR Images: Toward Incremental Learning and Explainability

Amir Hosein Oveis; Elisa Giusti; Selenia Ghio; Giulio Meucci; Marco Martorella
2023-01-01

Abstract

Integrating an automatic target recognition (ATR) system into real-world applications presents a challenge, as the system may frequently encounter new samples from unseen classes. To overcome this challenge, it is necessary to adopt incremental learning, which enables the continuous acquisition of new knowledge while retaining previous knowledge. This article introduces a novel, multipurpose interpretability metric for ATR systems that employ synthetic aperture radar (SAR) images. The metric leverages the local interpretable model-agnostic explanations (LIME) algorithm, enhancing human decision-making by providing a secondary measure alongside the conventional classification score. In addition, the proposed metric is employed to analyze the robustness of convolutional neural networks by examining the impact of target features and irrelevant background correlations on recognition results. Finally, we demonstrate the effectiveness of the proposed metric in the context of incremental learning: by utilizing it to select exemplars, we obtain improved performance, showcasing the application potential of our methodology. The network is fine-tuned sequentially with unknown samples recognized by the OpenMax classifier and with exemplars from the old known classes, selected based on the proposed interpretability metric. The effectiveness of this approach is demonstrated using the publicly available MSTAR dataset.
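As a concrete illustration of the kind of score the abstract describes, the sketch below uses the `lime` Python package to measure how much of the positive LIME relevance for the predicted class falls on the target region rather than on the surrounding clutter. This is a hypothetical instantiation under stated assumptions, not the paper's exact metric: `classifier_fn`, `target_mask`, and the 0.5 overlap threshold are illustrative names and choices introduced here.

```python
# Hypothetical sketch of a LIME-based interpretability score for a SAR ATR
# classifier. Assumptions (introduced here, not taken from the paper):
# `classifier_fn` maps a batch of images to class probabilities, `target_mask`
# is a binary (H, W) array marking the target region, and the score is the
# share of positive LIME relevance lying on the target instead of background.
import numpy as np
from lime import lime_image

def interpretability_score(image, classifier_fn, target_mask, num_samples=1000):
    """Fraction of positive LIME relevance located on the target region."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,               # 2-D SAR chips are converted to RGB by lime itself
        classifier_fn,
        top_labels=1,        # explain only the predicted class
        hide_color=0,        # occluded superpixels are set to zero intensity
        num_samples=num_samples,
    )
    label = explanation.top_labels[0]
    segments = explanation.segments      # superpixel map, shape (H, W)
    on_target = total = 0.0
    for seg_id, weight in explanation.local_exp[label]:
        if weight <= 0:                  # keep only evidence *for* the class
            continue
        total += weight
        # count a superpixel as "target" if most of its pixels lie on the mask
        if target_mask[segments == seg_id].mean() > 0.5:
            on_target += weight
    return on_target / total if total > 0.0 else 0.0
```

With such a score in hand, the exemplar-selection step described above could, for instance, retain the highest-scoring images of each old known class as the replay memory used during fine-tuning, so that the retained samples are those whose classification rests on target features rather than on spurious background correlations.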
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11568/1302589
Note: the displayed data have not been validated by the University.

Citations
  • Scopus: 15
  • Web of Science: 11