
Matching the expert’s knowledge via a counterfactual-based feature importance measure

Antonio Luca Alfeo;Mario G. C. A. Cimino;Guido Gagliardi
2023-01-01

Abstract

To be employed in real-world applications, explainable artificial intelligence (XAI) techniques need to provide explanations that are comprehensible to experts and decision-makers with no machine learning (ML) background, thus allowing the ML model to be validated via their domain knowledge. To this aim, XAI approaches based on feature importance and counterfactuals can be employed, although both have limitations: the latter provide only local explanations, whereas the former can be very computationally expensive. A less computationally expensive global feature importance measure can be derived by considering the instances close to the model's decision boundary and analyzing how often minor changes in one feature's values affect the classification outcome. However, the validation of XAI techniques in the literature rarely employs application domain knowledge, due to the burden of formalizing it, e.g., providing some degree of expected importance for each feature. Still, given an ML model, it is difficult to determine whether an XAI technique may inject a bias into the explanation (e.g., overestimating or underestimating the importance of a feature) in the absence of such a ground truth. To address this issue, we test our feature importance approach both on UCI benchmark datasets and on real-world smart manufacturing data annotated by domain experts with the expected importance of each feature. Compared to the state of the art, the employed approach proves to be reliable and convenient in terms of computation time, as well as more concordant with the expected importance provided by the domain experts.
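The following is a minimal sketch, not the authors' exact method, of the idea described in the abstract: select the instances closest to the decision boundary, apply a small perturbation to one feature at a time, and score each feature by how often the predicted class flips. The function name `boundary_feature_importance` and the parameters `boundary_fraction` and `step` are illustrative assumptions, as is the use of a random forest on a UCI-style benchmark dataset.

```python
# Sketch of a boundary-focused, perturbation-based global feature importance measure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def boundary_feature_importance(model, X, boundary_fraction=0.2, step=0.1):
    """Score each feature by how often a small change in its value flips
    the prediction for instances close to the model decision boundary."""
    proba = model.predict_proba(X)
    margin = np.abs(proba[:, 1] - 0.5)          # small margin ~ near the boundary
    n_boundary = max(1, int(boundary_fraction * len(X)))
    near = X[np.argsort(margin)[:n_boundary]]   # instances closest to the boundary
    base_pred = model.predict(near)

    importance = np.zeros(X.shape[1])
    scale = X.std(axis=0)                       # feature-wise perturbation size
    for j in range(X.shape[1]):
        for direction in (+1, -1):
            perturbed = near.copy()
            perturbed[:, j] += direction * step * scale[j]
            flips = model.predict(perturbed) != base_pred
            importance[j] += flips.mean()       # fraction of flipped outcomes
    total = importance.sum()
    return importance / total if total > 0 else importance

# Example usage on a UCI benchmark dataset
X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)
scores = boundary_feature_importance(clf, X)
print(sorted(enumerate(scores), key=lambda t: -t[1])[:5])  # top-5 features
```

Because only the boundary instances are perturbed and each feature is changed independently, the cost grows linearly with the number of features and boundary instances, which is the source of the computational advantage claimed over exhaustive counterfactual search.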

Use this identifier to cite or link to this item: https://hdl.handle.net/11568/1217098