Explaining Deep Graph Networks via Input Perturbation
Bacciu, Davide; Numeroso, Danilo
2023-01-01
Abstract
Deep graph networks (DGNs) are a family of machine learning models for structured data that are finding increasing application in the life sciences (drug repurposing, molecular property prediction) and on social network data (recommendation systems). The privacy- and safety-critical nature of such domains motivates the need for effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and DGNs. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph whose prediction we seek to interpret. These perturbed data points are obtained by optimizing a multiobjective score that accounts for similarity both at the structural level and at the level of the deep model's outputs. In this way, we are able to populate a set of informative neighboring samples for the query graph, which is then used to fit an interpretable model that approximates the predictive behavior of the deep network locally around the query graph. We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOX21 and Estimated SOLubility (ESOL), and by quantitative results on a benchmark dataset for explanations, CYCLIQ.
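To make the pipeline sketched in the abstract concrete, the following is a minimal, self-contained Python sketch of the general idea: score perturbations of a query graph with a multiobjective combination of structural similarity and model-output similarity, then fit a simple interpretable model on the resulting neighborhood. All names here (structural_similarity, multiobjective_score, fit_local_surrogate, the edge-dropping perturbation, the Jaccard similarity, and the stub "deep model") are illustrative assumptions, not the paper's actual components; in particular, the paper uses a reinforcement learning policy to propose perturbations, which is replaced below by uniform random edge removal.

```python
# Illustrative sketch only: the DGN is stubbed, the RL perturbation
# policy is replaced by random edge dropping, and the surrogate is a
# one-feature weighted least-squares fit. None of these choices are
# taken from the paper itself.
from typing import Callable, FrozenSet, Tuple
import random

Edge = Tuple[int, int]
Graph = FrozenSet[Edge]  # a graph represented as an undirected edge set


def structural_similarity(g: Graph, h: Graph) -> float:
    """Jaccard overlap of edge sets: a cheap proxy for structural similarity."""
    union = g | h
    return len(g & h) / len(union) if union else 1.0


def output_similarity(f: Callable[[Graph], float], g: Graph, h: Graph) -> float:
    """Closeness of the deep model's scalar outputs on the two graphs."""
    return 1.0 - abs(f(g) - f(h))


def multiobjective_score(f: Callable[[Graph], float], query: Graph,
                         cand: Graph, alpha: float = 0.5) -> float:
    """Convex combination of structural and output-level similarity,
    rewarding perturbations that stay close to the query graph."""
    return (alpha * structural_similarity(query, cand)
            + (1 - alpha) * output_similarity(f, query, cand))


def perturb(g: Graph, rng: random.Random) -> Graph:
    """One-step perturbation: drop a random edge. The paper's RL policy
    would instead choose edits that maximize the multiobjective score."""
    edges = list(g)
    victim = edges[rng.randrange(len(edges))]
    return frozenset(e for e in edges if e != victim)


def fit_local_surrogate(samples) -> float:
    """Weighted least squares through the origin on one scalar feature
    (edge count), standing in for the interpretable local model."""
    num = sum(w * x * y for x, y, w in samples)
    den = sum(w * x * x for x, y, w in samples)
    return num / den if den else 0.0


if __name__ == "__main__":
    rng = random.Random(0)
    dgn = lambda g: min(1.0, len(g) / 10.0)  # stub "deep model"
    query = frozenset({(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)})

    # Populate a neighborhood of scored perturbations of the query graph.
    neighborhood = []
    for _ in range(20):
        cand = perturb(query, rng)
        w = multiobjective_score(dgn, query, cand)
        neighborhood.append((float(len(cand)), dgn(cand), w))

    slope = fit_local_surrogate(neighborhood)
    print(f"local surrogate slope w.r.t. edge count: {slope:.3f}")
```

The design mirrors LIME-style local surrogates: the multiobjective score acts as a sample weight, so perturbations that are structurally close to the query and elicit similar model outputs dominate the fit, and the surrogate's coefficients then serve as the local explanation.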