Logically explainable malware detection

Giannini, Francesco;
2024-01-01

Abstract

Malware detection is a challenging application due to the rapid evolution of attack techniques, and traditional signature-based approaches struggle with the high volume of malware samples. Machine learning approaches can overcome this limitation, but lack clear interpretability, whereas interpretable models often underperform. This paper proposes the use of Logic Explained Networks (LENs), a recently proposed class of interpretable neural networks that provide explanations as First-Order Logic rules, for malware detection. Applied to the EMBER dataset, LENs show robustness superior to traditional interpretable methods and performance comparable to black-box models. Additionally, we introduce a tailored LEN version that improves the fidelity of logic-based explanations.
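To make the idea of logic-based explanations concrete, the sketch below shows, in a highly simplified form, how a First-Order-Logic-style rule can be distilled from a linear scorer over boolean features. This is an illustration in the spirit of Logic Explained Networks, not the paper's actual model; the feature names, weights, and thresholding scheme are all hypothetical.

```python
# Illustrative sketch (not the paper's implementation): turning the
# large-magnitude weights of a linear malware scorer over 0/1 features
# into a conjunction of logic literals, LEN-style.
# All feature names and weights below are hypothetical.

def extract_rule(weights, names, threshold=0.5):
    """Keep only features whose weight magnitude exceeds `threshold`;
    positive weights become positive literals, negative weights
    become negated literals, joined as a conjunction implying malware."""
    literals = []
    for w, name in zip(weights, names):
        if w >= threshold:
            literals.append(name)            # positive literal
        elif w <= -threshold:
            literals.append(f"NOT {name}")   # negated literal
    return " AND ".join(literals) + " -> malware"

def predict(sample, weights, bias=0.0):
    """Linear score over 0/1 features; positive score means malware."""
    score = bias + sum(w * x for w, x in zip(weights, sample))
    return score > 0

# Hypothetical EMBER-like boolean features of a PE file.
names = ["has_packed_sections", "imports_crypto_api",
         "valid_signature", "high_entropy_overlay"]
weights = [1.2, 0.8, -1.5, 0.3]

print(extract_rule(weights, names))
# -> has_packed_sections AND imports_crypto_api AND NOT valid_signature -> malware
```

In an actual LEN the relevance of each feature is learned jointly with the classifier (e.g. via an entropy-based attention over inputs) rather than read off post hoc from linear weights, but the output has the same shape: a compact logic formula that a human analyst can inspect.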


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1347032