Logically explainable malware detection
Giannini, Francesco;
2024-01-01
Abstract
Malware detection is a challenging application due to the rapid evolution of attack techniques: traditional signature-based approaches struggle with the high volume of new malware samples. Machine learning approaches can cope with this volume, but lack clear interpretability, whereas interpretable models often underperform. This paper proposes the use of Logic Explained Networks (LENs), a recently proposed class of interpretable neural networks that provide explanations in terms of First-Order Logic rules, for malware detection. Applied to the EMBER dataset, LENs show robustness superior to traditional interpretable methods and performance comparable to black-box models. Additionally, we introduce a tailored LEN version that improves the fidelity of the logic-based explanations.


