Artificial Intelligence in Crime Prediction: A Survey With a Focus on Explainability

Ersöz, Filiz; Ersöz, Taner; Marcelloni, Francesco; Ruffini, Fabrizio
2025-01-01

Abstract

Crime prediction has become a valuable tool for enhancing predictive policing, enabling law enforcement agencies to allocate resources more effectively and implement proactive crime prevention strategies, particularly in high-crime areas. The use of artificial intelligence (AI) has revolutionized this field by analyzing vast amounts of data to identify patterns and anticipate criminal activities with unprecedented accuracy. This paper aims to review the literature on AI-based crime prediction, analyzing 142 studies that focus on crimes against individuals, society, and property. Despite the promising potential of AI in crime prediction, significant challenges remain, particularly regarding the trustworthiness of AI systems, which is essential for their social acceptance. To address these issues, this review explores the explainability of AI-based prediction models, with a specific focus on the role of explainable AI (XAI). The findings highlight the importance of XAI in building trust in these models by offering more transparent and interpretable insights into how AI systems make decisions. However, the review also reveals that the integration of XAI remains underdeveloped in the current literature. By improving the transparency of AI systems, XAI has the potential to lead to more accurate, trustworthy, and fair crime predictions, ultimately facilitating more effective and equitable crime prevention efforts.
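
To make concrete what "transparent and interpretable insights" can look like in practice, the sketch below applies one common post-hoc XAI technique, permutation feature importance, to a synthetic crime-occurrence classifier. It is a minimal illustration only: the feature names, data, model, and the use of scikit-learn are assumptions made for this example and are not taken from the surveyed studies.

# Illustrative sketch only: a minimal post-hoc XAI workflow on synthetic data.
# Feature names and data are hypothetical, not from the surveyed studies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular features describing an area/time window (all synthetic).
feature_names = ["hour_of_day", "population_density", "past_incidents_30d",
                 "unemployment_rate", "lighting_index"]
X = rng.random((2000, len(feature_names)))
# Synthetic target: "incident occurs", driven mainly by past incidents and hour.
y = ((0.6 * X[:, 2] + 0.3 * X[:, 0] + 0.1 * rng.random(2000)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box predictor standing in for an AI-based crime prediction model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation importance estimates how much each feature
# contributes to predictive performance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name:>22s}: {mean_imp:.3f}")

Ranked importances of this kind are one way an XAI layer can expose which inputs drive a model's predictions, supporting the kind of transparency and scrutiny the survey identifies as essential for trustworthy deployment.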
Files in this item:

Artificial_Intelligence_in_Crime_Prediction_A_Survey_With_a_Focus_on_Explainability.pdf

Open access

Type: Final published version
License: Creative Commons
Size: 6.93 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1325047
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science (ISI): 1