
Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach

Martini, Antonella
2021-01-01

Abstract

(a) Situation faced: Digital transformation in the insurance value chain is fostering the adoption of artificial intelligence, in particular deep learning methods, to improve and automate two relevant tasks in the claim management process: (i) sensitive data detection and anonymization and (ii) manipulation detection on images. The proposed approach is technically feasible, lightweight, and sufficiently scalable thanks to the properties offered by currently available cloud platforms, and it also yields an appreciable reduction in operational costs. (b) Action taken: Since well-established guidelines for insurance digitalization use cases requiring deep learning do not yet exist, we propose a customized data science workflow for designing and developing two prototypes that tackle (i) sensitive data detection and anonymization and (ii) manipulation detection on claim images. We propose a six-step method, implemented using deep convolutional neural networks in Keras and TensorFlow, that integrates seamlessly with the most frequently used cloud environments. During prototyping, several training and testing iterations were carried out, progressively fine-tuning the detection models until the desired performance was achieved. (c) Results achieved: The developed prototypes are able to (i) robustly anonymize claim images and (ii) robustly detect manipulations on claim images (robustness meaning that, from a statistical viewpoint, the declared performance level is preserved even under highly heterogeneous distributions of the input data). The technical realization relies on open-source software and on the availability of cloud platforms, the latter both for training and for scalability. This demonstrates the applicability of our methodology, given a reliable analysis of the available resources, including the preparation of an appropriate training dataset for the models.
(d) Lessons learned: The present work demonstrates the feasibility of the proposed deep learning-based six-step methodology for image anonymization and manipulation detection and discusses the challenges encountered and lessons learned during implementation. In particular, key lessons concern the importance of business translation, data quality, data preparation, and model training.
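The anonymization stage described in the abstract, where sensitive regions detected in a claim image are obscured, can be illustrated with a minimal NumPy sketch. The detection model itself (a Keras/TensorFlow convolutional network in the paper) is out of scope here; the function name, the (x0, y0, x1, y1) box format, and pixelation as the anonymization strategy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pixelate_regions(image, boxes, block=8):
    """Return a copy of `image` with each (x0, y0, x1, y1) box pixelated.

    Each box is assumed to come from a sensitive-data detector; inside it,
    every block x block tile is replaced by its mean value, so the original
    content in that region becomes illegible.
    """
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        region = out[y0:y1, x0:x1]  # view: writes land in `out`
        h, w = region.shape[:2]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                tile = region[by:by + block, bx:bx + block]
                tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(out.dtype)
    return out
```

Given boxes predicted by a detection model, each region is flattened to per-tile mean values while the rest of the image is left untouched, which matches the requirement that only sensitive content be altered.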
Andreozzi, Alessandra; Ricciardi Celsi, Lorenzo; Martini, Antonella

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1110718