Early Stroke Detection: A Mobile Application for Real-Time Stroke Diagnosis Using Video and Lightweight Deep Learning
Elhanashi, Abdussalam; Donati, Massimiliano; Saponara, Sergio
2025-01-01
Abstract
Stroke is a leading cause of disability and mortality worldwide, necessitating early detection for effective intervention. This study introduces a novel, mobile-enabled solution for early stroke detection, leveraging a lightweight deep learning (DL) approach to identify acute and non-acute stroke symptoms from facial features in real time. The proposed system utilizes the YOLOv8n model, a state-of-the-art object detection architecture, which has been fine-tuned on a custom dataset tailored for stroke-related facial anomalies. To ensure compatibility with resource-constrained devices, the trained YOLOv8n model was converted to TensorFlow Lite, a framework optimized for mobile deployment. The system is integrated into an Android mobile application using Flutter, a cross-platform development framework, enabling seamless execution and real-time video streaming from the device's camera. This cutting-edge implementation allows for continuous health monitoring, providing users with immediate feedback on potential stroke symptoms. The lightweight nature of the TensorFlow Lite model ensures efficient performance on mobile devices without compromising accuracy. Experimental results demonstrate the system's ability to detect stroke-related facial asymmetries and anomalies with high precision, making it a promising tool for early diagnosis and timely medical intervention. By combining advanced DL techniques with mobile technology, this work paves the way for accessible, real-time health monitoring solutions, particularly in remote or underserved areas where immediate medical attention is often unavailable.
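
To illustrate the pipeline the abstract describes (fine-tuning YOLOv8n on a custom facial-anomaly dataset and converting the result to TensorFlow Lite for mobile deployment), the following minimal Python sketch uses the public Ultralytics API. It is not the authors' code; the dataset path, class layout, and training hyperparameters are illustrative assumptions.

    # Minimal sketch of the described workflow (assumptions noted inline)
    from ultralytics import YOLO

    # Start from the pretrained YOLOv8 nano checkpoint
    model = YOLO("yolov8n.pt")

    # Fine-tune on a custom YOLO-format dataset; "stroke_faces/data.yaml" is a
    # hypothetical path whose classes would cover acute / non-acute facial anomalies
    model.train(data="stroke_faces/data.yaml", epochs=100, imgsz=640)

    # Export the trained weights to TensorFlow Lite for on-device inference;
    # the resulting .tflite file can then be bundled into a Flutter/Android app
    model.export(format="tflite")

On the mobile side, such a .tflite model is typically loaded through a TensorFlow Lite interpreter plugin in Flutter and fed frames from the device camera stream, which matches the real-time video analysis the abstract reports.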


