
Empowering Visual Navigation: A Deep-Learning Solution for Enhanced Accessibility and Safety Among the Visually Impaired

Leporini, Barbara;
2025-01-01

Abstract

Individuals with visual impairments face significant challenges when navigating their surroundings, particularly in tasks such as object identification and traversing unfamiliar spaces. Their needs are often inadequately addressed, resulting in applications that fail to meet their specific requirements. Traditional object detection models frequently lack the accuracy, speed, and efficiency this demographic requires. Recent Internet of Things (IoT) advancements, however, offer promising solutions, providing real-time guidance and alerts about potential hazards through IoT-enabled navigation apps and smart city infrastructure. This paper presents an extension of our MoSIoT framework that incorporates the YOLOv8 convolutional neural network for precise object detection and a specialized decision layer to improve environmental understanding. Advanced distance measurement techniques are also incorporated to provide crucial information on object proximity. Using transfer learning and robust regularization techniques, our model demonstrates increased efficiency and adaptability across diverse environments. Systematic evaluation shows significant improvements in object detection accuracy: mean Average Precision at 50% Intersection over Union (mAP50) rises from 0.44411 to 0.51809, and mAP50-95 from 0.24936 to 0.29586, ensuring reliable real-time feedback for safe navigation. These enhancements substantially strengthen the MoSIoT framework, improving accessibility, safety, independence, and mobility for users with visual impairments.
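As context for the reported metrics: mAP50 counts a detection as a true positive when its Intersection over Union (IoU) with a ground-truth box is at least 0.5, while mAP50-95 averages precision over IoU thresholds from 0.5 to 0.95. A minimal, self-contained sketch of the IoU computation underlying both metrics (the boxes and values below are illustrative only, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box partially overlapping a ground-truth box:
pred = (10, 10, 50, 50)
gt = (20, 20, 60, 60)
score = iou(pred, gt)
# This detection counts as a true positive at the mAP50 threshold
# only if score >= 0.5; here it falls short.
print(round(score, 3))  # → 0.391
```

Frameworks such as Ultralytics' YOLOv8 tooling compute these metrics internally during validation; the sketch above only makes the threshold logic explicit.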
2025
ISBN: 9789819605729; 9789819605736
Files in this record:
File: Empowering visual navigation.pdf (not available for download)
Type: Final published version
License: Non-public - private/restricted access
Size: 1.73 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1279327
Warning: the data displayed have not been validated by the university.
