Object detection and spatial coordinates extraction using a monocular camera for a wheelchair mounted robotic arm

Palla, Alessandro (Collaboration Group Member); Meoni, Gabriele (Collaboration Group Member); Fanucci, Luca (Collaboration Group Member)
2017-01-01

Abstract

In recent decades, smart power wheelchairs have been used by people with motor impairments to improve their autonomy, independence, and quality of life. The most recent power wheelchairs feature many technological devices, such as laser scanners for automatic obstacle detection or robotic arms for simple operations like pick and place. However, if a user with a motor impairment were able to control a very complex robotic arm, paradoxically they would not need it. For that reason, in this paper we present an autonomous control system based on Computer Vision algorithms which allows the user to interact with buttons or elevator panels via a robotic arm in a simple and easy way. The Scale-Invariant Feature Transform (SIFT) algorithm is used to detect and track buttons. Objects detected by SIFT are mapped into a three-dimensional reference system built with the Parallel Tracking and Mapping (PTAM) algorithm. Real-world coordinates are obtained using a Maximum-Likelihood estimator, fusing the PTAM coordinates with distance information provided by a proximity sensor. The visual servoing algorithm has been developed in the Robot Operating System (ROS) environment, in which the previous algorithms are implemented as separate nodes. Performance has been analyzed in a test scenario, obtaining good estimates of the real positions of the selected objects.
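As a rough illustration of the Maximum-Likelihood fusion step mentioned in the abstract: for two independent Gaussian measurements of the same distance, the ML estimate is the inverse-variance weighted mean. The sketch below is not from the paper; the function name and the noise figures are hypothetical, chosen only to show the idea of fusing a (noisier) PTAM-derived depth with a (more precise) proximity-sensor reading.

```python
def ml_fuse(z_ptam, var_ptam, z_prox, var_prox):
    """Maximum-likelihood fusion of two independent Gaussian
    measurements of the same quantity: the fused estimate is the
    inverse-variance weighted mean, and the fused variance is the
    inverse of the summed precisions."""
    w_ptam = 1.0 / var_ptam   # precision of the PTAM-derived depth
    w_prox = 1.0 / var_prox   # precision of the proximity sensor
    z_fused = (w_ptam * z_ptam + w_prox * z_prox) / (w_ptam + w_prox)
    var_fused = 1.0 / (w_ptam + w_prox)
    return z_fused, var_fused

# Hypothetical example: PTAM estimates 1.10 m with variance 0.04,
# the proximity sensor reads 1.00 m with variance 0.01.
z, var = ml_fuse(1.10, 0.04, 1.00, 0.01)
print(z, var)  # prints 1.02 0.008
```

Note that the fused estimate lands closer to the more precise sensor, and the fused variance is smaller than either input variance, which is the point of the fusion.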
2017
Palla, Alessandro; Frigerio, Alessandro; Meoni, Gabriele; Fanucci, Luca
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/908419
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0