
Modality-independent Classification of Action Features in the Human Brain

Handjaras, Giacomo; Bernardi, Giulio; Ricciardi, Emiliano; Pietrini, Pietro
2011-01-01

Abstract

Introduction: The human Mirror System (hMS) transforms visual information into motor knowledge and thus mediates the understanding of actions performed by others. This function relies on an abstract, supramodal sensory representation of motor acts: the hMS is also recruited when individuals receive cues about an ongoing action without any visual input (e.g., action sounds) [4], and it is activated by action sounds in congenitally blind individuals as well [8]. Here we applied Multi-Voxel Pattern Analysis (MVPA), recently used to discriminate Action Features (AF) in visual stimuli [7], to classify a set of hand-executed actions. Specifically, we tested the hypothesis that the classification of action features in the human brain is not strictly dependent on a given sensory modality but rather relies on pattern information in specific supramodal regions.

Methods: We used an fMRI (GE Signa 1.5T; TR 2.5 s, 21 5-mm axial slices, 128x128 pixels) sparse-sampling, six-run block design to examine neural activity in 8 congenitally blind (6F, 44±16 yrs) and 14 sighted (5F, 32±13 yrs) right-handed healthy volunteers while they alternated between the auditory and visual (sighted only) presentation of hand-executed action or environmental stimuli, and the execution of a "virtual" tool or object manipulation task (pantomime). After standard preprocessing with AFNI [2], BOLD responses to each stimulus were identified in cortical surface voxels. To separate action from environmental stimuli in blind (sounds only) and sighted (sounds and videos) subjects, we built three distinct linear Support Vector Machine (SVM) binary classifiers [5]. A Recursive Feature Elimination (RFE) algorithm was used to prune non-discriminative voxels [3]. To identify all voxels potentially contributing to a supramodal representation of actions, a knock-out procedure was implemented that removed all overlapping voxels across the discriminative maps of the three classifiers [1]. Finally, the three SVM classifiers were applied to these common voxels (a schematic sketch of this pipeline follows the Results).

Results: The three SVM classifiers, trained separately, identified the AF during learning with a mean leave-one-subject-out cross-validation accuracy (Acc) of 94% in each experimental condition. Furthermore, the three classifiers overall classified the motor pantomime as an action (recall [Rec] 75%). In an across-condition evaluation, only the video classifier was able to significantly classify auditory stimuli, both in the sighted (Acc 57%, p<0.01; Rec 66%) and in the blind group (Acc 55%, p<0.05; Rec 57%). The knock-out procedure removed the overlapping voxels across discriminative maps, located mainly in hMS-related areas (left inferior and superior parietal cortex, left ventral and dorsal premotor areas, middle temporal cortex) and in bilateral striate and extrastriate cortex, right temporo-parietal, dorsolateral and medial prefrontal cortex, bilateral precuneus and posterior cingulate cortex. After this step, the video classifier could no longer identify auditory stimuli in either sighted or blind subjects, although it was still able to classify video stimuli, indicating that it relied on a visual-specific AF. Finally, when restricted to the common voxels, the video classifier classified auditory stimuli with 62% accuracy in the sighted (p<0.0001; Rec 73%) and 63% accuracy in the blind group (p<0.0001; Rec 70%); in addition, the audio classifier trained on sighted subjects identified action videos (Acc 57%, p<0.01; Rec 61%).
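The following Python sketch illustrates the kind of pipeline described in the Methods: a linear SVM with RFE voxel pruning evaluated by leave-one-subject-out cross-validation, followed by a knock-out of the voxels shared across the discriminative maps of the three classifiers. It is a minimal illustration using scikit-learn rather than the SVMlight implementation cited in [5]; all variable names, data shapes and parameter values (e.g., the number of voxels retained by RFE) are assumptions, not the authors' code.

# Minimal sketch of the SVM + RFE + leave-one-subject-out pipeline (assumed names/values).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, recall_score

def train_and_map(X, y, subjects, n_voxels_kept=500):
    """Leave-one-subject-out evaluation of a linear SVM with RFE voxel pruning.

    X        : (n_trials, n_voxels) matrix of BOLD responses (cortical surface voxels)
    y        : (n_trials,) labels, 1 = action stimulus, 0 = environmental stimulus
    subjects : (n_trials,) subject identifiers defining the cross-validation folds
    Returns mean accuracy, mean recall, and a boolean mask of the voxels retained
    by RFE (the "discriminative map").
    """
    logo = LeaveOneGroupOut()
    accs, recs = [], []
    for train_idx, test_idx in logo.split(X, y, groups=subjects):
        # RFE iteratively removes the least discriminative voxels (10% per step)
        selector = RFE(LinearSVC(C=1.0, max_iter=5000),
                       n_features_to_select=n_voxels_kept, step=0.1)
        selector.fit(X[train_idx], y[train_idx])
        pred = selector.predict(X[test_idx])
        accs.append(accuracy_score(y[test_idx], pred))
        recs.append(recall_score(y[test_idx], pred))
    # Final discriminative map estimated on the full dataset
    final = RFE(LinearSVC(C=1.0, max_iter=5000),
                n_features_to_select=n_voxels_kept, step=0.1).fit(X, y)
    return np.mean(accs), np.mean(recs), final.support_

# Three independent binary classifiers (audio/blind, audio/sighted, video/sighted),
# assuming all datasets are resampled to a common cortical surface space so that
# voxel indices match across groups:
# acc_ab, rec_ab, map_ab = train_and_map(X_audio_blind, y_audio_blind, subj_blind)
# acc_as, rec_as, map_as = train_and_map(X_audio_sight, y_audio_sight, subj_sight)
# acc_vs, rec_vs, map_vs = train_and_map(X_video_sight, y_video_sight, subj_sight)
#
# Knock-out: the voxels shared by all three discriminative maps are the candidate
# supramodal set; removing them tests whether cross-modal classification survives,
# while restricting the classifiers to them tests the supramodal representation.
# common = map_ab & map_as & map_vs      # overlapping ("common") voxels
# knocked_out = ~common                  # voxels left after the knock-out
#
# Cross-modal ("across conditions") test, e.g. the video-trained classifier,
# restricted to the common voxels, evaluated on auditory response patterns:
# video_clf = LinearSVC(C=1.0, max_iter=5000).fit(X_video_sight[:, common], y_video_sight)
# acc_cross = accuracy_score(y_audio_sight, video_clf.predict(X_audio_sight[:, common]))

The same evaluation logic applies symmetrically to the audio-trained classifier tested on video patterns; only the training and test sets are exchanged.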
Conclusions: For the first time, an MVPA-based classifier successfully discriminated the neural "space" of action representation, extracted the AF from the perceived stimuli, and thus separated actions from non-action stimuli on the basis of a distributed representation in a network that includes the hMS, in both sighted and blind individuals and independently of the sensory modality of the stimuli. The finding that the concept of an action in the brain relies on a more abstract neural representation helps to explain how individuals deprived of sight since birth can learn from and interact effectively with others [6].

References:
1) Carlson TA (2003), ‘Patterns of Activity in the Categorical Representation of Objects’, J Cogn Neurosci, vol. 15, no. 5, pp. 704-717.
2) Cox RW (1996), ‘AFNI: software for analysis and visualization of functional magnetic resonance neuroimages’, Comput Biomed Res, vol. 29, pp. 162-173.
3) De Martino F (2008), ‘Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns’, Neuroimage, vol. 43, pp. 44-58.
4) Galati G (2008), ‘A selective representation of the meaning of actions in the auditory mirror system’, Neuroimage, vol. 40, pp. 1274-1286.
5) Joachims T (1999), ‘Making large-scale SVM learning practical’, in Schölkopf, Burges and Smola (eds.), Advances in Kernel Methods - Support Vector Learning, MIT Press.
6) Matteau I (2010), ‘Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals’, Brain Res Bull, vol. 82, no. 5-6, pp. 264-270.
7) Oosterhof NN (2010), ‘Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex’, J Neurophysiol, vol. 104, pp. 1077-1089.
8) Ricciardi E (2009), ‘Do we really need vision? How blind people "see" the actions of others’, J Neurosci, vol. 29, pp. 9719-9724.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/200780