Representing Verbs with Visual Argument Vectors
Irene Sucameli
Alessandro Lenci
2020-01-01
Abstract
Is it possible to use images to model verb semantic similarities? Starting from this core question, we developed two textual distributional semantic models and a visual one. We found it particularly interesting and challenging to investigate this part of speech, since verbs are rarely analysed in research on multimodal distributional semantics. After building the visual and textual distributional spaces, we evaluated the three models against SimLex-999, a gold-standard resource. Through this evaluation, we demonstrate that visual distributional models can extract meaningful information and effectively capture the semantic similarity between verbs.
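The evaluation the abstract refers to follows the standard protocol for comparing a distributional space against SimLex-999: compute a model similarity (typically cosine) for each verb pair, then correlate it with the human ratings using Spearman's rho. The sketch below illustrates this protocol only; the file names, the plain-text vector format, and the loader are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of the SimLex-999 verb evaluation: cosine similarity between
# verb vectors, Spearman-correlated with human ratings.
# Paths and vector format are hypothetical placeholders.

import numpy as np
from scipy.stats import spearmanr


def load_vectors(path):
    """Load whitespace-separated vectors: `word v1 v2 ... vn` per line (assumed format)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def evaluate_on_simlex(vectors, simlex_path="SimLex-999.txt"):
    """Spearman correlation between model similarities and SimLex-999 ratings,
    restricted to verb pairs (POS column == 'V')."""
    model_scores, gold_scores = [], []
    with open(simlex_path, encoding="utf-8") as f:
        header = f.readline().rstrip().split("\t")
        for line in f:
            row = dict(zip(header, line.rstrip().split("\t")))
            if row["POS"] != "V":
                continue
            w1, w2 = row["word1"], row["word2"]
            if w1 in vectors and w2 in vectors:
                model_scores.append(cosine(vectors[w1], vectors[w2]))
                gold_scores.append(float(row["SimLex999"]))
    rho, p = spearmanr(model_scores, gold_scores)
    return rho, p, len(model_scores)


if __name__ == "__main__":
    # "verb_vectors.txt" stands in for any of the three spaces
    # (two textual, one visual) described in the abstract.
    vecs = load_vectors("verb_vectors.txt")
    rho, p, n = evaluate_on_simlex(vecs)
    print(f"Spearman rho = {rho:.3f} (p = {p:.3g}) over {n} verb pairs")
```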
File | Description | Type | License | Size | Format
---|---|---|---|---|---
Sucameli-Lenci-2020-Representing-Verbs-with-Visual-Argument-Vectors-annotated.pdf (open access) | Main article | Final editorial version | Creative Commons | 245.29 kB | Adobe PDF