Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?
Enrico Santus; Alessandro Lenci
2020-01-01
Abstract
While neural embeddings are a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models. In this paper, we propose a complete evaluation of count models and word embeddings on thematic fit estimation, taking into account a larger number of parameters and verb roles and also introducing dependency-based embeddings in the comparison. Our results show a complex scenario, where a determinant factor for performance seems to be the availability to the model of reliable syntactic information for building the distributional representations of the roles.
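For readers unfamiliar with the task, thematic fit estimation is commonly operationalized in this literature with a prototype-based scheme: a verb role (e.g., the patient of "cut") is represented by the centroid of the vectors of its typical fillers, and a candidate noun is scored by its cosine similarity to that centroid. The sketch below illustrates this scheme only; the vectors, vocabulary, and filler lists are invented toy data, not the models or datasets evaluated in the paper.

```python
import numpy as np

# Toy embeddings standing in for count-based or neural vectors
# (random 8-dimensional vectors; purely illustrative).
rng = np.random.default_rng(0)
vocab = ["knife", "spoon", "scissors", "song", "bread"]
embeddings = {w: rng.normal(size=8) for w in vocab}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def thematic_fit(candidate, typical_fillers, embeddings):
    """Score a candidate filler for a verb role: cosine similarity to the
    prototype (centroid) of the role's typical fillers."""
    prototype = np.mean([embeddings[f] for f in typical_fillers], axis=0)
    return cosine(embeddings[candidate], prototype)

# Hypothetical example: how well does "knife" fit a role whose typical
# fillers are "knife", "spoon", and "scissors"? (toy vectors, so the
# absolute score is meaningless; only the scheme matters)
score = thematic_fit("knife", ["knife", "spoon", "scissors"], embeddings)
```

In practice, the typical fillers are extracted from a parsed corpus, which is why the quality of the available syntactic information matters for building the role representations.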