Continual Incremental Language Learning for Neural Machine Translation
Michele Resta; Davide Bacciu
2022-01-01
Abstract
The paper provides an experimental investigation of the phenomenon of catastrophic forgetting in Neural Machine Translation systems. We introduce and describe the continual incremental language learning setting and its analogy with the classical continual learning scenario. The experiments measure the performance loss of a naive incremental training strategy against a jointly trained baseline, and we show the mitigating effect of the replay strategy. To this end, we also introduce a prioritized replay buffer strategy informed by the specific application domain.
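The record does not include the paper's code. As an illustration only of the replay idea the abstract mentions, the sketch below shows a minimal prioritized replay buffer: past (source, target) sentence pairs are stored with a priority score and sampled proportionally to it, so that a fraction of each new-language training batch revisits earlier languages. The class name, priority scheme, and eviction rule are all hypothetical assumptions, not the authors' actual method.

```python
import random


class PrioritizedReplayBuffer:
    """Hypothetical sketch: stores (source, target) sentence pairs with a
    priority score and samples them with probability proportional to it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (priority, example) pairs

    def add(self, example, priority=1.0):
        # When full, evict the lowest-priority example (one possible policy).
        if len(self.items) >= self.capacity:
            self.items.remove(min(self.items, key=lambda pair: pair[0]))
        self.items.append((priority, example))

    def sample(self, k):
        # Sample k examples (with replacement), weighted by priority.
        priorities = [p for p, _ in self.items]
        examples = [e for _, e in self.items]
        return random.choices(examples, weights=priorities, k=k)


# Usage sketch: mix replayed pairs from earlier languages into the
# current-language training batch.
buffer = PrioritizedReplayBuffer(capacity=3)
buffer.add(("hallo", "hello"), priority=5.0)
buffer.add(("welt", "world"), priority=1.0)
buffer.add(("gut", "good"), priority=1.0)
buffer.add(("tag", "day"), priority=2.0)  # evicts a priority-1.0 pair
replay_batch = buffer.sample(4)
```

In a continual setting one might raise the priority of examples the current model translates poorly, which is one way domain knowledge could inform the buffer; the abstract does not specify the actual criterion.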