
Adapting language models with continual learning for temporal drifts

Antonio Carta; Lucia C. Passaro
2026-01-01

Abstract

Large Language Models (LLMs) struggle to keep up with the fast-changing nature of real-world information, as their pre-trained knowledge quickly becomes outdated. This work addresses the challenge of keeping LLMs up to date with factual knowledge (adaptation) while avoiding forgetting the relevant existing knowledge. Leveraging temporally-aligned Wikipedia and Wikidata dumps, we extract a continuous data stream and evaluate the performance of an incrementally trained GPT-2 across different time periods. Additionally, we extend our analysis to real-world news data using the RealTimeData dataset, examining how LLMs respond to novel facts, such as the COVID-19 pandemic. Our methodology includes synthetic data generation and SmartReview, a continual learning strategy that avoids forgetting by rehearsing on a carefully selected subset of the old data. Experimental results highlight that pre-trained models require continual learning and demonstrate the effectiveness of replay-based approaches in mitigating forgetting. In particular, SmartReview provides a strong replay-based baseline that limits forgetting and enhances adaptation. This work advances the study of continual learning in LLMs, offering insights into the development of more temporally-aware and reliable AI systems.
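To make the replay idea concrete: a replay-based strategy like the one the abstract describes keeps a bounded buffer of past training examples and mixes a sample of them into each new batch, so the model rehearses old knowledge while adapting to the new stream. The sketch below is a minimal, generic illustration, not the paper's SmartReview implementation (its selection criterion for the "carefully selected subset" is not specified here); the names `ReplayBuffer` and `replay_batches`, the reservoir-sampling choice, and the `replay_ratio` parameter are all illustrative assumptions.

```python
import random


class ReplayBuffer:
    """Fixed-capacity buffer of past examples, filled via reservoir
    sampling so it holds a uniform sample of everything seen so far.
    (Illustrative: SmartReview's actual selection policy may differ.)"""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        k = min(k, len(self.buffer))
        return self.rng.sample(self.buffer, k)


def replay_batches(stream, buffer, replay_ratio=0.5, batch_size=8):
    """Yield training batches that mix new examples from the stream
    with replayed old examples drawn from the buffer."""
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    batch = []
    for example in stream:
        batch.append(example)
        if len(batch) == n_new:
            # Early on the buffer may hold fewer than n_replay examples,
            # so the first batches can be smaller than batch_size.
            yield batch + buffer.sample(n_replay)
            for ex in batch:
                buffer.add(ex)
            batch = []
```

Each yielded batch would then be fed to an ordinary language-model training step; only the batch composition changes, which is what makes replay an easy-to-adopt baseline against forgetting.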
2026. Carta, Antonio; Roberto Marinelli, Alberto; Passaro, Lucia C.


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1356948
