
Continually learn to map visual concepts to language models in resource-constrained environments

Hurtado, Julio; Passaro, Lucia; Lomonaco, Vincenzo
2025-01-01

Abstract

Continually learning from non-independent and identically distributed (non-i.i.d.) data poses a significant challenge in deep learning, particularly in resource-constrained environments. Visual models trained via supervised learning often suffer from overfitting, catastrophic forgetting, and biased representations when faced with sequential tasks. In contrast, pre-trained language models demonstrate greater robustness in managing task sequences due to their generalized knowledge representations, albeit at the cost of high computational resources. Leveraging this advantage, we propose a novel learning strategy, Continual Visual Mapping (CVM), which continuously maps visual representations into a fixed knowledge space derived from a language model. By anchoring learning to this fixed space, CVM enables training small, efficient visual models, making it particularly suited for scenarios where adapting large pre-trained visual models is computationally or data-prohibitive. Empirical evaluations across five benchmarks demonstrate that CVM consistently outperforms state-of-the-art continual learning methods, showcasing its potential to enhance generalization and mitigate challenges in resource-constrained continual learning settings.
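The core idea sketched in the abstract — training a small visual model against a fixed target space derived from a language model — can be illustrated with a minimal toy example. This is only an assumption-laden sketch, not the authors' implementation: the class anchors here are random unit vectors standing in for real text embeddings, the visual model is a single linear map, and the loss is a simple squared distance to the anchor as a proxy for a cosine-alignment objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "knowledge space": frozen per-class anchors. These stand in for
# embeddings produced by a pre-trained language model (in the paper they
# would come from an actual text encoder, not random vectors).
n_classes, dim, feat = 5, 16, 32
anchors = rng.normal(size=(n_classes, dim))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

# Small visual "model": one linear map from image features into the fixed
# language space (a stand-in for a compact visual encoder).
W = rng.normal(scale=0.1, size=(feat, dim))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def train_task(W, X, y, lr=0.5, steps=200):
    """Pull each mapped visual feature toward its class anchor.

    The anchors never move, so every task is learned against the same
    fixed target space -- the anchoring idea behind CVM."""
    for _ in range(steps):
        for x, c in zip(X, y):
            z = x @ W                      # map visual feature into anchor space
            grad = np.outer(x, z - anchors[c])  # grad of 0.5 * ||z - anchor||^2
            W = W - lr * grad / len(X)
    return W

def predict(W, x):
    """Classify by nearest anchor under cosine similarity."""
    z = x @ W
    return int(np.argmax([cosine(z, a) for a in anchors]))

# Two sequential "tasks" over disjoint classes: a toy non-i.i.d. stream.
X1, y1 = rng.normal(size=(20, feat)), rng.integers(0, 2, size=20)
X2, y2 = rng.normal(size=(20, feat)), rng.integers(2, 5, size=20)
W = train_task(W, X1, y1)
W = train_task(W, X2, y2)
```

Because the anchors are frozen, each new task is fitted against the same coordinate system rather than a drifting one; the paper's contribution is in making that mapping robust across tasks, which this sketch does not attempt to reproduce.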
Rebillard, Clea; Hurtado, Julio; Krutsylo, Andrii; Passaro, Lucia; Lomonaco, Vincenzo
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1321407
Warning: the data displayed here have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science (ISI): 0