Exploiting CLIP-Based Multi-modal Approach for Artwork Classification and Retrieval
Uricchio T.;
2022-01-01
Abstract
Given recent advances in multimodal image pretraining, where visual models trained with semantically dense textual supervision tend to generalize better than those trained with categorical attributes or through unsupervised techniques, in this work we investigate how the recent CLIP model can be applied to several tasks in the artwork domain. We perform exhaustive experiments on the NoisyArt dataset, a collection of artwork images gathered from public resources on the web. On this dataset CLIP achieves impressive results on (zero-shot) classification and promising results on both artwork-to-artwork and description-to-artwork retrieval.
| File | Size | Format | License |
|---|---|---|---|
| 2309.12110.pdf (open access, post-print) | 767.99 kB | Adobe PDF | All rights reserved |
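The zero-shot classification setting the abstract refers to can be illustrated with a minimal sketch of the CLIP mechanism: an image embedding is compared against one text embedding per class, and the most similar class wins. The embeddings, class names, and noise level below are toy assumptions for illustration, not the paper's actual model or data.

```python
import numpy as np

# Toy sketch of CLIP-style zero-shot classification (hypothetical random
# embeddings, not the actual CLIP model): the class whose text embedding is
# most similar to the image embedding is predicted.
rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize along the last axis, as CLIP does before comparison."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

class_names = ["Mona Lisa", "The Starry Night", "Girl with a Pearl Earring"]
text_emb = normalize(rng.normal(size=(3, 512)))  # one prompt embedding per class

# Simulate a query image whose embedding lies close to class 1.
image_emb = normalize(text_emb[1] + 0.1 * rng.normal(size=512))

sims = text_emb @ image_emb                              # cosine similarities
probs = np.exp(100 * sims) / np.exp(100 * sims).sum()    # temperature-scaled softmax
pred = class_names[int(np.argmax(probs))]
print(pred)  # → The Starry Night
```

In practice the embeddings would come from CLIP's image and text encoders, with each class name wrapped in a prompt such as "a photo of the artwork {title}"; the comparison step is exactly the normalized dot product shown here.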
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.