
A Dataset for Joint Conversational Search and Recommendation

Marco Alessio;Franco Maria Nardini;Raffaele Perego;
2025-01-01

Abstract

Conversational Information Access systems have seen widespread adoption due to the natural and seamless interactions they enable with the user. In particular, they provide an effective interaction interface for both Conversational Search (CS) and Conversational Recommendation (CR) scenarios. Despite their inherent similarities, current research frequently addresses CS and CR systems as distinct and isolated entities. Integrating these two capabilities would make it possible to address complex information access scenarios, such as exploring unfamiliar features of recommended products, leading to richer dialogues and enhanced user satisfaction. At present, the evaluation of systems that integrate CS and CR by design is severely hindered by the limited availability of comprehensive datasets that jointly address both tasks. To bridge this gap, we introduce CoSRec, the first dataset for joint Conversational Search and Recommendation (CSR) evaluation. The CoSRec test set includes 20 high-quality conversations, with human annotations of conversation quality and manually crafted relevance judgments for products and documents. In addition, we provide auxiliary training resources, including partially annotated dialogues and raw conversations, to support diverse learning paradigms. CoSRec is the first resource to model CS and CR tasks within a unified framework, facilitating the design, development, and evaluation of systems capable of dynamically alternating between answering user queries and offering personalized recommendations.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1342831