Video Quality Prediction: An Exploratory Study With Valence and Arousal Signals

Antonio Di Tecco; Pierfrancesco Foglia; Cosimo Antonio Prete
2024-01-01

Abstract

With the explosion of online video consumption, assessing and anticipating how users will evaluate the content they watch has become increasingly important. Traditional methods based on explicit user feedback are limited in this respect, as they can be time-consuming and expensive to collect. This study explores techniques for predicting users’ ratings of a video’s ability to evoke emotions from emotional signals. In particular, it proposes an emotional-analysis method that uses valence and arousal data as the key signals for predicting user ratings with machine-learning systems. An in-the-wild experiment involved 112 participants who completed questionnaires, producing a dataset of emotional data and video quality ratings used to train different intelligent systems. The best system was a Medium Gaussian Support Vector Machine (SVM) classifier that distinguished users’ ratings between Ineffective and Effective using valence and arousal features as input, achieving an accuracy above 87%. This result demonstrates that users’ ratings of a movie’s ability to elicit emotion can be predicted from their emotional states in terms of valence and arousal. The system has several advantages: it eliminates the need for user reports, predicts user ratings in real time more quickly and dynamically, and uses only the initial emotional state to predict users’ ratings. It also has potential applications in advertising, education, and entertainment. Advertisers could better understand how consumers perceive their products and create more effective advertising campaigns; educational institutions could develop more engaging and effective learning materials; and entertainment providers could create more popular and successful content.
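For illustration only, the sketch below shows how a classifier of this kind could be assembled: an RBF-kernel SVM (a scikit-learn analogue of MATLAB's "Medium Gaussian SVM" preset) trained on per-participant valence and arousal features and evaluated with cross-validated accuracy. The placeholder data, feature layout, and hyperparameters are assumptions for the sketch, not the authors' published pipeline or results.

```python
# Hedged sketch (not the paper's code): binary Ineffective/Effective rating
# prediction from valence/arousal features with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per participant (112 in the study), with
# assumed features [valence, arousal]; labels 0 = Ineffective, 1 = Effective.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(112, 2))   # simulated emotional signals
y = rng.integers(0, 2, size=112)            # simulated ratings, illustration only

# RBF-kernel SVC as a stand-in for MATLAB's "Medium Gaussian SVM" preset;
# the C and gamma values here are illustrative defaults, not the paper's settings.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# 5-fold cross-validated accuracy on the placeholder data.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```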

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1228287