Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
Cascione A.; Pollacci L.; Guidotti R.
2025-01-01
Abstract
Personalized moderation interventions in online social networks foster healthier interactions by adapting responses to both individual traits and contextual factors. However, implementing such interventions is challenging due to transparency concerns and the lack of ground-truth behavioral data from expert psychologists. Interpretability is crucial for addressing these challenges, as it enables platforms to tailor moderation strategies while ensuring fairness and user trust. In this paper, we present an unsupervised, data-driven framework for building an interpretable predictive model capable of distinguishing between toxic and non-toxic users with different personality traits. We leverage personality representations from an external resource to uncover behavioral profiles through clustering, using embeddings of both toxic and non-toxic users. We then model users with features capturing linguistic and affective dimensions, training an interpretable personality detector that distinguishes between behavioral profiles in a transparent and explainable manner. A case study on Reddit demonstrates the effectiveness of our approach, highlighting how an interpretable model can achieve performance comparable to that of a black-box alternative while offering meaningful insights into toxic and non-toxic user behavior.
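The abstract outlines a two-stage pipeline: unsupervised clustering of user embeddings to uncover behavioral profiles, followed by an interpretable classifier trained on transparent linguistic and affective features. The sketch below shows one plausible instantiation of that pipeline. The synthetic data, the placeholder feature names, and the specific choice of k-means plus a decision tree are illustrative assumptions, not the authors' implementation.

    # Illustrative two-stage pipeline: (1) cluster user embeddings to
    # discover behavioral profiles, (2) fit an interpretable detector on
    # transparent per-user features to predict those profiles.
    # All inputs here are synthetic stand-ins.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Hypothetical inputs: one personality embedding per user (e.g. from
    # an external resource) and a matrix of interpretable features
    # (linguistic and affective dimensions).
    n_users, emb_dim, n_features = 500, 64, 6
    embeddings = rng.normal(size=(n_users, emb_dim))
    features = rng.normal(size=(n_users, n_features))
    feature_names = [f"feat_{i}" for i in range(n_features)]  # placeholders

    # Stage 1: unsupervised discovery of behavioral profiles.
    profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

    # Stage 2: interpretable detector mapping transparent features to the
    # discovered profiles.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(features, profiles)

    # The fitted tree can be inspected directly, which is what makes the
    # detector explainable rather than a black box.
    print(export_text(tree, feature_names=feature_names))

Any inherently interpretable model could take the tree's place; the key property the abstract emphasizes is that the mapping from transparent features to discovered profiles can be inspected directly.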


