Selective agreement, not sycophancy: investigating opinion dynamics in LLM interactions

Cau, Erica; Pansanella, Valentina; Pedreschi, Dino; Rossetti, Giulio
2025-01-01

Abstract

Understanding how opinions evolve is essential for addressing phenomena such as polarization, radicalization, and consensus formation. In this work, we investigate how language shapes opinion dynamics among Large Language Model (LLM) agents by simulating multi-round debates. Using our framework, we find that agent populations consistently converge toward agreement, not through sycophancy or blind conformity, but via a structured and asymmetric persuasion process. Agents are more likely to accept, and thus be persuaded by, opinions that are more agreeable relative to the discussion framing, revealing a directional bias in how opinions evolve. LLM agents selectively adopt peers' views, showing neither bounded confidence nor indiscriminate agreement. Moreover, agents frequently produce fallacious arguments and are significantly influenced by them: logical fallacies, especially those of relevance and credibility, play a measurable role in driving opinion change. These results not only uncover emergent behaviours in agents' dynamics but also highlight the dual role of LLMs as both generators and victims of flawed reasoning, raising important considerations for their deployment in socially sensitive contexts.
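
The record includes no code, but the abstract's description of simulated multi-round debates suggests a loop like the following minimal sketch. All names here (Agent, query_llm, the prompt wording, the five-point stance scale, N_ROUNDS) are illustrative assumptions, not the authors' actual framework; query_llm is stubbed with a random choice so the sketch runs offline, whereas a real run would call a chat-completion endpoint.

```python
# Hypothetical sketch of a multi-round LLM-agent debate, loosely following
# the setup described in the abstract. Names and prompts are assumptions.
import random
from dataclasses import dataclass, field

N_AGENTS = 4
N_ROUNDS = 5
TOPIC = "Remote work should become the default for office jobs."
STANCES = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]


def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call. Returns a random
    stance so this sketch is runnable without any API access."""
    return random.choice(STANCES)


@dataclass
class Agent:
    name: str
    opinion: str
    history: list = field(default_factory=list)

    def respond(self, peer_arguments: list[str]) -> str:
        """Ask the backing model to restate this agent's stance after
        reading the peers' latest positions."""
        prompt = (
            f"Debate topic: {TOPIC}\n"
            f"Your current opinion: {self.opinion}\n"
            "Peer arguments:\n" + "\n".join(peer_arguments) +
            f"\nReply with one of: {', '.join(STANCES)}."
        )
        new_opinion = query_llm(prompt)
        self.history.append(new_opinion)
        self.opinion = new_opinion
        return new_opinion


def run_debate() -> list[Agent]:
    agents = [Agent(f"agent_{i}", random.choice(STANCES)) for i in range(N_AGENTS)]
    for round_id in range(N_ROUNDS):
        # Snapshot opinions before anyone updates, so all agents within a
        # round react to the same state (synchronous updates).
        snapshot = {a.name: a.opinion for a in agents}
        for agent in agents:
            peers = [f"{n}: {o}" for n, o in snapshot.items() if n != agent.name]
            agent.respond(peers)
        print(f"round {round_id}:", {a.name: a.opinion for a in agents})
    return agents


if __name__ == "__main__":
    run_debate()
```

Polling a snapshot of peers' opinions before any agent updates keeps each round synchronous, one plausible reading of "multi-round debates"; the paper's actual update rule and prompting protocol are not specified in this record.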
Files for this record:
No files are associated with this record.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1337708

Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 1