Human, all too human: accounting for automation bias in generative large language models

Carnat, Irina
2024-01-01

Abstract

The article examines the accountability gap arising from potential user overreliance on the outputs of generative large language models (LLMs) in decision-making processes, an overreliance driven by automation bias and fostered by anthropomorphism and by the generation of factually incorrect text, known as ‘hallucination’. It critiques the techno-solutionist view that a human-in-the-loop can resolve the problem, arguing that addressing the ‘hallucination’ issue from a purely technical perspective can paradoxically exacerbate user overreliance on algorithmic outputs because of anthropomorphism and automation bias. It also critiques regulatory optimism about human oversight, challenging its adequacy in addressing automation bias by comparing Article 14 of the EU Artificial Intelligence Act with the notion of ‘meaningful’ human intervention under the EU General Data Protection Regulation. Finally, it proposes a comprehensive socio-technical framework that integrates human factors, promotes AI literacy, ensures appropriate levels of automation for different usage contexts, and implements cognitive forcing functions by design. The article cautions against overemphasizing human oversight as a panacea and instead advocates implementing accountability measures along the entire Artificial Intelligence system’s value chain to appropriately calibrate user trust in generative LLMs.
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1314308