Intelligenza artificiale generativa, linguistica e sfide etiche
Veronica Neri
2025-01-01
Abstract
The article analyzes the ethical non-neutrality of generative language models (LLMs), showing how implicit design values shape both their potential and their risks. Drawing on reflections in algoethics, it explores two dimensions: ethics by design, concerning principles such as transparency, fairness, and non-discrimination in system development, and ethics in design, which addresses human responsibility in interacting with these models. Issues such as bias, hallucinations, fake news, and filter bubbles highlight the need for close human oversight (human in the loop) to prevent manipulation and systemic discrimination. Within this framework, the European AI Act represents an initial regulatory attempt through risk categorization, yet the ethical challenge remains of governing technologies as a pharmakon – at once remedy and threat. The article concludes with a call for interdisciplinary dialogue among ethics, linguistics, law, and social sciences to foster shared responsibility in the use of LLMs, safeguarding human autonomy, dignity, and imagination.


