
Artificial Intelligence and Severe Disability: Representation, Bias, and Educational Implications

Edoardo Ghezzani; Silvia Dada
2026-01-01

Abstract

Artificial Intelligence (AI) has rapidly become a pervasive force in contemporary society, shaping how we communicate, learn, and represent human diversity. This paper investigates how AI generates images of people with severe disabilities and examines the cultural, ethical, and educational implications of these representations. AI tools, widely accessible and easy to use, are increasingly integrated into the daily lives of both adults and youth. However, the uncritical use of AI can perpetuate stereotypes, render marginalized groups invisible, and reinforce existing social biases. This research was conducted using six popular AI tools—ChatGPT, Copilot, DeepSeek, Gemini, Aria (Opera), and Canva (Dream Lab)—through 120 test prompts requesting the generation of an image of a person with a severe disability. The study identifies five categories of AI responses: (1) inability to generate images, (2) refusal to produce disability-related images, (3) requests for more detailed information, (4) idyllic but unrealistic depictions, and (5) images focused on dependency and care. By combining philosophical analysis with empirical data, this paper argues for the necessity of educating for difference and fostering critical media literacy to challenge the biases embedded in AI systems. It concludes by proposing pathways for inclusive AI design and pedagogical strategies to promote the visibility and dignity of people with severe disabilities.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1329910
Note: the data displayed have not been validated by the university.
