Artificial Intelligence and Severe Disability: Representation, Bias, and Educational Implications
EDOARDO GHEZZANI
SILVIA DADA
2026-01-01
Abstract
Artificial Intelligence (AI) has rapidly become a pervasive force in contemporary society, shaping how we communicate, learn, and represent human diversity. This paper investigates how AI generates images of people with severe disabilities and examines the cultural, ethical, and educational implications of these representations. AI tools, widely accessible and easy to use, are increasingly integrated into the daily lives of both adults and youth. However, the uncritical use of AI can perpetuate stereotypes, render marginalized groups invisible, and reinforce existing social biases. This research was conducted using six popular AI tools (ChatGPT, Copilot, DeepSeek, Gemini, Aria by Opera, and Canva Dream Lab) through 120 test prompts requesting the generation of an image of a person with a severe disability. The study identifies five categories of AI responses: (1) inability to generate images, (2) refusal to produce disability-related images, (3) requests for more detailed information, (4) idyllic but unrealistic depictions, and (5) images focused on dependency and care. By combining philosophical analysis with empirical data, this paper argues for the necessity of educating for difference and fostering critical media literacy to challenge the biases embedded in AI systems. It concludes by proposing pathways for inclusive AI design and pedagogical strategies to promote the visibility and dignity of people with severe disabilities.


