Human-Centered Design for Accessible and Sustainable XAI in Healthcare
Giovanni Arras
Member of the Collaboration Group
Tommaso Turchi
Member of the Collaboration Group
Giuseppe Prencipe
Member of the Collaboration Group
Giuseppina Sgandurra
Member of the Collaboration Group
2025-01-01
Abstract
Artificial Intelligence (AI) is rapidly transforming healthcare, offering advanced tools for clinical decision support, personalized treatments, and efficient resource management. AI-based algorithms already demonstrate remarkable capabilities in areas such as analyzing medical images for faster and more accurate diagnoses, optimizing treatment plans in oncology, and predicting patient outcomes from real-time data [4, 7]. These advances hold immense potential to revolutionize clinical workflows and significantly improve patient care, yet that potential remains constrained by fundamental concerns about comprehensibility and user acceptance [5]. The integration of AI in healthcare thus presents significant challenges, particularly regarding transparency, informed consent, and both ethical and economic sustainability. Human-Centered Design (HCD) offers a powerful approach to address these issues: by actively involving stakeholders such as clinicians, patients, ethics committees, and technical experts in the design of AI systems, HCD ensures that these technologies are tailored to end-user needs and promote transparency, usability, and trust [10, 11]. The lack of transparency in AI algorithms poses substantial problems for both patients and clinicians. Patients may hesitate to accept treatment recommendations when the reasoning behind them remains unclear, while clinicians may be reluctant to rely on AI-driven insights if they cannot understand the underlying logic; this opacity risks hindering the widespread adoption of potentially beneficial AI solutions [3, 4]. Accessible Explainable AI (XAI) emerges as a key element to address these challenges [1, 2, 3]. By ensuring that algorithmic decisions are understandable to diverse user groups, Accessible XAI fosters trust and facilitates informed decision-making. Techniques such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) provide valuable methods for elucidating model mechanisms [8, 9] (see the sketch after the abstract); however, to be truly effective in clinical practice, these methods often require customization to meet the specific needs and literacy levels of different users, including clinicians, patients, and families. Tailored explanations not only build trust but also contribute to sustainability by ensuring that diverse user groups, including those with limited health literacy, can access and benefit from AI technologies. Sustainability complements these efforts by addressing economic, social, and environmental dimensions. For instance, equitable access to AI technologies requires mitigating healthcare disparities [14, 15], implementing multilingual interfaces [16, 17], and adopting culturally sensitive design practices [18, 19], while the long-term cost-effectiveness of AI solutions calls for minimizing the environmental impact of computational processes, optimizing resource use, and establishing partnerships that support ongoing updates and maintenance [19]. Achieving this level of customization and inclusivity requires a Human-Centered approach.
Human-Centered Design, through methodologies like co-design and meta-design, emphasizes collaborative and participatory approaches, which are especially critical in healthcare contexts such as pediatric telerehabilitation [12, 13]. This paper discusses ongoing work to define a comprehensive framework that integrates Accessible XAI, HCD, and sustainability to holistically address these challenges in AI-based healthcare. Although applicable to various healthcare settings, the framework is particularly relevant for pediatric rehabilitation, where co-design with families, children, and clinicians is instrumental in developing transparent, user-friendly, and sustainable AI solutions. The next sections detail the ethical, practical, and socio-economic barriers to AI adoption and illustrate how the proposed framework can help overcome them by emphasizing the importance of designing AI solutions that are both user-friendly and ethically grounded. Building on recent insights [20], the framework seeks to extend co-design principles with a specific focus on pediatric telerehabilitation, while also incorporating sustainability considerations (economic, social, environmental) into the AI development lifecycle.
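To make concrete the kind of instance-level explanation the abstract refers to, the following is a minimal, purely illustrative sketch using the LIME library on a tabular risk classifier. The model, feature names, class labels, and synthetic data are all hypothetical assumptions for demonstration; the paper names LIME and SHAP only as candidate techniques and does not prescribe this implementation.

```python
# Illustrative sketch only: producing a plain-language, per-patient explanation
# with LIME. All data, feature names, and the model below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical tabular clinical features.
feature_names = ["age", "heart_rate", "systolic_bp", "crp_level"]
X = rng.normal(size=(200, 4))
# Synthetic binary outcome loosely driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a local surrogate model around one instance to attribute
# the prediction to individual features.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# Render the attributions as short sentences, the kind of tailored wording
# the framework argues should be adapted to each user group's literacy level.
for rule, weight in explanation.as_list():
    direction = "raises" if weight > 0 else "lowers"
    print(f"'{rule}' {direction} the predicted risk (weight {weight:+.3f})")
```

The same attributions could be rendered differently for clinicians (numeric weights, feature ranges) and for families (qualitative phrasing), which is the customization step the framework emphasizes.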


