
Adaptive XAI: Towards Intelligent Interfaces for Tailored AI Explanations

Tommaso Turchi; Alessio Malizia
2024-01-01

Abstract

As the integration of Artificial Intelligence into daily decision-making processes intensifies, the need for clear communication between humans and AI systems becomes crucial. The Adaptive XAI (AXAI) workshop focuses on the design and development of intelligent interfaces that can adaptively explain AI's decision-making processes and our engagement with those processes. In line with the human-centric principles of the Future Artificial Intelligence Research (FAIR) project, this workshop seeks to explore, understand, and develop interfaces that dynamically adapt, thereby creating explanations of AI-based systems that both relate to and resonate with a range of users with different explanation requirements. As AI's role in our lives becomes ever more embedded, the ways in which such systems explain themselves need to be malleable and responsive to each individual's evolving cognitive state, contextual needs and focus, and social setting. For instance, easy-to-use and effective interaction modalities such as Visual Languages can provide users with intuitive mechanisms to interact with, adjust, and reshape AI narratives. This ensures that a richer, more tailored understanding can be provided, allowing explanations to emerge in line with users' demands and the ever-shifting contexts they find themselves in, both as individuals and as members of a group. The Adaptive XAI workshop invites scholars, designers, and technologists to collaboratively shape the future of human-XAI interplay.
Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1230447