WAIT, I Can Explain! Introducing Weighted AI Integration for Tailored Explanations

Claudio Giovannoni
2024-01-01

Abstract

Explainable Artificial Intelligence (XAI) aims to reduce the inherent opaqueness of modern Machine Learning (ML) systems and make them more interpretable. Its highest potential benefits lie in critical societal application domains. However, the current XAI literature is not yet fully effective in real-world scenarios, owing to a lack of expressiveness and of ease of understanding for domain experts outside AI. This PhD research proposal seeks to advance the field of Multimodal eXplainable AI (MulXAI) in a user-centric manner and to achieve concrete applicability of XAI in real-world scenarios, through the introduction of WAIT, a MulXAI framework. WAIT is capable of profiling users and generating enriched, adapted explanations in terms of data modality, explanation technique, explanation scope, and verbosity. This will be accomplished through an active profiling strategy that iteratively tailors the interpretable output to user preferences, attitudes, domain expertise, and other relevant factors. The explanation process will be able to capture intra-modal and inter-modal relationships. WAIT's explainability module will produce enriched and comprehensive explanations, adapting to diverse users and stakeholders through the external knowledge provided. This paper provides an overview of the work conducted since the start of the PhD, presents a brief but comprehensive review of the relevant MulXAI literature, outlines the key challenges and foundational objectives of the research, and discusses the preliminary results obtained along with future steps.
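The abstract describes WAIT's adaptation dimensions (data modality, explanation technique, explanation scope, verbosity) and an active profiling loop that iteratively tailors explanations to the user. As an illustration only, the minimal Python sketch below shows how a weighted, feedback-driven selection over such a configuration space could work. All names (UserProfile, select_explanation, update_profile), the candidate option lists, and the multiplicative weighting rule are assumptions made for this sketch; the paper does not specify WAIT's actual design or API.

```python
from dataclasses import dataclass, field
from itertools import product
import random

# Candidate options along the adaptation dimensions named in the abstract.
DIMENSIONS = {
    "modality": ["text", "image", "tabular"],
    "technique": ["feature_attribution", "counterfactual", "example_based"],
    "scope": ["local", "global"],
    "verbosity": ["concise", "detailed"],
}


@dataclass
class UserProfile:
    """Per-user preference weights over each adaptation dimension (hypothetical)."""
    weights: dict = field(default_factory=lambda: {
        dim: {opt: 1.0 for opt in opts} for dim, opts in DIMENSIONS.items()
    })

    def score(self, config: dict) -> float:
        # Score a configuration as the product of its per-dimension weights.
        s = 1.0
        for dim, choice in config.items():
            s *= self.weights[dim][choice]
        return s


def select_explanation(profile: UserProfile, epsilon: float = 0.2) -> dict:
    """Pick an explanation configuration: usually the highest-weighted one,
    with occasional random exploration so new preferences can surface."""
    candidates = [
        dict(zip(DIMENSIONS, combo)) for combo in product(*DIMENSIONS.values())
    ]
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=profile.score)


def update_profile(profile: UserProfile, config: dict,
                   feedback: float, lr: float = 0.3) -> None:
    """Active-profiling step: reinforce or dampen the options just shown.
    `feedback` lies in [-1, 1], e.g. -1 = rejected, +1 = found helpful."""
    for dim, choice in config.items():
        new_w = profile.weights[dim][choice] * (1.0 + lr * feedback)
        profile.weights[dim][choice] = max(1e-3, new_w)


if __name__ == "__main__":
    random.seed(0)
    profile = UserProfile()
    # Simulated user who prefers concise, local, counterfactual text explanations.
    preferred = {"modality": "text", "technique": "counterfactual",
                 "scope": "local", "verbosity": "concise"}
    for _ in range(30):
        config = select_explanation(profile)
        overlap = sum(config[d] == preferred[d] for d in DIMENSIONS) / len(DIMENSIONS)
        update_profile(profile, config, feedback=2 * overlap - 1)
    print("Highest-weighted configuration:", select_explanation(profile, epsilon=0.0))
```

The epsilon-greedy step is included only so the sketch can explore configurations it has not yet up- or down-weighted; a real user-centric system would presumably rely on a richer user model and explicit feedback signals rather than a simulated preference.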

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1304908