W3: On the importance of explanation content for explainable AI: Towards an explanation content research agenda

Call for Participation

Organizers

Helmut Degen
Siemens Corporation, USA
helmut.degen@siemens.com

George Margetis
Foundation for Research and Technology – Hellas (FORTH), Greece
gmarget@ics.forth.gr

Stavroula Ntoa
Foundation for Research and Technology – Hellas (FORTH), Greece
stant@ics.forth.gr

Motivation of the Workshop

Explainable Artificial Intelligence (XAI) research has made strong progress in the development and comparison of explanation types, including causal, counterfactual, contrastive, example-based, case-based, rule-based, feature attribution–based, and hybrid explanations. This body of work has significantly advanced the technical capabilities for generating explanations and has contributed to a growing understanding of which technically available explanations are useful for users.

Despite this progress, XAI research has remained largely feasibility-driven, prioritizing what models can produce over what users need to know. Comparatively little attention has been devoted to the content of explanations, that is, the information elements that explanations should include to support users’ understanding, decision making, and appropriate reliance on AI systems. Additional open questions concern how explanation content should be structured, presented, and adapted to different stakeholders and user roles, goals, tasks, and decision contexts (Hoffman et al. 2023; Szymanski et al. 2025). A secondary analysis of the 73 human-participant XAI evaluation studies reviewed by Kim et al. (2024) found that only two studies openly elicited explanation content from end users without predefined constraints and used it to directly inform system design, while five more restricted users to predefined options.

This gap is particularly acute in industrial domains, where effective explanations must be grounded in domain-specific knowledge, workflows, and the consequences of decisions informed in part by those explanations. In manufacturing, energy, logistics, healthcare, and similar sectors, users such as automation engineers, operators, maintainers, and safety auditors require explanations that speak directly to the concepts, terminology, and decision contexts of their application domain. Explanations derived from model-intrinsic properties are often insufficient, or even misleading, in these settings.

This workshop is motivated by the need to understand what makes explanations effective and how to generate and appropriately present them to end users. The workshop will explore explanation content as a first-class research object in XAI. By bringing together researchers from HCI, AI, cognitive science, and related disciplines, the workshop pursues three interconnected aims: establishing shared theoretical foundations, addressing methodological gaps, and exploring practical implementation challenges in human-centered explanation content.

Aim of the Workshop

The workshop aims to advance the study of explanation content in XAI along three interconnected dimensions: theoretical, methodological, and practical.

First, the workshop seeks to establish a shared understanding of explanation content and its role within the broader landscape of XAI and HCI research. To this end, participants will critically compare theoretical approaches and conceptual frameworks from HCI, cognitive science, linguistics, human-centered AI, and related areas with the aim of developing a common ontological foundation for explanation content research.

Second, the workshop will investigate the methodological challenges of eliciting, designing, and evaluating explanation content in a human-centered way. This includes discussing methods for identifying user information needs, eliciting explanation content and mapping it to user questions and tasks, designing explanation content, and evaluating whether explanations effectively support users’ understanding and decision making.

Third, the workshop will explore the practical implementation of explanation content in real-world AI systems. While many XAI methods focus on generating explanations from model-intrinsic properties, integrating appropriate explanation content into deployed AI systems raises additional challenges related to system design, user interfaces, domain constraints, and organizational contexts.

By addressing these three aims, the workshop seeks to elevate explanation content from a largely unexamined concern to a well-defined, grounded foundation for both research and practice in human-centered AI systems. In doing so, the workshop aims to shift current XAI research from a feasibility-driven focus on explanation types (e.g., causal, counterfactual, example-based explanations) towards a theoretically grounded, user-needs-driven focus on explanation content as the basis for effective explanations.

Expected Workshop outcomes

The workshop aims to produce several outcomes that contribute to the advancement of explanation content research in XAI:

  • A shared characterization of the negative consequences of poorly grounded or underspecified explanation content in AI systems.
  • Preliminary building blocks for an ontological and theoretical foundation for explanation content in XAI.
  • A mapping of open research questions, methodological gaps, and key challenges for eliciting, designing, evaluating, generating, deploying, and maintaining explanation content.
  • A prioritized research agenda for explanation content in XAI, outlining directions for future theoretical and empirical research.

Interested participants will have the opportunity to contribute to a joint follow-up publication synthesizing the workshop discussions and outcomes.

Workshop topics

Topics include, but are not limited to:

  • Theoretical foundations and conceptual frameworks for explanation content in XAI
  • Ontological foundations for explanation content
  • Relationships between explanation types, explanation formats, and explanation content
  • Methods for eliciting explanation content from users and stakeholders, and categorizing it according to their roles, expertise levels, tasks, and decision contexts
  • Methods for evaluating the effectiveness of explanation content in supporting user understanding, decision making, and appropriate reliance
  • Design and visualization of explanation content for different stakeholders, including end users, developers, auditors, and regulators
  • Explanation content requirements in regulatory and compliance contexts, including implications of frameworks such as the EU AI Act
  • Explanation content for generative AI and large language models
  • Domain-specific challenges in explanation content design (e.g., healthcare, finance, education, industry)
  • Practical approaches for integrating explanation content into deployed AI systems
  • Case studies of explanation content elicitation, design, and integration in AI systems

References

Hoffman, R. R., Mueller, S. T., Klein, G., Jalaeian, M., and Tate, C. 2023. Explainable AI: roles and stakeholders, desirements and challenges. Frontiers in Computer Science 5, 1117848. https://doi.org/10.3389/fcomp.2023.1117848

Kim, J., Maathuis, H., and Sent, D. 2024. Human-centered evaluation of explainable AI applications: a systematic review. Frontiers in Artificial Intelligence 7, 1456486. https://doi.org/10.3389/frai.2024.1456486

Szymanski, M., Vanden Abeele, V., and Verbert, K. 2025. Disentangling stakeholder role and expertise in user-centered explainable AI. In Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP '25). ACM, New York, NY, USA, 32–39. https://doi.org/10.1145/3699682.3728351

Workshop agenda

Workshop event: 08:30 am – 12:30 pm, Sunday, 26 July 2026

The following framework outlines the Workshop program:

  • 30 min. – Collect experiences from all workshop participants regarding the need for human-centered explanation content
  • 30 min. – Explore negative consequences of not identifying explanation content in a human-centered way
  • 45 min. – Identify and discuss theoretical building blocks; identify research questions
  • 45 min. – Identify methodological challenges and relevant research questions
  • 45 min. – Identify practical challenges and relevant research questions
  • 45 min. – Build and prioritize the research agenda; define follow-up questions

Guidelines to prospective authors

Submission for the Workshop

Interested participants should submit a position paper (approximately 1,000 words, excluding references) on the importance of explanation content for explainable AI.

Prospective authors should submit their proposals in PDF format through the HCII Conference Management System (CMS).

Submission for the Conference Proceedings

The contributions to be presented in the context of Workshops will not be automatically included in the Conference proceedings.

However, after consultation with the Workshop organizer(s), authors of accepted Workshop proposals who are registered for the Conference are welcome to submit, through the Conference Management System (CMS), an extended version of their Workshop contribution to be considered, following further peer review, for presentation at the Conference and inclusion in the “Late Breaking” volumes of the Conference proceedings. Extended versions may be published either in the LNCS as a long paper (typically 12 pages, but no fewer than 10 and no more than 20 pages), or in the CCIS as a short paper/extended poster abstract (typically 6 pages, but no fewer than 4 and no more than 11).

Workshop organizers are also encouraged to consider and explore the (additional) possibility of preparing a paper (short or long) which will present the collaborative efforts of their Workshop participants, and can be submitted in October 2026 to be considered for publication in the context of the HCII 2027 Conference Proceedings.

Workshop deadlines

  • Submission of Workshop contributions: May 30, 2026
  • Authors notified of decisions on acceptance: June 15, 2026
  • Finalization of Workshop organization and registration of participants: June 30, 2026

Workshop organizers

Helmut Degen
Dr. Helmut Degen is Senior Key Expert for User Experience at Siemens Corporation, Princeton, NJ, USA. He conducts explainable AI (XAI) research for industrial applications at Siemens, with a focus on human-computer interaction. He is also co-chair of the annual International Conference on Artificial Intelligence in Human-Computer Interaction (AI-HCI), affiliated with the HCI International Conference. He holds a Master of Science (in German, “Diplom-Informatiker”) from the Karlsruhe Institute of Technology and a PhD in Information Science from the Freie Universität Berlin (both in Germany).

George Margetis
Dr. George Margetis is a computer scientist specializing in Human-Computer Interaction (HCI), Human-Centered Artificial Intelligence (HCAI), Ambient Intelligence (AmI), Extended Reality (XR), and Digital Accessibility. Since 2021, he has led the Human-Centered AI research and development activities of the HCI Laboratory of FORTH-ICS, Greece. In this role, he has advocated for the human-centric and inclusive design of AI systems, ensuring outcomes that are technologically robust, transparent, usable, and aligned with human values. He is also co-chair of the annual International Conference on Human-Centered Design, Operation and Evaluation of Mobile Communications (MOBILE), affiliated with the HCI International Conference.

Stavroula Ntoa
Dr. Stavroula Ntoa is a Computer Scientist specializing in Design for All, software accessibility, usability engineering, and User Experience (UX) research and design. She is a Principal Researcher at the HCI Laboratory of FORTH-ICS, Greece, leading the accessible UX research and design activities of the lab. Her research interests focus on Design for All and Universal Access to modern interactive technologies, adaptive and intelligent interfaces, as well as inclusiveness and user experience research in intelligent and AI-based environments. She serves as co-chair of the annual International Conference on Artificial Intelligence in Human-Computer Interaction, affiliated with the HCI International Conference.

Registration regulation

Workshops will run as ‘hybrid’ events. Organizers are expected to attend ‘on-site’, while participants will have the option to attend either ‘on-site’ or ‘on-line’. The total number of participants per Workshop must be no fewer than 8 and no more than 25.

Workshops are ‘closed’ events, i.e. only authors of accepted submissions for a Workshop will be able to register to attend the specific Workshop.

Workshop registration is complimentary for registered Conference participants or requires a fee of $95 per Workshop for non-registered Conference participants.