The challenges of providing explanations of AI systems when they do not behave like users expect
Authors: Maria Riveiro, Serge Thill
Published: 2022
Publication: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '22), ACM
Users prefer factual explanations when AI outputs match their expectations and mechanistic explanations when outputs deviate from them; these preferences are also influenced by the response format (multiple-choice vs. free text).
Methods: Participants were presented with scenarios involving an automated text classifier and asked to express their preferences for explanations through either multiple-choice or free-text responses.
Key Findings: The content users desire in AI explanations depends on whether the system's behaviour aligns with or deviates from their expectations.
Limitations: Findings are sensitive to the experimental setup, varying with the response format; generalizability may be limited by the scenario-specific evaluations.
Institution: Linköping University, University of Skövde
Research Area: Explainable AI, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Citations: 30
DOI: 10.1145/3503252.3531306