Authors: P Spitzer, K Morrison, V Turri, M Feng, A Perer
Year: 2025
Published in: ACM Transactions on ..., 2025
Institution: Carnegie Mellon University, Karlsruhe Institute of Technology, University of Bayreuth
Research Area: Explainable AI (XAI), AI-Assisted Decision-Making, Human-AI Collaboration
Discipline: Artificial Intelligence
The study examines how imperfect explainable AI (XAI) explanations, together with human cognitive styles, affect reliance on AI and the performance of human–AI teams, and derives design guidelines for better collaboration systems.
Methods: The researchers conducted a study with 136 participants, analyzing the effects of explanation imperfections and cognitive styles on AI-assisted decision-making and human–AI collaboration.
Key Findings: Incorrect explanations and explanation modalities shape human reliance, decision-making, and human–AI team performance, with cognitive styles playing a moderating role.
Citations: 2
Sample Size: 136