Discover 10 peer-reviewed studies in Explainable AI (2021–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 10 peer-reviewed papers in the research area of Explainable AI in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: SSY Kim, JW Vaughan, QV Liao, T Lombrozo
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Wake Forest University, University of Illinois at Urbana-Champaign, Princeton University, University of California Berkeley
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance, while sources and inconsistent explanations reduce reliance on incorrect responses.
Methods: Think-aloud study followed by a pre-registered, controlled experiment to assess the impact of explanations, sources, and inconsistencies in LLM responses on user reliance.
Key Findings: The influence of explanations, inconsistencies, and sources on users' reliance on LLM responses and on decision accuracy.
DOI: https://doi.org/10.1145/3706598.3714020
Citations: 38
Sample Size: 308
-
Authors: M Riveiro, S Thill
Year: 2022
Published in: Proceedings of the 30th ACM Conference on User ..., 2022 - dl.acm.org
Institution: Linköping University, University of Skövde
Research Area: Explainable AI, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Users prefer factual explanations when AI outputs match expectations and mechanistic explanations when outputs deviate, with preferences influenced by response format (multiple-choice vs free text).
Methods: Participants were presented with scenarios involving an automated text classifier and asked to express their preference for explanations either through multiple-choice or free text responses.
Key Findings: User-desired content of AI explanations depending on whether system behaviour aligns with or deviates from expectations.
DOI: https://doi.org/10.1145/3503252.3531306
Citations: 30
-
Authors: P Spitzer, J Holstein, K Morrison
Year: 2025
Published in: ... Journal of Human ..., 2025 - Taylor & Francis
Institution: Karlsruhe Institute of Technology, Carnegie Mellon University, University of Bayreuth
Research Area: Human-AI Collaboration, Explainable AI (XAI)
Discipline: Human-Computer Interaction (HCI)
Incorrect explanations in AI-assisted decision-making lead to a misinformation effect, negatively impacting human reasoning, procedural knowledge, and collaboration performance.
Methods: A study on human-AI collaboration involving AI-supported decision-making paired with explainable AI (XAI) to assess the effects of incorrect explanations.
Key Findings: Impact of incorrect explanations on human reasoning strategies, procedural knowledge, and team performance in human-AI collaboration.
Citations: 13
Sample Size: 160
-
Authors: P Spitzer, K Morrison, V Turri, M Feng, A Perer
Year: 2025
Published in: ACM Transactions on ..., 2025 - dl.acm.org
Institution: Carnegie Mellon University, Karlsruhe Institute of Technology, University of Bayreuth
Research Area: Explainable AI (XAI), AI-Assisted Decision-Making, Human-AI Collaboration
Discipline: Artificial Intelligence
The study highlights how imperfect explainable AI (XAI), along with human cognitive styles, affects reliance on AI and the performance of human–AI teams, providing design guidelines for better collaboration systems.
Methods: The researchers conducted a study with 136 participants, analyzing the effects of explanation imperfections and cognitive styles on AI-assisted decision-making and human–AI collaboration.
Key Findings: The impact of incorrect explanations and explanation modalities on human reliance, decision-making, and human–AI team performance, as well as the role of cognitive styles.
Citations: 2
Sample Size: 136
-
Authors: J Zhou, R Aloufi, N van Zalk
Year: 2025
Published in: 38th International BCS Human ..., 2025 - scienceopen.com
Research Area: High-Stakes Decision-Making, Explainable AI, User Trust, Human-Centered AI, Interaction Design
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
This study explores how human collaboration and communication dynamics vary when interacting with an AI chatbot versus a human partner in a high-stakes decision-making task.
Methods: One-way between-subjects design using the NASA Moon Survival Task to compare behaviors, linguistic coordination, and perceptions in interactions with AI or human partners.
Key Findings: Differences in collaboration processes, communicative dynamics, outcomes, retrospective interaction experience, partner perception, and linguistic coordination, with user profiling to identify who benefits most from an AI partner.
DOI: https://doi.org/10.14236/ewic/BCSHCI2025.52
Citations: 1
-
Authors: R Zhang, C Flathmann, G Musick, B Schelble
Year: 2024
Published in: ACM Transactions on ..., 2024 - dl.acm.org
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
Explored how AI explanations impact human trust and team effectiveness in human-AI teams, finding that explanations increase trust when the AI disobeys orders but reduce trust when it lies, with individual characteristics influencing these perceptions.
Methods: Conducted an online experiment analyzing participant responses to scenarios where AI explained its actions within a teamwork context, comparing trust in AI versus human teammates.
Key Findings: Impact of AI explanations on human trust, team effectiveness, and how these vary with teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
DOI: https://doi.org/10.1145/3635474
Citations: 28
Sample Size: 156
-
Authors: Z Li, M Yin
Year: 2024
Published in: Advances in Neural Information Processing ..., 2024 - proceedings.neurips.cc
Institution: Purdue University
Research Area: Human Behavior Modeling, Explainable AI, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
DOI: https://doi.org/10.52202/079017-0163
Citations: 7
-
Authors: P Hemmer, M Schemmer, N Kühl, M Vössing, G Satzger
Year: 2024
Published in: arXiv
Institution: Karlsruhe Institute of Technology
Research Area: Human-AI Collaboration, Explainable AI (XAI), Complementarity in Decision Making
Discipline: Human-Computer Interaction (HCI)
-
Authors: H Vasconcelos, M Jörke
Year: 2023
Published in: Proceedings of the ..., 2023 - dl.acm.org
Institution: Stanford University, University of Washington
Research Area: Human-AI Interaction, Explainable AI (XAI), Decision-Making
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
DOI: https://doi.org/10.1145/3579605
Citations: 405
-
Authors: C Woodcock, B Mittelstadt, D Busbridge
Year: 2021
Published in: Journal of Medical Internet ..., 2021 - jmir.org
Institution: University of Oxford, Alan Turing Institute, University of Edinburgh
Research Area: Health Informatics, Explainable AI (XAI), Trust in AI, Digital Health
Discipline: Digital Health
DOI: https://doi.org/10.2196/29386
Citations: 52