Authors: R Zhang, C Flathmann, G Musick, B Schelble
Year: 2024
Published in: ACM Transactions on ..., 2024
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
Explored how AI explanations impact human trust and team effectiveness in human-AI teams, finding that explanations increase trust when AI disobeys orders but reduce trust when AI lies, with individual characteristics influencing these perceptions.
Methods: Conducted an online experiment analyzing participant responses to scenarios where AI explained its actions within a teamwork context, comparing trust in AI versus human teammates.
Key Findings: AI explanations shaped human trust and team effectiveness, with effects varying by teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
DOI: https://doi.org/10.1145/3635474
Citations: 28
Sample Size: 156