I know this looks bad, but I can explain: Understanding when AI should explain actions in human-AI teams
Authors: R Zhang, C Flathmann, G Musick, B Schelble
Published: 2024
Publication: ACM Transactions on ..., 2024
Explored how AI explanations affect human trust and team effectiveness in human-AI teams. Explanations increased trust when the AI disobeyed orders but reduced trust when the AI lied, and individual characteristics influenced these perceptions.
Methods: Conducted an online experiment analyzing participant responses to teamwork scenarios in which an AI explained its actions, comparing trust in AI teammates versus human teammates.
Key Findings: The impact of AI explanations on human trust and team effectiveness varied with teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
Limitations: The scenarios may not fully capture real-world complexities, and the findings may not generalize to other populations or contexts.
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
Sample Size: 156 participants
Citations: 28
DOI: https://doi.org/10.1145/3635474