Fostering appropriate reliance on large language models: The role of explanations, sources, and inconsistencies
Authors: Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo
Published: 2025
Publication: Proceedings of the ..., 2025 - dl.acm.org
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance on both correct and incorrect responses, while sources and inconsistencies in explanations help reduce reliance on incorrect responses.
Methods: A think-aloud study followed by a pre-registered, controlled experiment assessing how explanations, sources, and inconsistencies in LLM responses affect user reliance.
Key Findings: Explanations increased users' reliance on LLM responses regardless of their accuracy, whereas sources and inconsistent explanations reduced reliance on incorrect responses.
Institution: Princeton University, Microsoft Research
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
Sample Size: 308 participants
Citations: 38
DOI: https://doi.org/10.1145/3706598.3714020