Authors: F Sun, N Li, K Wang, L Goette
Year: 2025
Published in: arXiv preprint arXiv:2505.02151 (2025)
Institution: HKU Business School
Research Area: LLM Overconfidence and Human Bias Amplification, Bias, LLM
Discipline: Artificial Intelligence, Behavioral Science
Large language models (LLMs) exhibit overconfidence that amplifies human bias, particularly in cases where their certainty declines; incorporating LLM input into human decision making improves accuracy but doubles human overconfidence.
Methods: Algorithmically constructed reasoning problems with known ground truths were used to evaluate LLMs' confidence; comparisons were drawn with human performance using similar experimental protocols.
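The core quantity implied by this method — the gap between a model's stated confidence and its realized accuracy on problems with known ground truths — can be sketched as follows. This is a minimal illustration with hypothetical data and a hypothetical `overconfidence` helper, not the authors' actual evaluation code:

```python
def overconfidence(confidences, correct):
    """Overconfidence as mean stated confidence minus realized accuracy.

    confidences: stated probabilities of being correct, each in [0, 1]
    correct: booleans indicating whether each answer matched ground truth
    (Both names and the metric form are illustrative assumptions.)
    """
    assert len(confidences) == len(correct) and confidences
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Hypothetical example: a model that claims ~90% confidence
# but answers only 3 of 5 problems correctly.
gap = overconfidence([0.9, 0.95, 0.85, 0.9, 0.9],
                     [True, True, False, True, False])
print(round(gap, 2))  # a positive gap indicates overconfidence
```

A well-calibrated model would show a gap near zero; the paper's claim is that LLMs sit well above it.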
Key Findings: LLM confidence levels and their correctness probabilities, comparisons of bias between LLMs and humans, and the effects of LLM input on human decision making.
Citations: 21