A meta-analysis of the persuasive power of large language models
Authors: L Hölbling, S Maier, S Feuerriegel
Published: 2025
Publication: Scientific Reports, 2025 - nature.com
Large language models (LLMs) demonstrate persuasive performance comparable to that of humans overall, but their effectiveness varies widely with contextual factors such as model type, conversation design, and domain.
Methods: Systematic review and meta-analysis using Hedges' g to compute standardized effect sizes, with exploratory moderator analyses and publication bias checks (Egger's test, trim-and-fill analysis).
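The standardized effect size used here, Hedges' g, is Cohen's d multiplied by a small-sample correction factor. A minimal sketch of the computation (the example group statistics are hypothetical, not taken from the paper):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp  # Cohen's d
    # Correction factor J approaches 1 as the total sample grows
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d

# Hypothetical example: LLM-persuaded group vs. human-persuaded group
g = hedges_g(mean1=5.0, sd1=2.0, n1=50, mean2=4.0, sd2=2.0, n2=50)
print(round(g, 3))  # → 0.496
```

With equal group sizes and standard deviations, d is 0.5 here, and the correction shrinks it slightly to g ≈ 0.496; the correction matters most for the small per-study samples typical in meta-analyses.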
Key Findings: On average, LLMs were as persuasive as humans; their effectiveness varied substantially across contexts and studies, depending on factors such as model type, conversation design, and domain.
Limitations: Few eligible studies; substantial heterogeneity in results driven by contextual factors; individual moderators were not statistically significant, even though the combined moderator model explained variance well.
Institution: University of Lausanne, University of Zurich, University of St. Gallen
Research Area: LLMs in Persuasion, Meta-Analysis, Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence
Sample Size: 17,422 participants