Scaling language model size yields diminishing returns for single-message political persuasion
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Published: 2025
Publication: Proceedings of the National Academy of Sciences (PNAS), 2025 - pnas.org
Summary: Scaling language model size yields diminishing returns for generating persuasive political messages: larger models provide only minimal gains over smaller ones once task-completion metrics such as coherence and relevance are accounted for.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasiveness did not increase meaningfully with model size; the gains that larger models did show were largely explained by task-completion metrics such as coherence and relevance rather than scale itself.
Limitations: Findings may not generalize beyond static, single-message political persuasion to dynamic conversational contexts; persuasive gains hinge on task-completion metrics that larger models may already saturate.
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Sample Size: 25,982 participants
Citations: 31