Evidence of a log scaling law for political persuasion with large language models
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Published: 2024
Publication: arXiv preprint arXiv ..., 2024 (arxiv.org)
Summary: The persuasiveness of messages generated by large language models follows a log scaling law, with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
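A "log scaling law" conventionally denotes a relationship of the form below; the symbols are illustrative placeholders for this summary, not notation taken from the paper:

```latex
% Illustrative log scaling form (not the paper's notation):
% P(N) = persuasiveness of messages from a model with N parameters,
% \alpha, \beta = fitted constants. Diminishing returns follow from
% the derivative \beta / N shrinking as N grows.
P(N) = \alpha + \beta \log N
```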
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: Message persuasiveness scales logarithmically with model size, yielding sharply diminishing returns at larger scales; task completion appears to be the primary driver of persuasive capability.
Limitations: Focus on static messages, does not address dynamic interactive persuasion or potential domain-specific variations.
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
Sample Size: 25,982 participants
Citations: 17