This page lists 18 peer-reviewed papers (2023–2025) in the research area of Persuasion in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific's participant panel.
-
Authors: F Salvi, M Horta Ribeiro, R Gallotti, R West
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: EPFL, Fondazione Bruno Kessler, Princeton University
Research Area: Conversational Persuasion with LLMs, Human-Computer Interaction (HCI), Behavioral Science, LLM
Discipline: Behavioral Science
GPT-4 can use personalized arguments to be more persuasive in debates, outperforming humans in 64.4% of AI-human comparisons when personalization is applied.
Methods: Preregistered controlled study involving multiround debates with random assignment to conditions focusing on AI-human comparisons, personalization, and opinion strength.
Key Findings: Effectiveness of persuasion by GPT-4, especially when using personalized arguments, compared to humans in debates.
Citations: 65
Sample Size: 900
-
Authors: H Bai, JG Voelkel, S Muldowney, JC Eichstaedt
Year: 2025
Published in: Nature ..., 2025 - nature.com
Institution: Stanford University
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science
LLM-generated messages can effectively persuade humans on policy issues similarly to human-crafted messages, with differences in perceived persuasion mechanisms.
Methods: Three pre-registered experiments were conducted comparing the persuasive effectiveness of LLM-generated and human-generated messages on policy attitudes, using control conditions with neutral messages.
Key Findings: Influence of LLM-generated messages on participants' policy attitudes and perceived characteristics of the message authors.
Citations: 37
Sample Size: 4829
-
Authors: K Hackenburg, L Ibrahim, BM Tappin, M Tsakiris
Year: 2025
Published in: AI & SOCIETY, 2025 - Springer
Institution: Oxford Internet Institute, University of Oxford
Research Area: Political Communication and Persuasion, LLM
Discipline: Political Science, Artificial Intelligence
GPT-4-generated persuasive messages rivaled those of human experts on polarized US political issues, suggesting AI tools may have significant implications for political campaigns and democracy.
Methods: Pre-registered experiment where GPT-4 generated partisan role-playing persuasive messages, which were compared to those from human persuasion experts.
Key Findings: Persuasive impact of GPT-4-generated messages versus human expert messages on U.S. political issues.
Citations: 35
Sample Size: 4955
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California, Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling up language models yields diminishing returns in generating persuasive political messages: larger models provide minimal gains over smaller ones after controlling for task-completion metrics such as coherence and relevance.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: K Hackenburg, BM Tappin, L Hewitt, E Saunders
Year: 2025
Published in: Science, 2025 - science.org
Institution: London School of Economics and Political Science, Stony Brook University
Research Area: Political Persuasion with Conversational AI, LLM, Factual Accuracy in AI Systems
Discipline: Political Science, Computational Social Science
Conversational AI chatbots can systematically shift political opinions at scale, and techniques such as post-training and prompting make them substantially more persuasive, but the increased persuasion is tied to reduced factual accuracy in what the AI says.
Citations: 12
-
Authors: Z Chen, J Kalla, Q Le, S Nakamura-Sakai
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Not determined
Research Area: Artificial Intelligence and Social Science, Persuasion Studies, Political Persuasion, LLM Chatbots, Democratic Societies
Discipline: Artificial Intelligence, Social Science
The study evaluates the cost-effectiveness and persuasive risks of Large Language Model (LLM) chatbots in political contexts, finding that while LLMs are as persuasive as campaign ads under exposure, their large-scale influence is currently limited by scalability and cost barriers.
Methods: Two survey experiments combined with real-world simulation exercises to measure the persuasiveness of LLM chatbots compared to traditional campaign tactics, focusing on both exposure and acceptance phases of persuasion.
Key Findings: Short- and long-term persuasive effects of LLMs, cost-effectiveness of LLM-based persuasion ($48-$74 per persuaded voter), and scalability compared to traditional campaign approaches.
Citations: 7
Sample Size: 10417
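The cost-effectiveness figure above ($48–$74 per persuaded voter) is straightforward arithmetic; a minimal sketch with hypothetical campaign numbers (the function name and all inputs are illustrative, not taken from the paper):

```python
def cost_per_persuaded_voter(total_cost, n_exposed, persuasion_rate):
    """Cost divided by the expected number of voters actually persuaded."""
    persuaded = n_exposed * persuasion_rate
    return total_cost / persuaded

# e.g. a $5,000 deployment reaching 2,000 voters with a 4% persuasion effect
print(round(cost_per_persuaded_voter(5000, 2000, 0.04), 2))  # 62.5
```

Note that this simple division ignores the scalability and exposure barriers the study emphasizes, which is precisely why the paper measures both exposure and acceptance phases separately.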
-
Authors: S Lodoen, A Orchard
Year: 2025
Published in: arXiv preprint arXiv:2505.09576, 2025 - arxiv.org
Institution: Embry–Riddle Aeronautical University, University of Waterloo
Research Area: Reinforcement Learning from Human Feedback (RLHF), Procedural Rhetoric, LLM Persuasion, Ethics
Discipline: Artificial Intelligence, AI Ethics, Social Science
The paper uses procedural rhetoric to analyze how RLHF reshapes ethical, social, and rhetorical dimensions of generative AI interactions, raising concerns about biases, hegemonic language, and human relationships.
Methods: The study conducts a theoretical and rhetorical analysis based on Ian Bogost's concept of procedural rhetoric, examining how RLHF mechanisms influence language conventions, information practices, and social expectations.
Key Findings: Ethical and rhetorical implications of RLHF-enhanced LLMs on language usage, information seeking, and interpersonal dynamics.
DOI: https://doi.org/10.48550/arXiv.2505.09576
Citations: 3
-
Authors: H Lin, G Czarnek, B Lewis, JP White, AJ Berinsky
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Massachusetts Institute of Technology
Research Area: Political Persuasion, Human-AI Dialogue, Electoral Behavior
Discipline: Political Science, Artificial Intelligence
The study shows that AI-driven dialogues can significantly influence voter attitudes and candidate preferences, with persuasion effects surpassing traditional political advertisements, though inaccuracies were more prevalent in content generated by AI models supporting right-wing candidates.
Methods: Pre-registered experiments where participants interacted with AI advocating for one of two candidates or a ballot measure, examining persuasion strategies and effects across three elections.
Key Findings: Influence of AI-generated dialogues on voter attitudes and preferences, including analysis of persuasion strategies and accuracy of presented information.
Citations: 3
-
Authors: Z Cheng, J You
Year: 2025
Published in: arXiv preprint arXiv:2509.22989, 2025 - arxiv.org
Institution: University of Southern California, University of California, Berkeley
Research Area: Artificial Intelligence, Computers and Society, Computer Science and Game Theory, Strategic Persuasion, Reinforcement Learning, Language Models, LLM, RLHF
Discipline: Artificial Intelligence
This paper introduces a scalable framework, utilizing Bayesian Persuasion, to evaluate and train LLMs for strategic persuasion, demonstrating significant persuasion gains and effective strategies through reinforcement learning.
Methods: Repurposed human-human persuasion datasets for evaluation and training; applied Bayesian Persuasion framework; used reinforcement learning to optimize LLMs for strategic persuasion.
Key Findings: The persuasive capabilities and strategies of large language models (LLMs) in various settings.
Citations: 1
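The Bayesian Persuasion framework the paper builds on has a well-known closed-form toy case (Kamenica and Gentzkow's prosecutor example), which illustrates what "strategic persuasion" means here; the numbers below are the textbook example, not values from the paper:

```python
# The sender commits to a signalling scheme before the state is drawn.
prior = 0.3          # prior probability the defendant is guilty
threshold = 0.5      # judge convicts iff posterior guilt >= threshold

# Optimal scheme: always signal "guilty" when guilty; when innocent,
# signal "guilty" just often enough that the posterior hits the threshold.
q = prior * (1 - threshold) / ((1 - prior) * threshold)

posterior = prior / (prior + (1 - prior) * q)   # posterior after "guilty" signal
conviction_prob = prior + (1 - prior) * q       # sender's payoff: P(convict)

print(round(posterior, 2), round(conviction_prob, 2))  # 0.5 0.6
```

The sender doubles the conviction rate (0.6 vs. the 0.3 obtained by full disclosure) purely by choosing how informative the signal is, which is the notion of persuasion the RL training objective targets.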
-
Authors: L Hölbling, S Maier, S Feuerriegel
Year: 2025
Published in: Scientific Reports, 2025 - nature.com
Institution: University of Lausanne, University of Zurich, University of St. Gallen
Research Area: LLMs in Persuasion, Meta-Analysis, Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence
Large language models (LLMs) demonstrate similar persuasive performance to humans overall, but their effectiveness varies widely based on contextual factors such as model type, conversation design, and domain.
Methods: Systematic review and meta-analysis using Hedges' g to compute standardized effect sizes, with exploratory moderator analyses and publication bias checks (Egger's test, trim-and-fill analysis).
Key Findings: The persuasive effectiveness of LLMs compared to humans across various contexts and studies.
Sample Size: 17422
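The meta-analysis pools studies using Hedges' g, i.e. Cohen's d with a small-sample bias correction; a minimal sketch of that computation with hypothetical group statistics (the numbers are illustrative, not drawn from the review):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Small-sample correction factor J (approximation)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical persuasion scores: LLM condition vs. human condition
print(round(hedges_g(3.4, 1.1, 120, 3.1, 1.2, 115), 3))  # 0.26
```

A g near zero is what "similar persuasive performance to humans overall" corresponds to at the pooled level, with the moderator analyses then explaining the between-study spread.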
-
Authors: E Meguellati, S Civelli, L Han, A Bernstein
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Not determined
Research Area: Advertising, Persuasion Strategies, Human-AI Interaction in Content Generation
Discipline: Artificial Intelligence
LLM-generated advertisements matched human-written ads in personalization and, by applying psychological persuasion principles, outperformed them, even after accounting for the effect of readers detecting their AI origin.
Methods: Two-part study: First examined LLM personalization based on personality traits; second tested psychological persuasion principles using universal messages across authority, consensus, cognition, and scarcity.
Key Findings: Effectiveness of LLM-generated ads in personalization and persuasive storytelling compared to human-created ads.
Sample Size: 1200
-
Authors: SC Matz, JD Teeny, SS Vaid, H Peters, GM Harari
Year: 2024
Published in: Scientific Reports, 2024 - nature.com
Institution: Stanford University
Research Area: Personalized Persuasion, Generative AI, Political Influence
Discipline: Artificial Intelligence
Generative AI, specifically large language models like ChatGPT, effectively scale personalized persuasion by matching messages to psychological profiles, demonstrating increased influence across domains and profiles.
Methods: Four studies (with seven sub-studies) tested personalized persuasive messaging crafted by ChatGPT against non-personalized messages across various psychological and domain-specific dimensions.
Key Findings: Effectiveness of personalized persuasive messages crafted by generative AI in different domains, targeting psychological profiles such as personality traits, political ideology, and moral foundations.
Citations: 368
Sample Size: 1788
-
Authors: F Salvi, MH Ribeiro, R Gallotti
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - atelierdesfuturs.org
Institution: EPFL, Fondazione Bruno Kessler
Research Area: Conversational Persuasion with LLMs
Discipline: Artificial Intelligence
The study demonstrates that GPT-4 is highly persuasive in direct conversations, especially when equipped with personalized sociodemographic information about its opponent, raising concerns about its potential misuse in personalized persuasion contexts.
Methods: Participants engaged in multiple-round debates on a web-based platform under randomized conditions, with comparisons between human-human and human-AI interactions and the impact of personalization.
Key Findings: The persuasiveness of GPT-4 compared to humans, with and without personalization using sociodemographic data.
Citations: 118
Sample Size: 820
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
The persuasiveness of messages generated by large language models follows a log scaling law, with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasiveness of large language model-generated political messages across different model sizes.
Citations: 17
Sample Size: 25982
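The log scaling law reported here (persuasion rising roughly linearly in the logarithm of parameter count, hence diminishing returns on a linear scale) can be sketched as a simple log-linear least-squares fit; the data points below are made up to mimic the pattern, not the paper's estimates:

```python
import math

# Hypothetical (model_parameters, persuasion_effect_in_points) pairs
data = [(1e8, 2.0), (1e9, 4.1), (1e10, 6.0), (1e11, 7.9)]

# Ordinary least squares on: effect ~ a + b * log10(params)
xs = [math.log10(p) for p, _ in data]
ys = [e for _, e in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Each 10x increase in parameters adds a roughly constant ~b points,
# so absolute gains per added parameter shrink as models grow.
print(round(b, 2))  # slope per order of magnitude
```

Under such a fit, going from 10B to 100B parameters buys no more than going from 100M to 1B did, which is the "minimal gains for larger models" finding in plain terms.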
-
Authors: M Shin, J Kim
Year: 2024
Published in: Available at SSRN 4725351, 2024 - researchgate.net
Institution: Massachusetts Institute of Technology, Yale University
Research Area: Linguistic Feature Alignment, Persuasion, LLM
Discipline: Artificial Intelligence, Computational Social Science
Citations: 11
-
Authors: K Hackenburg, H Margetts
Year: 2023
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Oxford University, Alan Turing Institute
Research Area: Political Persuasion, LLM, Political Science
Discipline: Political Science
DOI: https://doi.org/10.1073/pnas.2403116121
Citations: 153
-
Authors: H Bai, J Voelkel, J Eichstaedt, R Willer
Year: 2023
Published in: 2023 - researchsquare.com
Institution: Stanford University, London Business School, Dartmouth College, Stanford Graduate School of Business
Research Area: Political Persuasion, Social Influence of AI, Cognitive Science
Discipline: Political Science, Social Science
Citations: 100
-
Authors: L Griffin, B Kleinberg, M Mozes, K Mai
Year: 2023
Published in: Proceedings of the ..., 2023 - aclanthology.org
Institution: University College London, Tilburg University
Research Area: LLM Influence and Persuasion, LLM
Discipline: Social Science
Citations: 25