Discover 4 peer-reviewed studies in LLM Persuasion (2023–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 4 peer-reviewed papers in the research area of LLM Persuasion in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: S Lodoen, A Orchard
Year: 2025
Published in: arXiv preprint arXiv:2505.09576, 2025 - arxiv.org
Institution: Embry–Riddle Aeronautical University, University of Waterloo
Research Area: Reinforcement Learning from Human Feedback (RLHF), Procedural Rhetoric, LLM Persuasion, Ethics
Discipline: Artificial Intelligence, AI Ethics, Social Science
The paper uses procedural rhetoric to analyze how RLHF reshapes ethical, social, and rhetorical dimensions of generative AI interactions, raising concerns about biases, hegemonic language, and human relationships.
Methods: The study conducts a theoretical and rhetorical analysis based on Ian Bogost's concept of procedural rhetoric, examining how RLHF mechanisms influence language conventions, information practices, and social expectations.
Key Findings: RLHF-enhanced LLMs carry ethical and rhetorical implications for language usage, information seeking, and interpersonal dynamics, raising concerns about biases, hegemonic language conventions, and human relationships.
DOI: https://doi.org/10.48550/arXiv.2505.09576
Citations: 3
-
Authors: L Hölbling, S Maier, S Feuerriegel
Year: 2025
Published in: Scientific Reports, 2025 - nature.com
Institution: University of Lausanne, University of Zurich, University of St. Gallen
Research Area: LLMs in Persuasion, Meta-Analysis, Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence
Large language models (LLMs) demonstrate similar persuasive performance to humans overall, but their effectiveness varies widely based on contextual factors such as model type, conversation design, and domain.
Methods: Systematic review and meta-analysis using Hedges' g to compute standardized effect sizes, with exploratory moderator analyses and publication bias checks (Egger's test, trim-and-fill analysis).
Key Findings: LLMs match human persuasive performance overall, but their effectiveness varies widely with contextual factors such as model type, conversation design, and domain.
Sample Size: 17422
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
Persuasiveness of messages generated by large language models follows a log scaling law, with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: The persuasiveness of LLM-generated political messages increases logarithmically with model size, with diminishing returns at larger scales.
Citations: 17
Sample Size: 25982
-
Authors: L Griffin, B Kleinberg, M Mozes, K Mai
Year: 2023
Published in: Proceedings of the ..., 2023 - aclanthology.org
Institution: University College London, Tilburg University
Research Area: LLM Influence and Persuasion, LLM
Discipline: Social Science
Citations: 25