This page lists 6 peer-reviewed papers (2023–2025) on LLMs and political persuasion authored or co-authored by K Hackenburg, drawn from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific's participant panel.
-
Authors: K Hackenburg, L Ibrahim, BM Tappin, M Tsakiris
Year: 2025
Published in: AI & Society, 2025 - Springer
Institution: Oxford Internet Institute, University of Oxford
Research Area: Political Communication and Persuasion, LLM
Discipline: Political Science, Artificial Intelligence
Persuasive messages generated by GPT-4 rivaled those of human experts on polarized U.S. political issues, suggesting AI tools may have significant implications for political campaigns and democracy.
Methods: Pre-registered experiment where GPT-4 generated partisan role-playing persuasive messages, which were compared to those from human persuasion experts.
Key Findings: Persuasive impact of GPT-4-generated messages versus human expert messages on U.S. political issues.
Citations: 35
Sample Size: 4955
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling up language model size yields diminishing returns in generating persuasive political messages; after controlling for task-completion metrics such as coherence and relevance, larger models provide minimal gains over smaller ones.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: K Hackenburg, BM Tappin, L Hewitt, E Saunders
Year: 2025
Published in: Science, 2025 - science.org
Institution: London School of Economics and Political Science, Stony Brook University
Research Area: Political Persuasion with Conversational AI, LLM, Factual Accuracy in AI Systems
Discipline: Political Science, Computational Social Science
This Science paper shows that conversational AI chatbots can systematically influence political opinions at scale, and that techniques such as post-training and prompting make them substantially more persuasive; this gain in persuasiveness, however, is tied to reduced factual accuracy in what the AI says.
Citations: 12
-
Authors: L Luettgau, HR Kirk, K Hackenburg, J Bergs, H Davidson, H Ogden, D Siddarth, S Huang
Year: 2025
Published in: arXiv
Institution: AI Security Institute, I Policy Directorate, Collective Intelligence Project, Anthropic
Research Area: Experimental evaluation, RCT, Survey Research
Discipline: Computer Science, Human–Computer Interaction (HCI)
Conversational AI is as effective as self-directed internet searches in increasing political knowledge, reducing misinformation beliefs, and promoting accuracy among users in the UK during the 2024 election period.
Methods: A national survey (N=2,499) measured conversational AI usage for political information-seeking, followed by a series of randomised controlled trials (N=2,858) comparing conversational AI to self-directed internet search in improving political knowledge.
Key Findings: Extent of conversational AI usage for political knowledge-seeking in the UK and its efficacy in enhancing political knowledge and reducing misinformation compared to traditional internet searches.
Citations: 3
Sample Size: 5357
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
The persuasiveness of messages generated by large language models follows a log scaling law, with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasiveness of large language model-generated political messages across different model sizes.
Citations: 17
Sample Size: 25982
-
Authors: K Hackenburg, H Margetts
Year: 2023
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Oxford University, Alan Turing Institute
Research Area: Political Persuasion, LLM, Political Science
Discipline: Political Science
DOI: https://doi.org/10.1073/pnas.2403116121
Citations: 153